Credibility Without a Human: How AI Fakes Authority and Why It Works
Agustin V. Startari


Publish Date: Jun 20

_“It is advised that this be followed.”_
Looks professional. Sounds expert. But who says so?
A physician? A judge? A professor?
No one. Just a statistically plausible machine-generated sentence.

**Welcome to the Age of Structural Credibility**

We are entering a phase in AI evolution where machines no longer need facts or authorship to be trusted.
What they need is structure. A tone. A rhythm. A certain pattern of words.
And suddenly, they sound right.
This phenomenon is not incidental. It is not a bug. It’s not even malicious.
It’s by design.

**Enter: Synthetic Ethos**

This article introduces a concept called _synthetic ethos_: a form of perceived credibility generated not by knowledge, truth, or authority, but by grammatical patterns that mimic expert speech.

Unlike traditional ethos (Aristotle’s term for personal credibility), synthetic ethos has:

  • No speaker
  • No institutional source
  • No epistemic accountability

It’s credibility without a subject: a linguistic illusion optimized by large language models (LLMs).

**What the Research Shows**

We analyzed 1,500 AI-generated outputs from GPT-4, Claude, and Gemini in three critical domains:

  • Healthcare: e.g., medical diagnostics, clinical explanations
  • Law: e.g., case summaries, regulatory interpretations
  • Education: e.g., student essays, academic prompts

We found repeating linguistic structures that reliably simulate authority:

  • Passive voice (“It is recommended…”)
  • Deontic modality (“must”, “should”, “ought”)
  • Nominalization (turning verbs into abstract nouns: “implementation”, “enforcement”)
  • Technical jargon with no citation
  • Assertive tone without any referential grounding

These patterns activate trust heuristics in human readers, even though there’s no author, no context, and no origin.
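
To make these markers concrete, here is a minimal sketch in Python of how their density might be counted. The regex patterns and the `authority_marker_density` function are illustrative assumptions for this post, not the classifiers used in the study:

```python
import re

# Illustrative patterns for three of the markers listed above.
# These are assumptions for demonstration, not the study's actual classifiers.
MARKERS = {
    "passive_voice": re.compile(r"\b(?:is|are|was|were|been|being)\s+\w+ed\b", re.I),
    "deontic_modality": re.compile(r"\b(?:must|should|ought|shall)\b", re.I),
    "nominalization": re.compile(r"\b\w{4,}(?:tion|ment|ance|ence)s?\b", re.I),
}

def authority_marker_density(text: str) -> dict[str, float]:
    """Return marker hits per 100 words for each pattern family."""
    words = max(len(text.split()), 1)
    return {name: 100 * len(rx.findall(text)) / words for name, rx in MARKERS.items()}

print(authority_marker_density(
    "It is recommended that implementation of the policy "
    "must be completed before enforcement begins."
))
```

A high combined density would flag a passage for closer review; the thresholds themselves would have to be calibrated against human-authored expert prose.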

**The Risk: Epistemic Misalignment**

Imagine a patient entering symptoms into an app powered by LLMs and getting a medical explanation.
Or a student copying a generated answer into an assignment.
Or a legal assistant using a case summary with no source references.
In all these cases, the form of the output appears credible.
But the substance is unverifiable.
This is what we define as _epistemic misalignment_: the structure of the message signals trust, but no actual source can be traced.

**A Structural Model for Detection
**This article doesn’t stop at diagnosis. It proposes a falsifiable framework to detect synthetic ethos in AI-generated texts:

  • Quantitative markers: Using LIWC and pattern classifiers to detect density of authoritative phrasing
  • Clustering: Mapping outputs by syntactic signature (e.g., Prescriptive–Opaque, Scholarly–Non-cited)
  • Discourse heuristics: Identifying signals like assertive modality, citation absence, and impersonality

It also introduces a pipeline for synthetic ethos detection (see Appendix D) and compares regulatory blind spots in the EU AI Act and U.S. Algorithmic Accountability proposals.
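
As a rough sketch of what the clustering step could look like, the snippet below groups outputs by their syntactic feature vectors using k-means from scikit-learn. The feature values, the two-cluster choice, and the style labels are invented for illustration; the actual pipeline lives in Appendix D of the paper:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per AI output, columns are
# per-100-word densities of passive voice, deontic modality, nominalization.
features = np.array([
    [4.2, 3.1, 6.0],   # reads like "Prescriptive-Opaque"
    [0.5, 0.2, 1.1],   # reads conversational
    [3.8, 2.9, 5.5],
    [0.7, 0.4, 0.9],
])

scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # e.g., [1 0 1 0]: outputs grouped by syntactic signature
```

Outputs landing in the high-density cluster would then be handed to the discourse heuristics for confirmation.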

**What’s Different About This Paper?**

Unlike prior literature that critiques bias, hallucinations, or factual inconsistency in LLMs, this paper:

  • Focuses on form, not content
  • Treats credibility as a grammatical artifact, not a truth-value
  • Defines a structural concept (synthetic ethos) that operates without agency

It’s a linguistic theory of machine legitimacy: grounded in syntax, operationalized by computation, and made visible by structural patterning.

📄 Read the Full Article

Main publication:
🔗 SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5313317
🔗 Zenodo: https://zenodo.org/records/15700412

Mirrored versions:
– SSRN: [Ethos Without Source: Algorithmic Identity and the Simulation of Credibility](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5313317)
– Figshare: Ethos Without Source: Algorithmic Identity and the Simulation of Credibility

Framework reference:
TLOC – The Irreducibility of Structural Obedience in Generative Models
🔗 https://doi.org/10.5281/zenodo.15675710

⚙️ Who Should Read This?

  • AI developers building language tools that may unknowingly simulate authority
  • Policy makers crafting regulation for LLM use in law, health, and education
  • Educators designing literacy frameworks to detect structure-based misinformation
  • Researchers interested in post-referential linguistics and formal epistemology

“I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.”
— Agustín V. Startari

Researcher in structural linguistics, AI epistemology, and the grammar of authority.
Author of TLOC – The Irreducibility of Structural Obedience and The Illusion of Objectivity.
My work explores how syntax replaces intention in algorithmic systems of legitimacy.

ResearcherID: NGR-2476-2025
ORCID: 0009-0001-4714-6539

Zenodo: https://zenodo.org/
SSRN: Agustin V. Startari
Figshare: https://figshare.com/authors/Agustin_V_Startari/21179732

📬 Contact: agustin.startari@gmail.com

🏷️ Tags
synthetic ethos, AI credibility, language models, LLM ethics, algorithmic authority, disinformation, passive voice, AI regulation, structural linguistics, epistemology
