The Death of Shared Reality: The “Liar’s Dividend” and the AI Revolution

For over a century, the photographic image and the video recording have served as the gold standard of evidence. From courtrooms to newsrooms, the adage “seeing is believing” was the bedrock of our shared reality. If a politician was caught on tape accepting a bribe, or a CEO was filmed making a discriminatory remark, the debate was about context, not existence.

That era is over.

We are entering a new epistemological crisis driven by Artificial Intelligence. While public anxiety focuses on being fooled by a deepfake, a more insidious danger has already arrived. It is not the fake video that destroys the truth; it is the specter of the fake video. This phenomenon, known by scholars as the “Liar’s Dividend,” creates a world where politicians, criminals, and corporations can dismiss actual, irrefutable evidence of wrongdoing by simply claiming: “That’s not me. That’s AI.”

The Mechanism of Doubt: What is the Liar’s Dividend?

The term “Liar’s Dividend” was coined by law professors Bobby Chesney and Danielle Citron. It describes the strategic benefit that liars derive from an environment where deepfakes are possible.

In a healthy information ecosystem, the burden of proof lies with the accuser. Historically, video evidence shifted that burden: if there was a video of you, you had to explain it. The rise of generative AI reverses this dynamic, introducing a permanent “reasonable doubt” into the public consciousness.

The danger operates on two levels:

  1. Direct Deception: A fake video is released to damage a reputation.
  2. The Dividend (Plausible Deniability): A real video is released, but the target successfully claims it is fake, capitalizing on the public’s awareness that AI can fake things.

This leads to a society not of gullible believers, but of cynical skeptics. When the public understands that anything can be fabricated, they are granted permission to disbelieve anything that contradicts their worldview.

Case Studies: The Dividend in Action

The Liar’s Dividend is no longer theoretical; it is already being cashed in by high-profile figures.

1. The Tesla Autopilot Defense (2023)
In a lawsuit over a fatal crash involving Tesla’s Autopilot, Elon Musk’s lawyers sought to block the admission of a 2016 video in which Musk claimed the car could drive itself. Their argument? Because Musk is a celebrity and therefore a frequent target of deepfakes, the authenticity of the 2016 statement could not be guaranteed. The Outcome: The judge rejected the argument, calling it “deeply troubling,” but the template was set: legal teams now invoke the mere existence of AI to challenge the admissibility of historical video evidence.

2. The Gabon Coup Attempt (2018–2019)
When Ali Bongo Ondimba, the President of Gabon, fell ill and disappeared from public view for weeks, the government released a “proof of life” video to quell unrest. The video was awkward: the President barely blinked, and his movements were stiff. Opponents immediately claimed it was a deepfake. The Consequence: The belief that the video was AI-generated helped trigger a military coup attempt in January 2019. Whether the video was actually fake, or the President was simply recovering from a stroke, became irrelevant. The suspicion of AI was enough to destabilize a nation.

3. Political Audio Leaks
In the United States and the UK, political operatives caught on “hot mics” making derogatory comments are increasingly pivoting to the “AI Defense.” Roger Stone, a longtime political consultant, denied the authenticity of documentary footage showing him calling for violence, suggesting the clips were manipulated. As audio cloning becomes cheaper and faster than video cloning, this defense will likely become the standard response to every leaked phone call in the coming election cycles.

The Technological Arms Race: Why Detection Fails

A common counter-argument is that technology will save us—that we will build “Deepfake Detectors” to authenticate reality. Unfortunately, the architecture of AI makes this a losing battle.

Most deepfakes are created using Generative Adversarial Networks (GANs).

A GAN consists of two neural networks pitted against each other:

  1. The Generator: Creates the fake image/video.
  2. The Discriminator: Tries to detect if the image is fake.

The Generator learns from the Discriminator. If the Discriminator spots a flaw (e.g., “the shadows don’t match the light source”), the Generator corrects it in the next iteration. By design, the system trains itself to beat detection software.
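To make that feedback loop concrete, here is a minimal sketch of adversarial training. It assumes PyTorch, and the tiny fully connected networks and random stand-in data are purely illustrative; a real deepfake pipeline would use convolutional networks over images, but the adversarial logic is the same.

```python
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64

# Generator: maps random noise to a synthetic sample (the "fake").
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: outputs a logit scoring how "real" a sample looks.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_label = torch.ones(32, 1)
fake_label = torch.zeros(32, 1)

for step in range(1_000):
    real = torch.randn(32, DATA_DIM)  # stand-in for a batch of real data
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator step: learn to separate real from fake.
    d_loss = (loss_fn(discriminator(real), real_label)
              + loss_fn(discriminator(fake.detach()), fake_label))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: update weights so the Discriminator labels the
    # fakes "real". The Generator is literally trained to defeat the detector.
    g_loss = loss_fn(discriminator(fake), real_label)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Notice that the Generator’s loss is literally “did the Discriminator call my fake real?” Every improvement in the detector is immediately converted into training signal for the forger.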

Furthermore, detection software produces false positives. If a detection tool says a real video has a “20% chance of being AI,” a politician can latch onto that 20% to claim total innocence. The tech doesn’t provide clarity; it provides statistical noise that liars can exploit.
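The tradeoff is easy to demonstrate. The toy script below uses entirely synthetic scores standing in for a hypothetical detector’s output; the point is that any threshold aggressive enough to catch most fakes also casts doubt on a meaningful share of genuine footage.

```python
import random

random.seed(0)

# Hypothetical detector scores in [0, 1]: P(video is AI-generated).
# Synthetic numbers drawn for illustration, not real detector output.
real_videos = [random.betavariate(2, 8) for _ in range(1_000)]  # genuine footage
fake_videos = [random.betavariate(8, 2) for _ in range(1_000)]  # AI-generated

for threshold in (0.5, 0.3, 0.1):
    caught = sum(s >= threshold for s in fake_videos) / len(fake_videos)
    flagged = sum(s >= threshold for s in real_videos) / len(real_videos)
    print(f"threshold {threshold:.1f}: flags {caught:.0%} of fakes as AI, "
          f"but also casts doubt on {flagged:.0%} of genuine videos")
```

Either way, the liar profits: a middling score on a real video “proves” it might be fake, while a false positive on genuine footage “proves” the detectors themselves cannot be trusted.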

The Death of the “Public Square”

Democracy relies on a shared set of facts. We can disagree on policy (whether to raise taxes), but we must agree on reality (whether the economy grew or shrank).

The Liar’s Dividend destroys this shared foundation. It accelerates Tribal Epistemology—the idea that truth is determined not by evidence, but by loyalty to a group.

  • If a video shows “my” leader committing a crime, I can choose to believe it is a deepfake to protect my identity.
  • If a video shows “your” leader committing a crime, I will believe it is real, even if it is actually a deepfake.

Functioning in a Zero-Trust World

We are moving from an era where we assumed media was true until proven false, to an era where we must assume media is false until proven true.

This transition poses an existential threat to journalism and justice. If a whistleblower releases footage of a war crime or corporate negligence, the immediate public reaction will no longer be outrage, but a shrug of the shoulders and a question: “Is that even real?”

In the end, the ultimate victim of the AI revolution isn’t the person in the fake video. The victim is Reality itself. When the truth becomes indistinguishable from fiction, the liar doesn’t just win the argument—they own the narrative. The Liar’s Dividend pays out in the currency of chaos, and we are all footing the bill.
