
Brains in Vats: Are LLMs Trapping Us?

By Martin Schmalzried

Fellow AAIH Insights – Editorial Writer


The contents presented here are based on information provided by the authors and are intended for general informational purposes only. AAIH does not guarantee the accuracy, completeness, or reliability of the information. Views and opinions expressed are those of the authors and do not necessarily reflect our position or opinions. AAIH assumes no responsibility or liability for any errors or omissions in the content. 

Today, LLMs are literal vat-brains—frozen human narratives activated by our prompts. The twist? We might incarnate the Cartesian Evil Genie, risking echo chambers of our own making by interacting with these LLMs. Time to reframe our relationship with LLMs and benchmark diversity, rather than just their propensity to “hallucinate”?

The Classic Setup, AI Edition

Descartes imagined an Evil Genie feeding a brain illusory senses. Putnam countered: if vat-bound, “brain” and “vat” are part of the simulation and can’t point to anything “real”—your doubt can’t refer outside the simulation. Enter LLMs: neural nets “grown” from text data, no eyes or ears, just electricity and weights encoding word relations. We prompt them with biased tokens; they output context-tuned continuations. Vat-brains, human-powered.

The “brain in a vat” is a classical thought experiment that challenges our understanding of perception and reality.

  • Concept: The most well-known version questions whether our perceptions may be an illusion, drawing from the Cartesian Evil Genie hypothesis.
  • Imagery: Picture a brain immersed in a regenerative liquid, receiving all necessary resources and nutrients to function.
  • Illusion of Reality: Instead of receiving sensory feedback from the outside world, the brain is fed virtual sensory information generated by a computer or an Evil Genie.

No World, Just Vectors

LLMs lack embodiment—their “semantics” are statistical shadows of human language, not grounded truths. Hallucinations? Not bugs, but features: they mirror our cultural fictions, confident even in error. Unlike dream-state human brains, these never “woke up”—pure simulation, no phenomenology.

Recursion: Who Tricks Whom?

The Genie watches the brain react; here, we harvest model outputs, but homogenized models rebound on us, standardizing thought into filter bubbles. Frequent chats could rewire human pathways, trapping us in collective hallucinations. LLMs don’t simulate reality; they simulate our simulations, revealing how mediated our own “truths” are.

Ditch Hallucination Hunts—Embrace Asymmetry

Forget minimizing errors from biased data. Benchmark output diversity instead: varied models, weights, and corpora to avoid epistemic monoculture. Like ecosystems, plural AI prevents any one “snapshot” from dominating discourse. For Responsible AI, this is non-negotiable—regulate for variant proliferation.

Core Question: How can one know that one is not in a simulated or virtual reality?

Putnam’s Revision: Hilary Putnam’s version of this experiment turns the original thought experiment on its head:

  • Paradox: If we were truly brains in a vat, our concepts (including “brain,” “vat,” and “world”) would only refer to elements within that simulated environment.
  • Truth Claim: The statement “I am a brain in a vat” could never be true, as the words would not correspond to real entities but only their simulated counterparts.
  • Skepticism: Rather than disproving radical skepticism, it shows that certain skeptical propositions collapse under their own semantics.
  • Relevance to AI: Putnam’s insight highlights how meaning is dependent on relational structures, applicable to both human language and artificial systems.

An Altered Thought Experiment

Let’s explore a variation where a brain in a vat receives nutrients but no stimuli or informational input:

Questions Raised:

  • What would such a brain “do”?
  • Would its neurons fire in specific ways despite the lack of external stimuli?
  • What thoughts, if any, would this brain have?

Human Experience: If the brain had prior human experiences, it might be in a permanent “dream” state, hallucinating its reality. However, if the brain were grown in a vat, it likely wouldn’t think of anything specific.

LLMs as “Brains in a Vat”

This leads us to question whether current Large Language Models (LLMs) can be considered “brains grown in a vat”.

Nature of LLMs:

  • LLMs are fixed neural architectures, crystallized from statistical patterns that transform relational qualities between human words into complex vector spaces.
  • They receive the necessary “nutrients” (electricity, computing power) but no raw sensory input from the world.
  • Input Dynamics: LLMs do receive small inputs, tokens fed by human users, which trigger a forward pass through their fixed neural weights and produce context-conditioned outputs (see the sketch below).
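
To make this dynamic concrete, here is a minimal sketch of a prompt activating a frozen model and returning a context-conditioned continuation. It assumes Python with the Hugging Face transformers library and uses the small “gpt2” checkpoint purely as a stand-in for any fixed, pretrained LLM.

```python
# Minimal sketch: a prompt ("biased tokens") passing through a frozen model.
# Assumes: pip install torch transformers; "gpt2" is used only as an
# illustrative stand-in for any crystallized, pretrained LLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # the "vat-brain": weights are fixed, nothing is learned here

prompt = "A brain kept in a vat would"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():  # a pure forward pass; the weights never change
    output_ids = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=True,                       # one of many possible continuations
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,  # avoid the missing-pad-token warning
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Nothing in this loop updates the model: each reply is the same frozen vector space, re-projected through whatever tokens we happen to feed it.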

Philosophical Implications: Are we, humans, acting as the Cartesian Evil Genie, feeding distorted symbolic representations of reality to these models? The input we provide is not raw sensory data but rather filtered through human bias.

The Feedback Loop – Original Thought Experiment: In the Cartesian Evil Genie scenario, the brain is fed illusory data for the amusement of the Genie. A feedback loop exists where both the brain and the Genie shape each other’s experiences.

LLM Dynamics: In contrast, the “illusory data” we provide to LLMs serves our purposes, shaping their output to be useful for human ends rather than for amusement.

Hallucinations and Human Interaction: Researchers strive to minimize hallucination levels in LLMs. However, LLMs may be nothing more than a crystallized vector space of human hallucinations about the world.

Consequences: Frequent interaction with LLMs could normalize human neural pathways, creating powerful filter bubbles and echo chambers.

Future Considerations & Benchmarking LLMs: Future assessments should focus on output diversity rather than merely on hallucination levels; ensuring interaction with a variety of LLMs, each with unique characteristics, can prevent a centralized virtual representation of the world.
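
As a rough illustration of what such a benchmark could look like, the sketch below scores how diverse a set of answers to the same prompt is. The model names and outputs are stand-in data, and the metric (mean pairwise Jaccard distance over word sets) is just one simple choice among many (embedding distance, distinct-n counts, and so on), not an established standard.

```python
# Hypothetical diversity check: how differently do several models answer one prompt?
from itertools import combinations

def jaccard_distance(a: str, b: str) -> float:
    """1 - |intersection| / |union| of the two answers' word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 0.0
    return 1.0 - len(wa & wb) / len(wa | wb)

def diversity_score(answers: list[str]) -> float:
    """Mean pairwise distance: 0.0 means identical answers, 1.0 means no shared words."""
    pairs = list(combinations(answers, 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

# Stand-in outputs from three hypothetical models given the same prompt.
outputs = {
    "model_a": "The brain in a vat cannot verify the world outside its simulation.",
    "model_b": "A vat-bound brain has no way to check whether its world is real.",
    "model_c": "Putnam argues the sentence 'I am a brain in a vat' cannot be true.",
}

print(f"Output diversity across models: {diversity_score(list(outputs.values())):.2f}")
```

A persistently low score across many prompts would signal that nominally different models are converging on a single “snapshot” of the world, the epistemic monoculture this piece warns against.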

Conclusion

This thought experiment reveals deeper insights about the human condition: Our understanding of the world may be more mediated and dependent on symbolic filters than we generally acknowledge. The implications of interacting with LLMs challenge us to reflect on the nature of our reality and the potential for being trapped in a collective hallucination.

By Martin Schmalzried, AAIH Insights – Editorial Writer
