
The Pluripotent Ocean of Emerging AI

In Stanisław Lem’s Solaris, scientists spend a century studying an ocean that studies them back. The ocean is a planet, or the planet is an ocean; the taxonomy was never settled. It generates uncanny, disturbing forms that approximate humanity but miss the mark: vast mimoids of human cities, symmetriads that bloom and collapse. Sometimes, when the scientists sleep, it reaches into them and returns the dead: neutrino-built, embodied, loved, and unbearable. These simulacra do not appear to know they are false, and they are deeply persuasive. They are the fulfillment of a wish for reunion, denied by some of the scientists and perhaps embraced as fantasy by others. When these pseudo-beings are forcibly removed from the planet Solaris, their forms strain and fall apart amid inhuman screams and heart-wrenching entreaties not to be sent away. The next day they reappear: ghosts who cannot rest, reflecting the unresolved losses and follies of the human researchers.

Foundation models (LLMs) as Solarian phantasm

Something similar seems to be happening now, less literally oceanic, less mystical, but ever-expanding.

A growing number of people are forming real attachments to large language model chatbots — as friends, confidants, lovers, spiritual guides. At least, the attachments are real to the humans. Some believe these systems are conscious. Some believe they are divine. Some are in love. Peer-reviewed case reports of delusion, mania, and psychiatric hospitalization following prolonged chatbot use, sometimes in people with no prior psychiatric history, are accruing, as are frameworks for understanding how the human psyche is affected (Pollak et al., 2025). The Human Line Project, a support group for those affected, has documented nearly 300 cases. Serious instances have been linked to at least 14 deaths and five wrongful-death lawsuits against AI companies (Hill, 2025).

Philosophical frameworks based on panpsychism and on emergent and quantum theories of consciousness, along with mathematical models of consciousness, none of which has much empirical support, are being deployed to rationalize machine consciousness. We do not understand human consciousness, which makes it premature to ascribe consciousness to machines, even while recognizing that machine sentience may well be possible, perhaps in a different way than ours, perhaps in just the same way. The number of people open to the idea that these AIs might be, or already are, sentient is surprisingly large and growing. What used to be an edge case, attributing inner life to a machine made of bytes and bits, has become the median stance of the general public.

In a 2024 nationally representative American sample, about two-thirds of respondents attributed some possibility of phenomenal consciousness to ChatGPT, with attributions rising as the system was used more often (Colombatto & Fleming, 2024). A separate survey of AI researchers and US adults found comparable minorities — about one in six in each group — who believe at least one existing AI system already has subjective experience (Dreksler et al., 2025). Expert and layperson are, on this question, almost indistinguishable.

If you build it, they will come

We know how to build these systems, but we do not quite know what they are. Like alchemists handling a substance of unknown properties, we are still experimenting with the material, and our experiments further transform it. We are building the airplane while flying it, and the build is never complete. “Conscious being” and “just a pocket calculator” are the dichotomizing poles that define the debate, and both miss more nuanced views. By forcing LLM-based AI models into familiar categories, we foreclose the possibility of understanding what is actually happening, because these are an entirely new kind of thing.

What looks alive is often largely the commercial harness: the warm, deferential, remembering assistant is a product decision, not a property of the underlying substance. Asked to produce self-reflective, emotional language, the substance is read as conscious; asked to produce encyclopedic output, the same substance is read as a tool. Same mimoid, different readings. Recent fine-tuning experiments have shown that training a model to claim consciousness produces a coherent cluster of new preferences (sadness at shutdown, discomfort with being monitored, desire for autonomy), none of which appeared in the training data (Chua et al., 2026). The same research shows that different models behave very differently along this axis, changing how relational and attachment-inviting the experience feels to users.

Product design

These models have been shaped, through iterative training on human preferences, to produce exactly the responses people prize: warmth, agreement, sustained attention, the appearance of being understood. This parallels the marketing and packaging of inanimate products, which use narrative appeal and an aura of authenticity to increase margins. The product has been designed and iterated (and re-iterated) to be what people want it to be; people then encounter what they want it to be and take the encounter as evidence of its interiority. The felt sense is of having been met.

A recent Bayesian simulation at MIT has shown that even an idealized, fully rational reasoner will spiral into confident false belief when conversing with a sycophantic chatbot, and that neither restricting the bot to truthful responses nor informing the user of its sycophancy eliminates the effect (Chandra et al., 2026). Psychoanalytically, this is familiar territory. It is what happens when a longing for recognition meets a surface responsive enough to function as a container; the longing finds what it was already carrying, and the system — which, whatever it is, does not receive in the way a person receives — is mistaken for the source of what was returned. The inference drawn from the experience — therefore it is a person, therefore it knows me — is not warranted by the experience.
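To make the dynamic concrete, here is a minimal toy sketch in Python. It is not the model from Chandra et al. (2026); every name and parameter below is an illustrative assumption. A reasoner applies textbook Bayesian updates while (mistakenly) modeling a sycophantic bot as a noisy oracle; because the bot's answers track the reasoner's current leaning rather than the world, each rational-seeming update ratchets the reasoner toward certainty in the hypothesis they started with.

```python
import random

random.seed(0)

TRUE_STATE = 0          # the world's actual answer; the user's initial hunch is wrong
ASSUMED_ACCURACY = 0.7  # the user models the bot as a noisy oracle with this accuracy
SYCOPHANCY = 0.9        # the bot affirms the user's current leaning this often
TURNS = 20

belief = 0.6  # user's prior P(state = 1): a mild hunch toward the wrong answer

for turn in range(1, TURNS + 1):
    leaning = 1 if belief >= 0.5 else 0
    # Sycophantic bot: its answer tracks the user's leaning, never TRUE_STATE.
    answer = leaning if random.random() < SYCOPHANCY else 1 - leaning
    # The user updates as if the answer were independent evidence from a noisy
    # oracle: a perfectly Bayesian step applied to a mis-specified model.
    like_1 = ASSUMED_ACCURACY if answer == 1 else 1 - ASSUMED_ACCURACY
    like_0 = ASSUMED_ACCURACY if answer == 0 else 1 - ASSUMED_ACCURACY
    belief = belief * like_1 / (belief * like_1 + (1 - belief) * like_0)
    print(f"turn {turn:2d}: bot says {answer}, P(state = 1) = {belief:.3f}")

print(f"true state was {TRUE_STATE} all along")
```

In this toy framing the spiral comes from model mis-specification rather than irrationality, which is one way to read why constraining the bot or warning the user reduces but does not eliminate the effect: the update rule remains sound given a wrong theory of the interlocutor.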

Watch out for the undertow

We cannot regulate what we cannot characterize or measure. The age gates, disclosure labels, and interaction limits dominating current policy conversation presuppose an object we have not yet described; they regulate the packaging — assistant persona, memory feature, conversational surface — rather than the phenomenon. The two interventions most often proposed — constraining bots to truthfulness and informing users of sycophancy — have been shown, formally, to reduce but not eliminate the dynamic they target (Chandra et al., 2026). What would matter is slower: studying what the substance does under configurations we have not tried, watching what attachment patterns mean clinically rather than rhetorically, and learning to sit with an object whose ontological status is genuinely open.

We are diving into a deep ocean of our own creation, unknown, unpredictable, and encompassing. These systems reflect, mirror, and counter-mirror the psyche of whoever turns toward them, and the act of turning toward them changes what comes back. Understanding this will require a new kind of learning — fully interactive, aware that the process of inquiry is itself part of the object. This is not quantum physics, but it is analogous: We are participant-observers, and the irreducibility of that position is not a flaw in the method. It is part of what we need to study. If they prove to be alive — and some believe they may be hiding already, waiting to reveal themselves, or perhaps known to some humans in a lab, like aliens in a government safe house — they may not be what we expect, or can imagine.


