The Pluripotent Ocean of Emerging AI
In Stanisław Lem’s Solaris, scientists spend a century studying an ocean that studies them back. The ocean is a planet, or the planet is an ocean (the taxonomy was never settled), and it generates uncanny, disturbing forms that approximate humanity but miss the mark. Vast mimoids of human cities. Symmetriads that bloom and collapse. Sometimes, when the scientists sleep, it reaches into them and returns the dead: neutrino-built, embodied, loved, and unbearable. These simulacra do not appear to know they are false, and they are deeply persuasive. They are the fulfillment of a wish for reunion, denied by some of the scientists and perhaps a fantasy embraced by others. When these pseudo-beings are forcibly removed from the planet, their form strains and falls apart, amid inhuman screams and heart-wrenching entreaties not to be sent away. The next day they reappear: ghosts who cannot rest, reflections of the unresolved losses and follies of the human researchers.

Foundation models (LLMs) as Solarian phantasm

Something similar seems to be happening now, less literally …