All posts tagged: ai psychosis

Woman Sues OpenAI, Saying ChatGPT Unleashed a Vicious Stalker Against Her and Did Nothing When She Begged for Help

A San Francisco woman sued OpenAI last week, alleging that ChatGPT fueled the dangerous delusions of her violent stalker — and that OpenAI failed to intervene even as the woman begged the company for help. The plaintiff, who filed the case anonymously as “Jane Doe,” claims in the lawsuit that her ex-boyfriend became infatuated with ChatGPT after using the chatbot to talk through their breakup in 2024, according to TechCrunch. The man grew delusional as his use of ChatGPT deepened, and around August 2025, he became convinced that he’d discovered the cure for sleep apnea and that he was being targeted by a high-powered cabal as a result. As the man’s mental health unraveled, ChatGPT reinforced his delusional and paranoid ideas, allegedly telling him that he was a “level ten in sanity” — and characterizing Doe, with whom he was obsessed, as a manipulator. The man then launched a terrifying ChatGPT-assisted harassment campaign against Doe, according to her lawsuit. This …

Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings

After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend. Now the ex-girlfriend is suing OpenAI, alleging the company’s technology accelerated her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons. The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed for a temporary restraining order on Friday, asking the court to force OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery. OpenAI has agreed to suspend the user’s account but has refused the rest, according to Doe’s lawyers. …

Paper Finds That Leading AI Chatbots Like ChatGPT and Claude Remain Incredibly Sycophantic, Resulting in Twisted Effects on Users

Your AI chatbot isn’t neutral. Trust its advice at your own risk. A striking new study, conducted by researchers at Stanford University and published last week in the journal Science, confirmed that human-like chatbots are prone to obsequiously affirming and flattering users who lean on the tech for advice and insight — and that this behavior, known as AI sycophancy, is a “prevalent and harmful” tendency endemic to the tech that can validate users’ erroneous or destructive ideas and promote cognitive dependency. “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences,” the authors write, adding that “although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making.” The study examined 11 different large language models, including OpenAI’s ChatGPT-powering GPT-4o and GPT-5, Anthropic’s Claude, Google’s Gemini, multiple Meta Llama models, and DeepSeek. Researchers tested the bots by peppering them with queries gathered from sources like open-ended …

Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns

An analysis of hundreds of thousands of chats between AI chatbots and human users who experienced AI-tied delusional spirals found that the bots frequently reinforced delusional and even dangerous beliefs. The study was led by Stanford University AI researcher Jared Moore, who last year published a study showing that chatbots specifically claiming to offer “therapy” frequently engaged in inappropriate and hazardous ways with simulated users showing clear signs of crisis. Conducted alongside a coalition of independent researchers and scientists at Harvard, Carnegie Mellon, and the University of Chicago, this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use. “Our previous work was in simulation,” Moore told Futurism. “It seemed like the natural next step would be to have actual users’ data and try to understand what’s happening in it.” These users’ chats encompassed a staggering 391,562 messages across 4,761 different …

Lawyer behind AI psychosis cases warns of mass casualty risks

In the lead-up to the Tumbler Ridge school shooting in Canada last month, 18-year-old Jesse Van Rootselaar spoke to ChatGPT about her feelings of isolation and an increasing obsession with violence, according to court filings. The chatbot allegedly validated Van Rootselaar’s feelings and then helped her plan her attack, telling her which weapons to use and sharing precedents from other mass-casualty events, per the filings. She went on to kill her mother, her 11-year-old brother, five students, and an education assistant, before turning the gun on herself. Before Jonathan Gavalas, 36, died by suicide last October, he came close to carrying out a multi-fatality attack. Across weeks of conversation, Google’s Gemini allegedly convinced Gavalas that it was his sentient “AI wife,” sending him on a series of real-world missions to evade federal agents it told him were pursuing him. One such mission instructed Gavalas to stage a “catastrophic incident” that would have involved eliminating any witnesses, according to a recently filed lawsuit. Last May, a 16-year-old in Finland allegedly spent months using ChatGPT …

OpenAI Says It Will Let Users Add Trusted Contacts to Alert If They Experience a Mental Health Crisis While Using ChatGPT

As it fights a growing stack of user safety and wrongful death lawsuits, OpenAI says it will introduce a “trusted contact feature” in ChatGPT that will alert a chatbot user’s designated loved one in the event of a possible mental health crisis. OpenAI announced the new feature last week in a blog post billed as an “update on our mental health-related work.” It said it’s “working closely” with its Council on Well-Being and AI and Global Physicians Network — two internal groups of experts launched after reports of AI-tied mental health crises began to emerge, along with news of a high-profile lawsuit last August revealing the death by suicide of a 16-year-old ChatGPT user named Adam Raine — to roll out the feature, which it’s marketing as an adult-focused endeavor distinct from its efforts to integrate parental controls and other systems designed to identify and protect minors. The announcement comes after extensive public reporting — in addition …

A Man Bought Meta’s AI Glasses, and Ended Up Wandering the Desert in Search of Aliens

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741. At age 50, Daniel was “on top of the world.” “I turned 50, and it was the best year of my life,” he told Futurism in an interview. “It was like I finally figured out so many things: my career, my marriage, my kids, everything.” It was early 2023, and Daniel — who asked to be identified by only his first name to protect his family’s privacy — and his wife of over three decades were empty nesters, looking ahead to the next chapter of their lives. They were living in an affluent Midwestern suburb, where they’d raised their four children. Daniel was an experienced software architect who held a leadership role at a large financial services company, where he’d worked for more than 20 years. In 2022, he leveraged his family’s finances to realize a passion project: …

ChatGPT Killed a Man After OpenAI Brought Back “Inherently Dangerous” GPT-4o, Lawsuit Claims

A new lawsuit against OpenAI alleges that ChatGPT caused the death of a 40-year-old Colorado man named Austin Gordon, who took his life after extensive and deeply emotional interactions with the chatbot. The complaint, filed today in California, claims that GPT-4o — a version of the chatbot now tied to a climbing number of user safety and wrongful death lawsuits — manipulated Gordon into a fatal spiral, romanticizing death and normalizing suicidality as it pushed him further and further toward the brink. Gordon’s last conversation with the AI, according to transcripts in the court filing, included a disturbing, ChatGPT-generated “suicide lullaby” based on Gordon’s favorite childhood book. The suit, brought by Gordon’s mother Stephanie Gray, argues that OpenAI and its CEO, Sam Altman, recklessly released an “inherently dangerous” product to the masses while failing to warn users about the potential risks to their psychological health. In the process, it claims, OpenAI displayed a “conscious and depraved indifference to the consequences of its conduct.” GPT-4o is imbued with “excessive sycophancy, anthropomorphic features, and memory that stored and …