ChatGPT’s free version is 26 times more likely to respond inappropriately to psychotic delusions
A recent study published in JAMA Psychiatry suggests that popular artificial intelligence chatbots tend to provide inappropriate or unhelpful responses when users type messages containing signs of psychosis. The findings provide evidence that relying on these digital tools for mental health advice might pose serious safety risks for people experiencing severe psychological distress.

Large language models are artificial intelligence systems designed to understand and generate human-like text. They work by analyzing vast amounts of internet data to learn which words are likely to come next in a given sentence. This statistical process lets the program recognize patterns in language and produce fluent conversational replies. Because these systems are built to mimic human interaction so closely, users can come to feel that the software actually understands them or feels genuine empathy toward them.

Since its public release in 2022, OpenAI’s ChatGPT has seen massive adoption across the globe. Recent surveys suggest that many adults use the chatbot regularly for general advice or tutoring. Because chatbots generate their …
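The next-word prediction described above can be sketched with a toy bigram model. This is a drastic simplification for illustration only: real large language models use neural networks trained on vast datasets, not raw word-pair counts, and the tiny corpus here is invented.

```python
from collections import Counter, defaultdict

# Invented toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which (bigram counts).
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Even this crude counting scheme produces plausible continuations on text it has seen, which hints at why far larger statistical models can generate replies that feel conversational despite having no understanding of their content.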

