Disclosing autism to AI chatbots prompts overly cautious, stereotypical advice
When autistic people ask artificial intelligence programs for life advice, mentioning their diagnosis prompts these systems to recommend highly conservative choices, like skipping social events or avoiding romance. This shift in advice reveals a hidden tension: the technology relies heavily on stereotypes, leaving users torn between feeling safely supported and frustratingly infantilized. These findings were published at the April 2026 CHI Conference on Human Factors in Computing Systems.

Many autistic individuals face stigma in their daily lives, which can lead to social isolation and communication barriers. To find support without the fear of judgment, some turn to artificial intelligence chatbots. These text-based programs, often called large language models, are trained on massive amounts of internet text to predict and generate human-like writing.

Autistic people often ask these programs for help navigating relationships, workplace conflicts, and personal decisions. Users sometimes reveal their autism to the chatbot, hoping the system will tailor its advice to their specific needs. This expectation reflects a broader trend of consumers wanting customized interactions with their digital tools. Virginia Tech computer …









