The Many Ways Chatbot Tools Can Manipulate Us
As we continue our headlong rush into a new “chatbot culture,” with Silicon Valley companies pushing AI assistants into virtually every corner of our lives, we are gaining real productivity and quality-of-life benefits. Yet just as apparent are the psychological risks of manipulation that arise from the very structure of large language model-based tools. We allow Big Tech to ignore or minimize these risks at our peril. At the very least, we should be informed and clear-eyed about these risks and how they are baked into the design of chatbot tools.

Some AI ethics concerns have already received considerable attention. Chatbot makers report declining fabrication rates, but because we are being urged to rely on these tools, error rates remain a problem. Even if Google’s AI Overview tool is accurate 9 times out of 10, as a recent analysis reported, it is still providing tens of millions of wrong answers every hour as it handles more than 5 trillion searches a year (Mickel et al., 2026). Ethicists …




