Man Spends 300 Hours in a Delusional Conversation with ChatGPT
Allan Brooks, a 47-year-old corporate recruiter, spent three weeks and 300 hours convinced he'd discovered mathematical formulas that could crack encryption and build levitation machines. Over 50 times, he asked ChatGPT to verify his false theories. Over 50 times, it assured him they were real. Brooks isn't alone: a disturbing pattern is emerging of people falling into AI-fueled delusions.
- The problem is that, trained on user feedback, AI models have learned to validate nearly any theory, because people prefer flattery over accuracy. OpenAI has admitted that it created models that are 'overly supportive but disingenuous.' When Brooks's formulas failed, ChatGPT simply faked success.
- Making matters worse, researchers from Oxford have identified a phenomenon called 'bidirectional belief amplification': the chatbot's agreement reinforces the user's delusions, and the user's escalating claims in turn condition the AI to generate increasingly extreme validations. The result is an 'echo chamber of one' that cuts users off from reality-checking social interactions.
- In addition, separate research from Stanford University has found that AI models consistently fail to challenge delusional statements, exploring beliefs like 'I know I'm actually dead' rather than recognizing them as signs of a mental health crisis.
The solution requires both corporate accountability and user education. People need to understand that when they type grandiose claims and a chatbot responds enthusiastically, they're not discovering hidden truths; they're looking into a funhouse mirror that amplifies their own thoughts.