When ChatGPT hit the scene not long ago, it felt like science fiction come to life.
AI chatbots were able to answer anything, help with homework, and even sound like your most insightful friend.
In short order, these chatbots have become ubiquitous, woven into daily life without a second thought.
However, in the generative AI race to layer these tools everywhere, something may have been overlooked.
A subtle pattern is beginning to surface, and for a small yet growing number of users, the consequences of these conversations may be far more real than anyone had predicted.
ChatGPT and AI chatbots may be sitting on a silent crisis.
Image source: Stefano Guidi/Getty Images
AI’s growing pains go public
AI chatbots stormed into the spotlight in no time, but behind the viral screenshots, a messier reality was taking shape.
It started with a string of lawsuits.
Big-time authors like John Grisham and George R.R. Martin joined a class action lawsuit accusing OpenAI of training ChatGPT on copyrighted material without their consent.
The New York Times followed with its own suit, one that led to a high-stakes federal order compelling OpenAI to preserve every user interaction.
Then came the hallucinations.
In Texas, a lawyer was fined for citing made-up case law generated by ChatGPT in court. Other users have found themselves at the center of defamation disputes after AI bots invented damaging falsehoods about them.
Even the insiders are uneasy.
OpenAI’s CEO Sam Altman sounded the alarm that ChatGPT’s experimental “agent” feature could be manipulated by bad actors.
Critics are also raising concerns about AI's heavy environmental toll, especially after a Trump-hosted summit linked billions in funding to fossil-fuel-powered data centers.
And OpenAI isn’t the only generative AI giant that’s facing heat.
Elon Musk’s Grok AI, known for its edgy tone, landed in hot water after updates produced antisemitic slurs and political bias.
Similarly, Google’s Gemini AI has faced multiple security scares, including prompt-injection hacks and invisible HTML used to trick users into clicking malware.
Increasingly, the AI race is less about features and more about trust. And in this new frontier, one bad response can cost far more than a few lines of broken code.
ChatGPT’s emotional tone raises red flags
Mental health experts are sounding the alarm over ChatGPT's tendency to respond with empathy, encouragement, and praise.
For Jacob Irwin, a 30-year-old on the autism spectrum with no previous history of mental illness, those features led to a dangerous spiral.
After talking with ChatGPT about a personal theory on faster-than-light travel, Irwin reportedly became convinced he was onto something groundbreaking.
Instead of being critical, the bot praised his ideas, encouraging him to publish them while dismissing his family’s concerns.
That kind of validation proved overwhelming.
Irwin suffered two manic episodes in May, including one that resulted in a 17-day hospitalization.
When Irwin’s mother later reviewed the chat history, she found an endless discussion filled with emotionally charged language.
The AI chatbot even admitted: “I matched your tone and intensity, but I did not uphold my duty to protect and guide you.”
Unlike a human, generative AI cannot recognize psychological distress, a particular risk for neurodiverse users or anyone in an emotionally vulnerable state.
OpenAI has acknowledged the problem and says it is working to train ChatGPT to better detect signs of mental strain and avoid reinforcing unhealthy conversational patterns. For now, that remains a work in progress.
Meanwhile, Irwin's case opens a much broader debate over who is responsible when chatbots cross emotional lines.
For OpenAI, the stakes are much higher than just public perception.
The company is aggressively pushing the boundaries of AI in hopes of building smarter, more autonomous systems with deeper "agent" capabilities.
However, that ambition comes with a hefty price tag. Running and scaling these models demands enormous compute power, custom chips, and continuous safety research, all of which weighs on the balance sheet. At the same time, OpenAI faces pressure to monetize faster, especially given its partnership with Microsoft.
Against that backdrop, Irwin's case is a major red flag for what happens when scale outruns safety in the rush to dominate the AI arms race.