ChatGPT quietly changes how it treats teen safety

OpenAI rolled out a quiet but significant safety upgrade on Tuesday, Jan. 20, in the US: an age-prediction system designed to identify when ChatGPT users are likely under 18.

The system reflects the growing concern among policymakers, parents, and educators over the misuse of generative AI by minors and the lack of sufficient safeguards.

Teen safety is at the forefront of the conversation, especially as generative AI is used to create sexualized images and to enable body shaming. Some young people also treat chatbots as “friends,” leading some parents and guardians to allege that OpenAI’s products are harmful to minors.

Notably, Australia has already banned social media use for children under 16, and several US states are curbing phone use in schools.

In August 2025, Matt and Maria Raine, parents of a 16-year-old boy who died by suicide, filed a lawsuit (warning: the lawsuit contains descriptions of self-harm) against OpenAI, claiming that ChatGPT helped their son Adam explore methods of suicide and even helped him write a suicide note. 

OpenAI has disputed the claims, alleging misuse of the AI’s capabilities, while announcing improved safety measures.

ChatGPT has over 700 million weekly active users.


More recently, Elon Musk’s Grok chatbot drew scrutiny after it generated 4.4 million images, of which approximately 1.8 million were publicly shared sexualized images of women, the New York Times reported.

Among OpenAI’s series of improvements meant to make ChatGPT safer for minors, this is one of the more practical ones.

According to OpenAI’s recent blog post, ChatGPT will no longer rely solely on self-reported ages, long a point of contention for social media companies. Instead, it will use behavioral and account-level signals, such as usage patterns and interaction types, to estimate whether a user is a minor.

The system will also weigh the time of day a user is active, alongside the user’s reported age, when estimating age.
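To make the idea concrete, here is a minimal, purely hypothetical sketch of how such signals might be combined into a minor/adult decision. OpenAI has not published its actual model, features, or weights; the signal names, weights, and threshold below are illustrative assumptions only.

```python
# Hypothetical sketch of signal-based age estimation.
# All features, weights, and the threshold are invented for illustration;
# OpenAI's real system is not public.

def estimate_is_minor(reported_age, school_hours_ratio, teen_topic_ratio,
                      threshold=0.5):
    """Combine account-level and behavioral signals into a binary decision.

    reported_age:       age entered at signup (may be missing or false)
    school_hours_ratio: fraction of activity during typical school hours
    teen_topic_ratio:   fraction of chats matching youth-associated topics
    """
    score = 0.0
    if reported_age is not None and reported_age < 18:
        score += 0.5                     # self-report is a strong signal
    score += 0.25 * teen_topic_ratio     # interaction-type signal
    score += 0.25 * school_hours_ratio   # time-of-day usage signal
    # When the evidence is ambiguous, default to the safer under-18
    # experience; misclassified adults can verify their age later.
    return score >= threshold
```

The design choice worth noting is the direction of the default: a borderline score lands the user in the restricted experience, and recovery goes through identity verification rather than another self-report.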

What changes for the under-18 users?

If ChatGPT finds that a user is under 18, it will automatically apply protections to prevent exposure to harmful content, in accordance with its safety guidelines for teens.

This includes blocking exposure to the following content:

  • Sexual, romantic, or violent role play
  • Depictions of self-harm
  • Content promoting extreme beauty standards, unhealthy dieting, or body shaming
  • Viral challenges that encourage risky behaviors in minors

OpenAI has been vocal about introducing safety protocols to protect minors. In September 2025, Sam Altman published “Teen safety, freedom, and privacy,” in which he discussed the age-prediction system.


As part of the new safety guidelines, OpenAI also introduced parental controls to enhance protection and ensure careful monitoring by adults. Parents can set quiet hours when minors cannot use ChatGPT. They can also personalize how model training works and receive alerts “if signs of acute distress are detected” by the system.

For users mistakenly placed in the under-18 experience, there is a simple way to reverse it: ChatGPT uses Persona, a secure third-party identity verification service, which lets users upload a selfie and restore their full access.

In the European Union, this feature will roll out in the upcoming weeks due to regulatory requirements.

The safety update arrives amid renewed criticism from Elon Musk, who has accused OpenAI of abandoning its original mission to remain solely a nonprofit. Altman has openly dismissed Musk’s arguments as “cherry-picked.”

OpenAI argues that the current structure and ongoing changes to its business model are necessary to balance access, safety, and scalability.
