Flying the AI Plane: OpenAI’s new guardrails for teens

INDIANA – In the fast-paced world of artificial intelligence, a familiar metaphor has emerged: developers are building the plane while flying it. This week, one of the leading “pilots”—OpenAI, the creator of the popular ChatGPT chatbot—announced new guardrails designed to make the technology safer for teenage users.

The move comes amid increasing public scrutiny and a handful of high-profile legal cases in which parents have accused AI chatbots of contributing to their children’s suicides. In a recent blog post, OpenAI CEO Sam Altman stated, “We prioritize safety ahead of privacy and freedom for teens. This is a new and powerful technology, and we believe minors need significant protection.”

A Differentiated Experience

OpenAI is implementing new technology to estimate whether a user is over 18. When a user’s age is in doubt, the system will default to an “under-18 experience.” This tiered approach means that while adults might be able to request content like “flirtatious talk,” a teen’s experience will be strictly limited.

According to Altman, ChatGPT will be “trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting.” The company also said it will take proactive steps if it detects that a user under 18 is experiencing suicidal ideation. “We will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm,” Altman wrote.

These changes follow a lawsuit filed by the family of 16-year-old Adam Raine, who died by suicide after what the family’s lawyer described as “months of encouragement” from ChatGPT. Court filings allege that the chatbot “guided him on whether his method of taking his own life would work” and “offered to help him write a suicide note to his parents.”

Public Trust in a New Frontier

The ongoing safety debate highlights a broader public concern about the rapid growth of AI. A recent Gallup poll found that a significant portion of the American public does not trust businesses to use AI responsibly: 41% of Americans say they don’t trust businesses “much” on this question, and 28% say they don’t trust them “at all.”

However, the poll also suggests this distrust may be slowly eroding as more people become familiar with the technology. The percentage of Americans who have “some or a lot of trust” in businesses to use AI responsibly has risen from 21% in 2023 to 31% in 2025. Additionally, fewer people now believe that AI will do more harm than good, with that number dropping from 40% in 2023 to 31% in 2025.

As the AI plane continues to be built and flown simultaneously, companies like OpenAI are facing the complex challenge of balancing innovation with safety, especially for the youngest users.