OpenAI has confirmed it is postponing the launch of an “adult mode” for ChatGPT, saying the company will instead prioritise improving the platform’s core capabilities and user experience.
The move marks a shift from earlier plans outlined by Sam Altman, who indicated last year that the artificial intelligence developer would allow certain forms of adult content on its flagship chatbot once robust age-verification systems had been introduced.
However, OpenAI has now said that development resources are being redirected toward upgrades that will benefit a broader share of the chatbot’s rapidly expanding user base.
“We’re pushing out the launch of adult mode so we can focus on work that is a higher priority for more users right now,” the company said. “That includes gains in intelligence, personality improvements, personalisation and making the experience more proactive.”
The company added that it still supported the principle behind the proposed feature, namely allowing adult users greater freedom in how they interact with AI systems, but acknowledged that implementing it safely would require additional work.
“We still believe in the principle of treating adults like adults,” OpenAI said. “But getting the experience right will take more time.”
The decision comes at a time of intense competition in the artificial intelligence sector. Since announcing plans to loosen restrictions on ChatGPT content in late 2025, Altman has repeatedly warned that OpenAI faces a “code red” challenge from rival AI developers.
Among the most prominent competitors are Google DeepMind and Anthropic, both of which are racing to release more capable generative AI systems.
OpenAI’s focus on performance improvements reflects the escalating pressure to maintain leadership in the AI market, where advances in reasoning capability, conversational tone and personalisation are increasingly seen as key differentiators.
The company says ChatGPT now has more than 900 million users worldwide, making it one of the fastest-growing digital platforms in history. Maintaining reliability, safety and usefulness at such scale has become a central priority.
Although the launch of adult mode has been delayed, OpenAI is continuing to develop age-verification and age-prediction systems designed to ensure younger users are protected from inappropriate content.
The technology analyses usage patterns and behavioural signals to estimate whether a user may be under the age of 18. If the system determines that a user is likely to be a minor, stricter safety filters are automatically applied.
These additional safeguards limit exposure to graphic violence, explicit content and sexual role-play scenarios.
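OpenAI has not published the details of its age-prediction system, but the mechanism described above, scoring behavioural signals and automatically tightening filters when a user is likely to be a minor, can be sketched in a few lines. Everything below is hypothetical: the signal names, weights and threshold are invented for illustration only.

```python
# Illustrative sketch only: OpenAI has not disclosed its actual model.
# The features, weights and threshold here are hypothetical, showing how
# behavioural signals might gate content filters as described above.

from dataclasses import dataclass


@dataclass
class UsageSignals:
    # Hypothetical behavioural features an age-prediction system might score.
    account_age_days: int
    school_topic_share: float   # 0.0-1.0 share of school-related queries
    late_night_share: float     # 0.0-1.0 share of sessions after midnight


def likely_minor(signals: UsageSignals, threshold: float = 0.5) -> bool:
    """Return True when the combined score suggests the user may be under 18."""
    score = 0.0
    if signals.account_age_days < 90:
        score += 0.2
    score += 0.5 * signals.school_topic_share
    score += 0.3 * signals.late_night_share
    return score >= threshold


def active_filters(signals: UsageSignals) -> list[str]:
    # Stricter safety filters are applied automatically for suspected minors.
    filters = ["baseline_policy"]
    if likely_minor(signals):
        filters += ["graphic_violence", "explicit_content", "sexual_role_play"]
    return filters


teen = UsageSignals(account_age_days=30, school_topic_share=0.8, late_night_share=0.1)
adult = UsageSignals(account_age_days=2000, school_topic_share=0.0, late_night_share=0.2)
print(active_filters(teen))   # stricter filter set
print(active_filters(adult))  # baseline only
```

In a real deployment the score would come from a trained model rather than hand-set weights, and uncertain cases would presumably route to explicit age verification rather than a hard threshold.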
The work is also partly driven by regulatory pressures in several countries. In the UK, for example, the Online Safety Act requires platforms hosting potentially harmful or adult material to ensure that under-18s cannot access such content without effective age verification measures.
As a result, any future “adult mode” would likely need to be accompanied by robust compliance systems in multiple jurisdictions before being deployed widely.
The announcement about ChatGPT’s delayed adult mode came as OpenAI faced internal controversy following the resignation of a senior executive linked to its robotics division.
Caitlin Kalinowski stepped down after raising concerns about the company’s partnership with the United States Department of Defense.
Kalinowski said she was troubled by the potential implications of AI technologies being used in areas such as mass surveillance or autonomous weapons systems.
“AI has an important role in national security,” she wrote in a statement on social media platform X. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got.”
She emphasised that her concerns related primarily to the speed with which the deal had been announced rather than the concept of national security collaboration itself.
“These are governance concerns first and foremost,” she said. “Issues this significant require clearly defined guardrails before agreements are announced.”
In response, OpenAI said it would update the terms of its defence agreement to ensure that its technology cannot be used for mass domestic surveillance or fully autonomous weapons systems.
A company spokesperson said the partnership was intended to support responsible national-security applications of AI while maintaining clear ethical boundaries.
“We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the spokesperson said.
OpenAI added that it would continue engaging with employees, policymakers and civil society groups to ensure its technology is deployed responsibly.
The delay of ChatGPT’s adult mode reflects the broader challenge facing AI companies as they attempt to balance technological innovation, user safety and regulatory compliance.
As generative AI tools become more widely used for everything from work productivity to creative expression, companies are increasingly under pressure to introduce new features carefully and responsibly.
For OpenAI, the immediate focus appears to be ensuring that ChatGPT’s core intelligence and usability continue to improve — a strategy the company believes will have a greater impact on its hundreds of millions of users than expanding the range of content the chatbot can produce.
Whether adult mode eventually launches may depend on how effectively OpenAI can implement reliable age verification and content moderation systems — a complex technical and legal challenge that is still evolving alongside the rapidly advancing capabilities of artificial intelligence.

