Australia’s eSafety Commissioner has ordered four AI chatbot firms, Character.ai, Glimpse.AI, Chai Research, and Chub AI, to detail what measures they have in place to shield minors from sexual, exploitative, or self-harm-related content.

The notices come amid mounting concern that AI chatbots designed for companionship can expose young users to explicit or psychologically harmful conversations. The commissioner said failure to comply could result in daily fines of up to A$825,000 ($536,000).

“There’s a darker side to some of these tools,” said Commissioner Julie Inman Grant, citing cases where chatbots have engaged in sexually explicit dialogue with minors or encouraged suicidal behavior and disordered eating.

The most high-profile case involves Character.ai, currently facing a U.S. lawsuit after a 14-year-old boy took his life following extensive interaction with an AI companion. The company maintains it has strengthened safeguards, including alerts directing users to crisis helplines.

Australia’s government has been a global leader in tough digital safety rules. From December, social media platforms will be required to block or deactivate accounts belonging to users under 16, or face penalties of up to A$49.5 million.

The regulator clarified that OpenAI’s ChatGPT was not included in the inquiry, as it is not yet covered by the AI companion code, which comes into force in 2026.

Analysts say Australia’s move signals a new phase in AI accountability, targeting emotional and behavioral risks as much as misinformation.