A New Zealand-based startup working with major AI firms is developing a tool to address the risk of violent extremism on chatbot platforms.
ThroughLine, which has collaborated with OpenAI, Anthropic and Google, is expanding its existing crisis-response systems to include interventions for users showing signs of extremist behavior.
The proposed solution would combine chatbot-based engagement with referrals to human support services, similar to how the company currently handles cases related to self-harm, domestic violence and mental health issues. The system relies on a global network of helplines to connect users with real-world assistance.
ThroughLine is also in discussions with the Christchurch Call, an international initiative launched after New Zealand’s 2019 terrorist attack, to guide the tool’s development.
The initiative comes amid rising scrutiny of AI platforms, which face legal and regulatory pressure over claims they have failed to prevent harmful or violent behavior. Recent cases have intensified calls for stronger safeguards and more proactive intervention systems.
Developers say the tool is still in testing, with no confirmed release timeline. Key challenges include ensuring effective follow-up support and balancing intervention against privacy concerns, particularly when handling sensitive user behavior.
The effort reflects a broader shift in the AI industry toward integrating safety mechanisms that go beyond content moderation, focusing instead on user behavior and early intervention.