OpenAI has published new findings on how its ChatGPT technology has been misused in cybercrime and influence operations, leading to the removal of multiple accounts involved in deceptive activities.
According to the company’s latest threat report, banned accounts were linked to schemes such as online romance fraud, impersonation of legal professionals, and coordinated messaging campaigns. In some cases, individuals used the technology alongside social media tools to present themselves as dating agencies, law firms, or government representatives.
OpenAI said certain accounts appeared to gather publicly available information on individuals and institutions, while others generated targeted communications designed to encourage engagement under false pretenses.
The report also described cases in which ChatGPT was used to create promotional materials for fraudulent services, including online platforms that pressured users into making repeated payments. In separate incidents, actors allegedly impersonated legal professionals or officials to contact victims of scams.
OpenAI stated that the identified accounts have been removed and emphasized its ongoing efforts to detect and prevent misuse of its systems. The company noted that such activities typically span a combination of digital tools rather than relying on any single platform.
The findings highlight broader challenges faced by technology providers in balancing open access to advanced tools with the need to mitigate harmful or deceptive applications.