State-linked hacking groups from several countries are reportedly using Google’s Gemini AI tools to support cyber operations, according to a new report from Google’s Threat Intelligence Group. The company says it has detected activity by groups tied to Russia, China, Iran, and North Korea involving the use of Gemini for reconnaissance, coding assistance, and social-engineering campaigns.

Most of the observed activity centered on information gathering. Attackers used the model to analyze public data, identify potential targets, and research vulnerabilities in organizations or individuals of interest. Google says Gemini was also used to help craft phishing messages and propaganda content, highlighting how AI tools can accelerate the preparation phase of cyberattacks.

In some cases, groups linked to China and Iran allegedly used Gemini for more advanced tasks, such as debugging exploit code and refining malware-related scripts. One reported incident involved attempts to develop a proof-of-concept exploit for a known software vulnerability. Google says such uses violate its terms of service and that it has restricted accounts associated with malicious activity.

Security experts have long warned that AI tools can be used for both defensive and offensive purposes. Large language models excel at analyzing massive data sets and generating code, making them useful for legitimate security research but also for attackers conducting reconnaissance or developing exploits. Because the underlying tasks often overlap, distinguishing between ethical researchers and malicious actors remains a major challenge for AI providers.

Google maintains that it is monitoring misuse of its AI systems and blocking access where it can confidently attribute activity to harmful actors. The company says it will continue to refine safeguards as AI tools become more widely available and integrated into everyday workflows.