Summary:
1. State-sponsored hackers from Iran, North Korea, China, and Russia are using advanced AI tools such as Google’s Gemini to accelerate cyberattacks, including phishing campaigns and malware development.
2. Government-backed threat actors are integrating artificial intelligence across the attack lifecycle, from reconnaissance and social engineering through to malware development.
3. Google’s Threat Intelligence Group (GTIG) has identified multiple instances of state-sponsored abuse, ranging from model extraction attacks to AI-integrated malware.
Article:
State-sponsored hackers are leveraging cutting-edge AI to enhance their cyberattacks, with threat actors from Iran, North Korea, China, and Russia using tools such as Google’s Gemini to advance their malicious campaigns. According to a recent report from Google’s Threat Intelligence Group (GTIG), these hackers are employing sophisticated AI to craft targeted phishing campaigns and develop complex malware.
The quarterly AI Threat Tracker report, published by GTIG, details how government-backed attackers are incorporating artificial intelligence into every stage of their operations, from initial reconnaissance through social engineering to malware creation. This shift toward AI-assisted attacks became increasingly evident in GTIG’s investigations during the final quarter of 2025.
Government-backed threat actors such as the Iranian group APT42 and the North Korean actor UNC2970 have been using AI models like Gemini to sharpen their reconnaissance and targeted social engineering operations. APT42, for example, has used AI to create authentic-looking email addresses for specific entities and to research targets in depth, building credible pretexts for first contact. UNC2970, meanwhile, has focused on defense-sector targeting, impersonating corporate recruiters and using AI to profile high-value targets and gather information crucial to its campaigns.
The report also highlights a surge in model extraction attacks, also known as “distillation attacks,” in which an adversary tries to steal a model’s intellectual property by querying it at scale and training a copy on the responses. While GTIG has not observed successful attacks of this kind against advanced AI models, it has detected several attempts to extract models for malicious purposes.
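To make the term concrete, here is a minimal sketch of distillation in its benign, well-documented machine-learning sense: a small “student” network is trained to imitate a larger “teacher” network’s output distribution. An extraction attack applies the same mechanism against a proprietary model, with API queries and responses standing in for the teacher. The models, data, and hyperparameters below are purely illustrative assumptions.

```python
# Minimal knowledge-distillation sketch (hypothetical models and data).
# A "student" network learns to match a "teacher" network's output
# distribution; an extraction attack abuses the same idea, seeing the
# teacher only through a public API.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
student = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution

for step in range(1000):
    # An attacker would substitute API queries and responses here.
    queries = torch.randn(32, 16)
    with torch.no_grad():
        teacher_logits = teacher(queries)

    student_logits = student(queries)
    # KL divergence between temperature-softened distributions is the
    # classic distillation loss (Hinton et al., 2015).
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The temperature parameter matters here: softening both distributions exposes the teacher’s relative confidence across classes, which carries far more information per query than hard labels alone, and is part of why high-volume API access is valuable to an attacker.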
GTIG has also observed the emergence of AI-integrated malware, such as the HONESTCUE malware, which calls Gemini’s API at runtime to generate functionality on demand rather than shipping it in the binary. Combined with sophisticated obfuscation techniques, this design is built to evade traditional detection methods, posing a significant threat to cybersecurity defenses.
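One defensive implication: malware of this class must reach a model provider’s endpoint at runtime, so flagging unexpected connections to LLM API hosts is a coarse (and certainly evadable) detection signal. A minimal sketch follows; the log schema, source hosts, and allowlist are hypothetical assumptions, not a real deployment.

```python
# Hypothetical sketch: flag rows in a proxy/DNS log where a
# non-allowlisted host contacts an LLM API endpoint, as a coarse
# signal for AI-integrated malware. The CSV schema is an assumption.
import csv

# Endpoints associated with hosted LLM APIs (non-exhaustive).
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}

# Hosts expected to call LLM APIs legitimately (hypothetical names).
ALLOWLISTED_SOURCES = {"build-server-01", "ml-workstation-07"}

def suspicious_llm_traffic(log_path: str) -> list[dict]:
    """Return log rows where a non-allowlisted host reaches an LLM API.

    Assumes a CSV log with 'src_host' and 'dst_host' columns.
    """
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["dst_host"] in LLM_API_HOSTS
                    and row["src_host"] not in ALLOWLISTED_SOURCES):
                hits.append(row)
    return hits

if __name__ == "__main__":
    for row in suspicious_llm_traffic("proxy_log.csv"):
        print(f"review: {row['src_host']} -> {row['dst_host']}")
```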
In response to these activities, Google has moved proactively to disrupt threat actors by disabling accounts and assets associated with malicious behavior. The company is also continually hardening its models and classifiers against misuse, emphasizing the importance of developing AI responsibly.
Overall, the findings underscore the evolving role of AI in cybersecurity, with defenders and attackers alike leveraging the technology to gain an edge. For enterprise security teams, particularly those in sectors and regions routinely targeted by state-sponsored actors, strengthening defenses against AI-augmented attacks is now essential to keeping pace with evolving cyber threats.