Threat actors increasingly integrated artificial intelligence into cyber operations in the final quarter of 2025, accelerating reconnaissance, social engineering and malware development, according to a new report from Google Threat Intelligence Group.
The report, which updates findings published in November 2025, said AI tools delivered significant productivity gains for malicious actors and were used across multiple stages of attack workflows. Google said it has expanded mitigation efforts as AI-enabled threats evolve.
GTIG, working with Google DeepMind, identified a rise in model extraction attempts, also known as distillation attacks. These attacks seek to replicate proprietary AI model logic by abusing legitimate application programming interface access. While GTIG said it did not observe advanced persistent threat groups directly attacking frontier models or generative AI products, it detected and mitigated frequent extraction attempts by private-sector actors and researchers.
Government-backed threat groups increasingly used large language models for technical research, target development and the creation of more sophisticated phishing lures. Countries observed operationalizing AI in these activities included North Korea, Iran, China and Russia.
According to GTIG, model extraction attacks exploit authorized access to systematically probe AI systems and extract knowledge to train derivative models. Although knowledge distillation can have legitimate applications, unauthorized extraction from Google’s Gemini models violates company policies. Google said it disrupted these attempts worldwide and strengthened safeguards to protect intellectual property, including blocking more than 100,000 malicious prompts aimed at replicating Gemini’s reasoning capabilities.
The report said threat actors used AI to improve reconnaissance and social engineering, enabling faster and more personalized phishing campaigns that bypass traditional warning signs. Iranian-linked APT42, for example, used Gemini to research targets, craft persuasive personas and localize content. North Korean group UNC2970 applied similar techniques to defense-related targeting and tailored phishing.
GTIG also observed exploration of agentic AI capabilities to support malware development, penetration testing and automated coding. China-linked APT31 and UNC795 used AI tools for vulnerability analysis, code auditing and tool generation. Some malware families, including HONESTCUE, leveraged AI APIs to generate follow-on malicious code, while the COINBAIT phishing kit used AI-generated interfaces for credential harvesting.
The report noted the emergence of underground marketplaces offering AI tools for offensive use, including services that claimed to operate independent models but relied on commercial platforms. Misconfigured systems and exposed API keys contributed to a black market for AI resources. Google said it responded by disabling abusive accounts and monitoring exploitation pathways.
Google said it continues to invest in proactive defenses, including improved detection classifiers, asset takedowns and safety measures to limit misuse. The company also collaborates with industry partners to share intelligence, test defenses and develop secure AI frameworks. Experimental tools such as Big Sleep and CodeMender highlight AI’s potential for proactive vulnerability discovery and automated remediation.
GTIG said AI adoption by threat actors is advancing rapidly, increasing the sophistication of phishing, malware and reconnaissance operations. The group said it will continue to monitor and share intelligence on emerging risks, with indicators of compromise available to registered users to support threat-hunting efforts.
#Google #Warns #Rising #StateBacked #Hackers #Late