Gemini as a Hacker's Assistant
The Google Threat Intelligence Group's report details how hackers are leveraging Gemini, a large language model (LLM), to streamline and improve their operations. This isn't about entirely new attack methods; it's about making existing ones faster and more efficient. The report highlights that hackers are using Gemini for tasks ranging from target profiling and open-source intelligence (OSINT) to generating phishing lures and translating text [1].
Expanding Attack Vectors
The applications are diverse. A China-based actor used Gemini for debugging, research, and technical guidance related to intrusions. Another instance involved a Chinese-linked group creating an expert cybersecurity persona to automate vulnerability analysis and develop targeted test plans [1].
Model Extraction: Cloning Gemini's Brain
Beyond direct usage, Google identified "model extraction" attempts. These involve attackers with authorized API access sending a barrage of prompts—in one case, over 100,000—to replicate Gemini's behavior and reasoning [1]. The goal is to distill, or recreate, the model's functionality in order to train a separate, potentially malicious AI. This puts both commercial interests and intellectual property at risk.
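From a defender's perspective, the telltale signal of a distillation attempt is raw prompt volume from a single authorized account. The sketch below is a minimal, illustrative heuristic only; the account IDs, log format, and threshold are assumptions (the threshold echoes the 100,000-prompt volume the report cites), and a production system would also weigh prompt similarity, timing, and coverage of the model's output space.

```python
from collections import Counter

# Volume observed in the reported campaign; purely illustrative here.
EXTRACTION_THRESHOLD = 100_000

def flag_extraction_suspects(prompt_log, threshold=EXTRACTION_THRESHOLD):
    """Return account IDs whose total prompt volume meets the threshold.

    prompt_log is assumed to be an iterable of (account_id, prompt) pairs
    aggregated from API access logs.
    """
    counts = Counter(account for account, _prompt in prompt_log)
    return {acct for acct, n in counts.items() if n >= threshold}

# Synthetic log: one account far above the threshold, one well below it.
log = [("acct-1", "p")] * 120_000 + [("acct-2", "q")] * 500
print(flag_extraction_suspects(log))  # {'acct-1'}
```

Volume alone produces false positives (legitimate heavy users exist), which is why real classifiers, like the targeted defenses Google describes, combine it with behavioral signals.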
Google's Response and Limitations
Google states it has taken action by disabling abusive accounts and implementing targeted defenses within Gemini's classifiers. It also continues to test and reinforce its safety guardrails. However, the report suggests this is an ongoing battle, with attackers constantly seeking new ways to exploit the technology.
"The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets," Google said in the report [1].
Who's Using Gemini for Attacks?
The Google Threat Intelligence Group (GTIG) identified state-backed groups from China, Iran, North Korea, and Russia using Gemini [1]. These groups are employing Gemini for reconnaissance, phishing, and even post-compromise activities. One North Korean group, UNC2970, used Gemini to synthesize open-source intelligence (OSINT) and profile high-value targets [2].
Hiding in Plain Sight
Hackers also abuse the public sharing features of AI platforms like Gemini and OpenAI's ChatGPT to host deceptive social engineering content. Techniques like 'ClickFix' trick users into manually executing malicious commands, with the instructions hosted on trusted AI domains so they slip past security filters.
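Because the lure links point at legitimate AI domains, they evade simple reputation-based filters; one mitigation is to flag share links to public AI conversation pages in inbound messages for closer inspection. The sketch below is a minimal illustration, and the URL patterns are assumptions about share-link formats rather than a confirmed or exhaustive list.

```python
import re

# Illustrative share-link patterns for public AI conversation pages.
# These formats are assumed for the example and are not exhaustive.
SHARE_LINK_PATTERNS = [
    r"https://gemini\.google\.com/share/\w+",
    r"https://chatgpt\.com/share/[\w-]+",
]

def find_ai_share_links(message: str) -> list[str]:
    """Return any AI-platform share links found in a message body."""
    hits = []
    for pattern in SHARE_LINK_PATTERNS:
        hits.extend(re.findall(pattern, message))
    return hits

email_body = "Fix your audio driver: https://gemini.google.com/share/abc123"
print(find_ai_share_links(email_body))  # ['https://gemini.google.com/share/abc123']
```

A flagged link is not proof of abuse, since most shared conversations are benign; the point is to route such messages to secondary scanning rather than block them outright.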
What's Next
Expect to see continued cat-and-mouse games between AI developers and malicious actors. As AI models become more powerful, so too does the potential for abuse. Enhanced detection and prevention methods will be critical, as will industry-wide collaboration to share threat intelligence and best practices.
Why It Matters
- Tempo: The most significant impact is the accelerated pace of attacks. By automating tasks like vulnerability analysis and phishing lure generation, hackers can significantly reduce the time between initial reconnaissance and actual damage [1].
- Model Extraction: Attempts to replicate Gemini's capabilities through extensive prompting threaten intellectual property and could seed the creation of malicious AI models. One such "distillation" campaign involved over 100,000 prompts [1].
- Evasion: The Honestcue malware leveraged Google Gemini’s API to dynamically generate and execute malicious C# code in memory, showcasing how threat actors exploit AI to evade detection [1].
- Accessibility: The use of AI democratizes sophisticated attack techniques, potentially enabling less skilled actors to launch more complex campaigns. Nation-state actors are already using it for reconnaissance and social engineering [2].
- Defensive AI: The rise of AI-powered attacks necessitates the development of equally sophisticated AI-powered defenses. Some companies are already developing AI models for vulnerability scanning, reconnaissance, and automation [1].





