Google has revealed that its flagship artificial intelligence chatbot, Gemini, is under siege by what the company describes as “commercially motivated” actors attempting to systematically extract its core capabilities and logic. These attackers aren’t trying to break into Google’s systems or steal data directly. They’re doing something simultaneously simpler and more sophisticated: asking Gemini the same types of questions thousands of times over, building a detailed map of how the AI thinks, reasons, and responds. One documented campaign alone prompted Gemini more than 100,000 times. This isn’t espionage in the traditional sense. It’s methodical intellectual property theft happening in plain sight.
Understanding distillation attacks and model extraction
Distillation attacks represent a fascinating and troubling evolution in cyber threats. Rather than hacking into systems, attackers repeatedly query a publicly accessible AI chatbot to extract knowledge about how it operates internally. Each question and answer provides a data point. Thousands of question-and-answer pairs reveal comprehensive patterns. Eventually, attackers understand the underlying logic, the decision-making frameworks, and the reasoning patterns, essentially reverse-engineering how the AI functions.
Google calls this process “model extraction,” which is exactly what it is. Attackers are extracting the model—the foundational structure and logic—from Gemini by battering it with relentless queries. The scale of these attacks is staggering. We’re talking about coordinated campaigns where attackers submit hundreds of thousands of questions, each one designed to test specific aspects of how Gemini processes information, responds to edge cases, handles complex reasoning, and navigates uncertainty.
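To make the mechanics concrete, here is a minimal Python sketch of the query-and-collect loop at the heart of a distillation attack. Everything in it is illustrative: query_target_model is a hypothetical placeholder for whatever public chat API an attacker is hitting (it is not a real Gemini client), and the prompt templates and volumes are assumptions chosen to mirror the scale described above.

```python
# Illustrative sketch of the query-and-collect loop behind a distillation
# attack. query_target_model() is a hypothetical placeholder, not a real
# client for Gemini or any other product.
import json
import random


def query_target_model(prompt: str) -> str:
    """Stand-in for an HTTP call to a public chatbot API."""
    raise NotImplementedError("replace with a real API client to experiment")


# 1. Generate a large, systematic set of probing prompts that exercise
#    reasoning, edge cases, and uncertainty handling.
topics = ["math word problems", "contract summaries", "code review", "triage advice"]
prompts = [
    f"Explain step by step how you would handle this {topic} case, variant {i}."
    for topic in topics
    for i in range(25_000)  # 4 x 25,000 = 100,000 queries, the scale cited above
]
random.shuffle(prompts)

# 2. Harvest prompt/response pairs; each answer is one more data point about
#    how the target model reasons and responds.
corpus = []
for prompt in prompts:
    try:
        corpus.append({"prompt": prompt, "response": query_target_model(prompt)})
    except NotImplementedError:
        break  # placeholder client; a real campaign would retry, throttle, and rotate accounts

# 3. The harvested pairs become supervised fine-tuning data for a "student"
#    model trained to imitate the target's behavior.
with open("distillation_corpus.jsonl", "w") as f:
    for pair in corpus:
        f.write(json.dumps(pair) + "\n")
```

The point is not the specific prompts but the workflow: ask enough systematically varied questions, and the answers themselves become a training set that encodes much of the target model’s behavior.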
Who’s actually doing this
Google has indicated that the attackers are primarily private companies and research organizations seeking competitive advantages in the increasingly cutthroat AI landscape. The company acknowledged these attacks are occurring globally but notably declined to name specific perpetrators. This restraint makes sense—publicly accusing competitors of distillation attacks creates massive diplomatic and legal complications. But the implication is clear: multiple organizations across multiple countries are actively trying to clone Gemini.
The motivation is obvious. Building advanced AI systems requires enormous capital investment, sophisticated talent, and computational resources. If a competitor can extract the core logic from Google’s Gemini through distillation attacks, they essentially shortcut years of research and development. They gain insights into decision-making architectures, reasoning patterns, and response mechanisms without having to develop them independently.
The threat to all AI systems
Google’s threat intelligence leadership warned that Gemini is essentially the canary in the coal mine for broader AI security threats. As more companies develop custom large language models—particularly those trained on sensitive proprietary data—they become increasingly vulnerable to distillation attacks. A financial firm’s AI trained on decades of trading strategies. A pharmaceutical company’s model trained on confidential research data. A technology company’s system trained on proprietary algorithms. All of these become attractive targets for attackers willing to bombard them with hundreds of thousands of queries.
The vulnerability isn’t really a flaw in any specific system. It’s inherent to the nature of public-facing AI chatbots. To function, they must be accessible on the internet. To be useful, they must respond to diverse queries with detailed answers. This openness—the fundamental characteristic that makes them useful—is simultaneously the characteristic that makes them vulnerable to extraction attacks.
The intellectual property angle
Tech companies have invested billions developing advanced AI systems. Google, OpenAI, Anthropic, Meta, Microsoft—these organizations treat their AI models as invaluable intellectual property. The architecture, the training methods, the decision-making frameworks, the reasoning patterns—all of this represents years of research encoded into the system. Distillation attacks threaten all of this by essentially allowing competitors to learn how these systems work without doing the original development work themselves.
OpenAI previously accused Chinese competitor DeepSeek of conducting similar extraction attempts. The pattern is consistent: companies developing advanced AI see competitors using distillation techniques to enhance their own models. Each successful extraction makes competitive advantage harder to maintain because the core intellectual property gets copied through methodical querying.
What happens next in AI security
The rise of distillation attacks forces tech companies to make uncomfortable choices. They can implement stricter rate limiting, making it harder to submit hundreds of thousands of queries. But this makes the systems less useful for legitimate users. They can try to detect and block suspicious query patterns. But sophisticated attackers can disguise their intentions across distributed queries from different sources over extended timeframes.
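As a rough illustration of what those two defenses look like in practice, here is a short Python sketch combining a sliding-window rate limiter with a simple query-pattern heuristic. The thresholds and the templated-prompt check are assumptions made up for this example; they are not Google’s actual limits or detection logic.

```python
# Illustrative sketch of per-client rate limiting plus a crude check for
# extraction-style query patterns. All thresholds are made-up assumptions.
import time
from collections import Counter, defaultdict, deque

WINDOW_SECONDS = 3600           # inspect the last hour of traffic per client
MAX_QUERIES_PER_WINDOW = 500    # assumed ceiling for a legitimate user
TEMPLATE_SHARE_THRESHOLD = 0.6  # flag if most prompts share one opening template
MIN_SAMPLE = 100                # only apply the heuristic once there is enough data

history: dict[str, deque] = defaultdict(deque)  # client_id -> (timestamp, prompt)


def check_request(client_id: str, prompt: str) -> str:
    """Return 'allow', 'rate_limited', or 'flag_for_review' for one request."""
    now = time.time()
    window = history[client_id]
    window.append((now, prompt))

    # Evict entries that fell out of the sliding window.
    while window and window[0][0] < now - WINDOW_SECONDS:
        window.popleft()

    # Defense 1: blunt rate limiting, at the cost of heavy legitimate users.
    if len(window) > MAX_QUERIES_PER_WINDOW:
        return "rate_limited"

    # Defense 2: pattern detection. Extraction campaigns often sweep templated
    # prompts; flag clients whose recent prompts mostly share one opening.
    if len(window) >= MIN_SAMPLE:
        openings = Counter(p[:40] for _, p in window)
        if openings.most_common(1)[0][1] / len(window) > TEMPLATE_SHARE_THRESHOLD:
            return "flag_for_review"

    return "allow"
```

Even this toy version exposes the trade-off described above: tighten the thresholds and legitimate power users get blocked; loosen them and an attacker who spreads queries across many accounts and many days slips underneath.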
Most concerning is the implication for future AI systems. As companies build more specialized models trained on sensitive proprietary data, the value of distillation attacks increases. A hedge fund’s AI trained on confidential market analysis. A pharma company’s model trained on drug research. A manufacturing firm’s system trained on production optimization. All become potential targets because the extracted knowledge could provide massive competitive advantages to whoever acquires it.
The landscape of AI security is essentially being rewritten in real time, and distillation attacks represent one of the most fundamental challenges to protecting intellectual property in the age of accessible AI systems.

