
AI Cloning Threat: Google Warns Hackers Are Attempting to Create Rogue Gemini Chatbots

Google says attackers are attempting “model extraction” and distillation attacks to clone its Gemini AI chatbot, raising concerns about rogue AI models and intellectual property theft.
UPDATED FEB 13, 2026
Google is warning about hackers cloning AI models to attack victims.

Google has raised fresh concerns about a new category of artificial intelligence threats — attempts by attackers to clone its Gemini AI chatbot through what it describes as “distillation” or model extraction attacks.

According to a detailed report shared by the company, certain actors have tried to manipulate Gemini into revealing internal details about how the model functions, with the goal of recreating or enhancing competing AI systems.


What Are “Distillation” or Model Extraction Attacks?

Google describes these incidents as “distillation attacks”: attackers bombard an AI system with large volumes of prompts, sometimes in the hundreds of thousands, to extract patterns, outputs and behavioural signals.

This process, often called model extraction, aims to reverse-engineer the AI’s capabilities by analysing its responses at scale.
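The mechanics can be sketched in a few lines. The snippet below is illustrative only: `query_chatbot` is a hypothetical stand-in for calls to a real chatbot API, and the "attack" is simply pairing each prompt with the model's reply to build a supervised training set for an imitation ("student") model.

```python
def query_chatbot(prompt: str) -> str:
    """Hypothetical stand-in for a call to a target chatbot's API."""
    return f"response to: {prompt}"

def harvest_pairs(prompts):
    """Collect (prompt, response) pairs at scale -- the raw material
    a distillation attack would use to train an imitation model."""
    return [{"prompt": p, "response": query_chatbot(p)} for p in prompts]

# A real extraction attempt would run this over hundreds of thousands
# of prompts; three are enough to show the shape of the data.
dataset = harvest_pairs(["What is DNS?", "Summarise this text", "Write a haiku"])
print(len(dataset))  # 3
```

Nothing here is exotic, which is part of why extraction is hard to stop: from the provider's side, each individual query looks like ordinary use.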

If successful, the extracted data could potentially be used to:

  • Build similar AI systems

  • Improve rival models

  • Replicate proprietary functionality

  • Reduce development costs for competitors


Why This Is a Bigger Concern

While AI security discussions have previously focused on prompt injection attacks or malicious code insertion, cloning an AI model represents a more strategic threat.

Instead of simply misusing an AI chatbot, attackers attempt to replicate its “intelligence layer” — including response styles, knowledge patterns and reasoning behaviours.

Google’s report suggests that such efforts may not be limited to individual hackers. In some cases, they could involve organised entities such as competing firms or independent researchers attempting large-scale extraction.


Risks for the AI Industry

For smaller AI companies without Google’s scale, security infrastructure or monitoring resources, such attacks could pose serious risks.

Developing advanced AI systems requires billions of dollars in infrastructure, research and training data. If malicious actors successfully extract core behaviour patterns, companies could lose competitive advantage or intellectual property.

The broader concern is industry-wide:

  • Stolen AI models could be redistributed

  • Rogue versions may operate without safeguards

  • Users may struggle to distinguish official tools from cloned systems


Why Users Should Be Aware

From a user perspective, distinguishing between legitimate AI platforms and rogue clones could become increasingly difficult.

If a cloned AI system mimics the behaviour of a trusted chatbot, users might unknowingly interact with an unauthorised version. This raises concerns around:

  • Data privacy

  • Credential theft

  • Misinformation risks

  • Manipulated outputs

Google emphasises that protecting AI systems from extraction attempts remains a priority as the technology continues to scale globally.
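The report does not detail Google's countermeasures, but one common first line of defence against bulk extraction is rate limiting: flagging or throttling any client whose query volume inside a time window exceeds what a human user could plausibly generate. A minimal sketch, with the window size and threshold as illustrative assumptions:

```python
from collections import defaultdict, deque
import time

class ExtractionGuard:
    """Flags clients whose request rate inside a sliding window exceeds
    a threshold -- one simple heuristic for spotting bulk-query
    extraction attempts. Thresholds here are illustrative, not real."""

    def __init__(self, max_requests=100, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.log = defaultdict(deque)  # client_id -> recent timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.log[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        return len(q) <= self.max_requests
```

A fixed per-client threshold alone is easy to evade by spreading queries across many accounts, so real deployments would layer this with behavioural analysis and other signals.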


A Growing Cybersecurity Challenge

Gemini is unlikely to be the only AI platform targeted by such attacks. As artificial intelligence adoption grows, so does the incentive to replicate high-performing systems.

The emergence of AI cloning attempts signals a shift in cybersecurity priorities — from protecting user data alone to safeguarding the models themselves.

For companies building AI tools, the message is clear: security must evolve alongside capability.
