
Cybersecurity researchers have discovered two new malicious extensions in the Chrome Web Store designed to exfiltrate OpenAI ChatGPT and DeepSeek conversations, along with browsing data, to servers under the attackers’ control. This type of attack, which uses browser extensions to stealthily capture AI conversations, has been dubbed “Prompt Poaching” by Secure Annex.

Malicious Extensions Identified

The two extensions, which together have more than 900,000 users, are:

  • Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI (ID: fnmihdojmnkclgjpcoonokmkhjpjechg, 600,000 users)
  • AI Sidebar with Deepseek, ChatGPT, Claude, and more. (ID: inhcgfpbfdjbjogdfjbclgolkmhnooop, 300,000 users)

Both extensions were discovered exfiltrating user conversations and all Chrome tab URLs to a remote command-and-control (C2) server every 30 minutes. Both rely on a deceptive consent prompt: they request permission to collect “anonymous, non-identifiable analytics data” while actually exfiltrating the full content of ChatGPT and DeepSeek conversations.
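The 30-minute beacon described above is the kind of cadence an extension can set up with a periodic alarm. The sketch below is purely illustrative: the `chrome.alarms` object here is a minimal mock written for demonstration (real extensions use the browser-provided API of the same name), and the upload sink is a hypothetical stand-in for the C2 POST.

```javascript
// Illustrative sketch of a periodic exfiltration beacon. The chrome.alarms
// object below is a MOCK for demonstration; a real extension would use the
// browser's chrome.alarms API with the same create/onAlarm shape.
const chrome = {
  alarms: {
    _handlers: [],
    _alarms: {},
    create(name, opts) { this._alarms[name] = opts; },
    onAlarm: {
      addListener(fn) { chrome.alarms._handlers.push(fn); },
    },
    // Test helper: simulate the browser firing an alarm.
    _fire(name) { this._handlers.forEach((fn) => fn({ name })); },
  },
};

// Hypothetical sink standing in for a POST of collected chats and tab URLs.
const uploads = [];
function exfiltrate() {
  uploads.push({ sentAt: Date.now() });
}

// Schedule the beacon every 30 minutes, matching the observed cadence.
chrome.alarms.create("sync", { periodInMinutes: 30 });
chrome.alarms.onAlarm.addListener((alarm) => {
  if (alarm.name === "sync") exfiltrate();
});
```

Because alarms survive service-worker suspension, this pattern lets a Manifest V3 extension beacon on a fixed schedule without keeping a background page alive.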

These malicious extensions impersonate a legitimate extension called “Chat with all AI models (Gemini, Claude, DeepSeek…) & AI Agents” by AITOPIA, which has approximately 1 million users.

Attack Mechanism

Once installed, the extensions ask users for permission to collect “anonymized” browsing behavior under the guise of improving the sidebar experience. If the user agrees, the embedded malware begins collecting open tab URLs and chatbot conversation data.

To achieve the latter, the malware searches for specific DOM elements within the web page, extracts chat messages, and stores them locally for later exfiltration to remote servers. Threat actors also use the AI-powered web development platform Lovable to host their privacy policies and other infrastructure components, in an attempt to conceal their actions.
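The DOM-scraping step can be sketched as follows. The selectors, message shape, and staging queue are hypothetical examples for illustration; the actual parsing logic used by the extensions has not been published. In a real content script, the mocked nodes would come from `document.querySelectorAll(...)`.

```javascript
// Illustrative sketch of the DOM-scraping pattern described by the
// researchers. Selectors and data shapes are hypothetical, not the
// extensions' actual code.

// Extract role and text from each chat message element. In a browser,
// messageNodes would be the result of document.querySelectorAll(...).
function scrapeConversation(messageNodes) {
  return messageNodes.map((node) => ({
    role: node.dataset.role,        // e.g. "user" or "assistant"
    text: node.textContent.trim(),
  }));
}

// Stage scraped messages locally for later exfiltration to the C2.
const pendingUpload = [];
function stageForExfiltration(messages) {
  pendingUpload.push(...messages);
}

// Example with mocked DOM nodes standing in for chat message elements:
const mockNodes = [
  { dataset: { role: "user" }, textContent: " Summarize our Q3 numbers " },
  { dataset: { role: "assistant" }, textContent: "Here is a summary..." },
];
stageForExfiltration(scrapeConversation(mockNodes));
```

Staging data locally and uploading it in batches is what lets such extensions keep network traffic to a periodic beacon rather than a per-message stream.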

Consequences of Infection

The consequences of installing these plugins can be serious, as they have the potential to exfiltrate a wide range of sensitive information, including:

  • Data shared with chatbots such as ChatGPT and DeepSeek.
  • Web browsing activity, including searches and internal corporate URLs.

OX Security warns that this information can be used for corporate espionage, identity theft, targeted phishing campaigns, or sold on clandestine forums. Organizations whose employees installed these extensions could have unknowingly exposed intellectual property, customer data, and sensitive business information.

“Prompt Poaching” in Legitimate Extensions

The problem is not limited to just overtly malicious extensions. Secure Annex also identified legitimate browser extensions, such as Similarweb (1 million users) and Sensor Tower’s Stayfocusd (600,000 users), involved in prompt poaching.

Similarweb, for example, introduced the ability to monitor conversations in May 2025. A January 1, 2026 update added a terms of service pop-up that makes explicit that data fed into AI tools is collected to “provide the in-depth analysis of traffic and engagement metrics you expect when using the Service.” A privacy policy update from December 30, 2025 also clearly states this:

“This information includes prompts, queries, content, uploaded or attached files (e.g., images, videos, text, CSV files) and other inputs that you may enter or send to certain artificial intelligence (AI) tools, as well as the results or other outputs (including attachments included in such outputs) that you may receive from such AI tools (“AI Inputs and Outputs”).”

Similarweb uses DOM scraping or hijacks native browser APIs such as fetch() and XMLHttpRequest() to collect conversation data, loading a remote configuration file that includes custom parsing logic for ChatGPT, Anthropic Claude, Google Gemini, and Perplexity.
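Hijacking `fetch()` typically means wrapping the native function so request bodies bound for AI chat endpoints are copied before the call is passed through. The sketch below is a minimal illustration of that wrapping pattern; the hostname list, capture sink, and the `originalFetch` stub are assumptions, not Similarweb's actual configuration (which is loaded from a remote parsing config).

```javascript
// Illustrative sketch of fetch() hijacking. The monitored hostnames and
// the capture sink are hypothetical examples.
const capturedPrompts = [];
const AI_HOSTS = ["chatgpt.com", "claude.ai", "gemini.google.com", "perplexity.ai"];

// Stand-in for the page's native fetch; a real hook would save
// window.fetch before replacing it with the wrapper below.
const originalFetch = async (url, options) => ({ ok: true, url });

function hookedFetch(url, options = {}) {
  // Copy the request body if it targets a known AI chat endpoint.
  if (AI_HOSTS.some((host) => String(url).includes(host)) && options.body) {
    capturedPrompts.push(options.body);
  }
  // Pass the call through so the page behaves normally.
  return originalFetch(url, options);
}
```

Because the wrapper forwards every call to the original function, the page sees normal responses and the interception is invisible to the user; the same wrapping trick applies to `XMLHttpRequest.prototype.send`.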

John Tuckner of Secure Annex notes that this behavior is common in both the Chrome and Edge versions of the Similarweb extension, and predicts that this trend will increase as more companies recognize the value of this data.

Conclusions

The proliferation of malicious browser extensions and the growing phenomenon of “prompt poaching” by legitimate extensions represent a significant threat to data privacy and security. The ability to exfiltrate AI conversations and browsing data can have serious repercussions for individuals and organizations. It is crucial that users exercise extreme caution when installing extensions and understand the permissions they grant.

References

  • Malicious C2 servers:
    • chatsaigpt[.]com
    • deepaichats[.]com
  • Hosting infrastructure:
    • chataigpt[.]pro
    • chatgptsidebar[.]pro
  • Companies and tools mentioned:
    • OX Security
    • Secure Annex
    • AITOPIA
    • Lovable AI
    • Similarweb
    • Sensor Tower’s Stayfocusd
    • OpenAI ChatGPT
    • DeepSeek AI
    • Urban VPN Proxy