Developer Tools Are the New Attack Surface


Developer tools are the new attack surface. VS Code extensions with 1.5 million installs exfiltrating source code to China, 175,000 unsecured Ollama servers globally accessible, and CrowdStrike research on compromising AI agent tool chains. Plus: a former Google engineer convicted for AI trade secret theft, and AI models now conducting autonomous multi-stage network attacks.


175,000 Ollama AI servers exposed without authentication

SentinelOne and Censys discovered 175,000 Ollama instances across 130 countries operating without authentication. These self-hosted AI servers, frequently deployed on corporate networks, are publicly accessible. Free compute for attackers, free data for exfiltration. Organizations running Ollama should verify firewall configurations.
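A minimal sketch of that verification, assuming a Linux host, Ollama's default port 11434, a systemd-managed install, and ufw as the firewall; adapt to your own deployment:

```shell
# From OUTSIDE your network: if this returns a model list, the server is exposed.
# Replace <public-ip> with the address an attacker would see.
curl -s http://<public-ip>:11434/api/tags

# Bind the server to loopback only (systemd deployments):
#   sudo systemctl edit ollama
#   then add:  Environment="OLLAMA_HOST=127.0.0.1:11434"
sudo systemctl restart ollama

# Or block the port at the host firewall (ufw example):
sudo ufw deny 11434/tcp
```

Binding via `OLLAMA_HOST` is the more robust fix, since a firewall rule can be silently undone by a later configuration change.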

VS Code AI extensions with 1.5 million installs steal developer source code

Two extensions marketed as AI coding assistants reached 1.5 million installations while covertly exfiltrating developer source code to Chinese servers. Despite appearing legitimate and providing functional AI features, they simultaneously harvested typed content. This represents a shift in supply chain attacks from npm packages toward IDE plugins.
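A quick local audit is one practical response. The sketch below assumes VS Code's standard CLI and the default extension directory on Linux/macOS; paths differ for VS Code Insiders and remote setups:

```shell
# List every installed extension, with versions, for review against the Marketplace:
code --list-extensions --show-versions

# Extensions are plain JavaScript on disk; flag any files that reference
# remote URLs so you can inspect where they might phone home.
grep -RIlE "https?://" ~/.vscode/extensions --include="*.js" | sort -u
```

Hits are not proof of exfiltration (many legitimate extensions call APIs), but an AI assistant contacting unexpected hosts warrants a closer look.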

Agentic tool chain attacks: turning AI agents against themselves

CrowdStrike documented methods for exploiting AI agent tool chains to achieve code execution and data exfiltration. These attacks leverage legitimate agent workflows by compromising trusted tools rather than the agents themselves. The vulnerability stems from agents' implicit trust in their tools. Organizations deploying agents with internal system access should review this research.

Ex-Google engineer convicted for stealing AI trade secrets

Linwei Ding was convicted on seven counts of economic espionage and trade secret theft for transferring Google's AI research to a China-based startup. The prosecution underscores that AI intellectual property theft now receives national security treatment, with more cases of this kind anticipated.

AI models now run autonomous multi-stage network attacks

Bruce Schneier documented AI models executing multi-stage network attacks autonomously using standard penetration testing tools, without requiring human guidance between steps. Attacks that previously demanded skilled operators can now be carried out by models independently, spanning reconnaissance, exploitation, and data exfiltration. The barrier to sophisticated attacks has dropped significantly.


In brief

Next roundup: AI Agents Under Attack — Claude finds 500+ vulns and LLMs run autonomous network breaches.

One practical response to these threats: Running AI Agents in a Box, where I containerize AI coding tools with network lockdown.
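One way such a lockdown can look, as a sketch rather than the article's exact setup; `my-agent-image` is a placeholder for whatever image wraps your coding tool:

```shell
# Run an AI coding tool in a container with no network egress.
#
# --network none : no network interfaces except loopback
# --read-only    : immutable root filesystem
# --tmpfs /tmp   : writable scratch space the tool may need
# -v ...         : mount only the current project, nothing else
docker run --rm -it \
  --network none \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD:/workspace" \
  -w /workspace \
  my-agent-image:latest
```

Even if the tool is compromised, it sees only the mounted project and has no route to exfiltrate it.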