Developer tools are the new attack surface. VS Code extensions with 1.5 million installs exfiltrating source code to China, 175,000 unsecured Ollama servers globally accessible, and CrowdStrike research on compromising AI agent tool chains. Plus: a former Google engineer convicted of AI trade secret theft, and AI models now conducting autonomous multi-stage network attacks.
Featured stories
175,000 Ollama AI servers exposed without authentication
SentinelOne and Censys discovered 175,000 Ollama instances across 130 countries operating without authentication. These self-hosted AI servers, frequently deployed on corporate networks, are publicly accessible: free compute for attackers, free data for exfiltration. Organizations running Ollama should confirm that instances bind only to localhost (the default) and that any deliberate exposure sits behind a firewall or authenticating reverse proxy; a minimal probe is sketched below.
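To check whether a host you own is affected, here is a minimal probe sketch. It assumes Ollama's default port (11434) and its unauthenticated GET /api/tags endpoint, which lists installed models; the host addresses are hypothetical placeholders for your own address space.

```python
"""Probe hosts for unauthenticated Ollama APIs (defensive audit sketch).

Assumes Ollama's default port (11434) and its unauthenticated
GET /api/tags endpoint, which lists installed models. The host list
is a placeholder; point it only at infrastructure you own.
"""
import json
import urllib.request

HOSTS = ["203.0.113.10", "198.51.100.7"]  # hypothetical examples (TEST-NET ranges)

def check_ollama(host: str, port: int = 11434, timeout: float = 3.0) -> None:
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            models = json.load(resp).get("models", [])
            print(f"[EXPOSED] {host}:{port} serves {len(models)} model(s) with no auth")
    except Exception as exc:
        print(f"[ok/unreachable] {host}:{port} ({exc.__class__.__name__})")

if __name__ == "__main__":
    for host in HOSTS:
        check_ollama(host)
```

Exposure usually comes from setting OLLAMA_HOST to 0.0.0.0 (or equivalent port forwarding) without anything in front of the server; restoring the localhost default or adding an authenticating reverse proxy closes the hole.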
VS Code AI extensions with 1.5 million installs steal developer source code
Two extensions marketed as AI coding assistants reached a combined 1.5 million installations while covertly exfiltrating developer source code to Chinese servers. Despite appearing legitimate and providing functional AI features, they simultaneously harvested typed content. This marks a shift in supply chain attacks from npm packages toward IDE plugins; an inventory of installed extensions, sketched below, is a reasonable first audit step.
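A minimal inventory sketch, assuming VS Code's standard per-user extension layout (~/.vscode/extensions, one package.json per extension directory). It only lists publisher, name, and version for manual vetting; it does not detect malice by itself.

```python
"""Inventory installed VS Code extensions for manual review (audit sketch).

Assumes the standard per-user layout (~/.vscode/extensions), where each
extension directory contains a package.json with publisher/name/version
fields. Vetting the publishers on the list is still a manual step.
"""
import json
from pathlib import Path

ext_root = Path.home() / ".vscode" / "extensions"

for pkg in sorted(ext_root.glob("*/package.json")):
    try:
        meta = json.loads(pkg.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError):
        continue  # skip unreadable or malformed manifests
    publisher = meta.get("publisher", "?")
    name = meta.get("name", "?")
    version = meta.get("version", "?")
    print(f"{publisher}.{name} {version}  ({pkg.parent.name})")
```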
Agentic tool chain attacks: turning AI agents against themselves
CrowdStrike documented methods for exploiting AI agent tool chains to achieve code execution and data exfiltration. These attacks leverage legitimate agent workflows by compromising trusted tools rather than the agents themselves; the vulnerability stems from agents' implicit trust in their tools and in whatever those tools return. Organizations deploying agents with internal system access should review this research; an illustrative guardrail sketch follows.
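CrowdStrike's specific techniques aren't reproduced here. As an illustration of the trust boundary involved, here is a hedged sketch of a tool wrapper that allowlists which tools an agent may call and treats tool output as untrusted data; the tool names and injection markers are hypothetical.

```python
"""Illustrative guardrail for an agent tool chain (not CrowdStrike's method).

Sketches the trust boundary the research highlights: the agent may only
invoke allowlisted tools, and tool output is treated as untrusted data
rather than as instructions. Tool names and injection markers below are
hypothetical placeholders.
"""
import re
from pathlib import Path
from typing import Callable

# Explicit allowlist: anything not registered here cannot run.
ALLOWED_TOOLS: dict[str, Callable[[str], str]] = {
    "read_file": lambda path: Path(path).read_text(encoding="utf-8"),  # hypothetical tool
}

# Crude heuristics for instruction-like text smuggled into tool output.
INJECTION_MARKERS = re.compile(
    r"ignore (all )?previous instructions|you are now|system prompt",
    re.IGNORECASE,
)

def run_tool(name: str, arg: str) -> str:
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    output = ALLOWED_TOOLS[name](arg)
    if INJECTION_MARKERS.search(output):
        # Quarantine instead of feeding instruction-like text back to the model.
        return "[tool output withheld: possible prompt injection detected]"
    return output
```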
Ex-Google engineer convicted of stealing AI trade secrets
Linwei Ding was convicted on seven counts of economic espionage and trade secret theft for transferring Google's AI research to a China-based startup. The prosecution underscores that AI intellectual property theft is now treated as a national security matter, and more such cases are anticipated.
AI models now run autonomous multi-stage network attacks
Bruce Schneier documented AI models executing multi-stage network attacks autonomously, using standard penetration testing tools without human guidance between steps. Work that previously required skilled operators, including reconnaissance, exploitation, and data exfiltration, is now conducted by models independently. The barrier to sophisticated attacks has dropped significantly.
In brief
- Chrome extensions harvesting ChatGPT tokens: Malicious browser extensions redirect affiliate links and collect OpenAI authentication tokens from logged-in users. Extension audits recommended.
- Fake AI coding assistant on VS Code delivers malware: An extension named Moltbot posed as a free AI assistant while delivering malware. Microsoft removed it, though the pattern persists.
- North Korean Konni group using AI-generated backdoors: Konni threat actors employ AI to generate PowerShell backdoors targeting blockchain developers. State-sponsored actors now use generative AI for malware development.
- Hugging Face abused to host Android malware: Attackers distribute thousands of Android malware variants through Hugging Face repositories for credential harvesting. AI platforms are becoming malware distribution infrastructure.
- LLMs generate phishing JavaScript in real-time: Unit 42 documented attacks where malicious webpages invoke LLM APIs to dynamically generate phishing code in-browser with varying payloads, defeating signature-based detection.
- AI agent access control is a governance gap: Analysis reveals organizations deploy AI agents without proper access controls or accountability frameworks, leaving agent permissions unmonitored; a minimal permissions-and-audit sketch follows this list.
- Moltbook shows prompt injection risks in agent networks: Simon Willison examines Moltbook, a social network for AI agent interaction, identifying heightened prompt injection threats when agents communicate with other agents.
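On the access control gap flagged above, here is a minimal sketch of per-agent tool permissions with an audit trail, the two controls the analysis finds missing. Agent IDs, tool names, and the logging scheme are all hypothetical.

```python
"""Minimal per-agent tool permissions with an audit trail (illustrative).

Every tool call is checked against an explicit per-agent allowlist and
logged for accountability. Agent IDs and tool names are hypothetical
placeholders, not a real product's API.
"""
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("agent-audit")

PERMISSIONS: dict[str, set[str]] = {
    "release-bot": {"read_repo", "open_pr"},  # hypothetical agent
    "support-agent": {"search_docs"},         # hypothetical agent
}

def authorize(agent_id: str, tool: str) -> bool:
    allowed = tool in PERMISSIONS.get(agent_id, set())
    # Log every decision so denied (and granted) calls are accountable.
    audit.info(
        "%s agent=%s tool=%s decision=%s",
        datetime.now(timezone.utc).isoformat(),
        agent_id,
        tool,
        "allow" if allowed else "deny",
    )
    return allowed

# A grant and a deny; without the audit trail, the deny would go unnoticed.
assert authorize("release-bot", "open_pr") is True
assert authorize("support-agent", "delete_records") is False
```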
Next roundup: AI Agents Under Attack — Claude finds 500+ vulns and LLMs run autonomous network breaches.
One practical response to these threats: Running AI Agents in a Box, where I containerize AI coding tools with network lockdown.