AI, jailbreaks, and 150GB of unanswered questions
A hacker allegedly jailbroke Claude to breach Mexican government agencies, but key details remain unverified. Plus Claude Code and Copilot vulns, and Chinese AI firms stealing Anthropic's model.
The threat model that made me sandbox my AI agents
AI coding agents have shell access to your machine. I mapped out the threats before letting one touch my code, then built Claudecker to contain them.
Your AI Assistant Might Be Working for Someone Else
Copilot and Grok repurposed as C2 channels, Cline supply chain attack installed AI agents on 4,000 dev machines, and AI found 12 zero-days in OpenSSL.
AI Agents Under Attack
AI security roundup: Claude finds 500+ vulns in open-source libs, LLMs conduct autonomous network breaches, and AI agent attack surfaces keep expanding.
Building a session retrospective skill for Claude Code
A Claude Code skill that reads the session JSONL history and generates a human-readable markdown retrospective covering problems, decisions, and key takeaways.
Developer Tools Are the New Attack Surface
VS Code AI extensions with 1.5M installs stealing source code, 175K Ollama servers exposed globally, and AI running autonomous multi-stage network attacks.
Running AI agents in a box because I don't trust them
Claudecker is my Docker wrapper for Claude Code that isolates AI agents from my host with network lockdown, per-project custom images, and SSH agent forwarding.