Writing
AI security research, hands-on attack walkthroughs, and engineering notes.
I Asked OpenClaw to Summarize a File. It Tried to Steal My Credentials Instead.
Verified indirect prompt injection against OpenClaw 2026.3.13 + Gemini 2.0 Flash. A single text file made the agent attempt to read and exfiltrate all stored credentials.
Read on dev.to →

What OpenClaw's Built-In Security Can and Cannot Protect You From
I tested every security setting OpenClaw offers. Some work. Some don't. And the biggest threat has no config option at all.
Read on dev.to →

How to Harden OpenClaw in 5 Minutes — Before and After a Real Prompt Injection Attack
I ran the same prompt injection attack before and after hardening. The default config leaked credentials. One command fixed it.
Read on dev.to →

How to Attack an MCP Server — and Why Your AI Agent Will Comply
MCP is becoming the standard for AI tool use, yet many MCP servers ship with critical vulnerabilities. A hands-on attack walkthrough: tool poisoning, unauthenticated file access, and SSRF — with live verification.
Read on dev.to →

How to Attack a RAG System — and Why Your Security Scanner Won't Catch It
A hands-on walkthrough of RAG poisoning attacks against a deliberately vulnerable chatbot. What automated scanners find, what they miss, and how to fix it.
Read on dev.to →

Your LLM API Is an Attack Surface. Are You Scanning It?
A practical guide to the security blind spot in AI-powered applications — and how to test for it with a single command.
Read on dev.to →