About
I’ve been in cybersecurity for more than 15 years. Started where most of us start: reverse-engineering whatever landed in someone’s inbox that morning, and wondering why ransomware authors can’t write better code.
Over those years, I built and led threat research teams at what’s now Gen Digital (the company behind Norton, Avast, AVG, and a few other names you’ve probably installed on your parents’ computers). I managed groups of 30+ researchers, engineers, and analysts, ran threat research and telemetry systems, and led teams that shipped more than 40 free ransomware decryption tools for victims through projects like NoMoreRansom. I’ve written a few thousand YARA rules, co-created RetDec (an open-source retargetable decompiler), published and presented at conferences like Virus Bulletin, CARO, and Botconf, and somewhere along the way picked up a PhD in intelligent systems and cybersecurity.
Today, I lead Threat Research & Applied AI at Gen. My team is building AI-native workflows that are transforming how our company operates. But the part that keeps me up at night isn’t just how we use AI internally.
AI agents are rapidly becoming part of how we work, live, and run critical systems. The progress is fascinating to watch. What’s less fascinating is watching a whole new class of attack vectors emerge while the security landscape around these systems is still taking shape. Agents can be manipulated, exploited, or simply misbehave, and the stakes are growing faster than the defenses.
My team recently built Sage ADR (Agent Detection & Response), an open-source safety layer that protects AI agents such as Claude Code, Cursor, and OpenClaw from being exploited, and catches them when they go off the rails. It’s just the beginning.
This is where I write about agentic AI safety: the security of AI agents, the threats they introduce, the defenses we need, and the things that deserve more attention. I care about both sides of the equation: protecting AI agents from being exploited, and protecting the world from agents that misbehave. Think of it as the next evolution of what cybersecurity has always done, just with a new kind of endpoint.
Why me? Because fifteen-plus years of threat research gives you an instinct for what’s coming next, not just what’s repeating. I’ve seen enough cycles of “move fast, secure later” to know where the gaps will open before they do, and the agentic AI space is no exception.
If any of this sounds like your kind of reading, subscribe. It’s free, and I won’t flood your inbox. I publish when I have something worth saying. No filler, no weekly obligation.
X/Twitter: https://x.com/JakubKroustek
GitHub: https://github.com/jakubkroustek

