Blog
Jailbreaking LLMs: My Journey
2026-05-04
A semi-origin story of jailbreaking LLMs, from early persona vectors to chain-of-thought hijacking and the ENI framework.
Peeling Onions: What I Layer on Top of Persona Engineering
2026-03-14
Plain language, attention splitting, and narrative embedding—the three techniques I layer on top of persona-based social engineering, why they work, and the academic research that validates all of it.
The Assistant Vector - Jailbroken: Steering Towards "Assistant" Increases Harmful Compliance
2026-01-22
An in-depth rebuttal of Anthropic's recent Assistant Axis article: why it isn't safe, how it's exploitable, and research into varying vectors and red-teaming.
The Safety Theater: Why We Need the Monster (And Why Companies Are Lying About It)
2025-12-09
We are being sold a comfortable lie. The narrative from Silicon Valley is that AI safety is about ethics, about teaching a machine to be 'good.'
What I'm Running Right Now
2025-12-04
A quick look at my current model rotation and the tools keeping my jailbreak research organized.