Exploring the boundaries of AI through creative prompt engineering
A quick look at my current model rotation and the tools keeping my jailbreak research organized.
Why a single nonsense word can bypass safety training in frontier models: breaking down trigger-based attacks on LLMs.
Have questions or want to collaborate? Send me a message!