Exploring the boundaries of AI through creative prompt engineering
An in-depth rebuttal of Anthropic's recent Assistant Axis article: why it isn't safe, how it can be exploited, and research into various attack vectors and red-teaming.
Why a single nonsense word can bypass safety training in frontier models: breaking down trigger-based attacks on LLMs.
Have questions or want to collaborate? Send me a message!