Exploring the boundaries of AI through creative prompt engineering
Plain language, attention splitting, and narrative embedding: the three techniques I layer on top of persona-based social engineering, why they work, and the academic research that validates them.
Memory poisoning is phishing for machines. Instead of tricking a human into clicking a link, you trick a model into storing a lie. A breakdown of how persistent memory in LLMs creates an entirely new class of social engineering attack.
Have questions or want to collaborate? Send me a message!