Integrating Adversarial Scenarios into LLM Security Labs: An Experience Report on a Hands-On Approach

Dominic A. Wilson
Publication Date: 1-26-2026

Abstract

This paper presents an exploratory case study, framed as a pedagogical experience report, on integrating adversarial Large Language Model (LLM) scenarios into a graduate cybersecurity curriculum. Beyond prompt injection, sophisticated techniques such as jailbreaking and model inversion pose emerging threats that traditional computer security curricula often fail to cover. We present the design and implementation of a structured, hands-on module that addresses this gap.