
This workshop starts with AI-LLM application fundamentals, moving to a reference architecture based on Open WebUI, and then discusses related threats and vulnerabilities.
Course Length: 4 Hours
Includes a Certificate of Completion
Next scheduled date:
Notify me when available
Description
As AI-LLM based applications are being ubiquitously integrated into more applications today, cybersecurity red teamers need to understand the security vulnerabilities, tools, and methodology to effectively perform penetration testing activities that are unique to these deployments.
This workshop starts with AI-LLM application fundamentals, moving to a reference architecture based on Open WebUI, and then discusses related threats and vulnerabilities. Labs will cover red teaming use cases against an Open WebUI reference architecture, which is analogous to many real-world AI-LLM deployments. Labs will be internet-hosted to allow students to access them. Students will also be provided with scripting to stand up their own vulnerable AI-LLM implementations for further practice.
By the end of this workshop, participants will have both the foundational understanding and tactical insight necessary to evaluate and secure AI-LLM applications. Students will have an understanding of terminology associated with AI-LLMs, prompt-based defenses, input and output filtering, AI-based protections, Retrieval Augmented Generation (RAG) concerns, agentic risks, and the dangers of code interpreters. Participants will also be exposed to tooling that assists in assessing AI-LLM applications.
-
System Requirements
- System with reliable internet connection
-
For those wishing to install the labs locally:
- Ubuntu 24.04 LTS (other Ubuntu LTS versions may work, but have not been tested)
- A GPU with at least 8GB of VRAM (locally or access to a cloud service, such as Digital Ocean, Amazon, Azure, etc.)
- Note: The labs can be run on a CPU-only system but they will be very slow
Syllabus
Workshop Syllabus: Hacking AI-LLM Applications
-
AI & Machine Learning Essentials
-
AI, ML, and Deep Learning overview
-
Supervised vs. Unsupervised learning
-
Neural networks and model training
-
Generative vs. Discriminative models
-
Practical applications in cybersecurity
-
Large Language Models (LLMs)
-
What LLMs are and how they work
-
Transformer architecture (encoder, decoder types)
-
Key capabilities: NLP, text generation, reasoning
-
Context windows and system prompts
-
Prompt Engineering
-
Elements of effective prompts
-
Prompting techniques:
-
Zero-shot, few-shot, chain-of-thought
-
Generated knowledge and emotional prompting
-
Iterative refinement strategies
-
Secure AI System Design
-
Open WebUI architecture and components
-
Retrieval-Augmented Generation (RAG) flow
-
Vector databases and embedding security risks
-
Tools and pipelines in Open WebUI
-
Deploying filters (e.g. prompt injection, PII, toxicity)
-
AI Security Threats
-
AI safety vs. security concerns
-
OWASP LLM Top 10 risks
-
Common attack vectors:
-
Prompt injection, system prompt leaks, jailbreaking
-
Role deception, confusion tactics, custom encoding
-
External malicious content, escalation chains
-
Offensive AI Use Cases (with labs)
-
Adversarial LLM prompt design
-
Bypassing safeguards
-
Leveraging agentic and interpreter capabilities
-
Attacking RAG
-
Tooling to assist with attacks and assessments
FAQ
This workshop will benefit both red team and blue team security professionals who are looking to gain a better understanding of AI-LLM applications and potential security risks associated with these applications. The workshop assumes no prior knowledge of the technologies involved.
The target audience for this workshop are beginners to this area, although the workshop can still benefit those who have some familiarity with the material.
Students will be provided with scripting and tools to install and configure the challenges that will be shown in the workshop. Additionally, students will be provided access to an internet-hosted version of the labs for those who do not wish to install the labs locally. Instructors will introduce the labs, allow the students time to work on the labs, and then answer questions as needed.
-
Basic understanding and the applications of AI models and LLMs
-
Common vulnerabilities associated with AI-LLMs
-
Common defense mechanisms for AI-LLMs and their weaknesses
-
Hands-on experience with exploiting AI-LLM vulnerabilities
About the Instructors
Brian Fehrman
Bio
Brian Fehrman has been with Black Hills Information Security (BHIS) as a Security Researcher and Analyst since 2014, but his interest in security started when his family got their very first computer. Brian holds a BS in Computer Science, an MS in Mechanical Engineering, an MS in Computational Sciences and Robotics, and a PhD in Data Science and Engineering with a focus in Cyber Security. He also holds various industry certifications, such as Offensive Security Certified Professional (OSCP) and GIAC Exploit Researcher and Advanced Penetration Tester (GXPN). He enjoys being able to protect his customers from “the real bad people” and his favorite aspects of security include artificial intelligence, hardware hacking, and red teaming. Outside of time spent working with BHIS, Brian is an avid Brazilian Jiu-Jitsu enthusiast, big game hunter, and enjoys home improvement projects.
Derek Banks
"Security Analyst and Data Nerd"Bio
Derek Banks has been with Black Hills Information Security (BHIS) since 2014 as a security analyst, penetration tester and red teamer, and now fulfills a leadership role in the BHIS Security Operations Center (SOC). He has a B.S. in Information Systems and a M.S. in Data Science, as well as several industry certifications. Derek has experience in computer forensics and incident response, creating custom host and network-based logging and monitoring solutions, penetration testing and red teaming.
Related products
-
Wade WellsLiveOD8 Hrs
Cyber Threat Intelligence 101
View Course This product has multiple variants. The options may be chosen on the product page -
Kevin KlingbileLiveOD16 Hrs
Defending M365 & Azure
View Course This product has multiple variants. The options may be chosen on the product pageJun 15 - Jun 16
-
Bryan StrandLiveOD4 Hrs
Blue Team Foundations with Atomic Controls
View Course This product has multiple variants. The options may be chosen on the product page -
Alissa TorresLiveOD16 Hrs
Advanced Endpoint Investigations
View Course This product has multiple variants. The options may be chosen on the product page


