This workshop starts with AI-LLM application fundamentals, moving to a reference architecture based on Open WebUI, and then discusses related threats and vulnerabilities.
Next scheduled date:March 6th, 2026 @ 12:00 PM EST
Description
As AI-LLM based applications are being ubiquitously integrated into more applications today, cybersecurity red teamers need to understand the security vulnerabilities, tools, and methodology to effectively perform penetration testing activities that are unique to these deployments.
This workshop starts with AI-LLM application fundamentals, moving to a reference architecture based on Open WebUI, and then discusses related threats and vulnerabilities. Labs will cover red teaming use cases against an Open WebUI reference architecture, which is analogous to many real-world AI-LLM deployments. Labs will be internet-hosted to allow students to access them. Students will also be provided with scripting to stand up their own vulnerable AI-LLM implementations for further practice.
By the end of this workshop, participants will have both the foundational understanding and tactical insight necessary to evaluate and secure AI-LLM applications. Students will have an understanding of terminology associated with AI-LLMs, prompt-based defenses, input and output filtering, AI-based protections, Retrieval Augmented Generation (RAG) concerns, agentic risks, and the dangers of code interpreters. Participants will also be exposed to tooling that assists in assessing AI-LLM applications.
System Requirements
System with reliable internet connection
For those wishing to install the labs locally:
Ubuntu 24.04 LTS (other Ubuntu LTS versions may work, but have not been tested)
A GPU with at least 8GB of VRAM (locally or access to a cloud service, such as Digital Ocean, Amazon, Azure, etc.)
Note: The labs can be run on a CPU-only system but they will be very slow
This workshop will benefit both red team and blue team security professionals who are looking to gain a better understanding of AI-LLM applications and potential security risks associated with these applications. The workshop assumes no prior knowledge of the technologies involved.
The target audience for this workshop are beginners to this area, although the workshop can still benefit those who have some familiarity with the material.
Students will be provided with scripting and tools to install and configure the challenges that will be shown in the workshop. Additionally, students will be provided access to an internet-hosted version of the labs for those who do not wish to install the labs locally. Instructors will introduce the labs, allow the students time to work on the labs, and then answer questions as needed.
Brian Fehrman has been with Black Hills Information Security (BHIS) as a Security Researcher and Analyst since 2014, but his interest in security started when his family got their very first computer. Brian holds a BS in Computer Science, an MS in Mechanical Engineering, an MS in Computational Sciences and Robotics, and a PhD in Data Science and Engineering with a focus in Cyber Security. He also holds various industry certifications, such as Offensive Security Certified Professional (OSCP) and GIAC Exploit Researcher and Advanced Penetration Tester (GXPN). He enjoys being able to protect his customers from “the real bad people” and his favorite aspects of security include artificial intelligence, hardware hacking, and red teaming. Outside of time spent working with BHIS, Brian is an avid Brazilian Jiu-Jitsu enthusiast, big game hunter, and enjoys home improvement projects.
Derek Banks has been with Black Hills Information Security (BHIS) since 2014 as a security analyst, penetration tester and red teamer, and now fulfills a leadership role in the BHIS Security Operations Center (SOC). He has a B.S. in Information Systems and a M.S. in Data Science, as well as several industry certifications. Derek has experience in computer forensics and incident response, creating custom host and network-based logging and monitoring solutions, penetration testing and red teaming.