Attacking, Defending, and Leveraging AI/LLM Systems is a 16-hour, hands-on training designed for cybersecurity professionals, red teamers, and defenders who want to master how modern AI systems work, and how they can be exploited, secured, and applied in real-world operations.
Next scheduled date:March 15th, 2026 @ 10:00 AM EDT
Description
Attacking, Defending, and Leveraging AI/LLM Systems is a 16-hour, hands-on training designed for cybersecurity professionals, red teamers, and defenders who want to master how modern AI systems work, and how they can be exploited, secured, and applied in real-world operations.
What you’ll learn in this course:
Foundations of AI and Machine Learning: Build an understanding of LLMs, transformer architectures, and effective prompt engineering.
Real-World Attack and Defense Techniques: Explore threats like prompt injections, jailbreaks, system prompt leaks, escalation chains, and agent misuse, based on OWASP LLM Top 10 and MITRE ATLAS.
Building and Securing AI Applications: Understand RAG pipelines, vector databases, tool integrations, and common security weaknesses.
Hands-On Labs and Capture-the-Flag Challenges: Practice exploiting and defending LLM environments, bypassing guardrails, assessing RAG security, and using tools like Deepteam, as well as custom tooling.
Leveraging AI for Cybersecurity: Learn how to accelerate workflows by using AI as an analyst assistant, automation layer, and agentic partner for reconnaissance, investigation, and validation.
This course is taught by experienced practitioners who actively research AI security, lead a weekly podcast on the topic, and bring the latest insights from the field directly to your screen. By the end of the training, you will have practical skills, tested techniques, and actionable strategies to confidently attack, defend, and harness AI systems in your organization.
System Requirements
System with reliable internet connection, 16GB RAM recommended
Docker Desktop
OpenAI API Key
Azure Foundry Access
AWS Bedrock Access
VM/Lab Information
Students will be provided with a containerized environment with Open WebUI, Jupyter, n8n, and a custom MCP server that runs locally on student laptops and uses API keys for LLM inference.
This course is designed for security professionals who want practical, operational experience attacking, defending, and leveraging AI and LLM-powered systems. It is ideal for red teamers, penetration testers, and offensive security engineers looking to exploit real-world LLM weaknesses, as well as blue teamers and defenders responsible for securing AI-enabled applications and infrastructure.
The course is also valuable for analysts and security practitioners who want to incorporate AI into modern workflows by using LLMs as assistants and automation layers for tasks such as reconnaissance, investigation, code review, and validation.
AI engineers, application security engineers, and architects who are building or integrating LLM systems will benefit by learning how these systems fail in practice, how adversaries abuse model behavior, data pipelines, agents, and guardrails, and how to design AI ecosystems that are both effective and defensible.
The target audience for this workshop are beginners to this area, although the workshop can still benefit those who have some familiarity with the material. No prior AI or machine learning expertise is required. The course focuses on applied techniques, making it accessible to practitioners who understand cybersecurity fundamentals and want to confidently assess, test, and leverage modern AI-driven systems.
LLMs are real attack surfaces, and you’ll learn how they actually break.
Students leave understanding how modern AI systems fail in practice: prompt injection, RAG abuse, guardrail bypasses, and agent misuse. All of these attacks based on real frameworks like OWASP LLM Top 10 and MITRE ATLAS.
You’ll get hands-on offensive and defensive experience with AI systems.
This isn’t just theory. Through live CTFs and labs, students actively attack and defend LLM environments, exploit AI-assisted workflows, and validate defenses.
AI becomes a tool in your security workflow, not just a target.
Students learn how to leverage AI for cybersecurity tasks (i.e., log parsing, discovery and exploitation, code review) using agentic assistance, custom tooling, and automated security validation.
Brian Fehrman has been with Black Hills Information Security (BHIS) as a Security Researcher and Analyst since 2014, but his interest in security started when his family got their very first computer. Brian holds a BS in Computer Science, an MS in Mechanical Engineering, an MS in Computational Sciences and Robotics, and a PhD in Data Science and Engineering with a focus in Cyber Security. He also holds various industry certifications, such as Offensive Security Certified Professional (OSCP) and GIAC Exploit Researcher and Advanced Penetration Tester (GXPN). He enjoys being able to protect his customers from “the real bad people” and his favorite aspects of security include artificial intelligence, hardware hacking, and red teaming. Outside of time spent working with BHIS, Brian is an avid Brazilian Jiu-Jitsu enthusiast, big game hunter, and enjoys home improvement projects.
Derek Banks has been with Black Hills Information Security (BHIS) since 2014 as a security analyst, penetration tester and red teamer, and now fulfills a leadership role in the BHIS Security Operations Center (SOC). He has a B.S. in Information Systems and a M.S. in Data Science, as well as several industry certifications. Derek has experience in computer forensics and incident response, creating custom host and network-based logging and monitoring solutions, penetration testing and red teaming.
Register for Upcoming
Filter by Product Date
Filter by Product Instructor
Filter by Product Type
Attacking, Defending, and Leveraging AI-LLM Systems