Sign Up for our Free One-Day SOC Summit Event March 25, 2026 Register Here

Attacking, Defending, and Leveraging AI-LLM Systems

Course Authored by and .

Attacking, Defending, and Leveraging AI/LLM Systems is a 16-hour, hands-on training designed for cybersecurity professionals, red teamers, and defenders who want to master how modern AI systems work, and how they can be exploited, secured, and applied in real-world operations.

Live Training $575.00

Course Length: 16 Hours

Includes a Certificate of Completion



Next scheduled date: March 15th, 2026 @ 10:00 AM EDT

Description

Attacking, Defending, and Leveraging AI/LLM Systems is a 16-hour, hands-on training designed for cybersecurity professionals, red teamers, and defenders who want to master how modern AI systems work, and how they can be exploited, secured, and applied in real-world operations.

What you’ll learn in this course:

  • Foundations of AI and Machine Learning: Build an understanding of LLMs, transformer architectures, and effective prompt engineering.

  • Real-World Attack and Defense Techniques: Explore threats like prompt injections, jailbreaks, system prompt leaks, escalation chains, and agent misuse, based on OWASP LLM Top 10 and MITRE ATLAS.

  • Building and Securing AI Applications: Understand RAG pipelines, vector databases, tool integrations, and common security weaknesses.

  • Hands-On Labs and Capture-the-Flag Challenges: Practice exploiting and defending LLM environments, bypassing guardrails, assessing RAG security, and using tools like Deepteam, as well as custom tooling.

  • Leveraging AI for Cybersecurity: Learn how to accelerate workflows by using AI as an analyst assistant, automation layer, and agentic partner for reconnaissance, investigation, and validation.

This course is taught by experienced practitioners who actively research AI security, lead a weekly podcast on the topic, and bring the latest insights from the field directly to your screen. By the end of the training, you will have practical skills, tested techniques, and actionable strategies to confidently attack, defend, and harness AI systems in your organization.

  • System Requirements
    • System with reliable internet connection, 16GB RAM recommended 
    • Docker Desktop 
    • OpenAI API Key 
    • Azure Foundry Access 
    • AWS Bedrock Access
  • VM/Lab Information
    • Students will be provided with a containerized environment with Open WebUI, Jupyter, n8n, and a custom MCP server that runs locally on student laptops and uses API keys for LLM inference.

Syllabus

  1. AI & machine learning demystified

  • AI, ML, and Deep Learning overview

  • Supervised vs. Unsupervised learning

  • Neural networks and model training

  • Practical applications in cybersecurity

  1. Large Language Models (LLMs)

  • What LLMs are and how they work

  • Transformer architecture (encoder, decoder types)

  • Key capabilities: NLP, text generation, reasoning

  • Context windows and system prompts

  1. Prompt engineering

  • Elements of effective prompts

  • Prompting techniques:

    • Zero-shot, few-shot, chain-of-thought

    • Generated knowledge and emotional prompting

    • Iterative refinement strategies

  1. Secure AI system design

  • Open WebUI architecture and components

  • Retrieval-Augmented Generation (RAG) flow

  • Vector databases and embedding security risks

  • Tools and pipelines in Open WebUI

  • Deploying filters (e.g. prompt injection, PII, toxicity)

  1. AI security threats

  • AI safety vs. security concerns

  • OWASP LLM Top 10 risks

  • Common attack vectors:

    • Prompt injection, system prompt leaks, jailbreaking

    • Role deception, confusion tactics, custom encoding

    • External malicious content, escalation chains

  1. Exploiting LLMs with hands-on Capture the Flag challenges

  • Adversarial LLM prompt design

  • Bypassing safeguards

  • Leveraging agentic and interpreter capabilities

  • Attacking RAG

  1. Tooling to assist with attacks and assessments

  • Safety and Security testing with Deepteam

  • Custom tooling

  • Agentic Assistance with Security Research

  • Finding vulnerabilities in source code

  1. Practical guardrails

  • Amazon Bedrock

  • Azure Foundry

  1. AI as a security workflow accelerator

  • AI as an analyst, assistant, and automation layer

  • Human-in-the-loop vs. fully agentic workflows

  • Task decomposition and goal-driven agents

  • Tool-using agents for reconnaissance, analysis, and validation

FAQ

Who Should Take this Course

This course is designed for security professionals who want practical, operational experience attacking, defending, and leveraging AI and LLM-powered systems. It is ideal for red teamers, penetration testers, and offensive security engineers looking to exploit real-world LLM weaknesses, as well as blue teamers and defenders responsible for securing AI-enabled applications and infrastructure.

The course is also valuable for analysts and security practitioners who want to incorporate AI into modern workflows by using LLMs as assistants and automation layers for tasks such as reconnaissance, investigation, code review, and validation.

AI engineers, application security engineers, and architects who are building or integrating LLM systems will benefit by learning how these systems fail in practice, how adversaries abuse model behavior, data pipelines, agents, and guardrails, and how to design AI ecosystems that are both effective and defensible.

Audience Skill Level 

The target audience for this workshop are beginners to this area, although the workshop can still benefit those who have some familiarity with the material. No prior AI or machine learning expertise is required. The course focuses on applied techniques, making it accessible to practitioners who understand cybersecurity fundamentals and want to confidently assess, test, and leverage modern AI-driven systems.

Key Takeaways
  • LLMs are real attack surfaces, and you’ll learn how they actually break.

    • Students leave understanding how modern AI systems fail in practice: prompt injection, RAG abuse, guardrail bypasses, and agent misuse. All of these attacks based on real frameworks like OWASP LLM Top 10 and MITRE ATLAS.

  • You’ll get hands-on offensive and defensive experience with AI systems.

    • This isn’t just theory. Through live CTFs and labs, students actively attack and defend LLM environments, exploit AI-assisted workflows, and validate defenses.

  • AI becomes a tool in your security workflow, not just a target.

    • Students learn how to leverage AI for cybersecurity tasks (i.e., log parsing, discovery and exploitation, code review) using agentic assistance, custom tooling, and automated security validation.

About the Instructors

Pixel splash background
Bio

Brian Fehrman has been with Black Hills Information Security (BHIS) as a Security Researcher and Analyst since 2014, but his interest in security started when his family got their very first computer. Brian holds a BS in Computer Science, an MS in Mechanical Engineering, an MS in Computational Sciences and Robotics, and a PhD in Data Science and Engineering with a focus in Cyber Security. He also holds various industry certifications, such as Offensive Security Certified Professional (OSCP) and GIAC Exploit Researcher and Advanced Penetration Tester (GXPN). He enjoys being able to protect his customers from “the real bad people” and his favorite aspects of security include artificial intelligence, hardware hacking, and red teaming. Outside of time spent working with BHIS, Brian is an avid Brazilian Jiu-Jitsu enthusiast, big game hunter, and enjoys home improvement projects.

Pixel splash background
"Security Analyst and Data Nerd"
Bio

Derek Banks has been with Black Hills Information Security (BHIS) since 2014 as a security analyst, penetration tester and red teamer, and now fulfills a leadership role in the BHIS Security Operations Center (SOC). He has a B.S. in Information Systems and a M.S. in Data Science, as well as several industry certifications. Derek has experience in computer forensics and incident response, creating custom host and network-based logging and monitoring solutions, penetration testing and red teaming.

Register for Upcoming

  • Filter by Product Date
  • Filter by Product Instructor
  • Filter by Product Type

Attacking, Defending, and Leveraging AI-LLM Systems

Complete Package

Live Training Brian Fehrman and Derek Banks

Virtual

Includes:

• Includes certificate of participation
• 12 months access to Cyber Range
• 6 months access to class recordings via Discord
• Our appreciation

Content is loading, please wait.
Content is loading, please wait.
$575.00
March 15th, 2026 10:00 AM EDT - March 16th, 2026 6:00 PM

Registration End Date: 10:00 PM, EDT March 14th 2026

Shopping Cart

No products in the cart.