
With AI being embedded in practically every line of business application, security teams are struggling to keep pace with the rapidly moving landscape of AI.
Course Length: 16 Hours
Includes a Certificate of Completion
Next scheduled date: WWHF Deadwood 2025 - Link at bottom.
Description
With AI being embedded in practically every line of business application, security teams are struggling to keep pace with the rapidly moving landscape of AI.
This course trains students on the skills necessary to understand the foundations of AI from a security perspective, threat model applications using AI, identify appropriate AI use cases for security teams and line of business applications, critically evaluate the risk of each, and build out an AI governance program. Students will learn how to separate realistic risks from hype, learn the security controls available for AI, and how to apply them to their risks.
A core challenge related to AI at many organizations is third party risk management (TPRM). Students will learn the Risk AIssessment™ methodology for rapidly assessing the security of AI applications. Combining their new knowledge of the realistic risks of AI, students can apply the AIssessment™ methodology to rapidly focus their limited resources to evaluating the most important applications.
Students will evaluate applications in hands-on labs, applying a “learning by doing” instruction methodology.
On day two of the course, students will learn the OWASP Top 10 for LLM Applications. Students will learn how to attack and defend against prompt injection with hands-on labs. Students will also target a RAG application and insecure plugins in additional labs.
FAQ
Gain fundamental knowledge of the realistic risks of AI applications.
Learn to risk-assess applications using generative AI.
Understand the core concepts required to perform technical assessments of generative AI applications, including prompt injection and insecure plugin use.
Understand the components of an AI governance program.
Performance Objectives:
Build analysis skills to analyze AI-enabled applications through using hands-on exercises built around real-world scenarios.
Successfully exploit prompt injection and insecure plugin use in LLM applications, identify root causes, and deploy countermeasures to limit exploitation.
Develop the vocabulary to communicate residual risks with generative AI applications that cannot be fully mitigated.
Blue Team and Defensive Cyber Analysts who need to understand AI risks.
Security Leaders who need to quickly gain knowledge and skills to lead their teams in assessing and managing the risk of AI applications.
Familiarity with using Large Language Models (LLMs), such as chat applications.
(Optional) Foundational Python skills. Sample applications are built in Python. Students will have the opportunity to remediate issues discovered and a working knowledge of Python will be useful. Note: All of the pentesting skills can still be gained without programming knowledge.
1-2 years of experience in performing application security assessments or in defensive security operations.
About the Instructor

Jake Williams
This class is being taught at Wild West Hackin’ Fest – Deadwood 2025.
For more information about our conferences, visit Wild West Hackin’ Fest!
Clicking on the button above will take you to our registration page
Related products
-
Multiple InstructorsLive
Workshop: Hands on Kerberos with Tim Medin
View Course -
Multiple InstructorsLive
Workshop: The OWASP API Security Top Ten 2023 with Tanya Janca
View Course This product has multiple variants. The options may be chosen on the product page -
Multiple InstructorsLive
Workshop: Build a Multi Modal C2 Covert Channel in Golang with Faan Rossouw
View Course This product has multiple variants. The options may be chosen on the product page -
Multiple InstructorsLive
Workshop: Getting Comfortable in Burp Suite with BB King
View Course