Hands-On AI Security Risk Assessment with Jake Williams

Overview
- Course Length: 16 hours
- Support from expert instructors
- Includes certificate of completion
- 12 months access to Cyber Range
With AI being embedded in practically every line of business application, security teams are struggling to keep pace with the rapidly moving landscape of AI.
This course trains students on the skills necessary to understand the foundations of AI from a security perspective, threat model applications using AI, identify appropriate AI use cases for security teams and line of business applications, critically evaluate the risk of each, and build out an AI governance program. Students will learn how to separate realistic risks from hype, learn the security controls available for AI, and how to apply them to their risks.
A core challenge related to AI at many organizations is third party risk management (TPRM). Students will learn the Risk AIssessment™ methodology for rapidly assessing the security of AI applications. Combining their new knowledge of the realistic risks of AI, students can apply the AIssessment™ methodology to rapidly focus their limited resources to evaluating the most important applications.
Students will evaluate applications in hands-on labs, applying a “learning by doing” instruction methodology.
On day two of the course, students will learn the OWASP Top 10 for LLM Applications. Students will learn how to attack and defend against prompt injection with hands-on labs. Students will also target a RAG application and insecure plugins in additional labs.
Wild West Hackin’ Fest – Deadwood (Oct 7th – Oct 8th, 2025) – Deadwood, SD
- October 8th – 8:30 AM to 5:00 PM MDT
- October 7th – 8:30 AM to 5:00 PM MDT
Key Takeaways
Learning Objectives:
- Gain fundamental knowledge of the realistic risks of AI applications.
- Learn to risk-assess applications using generative AI.
- Understand the core concepts required to perform technical assessments of generative AI applications, including prompt injection and insecure plugin use.
- Understand the components of an AI governance program.
Performance Objectives:
- Build analysis skills to analyze AI-enabled applications through using hands-on exercises built around real-world scenarios.
- Successfully exploit prompt injection and insecure plugin use in LLM applications, identify root causes, and deploy countermeasures to limit exploitation.
- Develop the vocabulary to communicate residual risks with generative AI applications that cannot be fully mitigated.
Who Should Take This Course
- Penetration Testers and Red Team staff needing to build their AI skills.
- Blue Team and Defensive Cyber Analysts who need to understand AI risks.
- Security Leaders who need to quickly gain knowledge and skills to lead their teams in assessing and managing the risk of AI applications.
Audience Skill Level
- Basic understanding of Linux and web application penetration testing.
- Familiarity with using Large Language Models (LLMs), such as chat applications.
- (Optional) Foundational Python skills. Sample applications are built in Python. Students will have the opportunity to remediate issues discovered and a working knowledge of Python will be useful. Note: All of the pentesting skills can still be gained without programming knowledge.
- 1-2 years of experience in performing application security assessments or in defensive security operations.
This class is being taught at Wild West Hackin’ Fest – Deadwood 2025.
For more information about our conferences, visit Wild West Hackin’ Fest!
Clicking on the button above will take you
to our registration page on the website.