
Course Length: 16 Hours
Includes a Certificate of Completion
Description
Exploiting AI begins with the assumption that there’s little to no understanding of AI beyond tools like ChatGPT. This course starts by introducing the basics—how AI behaves, key concepts, and essential terminology in the field.
Once that foundation is in place, the focus shifts to exploring attack surfaces through hands-on labs and practical examples. From there, the course examines common vulnerabilities and discusses what high-level remediation looks like, without diving too deep into technical specifics.
As the course progresses, learners will delve into AI security threats and learn how to execute and automate attacks using pre-built tools. With a broader understanding of how AI intersects with offensive security, the final modules introduce real-world testing methodologies, including frameworks such as OWASP and MITRE, as well as a custom approach developed by the instructor, all supported by hands-on lab work.
Hardware Requirements
- Ryzen 5 or Intel i5 CPU with 16 GB of RAM | No ARM machines (e.g., Apple Silicon Macs)
VM/Lab
- Students will need to bring their own Debian-based VM, preferably the newest Ubuntu image, with Docker installed (an optional sanity-check sketch follows below).
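As an optional pre-class sanity check, something like the following minimal sketch (assuming Python 3 is available on the VM; this is not part of the course labs themselves) can confirm that the distribution is Debian-based and that Docker is installed and responding:

```python
# Optional pre-class sanity check for the lab VM (minimal sketch).
# Assumes Python 3 is available; the actual lab setup is covered in class.
import shutil
import subprocess


def check_os() -> str:
    """Warn if the VM does not look Debian-based (e.g., Ubuntu)."""
    try:
        with open("/etc/os-release") as f:
            os_release = f.read().lower()
    except FileNotFoundError:
        return "Could not read /etc/os-release -- is this a Linux VM?"
    if "debian" in os_release or "ubuntu" in os_release:
        return "OK: Debian-based distribution detected"
    return "Warning: this does not look like a Debian-based distribution"


def check_docker() -> str:
    """Confirm the docker CLI exists and the daemon answers."""
    if shutil.which("docker") is None:
        return "Missing: docker CLI not found on PATH"
    try:
        subprocess.run(["docker", "info"], check=True,
                       capture_output=True, timeout=30)
        return "OK: Docker is installed and the daemon is running"
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return "Warning: docker CLI found, but the daemon is not reachable"


if __name__ == "__main__":
    print(check_os())
    print(check_docker())
```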
Student Requirements
- A Debian-based virtual machine and a Hugging Face account.
Syllabus
Learning the Basics
What is AI and LLM
Deep Dive
Terminology and Attack Surfaces
AI Spaces
AI Training Spaces and Hosting
Hugging Face
Ollama
MSTY
LMStudio
Our First AI
Creating our First Dataset
Training a model locally (SKIP IF LOW PC SPECS)
Hosting a Pre-Trained Model in OpenWebUI
Attack Surfaces and Remediations
Prompt Injection
Bypassing Guardrails
Filter Dumping
Preventing Prompt Injection
Data Poisoning and Refining
Training a spam classifier
Preventing Data Poisoning
Model Inversion Attack
Inferring Information Using a Loan Assessment AI
Preventing Model Inversion Attacks
Transfer Model Attack Overview
Attacking Two Models with one Prompt
Preventing Transfer Model Attacks
RAG AI Attack Overview
Attacking RAG
Preventing RAG Attacks
Ablation Overview
Ablating an LLM
Tooling
PyRIT
Garak
WhiteRabbitNeo
Fabric
Jupyter Notebook
ai-exploits
promptfoo
spikee
giskard
PyRIT-Ship
exo
eternal
Offensive Testing Methodology
OWASP Methodology
MITRE Methodology
Heretics Methodology
FAQ
Who is this class for? This class is for anyone looking to learn about the inherent risks that come with implementing AI in any facet.
Skill level: Intermediate
About the Instructor
Benjamin Bowman
“Hacker | Researcher | Speaker | Bird Enthusiast”
Bio
Ben Bowman joined the cyber security world at 12 years old. He slowly migrated from the wrong side of the field to the right side, pursuing a bachelor's in cyber operations and catching the attention of Black Hills Info Sec after appearing on NPR for hacking AI at Defcon. Follow him on GitHub: https://github.com/her3ticAVI
Related products
- OWASP Top 10 (Jim Manico | Live/On-Demand | 4 Hrs)
- Workshop: Introduction to Cloud Security (Beau Bullock | Live | 4 Hrs)
- Workshop: Exploiting AI (Benjamin Bowman | Live | 4 Hrs)
- Workshop: Telemetry to Tactics: A Hands-On Detection Engineering Workshop with Hal Denton (Hal Denton | Live | 4 Hrs)

