Black Friday Sale Happening Now! Learn More

Workshop: Hacking AI-LLM Applications

Course Authored by , , and .

This workshop starts with AI-LLM application fundamentals, moving to a reference architecture based on Open WebUI, and then discusses related threats and vulnerabilities.

Course Length: 4 Hours

Includes a Certificate of Completion



Next scheduled date: Content is loading, please wait.

Description

As AI-LLM based applications are being ubiquitously integrated into more applications today, cyber security red teamers need to understand the security vulnerabilities, tools, and methodology to effectively perform penetration testing activities that are unique to these deployments.

This workshop starts with AI-LLM application fundamentals, moving to a reference architecture based on Open WebUI, and then discusses related threats and vulnerabilities.  Demonstration labs will cover red teaming use cases against an Open WebUI reference architecture, which is analogous to many real-world AI-LLM deployments. Students will be provided with scripting to stand up their own vulnerable AI-LLM implementations for further practice.

By the end of this workshop, participants will have both the foundational understanding and tactical insight necessary to evaluate and secure AI-LLM applications. Students will have an understanding of terminology associated with AI-LLMs, prompt-based defenses, input and output filtering, AI-based protections, Retrieval Augmented Generation (RAG) concerns, agentic risks, and the dangers of code interpreters. Participants will also be exposed to tooling that assists in assessing AI-LLM applications.

  • System Requirements
    • System with reliable internet connection
  • For those wishing to follow along with the labs or work on them after class:
    • Ubuntu 24.04 LTS (other Ubuntu LTS versions may work, but have not been tested)
    • A GPU with at least 8GB of VRAM (locally or access to a cloud service, such as Digital Ocean, Amazon, Azure, etc.)
    • Note: The labs can be run on a CPU-only system but they will be very slow

Syllabus

Workshop Syllabus: Hacking AI-LLM Applications

  1. AI & Machine Learning Essentials
  • AI, ML, and Deep Learning overview
  • Supervised vs. Unsupervised learning
  • Neural networks and model training
  • Generative vs. Discriminative models
  • Practical applications in cybersecurity
  1. Large Language Models (LLMs)
  • What LLMs are and how they work
  • Transformer architecture (encoder, decoder types)
  • Key capabilities: NLP, text generation, reasoning
  • Context windows and system prompts
  1. Prompt Engineering
  • Elements of effective prompts
  • Prompting techniques:
    • Zero-shot, few-shot, chain-of-thought
    • Generated knowledge and emotional prompting
  • Iterative refinement strategies
  1. Secure AI System Design
  • Open WebUI architecture and components
  • Retrieval-Augmented Generation (RAG) flow
  • Vector databases and embedding security risks
  • Tools and pipelines in Open WebUI
  • Deploying filters (e.g. prompt injection, PII, toxicity)
  1. AI Security Threats
  • AI safety vs. security concerns
  • OWASP LLM Top 10 risks
  • Common attack vectors:
    • Prompt injection, system prompt leaks, jailbreaking
    • Role deception, confusion tactics, custom encoding
    • External malicious content, escalation chains
  1. Offensive AI Use Cases (with demonstrations)
  • Adversarial LLM prompt design
  • Bypassing safeguards
  • Leveraging agentic and interpreter capabilities
  • Attacking RAG
  • Tooling to assist with attacks and assessments

FAQ

Who should take this workshop?

This workshop will benefit both red team and blue team security professionals who are looking to gain a better understanding of AI-LLM applications and potential security risks that are associated with these applications. The workshop assumes no prior knowledge of the technologies involved.

Audience Skill Level

The target audience for this workshop are beginners to this area, although the workshop can still benefit those who have some familiarity with the material.

VM/Lab Information

Students will be provided with scripting and tools to install and configure the challenges that will be shown in the workshop. The instructors will walk through the labs during the workshop. Students will be able to either follow along with the labs or complete them outside of class at their own pace. The labs will be self-hosted by students, which will allow students perpetual access to the challenges.

About the Instructors

Shopping Cart

No products in the cart.