Next scheduled date: June 12th, 2026 @ 12:00 PM EDT
Description
AI agents are rapidly becoming part of how defensive security teams operate. But using an off-the-shelf agent product and building your own are fundamentally different skills. The teams who understand the architecture underneath, what the field calls “harness engineering,” are the ones who can adapt agentic systems to their own environment, tooling, and security problems, rather than being locked into whatever capabilities a vendor ships.
This workshop teaches the foundations of building your own AI security agent from the ground up. You’ll learn why harness engineering, not prompt engineering or model selection, is where the real leverage lives, and you’ll build the core components of a working, model-agnostic agent framework you can extend to any defensive security use case.
The workshop uses threat hunting as its running example. You’ll build an agent that investigates a security scenario end-to-end. But the architecture, patterns, and principles you learn apply equally to any security agent you want to build: detection engineering, incident response, vulnerability triage, alert enrichment, compliance checking, and more.
Specifically, over four hours, we’ll cover:
Why harness engineering matters more than the model you pick
The core state pattern every agentic system needs (immutable, traceable, auditable)
Model-agnostic design – decoupling your agent from any one vendor so you can run on Anthropic, OpenAI, Google, local open-weight models, or your existing Claude Code subscription without changing your code
Skills as executable, testable investigation procedures rather than ad-hoc prompts
Orchestration patterns for coordinating multiple agents in sequence and parallel
A working, end-to-end security investigation you can extend to your own use cases
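To give a taste of what "skills as executable, testable investigation procedures" means in practice, a skill can be modeled as a typed function with a name and description rather than a free-form prompt. The sketch below is illustrative only; the interface and names are assumptions, not the workshop framework's actual API.

```typescript
// Illustrative sketch: a "skill" as a typed, testable procedure.
// Names and shapes are assumptions, not the workshop repo's API.

interface Skill<In, Out> {
  name: string;
  description: string;
  run(input: In): Out;
}

interface ConnLog { src: string; dst: string; bytesOut: number; }
interface Finding { skill: string; detail: string; }

// Example: flag hosts with unusually high total outbound volume.
const beaconVolumeCheck: Skill<ConnLog[], Finding[]> = {
  name: "beacon-volume-check",
  description: "Flag sources whose total outbound bytes exceed a threshold",
  run(logs) {
    const totals = new Map<string, number>();
    for (const l of logs) {
      totals.set(l.src, (totals.get(l.src) ?? 0) + l.bytesOut);
    }
    return [...totals.entries()]
      .filter(([, bytes]) => bytes > 1_000_000)
      .map(([src]) => ({
        skill: "beacon-volume-check",
        detail: `high outbound volume from ${src}`,
      }));
  },
};
```

Because the skill is just a function over typed input, it can be unit-tested against sample telemetry before an LLM is ever involved.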
This is not a course about prompting a chatbot to analyze logs. This is a course about the engineering decisions behind systems in which agents can actually perform useful security work. You'll leave with a working agent framework – yours to modify, extend, and adapt to whatever security problems you face in your environment.
The security field is moving toward agentic AI faster than most teams planned for. The practitioners who understand how these systems actually work, how to architect them, adapt them, and debug them when they misbehave, are the ones who’ll be equipped to lead their teams through that shift. Harness engineering is the kind of capability that compounds. The earlier you invest, the more leverage you have as the field continues to move.
SYSTEM REQUIREMENTS
A computer with a terminal, modern web browser, and stable internet connection
Ability to join the live workshop stream
Node.js 20+ installed with npm
A code editor or IDE of your choice (Instructor will be using Zed)
An API key for at least one LLM provider (Anthropic, OpenAI, or Google AI Studio - the last has a genuinely free tier), or an active Claude Code subscription (which the workshop framework can wrap), or a local open-weight model + Ollama
Detailed setup instructions will be sent to registered students ahead of the workshop.
VM / LAB / STUDENT INFO
No VM is required. All exercises run directly on the student's local machine using Node.js and standard tooling. Students who prefer an isolated environment are welcome to use a VM or container, but it's not necessary.
A workshop repository will be provided to all registered students ahead of the session. The repo includes:
Working framework components for each module - ready to configure, run, and experiment with
Sample security telemetry (connection logs, process events, authentication records) to investigate during the hands-on segment
Example skill definitions and configuration files
Reference solutions for each exercise
Students build the framework progressively through the workshop – each module adds a new layer on top of the previous one. Clear checkpoints let students catch up even if they fall behind on any single step.
SYLLABUS
Module 1: Why Build Your Own Agent
Theory:
The fundamental difference between using an off-the-shelf agent and building one
Why harness engineering, not prompting or model choice, is where the leverage lives
How this applies across security domains
Module 2: The Foundation: State & Transitions
Theory:
Agents as “model + harness”
The immutable accumulating state pattern
Pure-function transitions
Practical:
Build and run a minimal agent framework with typed state and transitions that move information forward through stages
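A minimal sketch of the pattern this module builds, under assumed names (the workshop repo's actual types will differ): state is an immutable record, and transitions are pure functions that return a new state rather than mutating the old one, so every step of an investigation stays traceable.

```typescript
// Sketch of the immutable accumulating-state pattern. Names like
// InvestigationState and addFinding are illustrative assumptions.

interface InvestigationState {
  readonly stage: "collect" | "analyze" | "report";
  readonly findings: readonly string[];
}

// A pure transition: never mutates the old state, returns a new one
// that accumulates information forward.
function addFinding(state: InvestigationState, finding: string): InvestigationState {
  return { ...state, findings: [...state.findings, finding] };
}

// Another pure transition: advances the investigation one stage.
function advance(state: InvestigationState): InvestigationState {
  const next = { collect: "analyze", analyze: "report", report: "report" } as const;
  return { ...state, stage: next[state.stage] };
}
```

Because old states are never mutated, the full history of an investigation can be kept and audited simply by holding on to each intermediate state.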
Module 3: Model-Agnostic Design
Theory:
The provider interface pattern
Why decoupling from any one vendor matters
Cost, compliance, and reliability implications
Practical:
Wire up provider adapters for multiple LLMs (Gemini, Anthropic, OpenAI, local via Ollama, Claude Code CLI wrapper) and make the first real model call through the framework
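The provider interface pattern can be sketched as follows, with assumed names and a stub adapter standing in for a real vendor SDK; the agent codes against one small interface, and each vendor gets an adapter behind it.

```typescript
// Sketch of the provider interface pattern. Interface and function
// names are illustrative assumptions; a real adapter would wrap a
// vendor SDK (Anthropic, OpenAI, Gemini, Ollama, etc.).

interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Stub adapter used here so the sketch runs without any API key.
const fakeProvider: LLMProvider = {
  name: "fake",
  complete: async (prompt) => `echo: ${prompt}`,
};

// The agent only ever sees LLMProvider, so swapping vendors is a
// configuration change, not a code change.
async function investigate(provider: LLMProvider, question: string): Promise<string> {
  return provider.complete(`You are a threat-hunting assistant. ${question}`);
}
```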
WHO THIS WORKSHOP IS FOR
Security practitioners curious about building their own AI agents rather than relying on black-box vendor tools
Threat hunters and detection engineers looking to understand the architecture behind agentic hunting
SOC analysts who want to move beyond alert triage and build proactive or semi-autonomous tooling
Security engineers responsible for evaluating or integrating AI capabilities into internal tooling
Blue-team leads and security architects exploring how agentic AI fits into their defensive strategy
Anyone who has tried off-the-shelf AI security tools, hit their limits, and wants to build their own tooling that can be adapted, extended, and trusted
Security generalists wanting a hands-on introduction to agentic AI principles that apply across domains
PREREQUISITES
The workshop assumes working familiarity with security operations concepts (what telemetry is, what a detection rule does, what incident response looks like) and basic comfort navigating a command line.
Some exposure to code is helpful – you’ll be copying and running TypeScript files – but no programming experience is required. The workshop provides pre-built components that students configure, run, and compose rather than write from scratch. The focus is on understanding the architecture and making informed decisions, not on software development.
If you’re newer to security and willing to put in the effort, you can still keep up. The workshop is structured progressively, each module builds on the last, and the concepts are taught before they’re applied.
Basic familiarity with security operations concepts – you should know what telemetry, detection rules, and threat hunting mean at a high level, even if you haven’t done them hands-on
Comfort navigating a terminal and running commands
Node.js 20+ installed on your machine ahead of the workshop (a setup guide will be provided to registered students)
No coding experience is required. You will not be writing code from scratch; you’ll be working with pre-built components.
WHAT STUDENTS WILL LEARN
Understand why harness engineering, not prompting or model choice, is where the leverage lives for getting real results from agentic AI
Know the core components of an agent framework: state, provider interface, skills, orchestration – and why each one matters
Understand how to formalize investigation procedures into testable, reproducible skills rather than relying on ad-hoc prompts
Know the orchestration patterns for multi-agent coordination (sequential pipeline, concurrent fan-out) and when to use each
Leave with a working repository they can adapt to their own environment and extend to any defensive security use case – threat hunting, detection engineering, incident response, vulnerability triage, compliance checking, and more
Understand when it makes sense to move from a hand-built framework to a production-grade orchestration library (LangGraph, Agent SDKs), and how to structure their code so that migrating is a matter of wrapping rather than rewriting
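For context, the two orchestration patterns named above reduce to a few lines each. The names here are illustrative assumptions, not the workshop's actual API.

```typescript
// Sketch of the two orchestration patterns: a sequential pipeline and
// a concurrent fan-out. A Step stands in for one agent invocation.

type Step = (input: string) => Promise<string>;

// Sequential pipeline: each agent's output feeds the next.
async function pipeline(steps: Step[], input: string): Promise<string> {
  let acc = input;
  for (const step of steps) acc = await step(acc);
  return acc;
}

// Concurrent fan-out: independent agents run in parallel over the same
// input, and their results are gathered for a later merge step.
async function fanOut(steps: Step[], input: string): Promise<string[]> {
  return Promise.all(steps.map((step) => step(input)));
}
```

Use the pipeline when later steps depend on earlier results; use fan-out when analyses are independent and latency matters.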
ABOUT THE INSTRUCTOR
Faan Rossouw is a security researcher focused on the intersection of threat hunting and agentic AI. Faan is currently working on aionsec.ai, a complete educational ecosystem that helps threat hunters master AI agents – from using them effectively, to building their own, to securing them. He also has a deep interest in developing robust systems that produce coherent synthetic telemetry for security model training at scale. In his free time, Faan likes to hang out with his family and go for forest runs with his dog.