Seasides
trainingtechnical

Attacking and Defending AI systems through AIGoat

Day 2February 20, 2026
09:00 AM
Goa, India

Overview

Goal:

Provide a practical, attack-and-defend learning experience using AIGoat tool Learning Outcomes:

Recognize how AI/LLM systems expand the attack surface.
Map vulnerabilities to OWASP LLM Top 10 and see them in action.
Perform red-team style attacks (prompt injection, jailbreaks, data poisoning, sensitive data leakage) directly in AIGoat.
Implement guardrails: input/output validation, memory isolation, secret splitting, capability segmentation, monitoring.
Learn how to integrate attack simulations and mitigations into CI/CD pipelines.
Apply skills in a Capture-the-Flag competition modeled on real-world scenarios.

Training Approach:

This two-day workshop blends short lectures, live demos, and labs inside AIGoat. Day 1 & 2 focuses on attacking AI systems, emphasizes defenses, mitigations, and ends with a CTF to cement learning.

Takeaways:

Hands-On Understanding of OWASP LLM Top 10
Red-Team Skills for AI Systems
Defensive Guardrail Patterns
Risk Assessment Framework
Incident Prevention Mindset
Ethical Hacking Skills Transferable to Work
Capture-the-Flag Experience
Reusable Playbook & Tools built by us to learn AI security, a purpose-built lab for AI security. Attendees will understand the OWASP Top 10 for LLMs, perform real adversarial tests, and implement developer-friendly mitigations in live exercises.

Audience:

Developers, security engineers, red teamers, QA/security champions, and product owners who want hands-on experience in testing and securing LLM-powered applications.

Goal:

Provide a practical, attack-and-defend learning experience using AIGoat (https://github.com/AISecurityConsortium/AIGoat) tool built by us to learn AI security, a purpose-built lab for AI security. Attendees will understand the OWASP Top 10 for LLMs, perform real adversarial tests, and implement developer-friendly mitigations in live exercises.

Learning Outcomes:

Recognize how AI/LLM systems expand the attack surface.
Map vulnerabilities to OWASP LLM Top 10 and see them in action.
Perform red-team style attacks (prompt injection, jailbreaks, data poisoning, sensitive data leakage) directly in AIGoat.
Implement guardrails: input/output validation, memory isolation, secret splitting, capability segmentation, monitoring.
Learn how to integrate attack simulations and mitigations into CI/CD pipelines.
Apply skills in a Capture-the-Flag competition modeled on real-world scenarios.

Training Approach:

This two-day workshop blends short lectures, live demos, and labs inside AIGoat. Day 1 & 2 focuses on attacking AI systems, emphasizes defenses, mitigations, and ends with a CTF to cement learning.

Takeaways:

Hands-On Understanding of OWASP LLM Top 10: Experience each vulnerability category in a safe lab and learn how attackers exploit them.
Red-Team Skills for AI Systems: Practice prompt injection, jailbreaks, data poisoning, and sensitive data leakage using AIGoat scenarios.
Defensive Guardrail Patterns: Learn concrete mitigations including input/output filters, sandboxing, memory isolation, capability segmentation, and secret-splitting.
Risk Assessment Framework: Get a ready-to-use checklist for evaluating LLM/AI-powered applications before production release.
Incident Prevention Mindset: Shift from reactive to proactive security with attacker-mindset testing and automated guardrails.
Ethical Hacking Skills Transferable to Work: Practice on AIGoat to build confidence applying these tests safely within your own environment.
Capture-the-Flag Experience: Conclude the workshop with a CTF challenge simulating real-world AI security attacks and defenses.
Reusable Playbook & Tools: Walk away with an AIGoat lab Setup, a guardrail playbook, and a red-team checklist to use back at work.
Attacking and Defending AI systems through AIGoat | Seasides 2026