Course Description
C|OASP is a hands-on, practitioner-level credential that validates your ability to ethically attack AI systems so you can defend them with engineering-grade controls.
C|OASP is not about building AI models or running AI programs. It is about proving you can:
- Think like an attacker inside AI systems
- Uncover weaknesses across models and pipelines
- Validate security controls
- Reduce operational risk before deployment
C|OASP is built specifically for offensive AI security work, with outcomes you can demonstrate.
Who Should Attend
- Penetration Tester/Ethical Hacker
- Red Team Operator/Red Team Lead
- Offensive Security Engineer
- Adversary Emulation/Purple Team Specialist
- SOC Analyst (Tier 2/3)/Detection Engineer
- Blue Team Engineer/Threat Detection Engineer
- Incident Responder (IR)/DFIR Analyst
- Security Operations Manager (SOC Lead)
- Malware Analyst/Threat Researcher
- Cyber Threat Intelligence (CTI) Analyst – AI Focus
- Fraud/Abuse Detection Analyst (AI-enabled threats)
- ML Engineer/Applied AI Engineer
- GenAI Engineer (RAG/Agents)
- AI/LLM Application Developer
- MLOps/AI Platform Engineer
- DevSecOps/Secure DevOps Specialist
- Application Security Engineer (LLM Apps/APIs)
- Product Security Engineer/AI Product Security
- Secure AI Engineer/AI Security Architect
- LLM Systems Engineer
C|OASP Course – Certified Offensive AI Security Professional
- Module 1: Offensive AI and AI System Hacking Methodology
- Module 2: AI Reconnaissance and Attack Surface Mapping
- Module 3: AI Vulnerability Scanning and Fuzzing
- Module 4: Prompt Injection and LLM Application Attacks
- Module 5: Adversarial Machine Learning and Model Privacy Attacks
- Module 6: Data and Training Pipeline Attacks
- Module 7: Agentic AI and Model-to-Model Attacks
- Module 8: AI Infrastructure and Supply Chain Attacks
- Module 9: AI Security Testing, Evaluation, and Hardening
- Module 10: AI Incident Response and Forensics