Course Outline
Introduction to AI Red Teaming
- Understanding the AI threat landscape
- Roles of red teams in AI security
- Ethical and legal considerations
Adversarial Machine Learning
- Types of attacks: evasion, poisoning, extraction, inference
- Generating adversarial examples (e.g., FGSM, PGD; see the sketch after this list)
- Targeted vs untargeted attacks and success metrics
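To make the attack types concrete, here is a minimal sketch of the FGSM step covered in this module, written in PyTorch; the `model`, `loss_fn`, and input tensors are placeholders you would supply from your own setup.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)        # untargeted: increase the true-class loss
    loss.backward()
    x_adv = x + eps * x.grad.sign()    # single step in the signed-gradient direction
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

Running the same step iteratively, projecting back into the eps-ball after each update, yields PGD; comparing the success rates of the two against the same model is a useful first success metric.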
Testing Model Robustness
- Evaluating robustness under perturbations (see the sketch after this list)
- Exploring model blind spots and failure modes
- Stress testing classification, vision, and NLP models
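As a simple example of robustness evaluation under perturbations, the hypothetical helper below sweeps Gaussian noise levels and reports accuracy; `model` and `test_loader` are assumed to come from your own project.

```python
import torch

@torch.no_grad()
def accuracy_under_noise(model, loader, sigma):
    """Accuracy when inputs carry additive Gaussian noise of std sigma."""
    correct = total = 0
    for x, y in loader:
        x_noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
        correct += (model(x_noisy).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Sweep the perturbation strength to see how quickly accuracy degrades:
# for sigma in (0.0, 0.05, 0.1, 0.2):
#     print(sigma, accuracy_under_noise(model, test_loader, sigma))
```

Inputs where small sigma already flips the prediction point to blind spots worth stress testing further.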
Red Teaming AI Pipelines
- Attack surface of AI pipelines: data, model, deployment
- Exploiting insecure model APIs and endpoints (see the probing sketch after this list)
- Reverse engineering model behavior and outputs
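A typical first step when assessing an exposed endpoint is systematic querying. The sketch below uses a hypothetical URL and response schema, and assumes the target is in scope for the engagement; it simply records the raw scores an insecure API returns.

```python
import json
import urllib.request

API_URL = "https://models.example.com/v1/predict"  # hypothetical in-scope endpoint

def query_model(payload):
    """Send one crafted input to the target API and return its raw response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"input": payload}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Endpoints that return full confidence vectors (rather than just a label)
# leak decision-boundary information, which is what makes model extraction
# and membership-inference attacks cheap to mount.
```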
Simulation and Tooling
- Using the Adversarial Robustness Toolbox (ART; see the sketch after this list)
- Red teaming NLP models with tools such as TextAttack
- Sandboxing, monitoring, and observability tools
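ART wraps existing models behind a common estimator interface so that attacks can be reused across frameworks. A minimal sketch, assuming a trained PyTorch `model` for 10-class, 1x28x28 inputs and a NumPy test set `x_test`:

```python
import torch.nn as nn
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Wrap an existing trained PyTorch `model` (assumed: 10 classes, 1x28x28 inputs).
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Generate adversarial examples from a NumPy test set `x_test` (assumed to exist).
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)
```

Swapping `FastGradientMethod` for another of ART's evasion attacks leaves the rest of the workflow unchanged, which is what makes the toolbox convenient for repeatable red team exercises.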
AI Red Team Strategy and Defense Collaboration
- Developing red team exercises and goals
- Communicating findings to blue teams
- Integrating red teaming into AI risk management
Summary and Next Steps
Requirements
- An understanding of machine learning and deep learning architectures
- Experience with Python and ML frameworks (e.g., TensorFlow, PyTorch)
- Familiarity with cybersecurity concepts or offensive security techniques
Audience
- Security researchers
- Offensive security teams
- AI assurance and red team professionals
14 Hours