Securing AI Models: Threats, Attacks, and Defenses Training
Securing AI models is the discipline of defending machine learning systems against domain-specific threats such as adversarial inputs, data poisoning, inversion attacks, and privacy leakage.
This instructor-led, live training (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defenses such as robust training and differential privacy.
By the end of this training, participants will be able to:
- Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
- Use tools such as the Adversarial Robustness Toolbox (ART) to simulate attacks and test models.
- Apply practical defenses, including adversarial training, noise injection, and privacy-preserving techniques.
- Design threat-aware model evaluation strategies for production environments.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Course Outline
Introduction to AI Threat Modeling
- What makes AI systems vulnerable?
- The AI attack surface compared with traditional systems
- Key attack vectors: the data, model, output, and interface layers
Adversarial Attacks on AI Models
- Understanding adversarial examples and perturbation techniques
- White-box vs. black-box attacks
- FGSM, PGD, and DeepFool methods (see the FGSM sketch after this list)
- Visualizing and crafting adversarial samples
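To make perturbation attacks concrete, here is a minimal FGSM sketch in PyTorch. It is illustrative only: `model`, `images`, and `labels` are assumed to be a trained classifier and a labeled batch, and `eps` is an arbitrary perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()
```

PGD applies the same signed-gradient step iteratively with a projection back into the eps-ball, so this helper is also a building block for the stronger attacks covered in the module.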
Model Inversion and Privacy Leakage
- Inferring training data from model outputs
- Membership inference attacks (illustrated in the sketch after this list)
- Privacy risks in classification and generative models
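A simple way to see how membership inference works is a confidence-threshold attack: overfitted models tend to be more confident on examples they were trained on. The sketch below is illustrative, assuming a trained PyTorch classifier `model` and an input batch `x`; the threshold is a placeholder that would normally be calibrated, for example with shadow models.

```python
import torch

@torch.no_grad()
def infer_membership(model, x, threshold=0.9):
    """Guess 'training-set member' when the top softmax confidence is high."""
    probs = torch.softmax(model(x), dim=1)
    confidence, _ = probs.max(dim=1)
    return confidence > threshold  # True = guessed member
```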
Data Poisoning and Backdoor Injection
- How poisoned data influences model behavior
- Trigger-based backdoors and Trojan attacks (see the poisoning sketch after this list)
- Detection and sanitization strategies
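The following sketch shows the core of a trigger-based backdoor: poisoned samples carry a small pixel patch and are relabeled to an attacker-chosen class, so a model trained on them associates the patch with that class. The names and the (N, C, H, W) image layout with values in [0, 1] are assumptions for illustration.

```python
import torch

def poison_batch(images, labels, target_class=0, trigger_size=3):
    """Stamp a white square into the corner and relabel to the target class."""
    poisoned = images.clone()
    poisoned[:, :, -trigger_size:, -trigger_size:] = 1.0  # the trigger patch
    return poisoned, torch.full_like(labels, target_class)
```

At inference time, any input carrying the same patch is steered toward `target_class`, while clean inputs behave normally, which is what makes such backdoors hard to detect.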
Robustness and Defense Techniques
- Adversarial training and data augmentation (see the training-loop sketch after this list)
- Gradient masking and input preprocessing
- Model smoothing and regularization techniques
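Adversarial training folds attack generation into the training loop. The sketch below reuses the hypothetical `fgsm_attack` helper from the earlier example and assumes an existing `model`, `loader`, and `optimizer`; the 50/50 mix of clean and adversarial loss is an arbitrary choice.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=0.03):
    """Train on a mix of clean and FGSM-perturbed batches."""
    model.train()
    for images, labels in loader:
        adv = fgsm_attack(model, images, labels, eps=eps)
        optimizer.zero_grad()
        loss = (0.5 * F.cross_entropy(model(images), labels)
                + 0.5 * F.cross_entropy(model(adv), labels))
        loss.backward()
        optimizer.step()
```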
Privacy-Preserving AI Defenses
- Introduction to differential privacy
- Noise injection and privacy budgets (see the DP-SGD-style sketch after this list)
- Federated learning and secure aggregation
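Noise injection in the DP-SGD style can be sketched as clip-then-noise on gradients. This is a simplified, batch-level illustration only: formal differential privacy guarantees require per-example clipping and careful privacy accounting, which vetted libraries such as Opacus handle; the clip norm and noise multiplier here are placeholders.

```python
import torch

def privatize_gradients(model, clip_norm=1.0, noise_multiplier=1.1):
    """Clip the gradient norm, then add calibrated Gaussian noise (simplified)."""
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    for p in model.parameters():
        if p.grad is not None:
            p.grad += torch.randn_like(p.grad) * clip_norm * noise_multiplier
```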
AI Security in Practice
- Threat-aware model evaluation and deployment
- Using ART (Adversarial Robustness Toolbox) in real-world applications (see the sketch after this list)
- Industry case studies: real-world vulnerabilities and mitigations
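As a taste of the hands-on ART work, the sketch below wraps a trained PyTorch model and generates FGSM adversarial examples against it. Here `model`, `loss_fn`, `optimizer`, `x_test`, `y_test`, and the input shape are placeholders, and `eps` is arbitrary.

```python
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Wrap the trained PyTorch model so ART attacks can query it.
classifier = PyTorchClassifier(
    model=model,
    loss=loss_fn,
    optimizer=optimizer,
    input_shape=(3, 32, 32),  # placeholder shape
    nb_classes=10,
)

attack = FastGradientMethod(estimator=classifier, eps=0.05)
x_adv = attack.generate(x=x_test)  # NumPy array of adversarial inputs
adv_accuracy = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
```

Comparing clean and adversarial accuracy in this way is the basic robustness check around which the course builds its evaluation strategies.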
Summary and Next Steps
Requirements
- An understanding of machine learning workflows and model training
- Familiarity with Python and common ML frameworks such as PyTorch or TensorFlow
- Knowledge of basic security or threat modeling concepts is helpful
Audience
- Machine learning engineers
- Cybersecurity analysts
- AI researchers and model validation teams
Related Courses
AI Governance, Compliance, and Security for Enterprise Leaders
14 Hours
This instructor-led, live training in China (online or onsite) is aimed at intermediate-level enterprise leaders who wish to understand how to govern and secure AI systems responsibly and in compliance with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
By the end of this training, participants will be able to:
- Understand the legal, ethical, and regulatory risks of using AI across departments.
- Interpret and apply major AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Establish security, auditing, and oversight policies for AI deployment in the enterprise.
- Develop procurement and usage guidelines for third-party and in-house AI systems.
AI Risk Management and Security in the Public Sector
7 Hours
Artificial Intelligence (AI) introduces new dimensions of operational risk, governance challenges, and cybersecurity exposure for government agencies and departments.
This instructor-led, live training (online or onsite) is aimed at public sector IT and risk professionals with limited prior experience in AI who wish to understand how to evaluate, monitor, and secure AI systems within a government or regulatory context.
By the end of this training, participants will be able to:
- Interpret key risk concepts related to AI systems, including bias, unpredictability, and model drift.
- Apply AI-specific governance and auditing frameworks such as NIST AI RMF and ISO/IEC 42001.
- Recognize cybersecurity threats targeting AI models and data pipelines.
- Establish cross-departmental risk management plans and policy alignment for AI deployment.
Format of the Course
- Interactive lecture and discussion of public sector use cases.
- AI governance framework exercises and policy mapping.
- Scenario-based threat modeling and risk evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Secure and Responsible LLM Applications
14 Hours
This instructor-led, live training in China (online or onsite) is aimed at intermediate-level to advanced-level AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.
By the end of this training, participants will be able to:
- Understand the core vulnerabilities of LLM-based systems.
- Apply secure design principles to LLM app architecture.
- Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
- Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
Introduction to AI Security and Risk Management
14 Hours
This instructor-led, live training in China (online or onsite) is aimed at beginner-level IT security, risk, and compliance professionals who wish to understand foundational AI security concepts, threat vectors, and global frameworks such as NIST AI RMF and ISO/IEC 42001.
By the end of this training, participants will be able to:
- Understand the unique security risks introduced by AI systems.
- Identify threat vectors such as adversarial attacks, data poisoning, and model inversion.
- Apply foundational governance models such as the NIST AI Risk Management Framework.
- Align AI usage with emerging standards, compliance guidelines, and ethical principles.
Privacy-Preserving Machine Learning
14 Hours
This instructor-led, live training in China (online or onsite) is aimed at advanced-level professionals who wish to implement and evaluate techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines.
By the end of this training, participants will be able to:
- Understand and compare key privacy-preserving techniques in ML.
- Implement federated learning systems using open-source frameworks.
- Apply differential privacy for safe data sharing and model training.
- Use encryption and secure computation techniques to protect model inputs and outputs.
Red Teaming AI Systems: Offensive Security for ML Models
14 Hours
This instructor-led, live training in China (online or onsite) is aimed at advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and enhance the robustness of deployed AI models.
By the end of this training, participants will be able to:
- Simulate real-world threats to machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red teaming strategies for AI deployment environments.
Securing Edge AI and Embedded Intelligence
14 Hours
This instructor-led, live training in China (online or onsite) is aimed at intermediate-level engineers and security professionals who wish to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.
By the end of this training, participants will be able to:
- Identify and assess security risks in edge AI deployments.
- Apply tamper resistance and encrypted inference techniques.
- Harden edge-deployed models and secure data pipelines.
- Implement threat mitigation strategies specific to embedded and constrained systems.