Building Secure and Responsible LLM Applications 培训
LLM application security is the discipline of designing, building, and maintaining safe, trustworthy, and policy-compliant systems using large language models.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.
By the end of this training, participants will be able to:
- Understand the core vulnerabilities of LLM-based systems.
- Apply secure design principles to LLM app architecture.
- Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
- Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
课程大纲
Overview of LLM Architecture and Attack Surface
- How LLMs are built, deployed, and accessed via APIs
- Key components in LLM app stacks (e.g., prompts, agents, memory, APIs)
- Where and how security issues arise in real-world use
Prompt Injection and Jailbreak Attacks
- What is prompt injection and why it’s dangerous
- Direct and indirect prompt injection scenarios
- Jailbreaking techniques to bypass safety filters
- Detection and mitigation strategies
Data Leakage and Privacy Risks
- Accidental data exposure through responses
- PII leaks and model memory misuse
- Designing privacy-conscious prompts and retrieval-augmented generation (RAG)
LLM Output Filtering and Guarding
- Using Guardrails AI for content filtering and validation
- Defining output schemas and constraints
- Monitoring and logging unsafe outputs
Human-in-the-Loop and Workflow Approaches
- Where and when to introduce human oversight
- Approval queues, scoring thresholds, fallback handling
- Trust calibration and role of explainability
Secure LLM App Design Patterns
- Least privilege and sandboxing for API calls and agents
- Rate limiting, throttling, and abuse detection
- Robust chaining with LangChain and prompt isolation
Compliance, Logging, and Governance
- Ensuring auditability of LLM outputs
- Maintaining traceability and prompt/version control
- Aligning with internal security policies and regulatory needs
Summary and Next Steps
要求
- An understanding of large language models and prompt-based interfaces
- Experience building LLM applications using Python
- Familiarity with API integrations and cloud-based deployments
Audience
- AI developers
- Application and solution architects
- Technical product managers working with LLM tools
需要帮助选择合适的课程吗?
Building Secure and Responsible LLM Applications 培训 - Enquiry
Building Secure and Responsible LLM Applications - 问询
问询
即将举行的公开课程
相关课程
Advanced LangGraph: Optimization, Debugging, and Monitoring Complex Graphs
35 小时LangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and control over execution.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI platform engineers, DevOps for AI, and ML architects who wish to optimize, debug, monitor, and operate production-grade LangGraph systems.
By the end of this training, participants will be able to:
- Design and optimize complex LangGraph topologies for speed, cost, and scalability.
- Engineer reliability with retries, timeouts, idempotency, and checkpoint-based recovery.
- Debug and trace graph executions, inspect state, and systematically reproduce production issues.
- Instrument graphs with logs, metrics, and traces, deploy to production, and monitor SLAs and costs.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Advanced Ollama Model Debugging & Evaluation
35 小时Advanced Ollama Model Debugging & Evaluation is an in-depth course focused on diagnosing, testing, and measuring model behavior when running local or private Ollama deployments.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI engineers, ML Ops professionals, and QA practitioners who wish to ensure reliability, fidelity, and operational readiness of Ollama-based models in production.
By the end of this training, participants will be able to:
- Perform systematic debugging of Ollama-hosted models and reproduce failure modes reliably.
- Design and execute robust evaluation pipelines with quantitative and qualitative metrics.
- Implement observability (logs, traces, metrics) to monitor model health and drift.
- Automate testing, validation, and regression checks integrated into CI/CD pipelines.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs and debugging exercises using Ollama deployments.
- Case studies, group troubleshooting sessions, and automation workshops.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
使用Ollama构建私有AI工作流程
14 小时这是一个由讲师指导的线上或线下培训,针对希望使用Ollama实现安全高效的AI驱动工作流程的高级专业人士。
通过本培训,参与者将能够:
- 部署和配置Ollama以进行私有AI处理。
- 将AI模型集成到安全的企业工作流程中。
- 在保持数据隐私的同时,优化AI性能。
- 利用本地AI功能自动化业务流程。
- 确保符合企业安全与治理政策。
Claude AI 面向开发者:构建AI驱动的应用程序
14 小时这个面向希望将Claude AI集成到他们的应用程序中、构建AI驱动的聊天机器人以及通过AI驱动的自动化来增强软件功能的中级软件开发人员和AI工程师的中国(在线或现场) Instructor-led, live training (online or onsite)。
在这次培训结束时,参与者将能够:
- 使用Claude AI API将AI集成到应用程序中。
- 开发AI驱动的聊天机器人和虚拟助手。
- 利用AI驱动的自动化和NLP增强应用程序。
- 对不同的用例优化和微调Claude AI模型。
Claude AI 适用于 Workflow Automation 和 Productivity
14 小时这门由讲师主导的现场培训在中国(在线或现场)旨在帮助有志于将Claude AI整合到日常工作流程中以提高效率和自动化的初级专业人员。
培训结束时,参加者将能够:
- 利用Claude AI来自动化重复性任务和精简工作流程。
- 利用人工智慧驱动的自动化技术提高个人和团队的生产力。
- 将Claude AI与现有的商业工具和平台整合。
- 优化AI驱动的决策制定和任务管理。
使用Ollama部署和优化LLM
14 小时这是一场由讲师指导的现场培训,地点在中国(线上或现场),适合希望使用Ollama部署、优化和整合LLM的中级专业人士。
在培训结束时,参与者将能够:
- 使用Ollama设置和部署LLM。
- 优化AI模型以提升性能和效率。
- 利用GPU加速提升推理速度。
- 将Ollama整合到工作流程和应用程式中。
- 监控和维护AI模型的长期性能。
Fine-Tuning 与在 Ollama 上自订 AI 模型
14 小时这项由讲师指导的中国(线上或线下)培训,针对希望微调和自订Ollama上的AI模型以提升性能和特定领域应用的高级专业人士。
在培训结束时,参与者将能够:
- 在Ollama上设置高效的AI模型微调环境。
- 准备用于监督式微调和强化学习的数据集。
- 优化AI模型的性能、准确性和效率。
- 在生产环境中部署自订模型。
- 评估模型改进并确保其稳健性。
Claude AI 简介:对话式 AI 与商业应用
14 小时这项由讲师主持的现场培训(在线或现场)旨在帮助希望了解Claude AI的基本原理并利用它进行商业应用的初级商业专业人士、客户支持团队和技术爱好者。
培训结束时,参与者将能够:
- 了解Claude AI的能力和用例。
- 有效地设置和与Claude AI互动。
- 利用对话式AI自动化业务工作流程。
- 利用AI驱动的解决方案提升客户互动和支持。
LangGraph Applications in Finance
35 小时LangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and control over execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and operate LangGraph-based finance solutions with proper governance, observability, and compliance.
By the end of this training, participants will be able to:
- Design finance-specific LangGraph workflows aligned to regulatory and audit requirements.
- Integrate financial data standards and ontologies into graph state and tooling.
- Implement reliability, safety, and human-in-the-loop controls for critical processes.
- Deploy, monitor, and optimize LangGraph systems for performance, cost, and SLAs.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph Foundations: Graph-Based LLM Prompting and Chaining
14 小时LangGraph is a framework for building graph-structured LLM applications that support planning, branching, tool use, memory, and controllable execution.
This instructor-led, live training (online or onsite) is aimed at beginner-level developers, prompt engineers, and data practitioners who wish to design and build reliable, multi-step LLM workflows using LangGraph.
By the end of this training, participants will be able to:
- Explain core LangGraph concepts (nodes, edges, state) and when to use them.
- Build prompt chains that branch, call tools, and maintain memory.
- Integrate retrieval and external APIs into graph workflows.
- Test, debug, and evaluate LangGraph apps for reliability and safety.
Format of the Course
- Interactive lecture and facilitated discussion.
- Guided labs and code walkthroughs in a sandbox environment.
- Scenario-based exercises on design, testing, and evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph in Healthcare: Workflow Orchestration for Regulated Environments
35 小时LangGraph enables stateful, multi-actor workflows powered by LLMs with precise control over execution paths and state persistence. In healthcare, these capabilities are crucial for compliance, interoperability, and building decision-support systems that align with medical workflows.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and manage LangGraph-based healthcare solutions while addressing regulatory, ethical, and operational challenges.
By the end of this training, participants will be able to:
- Design healthcare-specific LangGraph workflows with compliance and auditability in mind.
- Integrate LangGraph applications with medical ontologies and standards (FHIR, SNOMED CT, ICD).
- Apply best practices for reliability, traceability, and explainability in sensitive environments.
- Deploy, monitor, and validate LangGraph applications in healthcare production settings.
Format of the Course
- Interactive lecture and discussion.
- Hands-on exercises with real-world case studies.
- Implementation practice in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph for Legal Applications
35 小时LangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and precise control over execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and operate LangGraph-based legal solutions with the necessary compliance, traceability, and governance controls.
By the end of this training, participants will be able to:
- Design legal-specific LangGraph workflows that preserve auditability and compliance.
- Integrate legal ontologies and document standards into graph state and processing.
- Implement guardrails, human-in-the-loop approvals, and traceable decision paths.
- Deploy, monitor, and maintain LangGraph services in production with observability and cost controls.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Dynamic Workflows with LangGraph and LLM Agents
14 小时LangGraph is a framework for composing graph-structured LLM workflows that support branching, tool use, memory, and controllable execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level engineers and product teams who wish to combine LangGraph’s graph logic with LLM agent loops to build dynamic, context-aware applications such as customer support agents, decision trees, and information retrieval systems.
By the end of this training, participants will be able to:
- Design graph-based workflows that coordinate LLM agents, tools, and memory.
- Implement conditional routing, retries, and fallbacks for robust execution.
- Integrate retrieval, APIs, and structured outputs into agent loops.
- Evaluate, monitor, and harden agent behavior for reliability and safety.
Format of the Course
- Interactive lecture and facilitated discussion.
- Guided labs and code walkthroughs in a sandbox environment.
- Scenario-based design exercises and peer reviews.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph for Marketing Automation
14 小时LangGraph is a graph-based orchestration framework that enables conditional, multi-step LLM and tool workflows, ideal for automating and personalizing content pipelines.
This instructor-led, live training (online or onsite) is aimed at intermediate-level marketers, content strategists, and automation developers who wish to implement dynamic, branching email campaigns and content generation pipelines using LangGraph.
By the end of this training, participants will be able to:
- Design graph-structured content and email workflows with conditional logic.
- Integrate LLMs, APIs, and data sources for automated personalization.
- Manage state, memory, and context across multi-step campaigns.
- Evaluate, monitor, and optimize workflow performance and delivery outcomes.
Format of the Course
- Interactive lectures and group discussions.
- Hands-on labs implementing email workflows and content pipelines.
- Scenario-based exercises on personalization, segmentation, and branching logic.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Ollama 入门指南:运行本地 AI 模型
7 小时本课程为讲师指导的中国(线上或线下)培训,旨在帮助初级专业人士安装、配置并使用Ollama在本地机器上运行AI模型。
培训结束后,学员将能够:
- 了解Ollama的基本原理及其功能。
- 设置Ollama以运行本地AI模型。
- 使用Ollama部署并与LLM进行互动。
- 优化AI工作负载的性能和资源使用。
- 探索本地AI部署在各行业中的应用案例。