感谢您发送咨询!我们的团队成员将很快与您联系。
感谢您发送预订!我们的团队成员将很快与您联系。
课程大纲
Overview of LLM Architecture and Attack Surface
- How LLMs are built, deployed, and accessed via APIs
- Key components in LLM app stacks (e.g., prompts, agents, memory, APIs)
- Where and how security issues arise in real-world use
Prompt Injection and Jailbreak Attacks
- What is prompt injection and why it’s dangerous
- Direct and indirect prompt injection scenarios
- Jailbreaking techniques to bypass safety filters
- Detection and mitigation strategies
Data Leakage and Privacy Risks
- Accidental data exposure through responses
- PII leaks and model memory misuse
- Designing privacy-conscious prompts and retrieval-augmented generation (RAG)
LLM Output Filtering and Guarding
- Using Guardrails AI for content filtering and validation
- Defining output schemas and constraints
- Monitoring and logging unsafe outputs
Human-in-the-Loop and Workflow Approaches
- Where and when to introduce human oversight
- Approval queues, scoring thresholds, fallback handling
- Trust calibration and role of explainability
Secure LLM App Design Patterns
- Least privilege and sandboxing for API calls and agents
- Rate limiting, throttling, and abuse detection
- Robust chaining with LangChain and prompt isolation
Compliance, Logging, and Governance
- Ensuring auditability of LLM outputs
- Maintaining traceability and prompt/version control
- Aligning with internal security policies and regulatory needs
Summary and Next Steps
要求
- An understanding of large language models and prompt-based interfaces
- Experience building LLM applications using Python
- Familiarity with API integrations and cloud-based deployments
Audience
- AI developers
- Application and solution architects
- Technical product managers working with LLM tools
14 小时