AI Engineer
Pass
Audited by Gen Agent Trust Hub on Mar 21, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill functions as a technical guide and template collection for building LLM applications, using reputable and well-known industry libraries like LangChain and LangGraph.
- [SAFE]: The code implementation for mathematical tools uses the
ast(Abstract Syntax Trees) module to safely parse and evaluate expressions, avoiding the risks associated with dynamic code execution. - [SAFE]: No hardcoded credentials or sensitive information are present; placeholders are correctly used for API keys and database connection strings in the code examples.
- [PROMPT_INJECTION]: The skill demonstrates RAG and Agent patterns which naturally involve processing untrusted input from external documents and user queries. This represents an inherent indirect prompt injection surface.
- Ingestion points: User queries and document content in
LangChain架构.mdandRAG实现.md. - Boundary markers: The code utilizes prompt templates with clear delimiters and instructions to ensure responses are grounded in the provided context.
- Capability inventory: The templates provide capabilities for LLM invocation, tool execution (database search and math), and persistent state management.
- Sanitization: Standard for educational templates, no specific adversarial sanitization logic is included for the data processed within the RAG pipeline.
Audit Metadata