llm-application-dev-langchain-agent
Pass
Audited by Gen Agent Trust Hub on Feb 19, 2026
Risk Level: SAFE
Full Analysis
- [Data Exposure & Exfiltration] (SAFE): The skill specifically instructs users to 'Secure secrets: Environment variables, never hardcode' and does not contain any hardcoded API keys or credentials. Code snippets for vector stores and LLM initialization use variables rather than literal strings for authentication.
- [Unverifiable Dependencies & Remote Code Execution] (SAFE): All Python imports refer to standard, reputable packages within the LangChain ecosystem (e.g., langchain-anthropic, langgraph). No external script downloads or shell-piped execution patterns are present.
- [Indirect Prompt Injection] (SAFE): The skill provides patterns for RAG (Retrieval-Augmented Generation) and tool-calling, which involve processing external data. However, as this is the primary stated purpose of the skill and no specific vulnerabilities are introduced, this is considered safe in this context. The skill's focus on 'production-grade' and 'observability' (LangSmith) further mitigates risk through recommended monitoring.
- [Prompt Injection] (SAFE): The instructions are focused on defining the agent's persona as an expert developer and do not contain any instructions that attempt to bypass AI safety filters or override system constraints.
Audit Metadata