llm-application-dev-langchain-agent
Pass
Audited by Gen Agent Trust Hub on Apr 14, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The skill uses the
$ARGUMENTSplaceholder to interpolate user input directly into its instructions without sanitization or boundary markers. This creates a surface for indirect prompt injection where malicious input could override the intended behavior of the developed agent. (1) Ingestion points:$ARGUMENTSplaceholder inSKILL.md. (2) Boundary markers: Not present to delimit untrusted data from instructions. (3) Capability inventory: The skill provides templates for agents with tool-calling capabilities, network access, and graph state manipulation. (4) Sanitization: No input validation or escaping logic is specified.
Audit Metadata