new-llm
Pass
Audited by Gen Agent Trust Hub on Mar 12, 2026
Risk Level: SAFE
PROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The skill exposes a surface for indirect prompt injection by using unsanitized user arguments (`provider`, `model`, `base_url`) to dynamically construct filesystem paths and Python source code.
- Ingestion points: User-provided arguments extracted from the `$ARGUMENTS` variable (SKILL.md).
- Boundary markers: Absent; there are no instructions to delimit or ignore instructions within the user-provided data.
- Capability inventory: The skill uses the `Write` tool to create files on disk and the `Glob` tool for directory checks (SKILL.md).
- Sanitization: No validation or escaping logic is defined for handling path traversal characters (e.g., `..`) or code injection sequences in the input parameters.
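As a hedged illustration of the missing sanitization the findings above describe, a skill in this position could validate parameters like `provider` or `model` before interpolating them into filesystem paths. The helper name and allowlist pattern below are hypothetical, not part of the audited skill; this is a minimal sketch of the general technique, assuming path components are simple identifier-like names:

```python
import re
from pathlib import Path

# Hypothetical allowlist: letters, digits, dots, dashes, underscores only.
_SAFE_NAME = re.compile(r"^[A-Za-z0-9._-]+$")

def safe_join(base_dir: str, name: str) -> Path:
    """Join name onto base_dir, rejecting traversal attempts like '..'."""
    if not _SAFE_NAME.fullmatch(name) or ".." in name:
        raise ValueError(f"unsafe path component: {name!r}")
    base = Path(base_dir).resolve()
    candidate = (base / name).resolve()
    # Defense in depth: confirm the resolved path stays under base_dir.
    if base not in candidate.parents:
        raise ValueError(f"path escapes base directory: {name!r}")
    return candidate
```

A value such as `../../etc/passwd` fails both the allowlist match and the containment check, while an ordinary model name like `gpt-4o` passes through unchanged. The same allowlist approach would also block shell or Python metacharacters before the values reach generated source code.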
Audit Metadata