addon-direct-llm-sdk

Pass

Audited by Gen Agent Trust Hub on Mar 2, 2026

Risk Level: SAFE
Finding Tags: COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The validation checklist contains a bash snippet that runs the test utility to verify file presence and rg (ripgrep) to search the source directory locally for configuration keywords.
  • [PROMPT_INJECTION]: The skill defines a workflow for ingesting and returning data from external AI providers, which introduces a potential surface for indirect prompt injection where untrusted model output could influence downstream processing.
  • Ingestion points: External LLM responses are processed via src/{{MODULE_NAME}}/llm/client.* and exposed through API routes in src/{{MODULE_NAME}}/api/routes/llm.*.
  • Boundary markers: The skill encourages typed response shapes (JSON) but does not explicitly require the use of delimiters or escaping for the outputText field to prevent command/instruction confusion.
  • Capability inventory: The generated code performs network operations to provider APIs and logs metadata to the local environment.
  • Sanitization: The skill incorporates defensive guardrails that mandate validation for SDK_PROVIDER and DEFAULT_MODEL inputs and require normalization of provider-specific exceptions.
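The boundary-marker finding above notes that typed JSON responses alone do not prevent untrusted model output from being read as instructions. A minimal sketch of the missing mitigation, assuming a hypothetical fence_output_text helper (the delimiter strings and function name are illustrative, not part of the audited skill):

```python
import json

# Hypothetical delimiters marking untrusted model output for downstream
# consumers; any literal occurrence inside the payload is stripped first
# so the payload cannot forge or terminate the fence.
UNTRUSTED_BEGIN = "<<<UNTRUSTED_MODEL_OUTPUT>>>"
UNTRUSTED_END = "<<<END_UNTRUSTED_MODEL_OUTPUT>>>"

def fence_output_text(output_text: str) -> str:
    """Escape and delimit an untrusted outputText value."""
    # Remove any embedded delimiter collisions before escaping.
    cleaned = output_text.replace(UNTRUSTED_BEGIN, "").replace(UNTRUSTED_END, "")
    # json.dumps escapes quotes, backslashes, and control characters,
    # yielding a single quoted string safe to embed in prompts or logs.
    escaped = json.dumps(cleaned)
    return f"{UNTRUSTED_BEGIN}{escaped}{UNTRUSTED_END}"
```

Downstream code would then treat only text outside the fence as trusted instructions and everything between the markers as inert data.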
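The sanitization finding above says the skill mandates validation of the SDK_PROVIDER and DEFAULT_MODEL inputs. A sketch of what such a guardrail could look like, assuming a hypothetical allowlist and character policy (the provider names and validate_config function are illustrative assumptions, not taken from the skill):

```python
# Hypothetical provider allowlist; the audited skill defines its own set.
ALLOWED_PROVIDERS = {"openai", "anthropic", "google"}

# Characters plausibly needed in model identifiers (e.g. "gpt-4.1",
# "models/gemini-pro"); anything else is rejected.
MODEL_CHARS = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-:_/")

def validate_config(sdk_provider: str, default_model: str) -> None:
    """Reject unknown providers and malformed model names before use."""
    if sdk_provider not in ALLOWED_PROVIDERS:
        raise ValueError(f"unsupported SDK_PROVIDER: {sdk_provider!r}")
    if not default_model or not set(default_model) <= MODEL_CHARS:
        raise ValueError(f"invalid DEFAULT_MODEL: {default_model!r}")
```

Validating these values at startup, before any network call, keeps provider-specific exception normalization from masking a misconfiguration.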
Audit Metadata
Risk Level: SAFE
Analyzed: Mar 2, 2026, 02:27 PM