addon-direct-llm-sdk
Pass
Audited by Gen Agent Trust Hub on Mar 2, 2026
Risk Level: SAFE
Tags: COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The validation checklist contains a bash snippet that executes the `test` utility to verify file presence and `rg` (ripgrep) to perform local text searches within the source directory for configuration keywords.
- [PROMPT_INJECTION]: The skill defines a workflow for ingesting and returning data from external AI providers, which introduces a potential surface for indirect prompt injection, where untrusted model output could influence downstream processing.
- Ingestion points: External LLM responses are processed via `src/{{MODULE_NAME}}/llm/client.*` and exposed through API routes in `src/{{MODULE_NAME}}/api/routes/llm.*`.
- Boundary markers: The skill encourages typed response shapes (JSON) but does not explicitly require delimiters or escaping for the `outputText` field to prevent command/instruction confusion.
- Capability inventory: The generated code performs network operations to provider APIs and logs metadata to the local environment.
- Sanitization: The skill incorporates defensive guardrails that mandate validation of the `SDK_PROVIDER` and `DEFAULT_MODEL` inputs and require normalization of provider-specific exceptions.
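The file-presence and keyword-search checks flagged under COMMAND_EXECUTION (run via `test` and `rg` in the audited checklist) can be mirrored in a standalone script. This is an illustrative sketch only; the file names and keywords below are assumptions, not the audited snippet's actual targets:

```python
from pathlib import Path

# Hypothetical targets; the audited checklist's real file list and
# configuration keywords are not reproduced here.
REQUIRED_FILES = ["pyproject.toml", "README.md"]
CONFIG_KEYWORDS = ["SDK_PROVIDER", "DEFAULT_MODEL"]

def validate_source(root: str) -> list[str]:
    """Return a list of human-readable problems found under `root`."""
    problems = []
    base = Path(root)
    # Mirrors `test -f <file>`: verify each required file exists.
    for name in REQUIRED_FILES:
        if not (base / name).is_file():
            problems.append(f"missing file: {name}")
    # Mirrors `rg <keyword>`: scan readable text files for config keywords.
    found = {kw: False for kw in CONFIG_KEYWORDS}
    for path in base.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        for kw in CONFIG_KEYWORDS:
            if kw in text:
                found[kw] = True
    for kw, ok in found.items():
        if not ok:
            problems.append(f"keyword not found anywhere: {kw}")
    return problems
```

Running the checks in-process rather than shelling out also removes the command-execution surface the finding describes.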
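One way to supply the boundary markers the audit notes are missing is to wrap untrusted model output in explicit, unguessable delimiters before it reaches any downstream prompt. A minimal sketch, assuming the `outputText` value is a plain string (the marker format here is an illustration, not part of the skill):

```python
import secrets

def fence_untrusted(output_text: str) -> str:
    """Wrap untrusted LLM output in random boundary markers so downstream
    processing can treat everything inside as data, not instructions."""
    # A random tag prevents the untrusted text from forging the close marker.
    tag = secrets.token_hex(8)
    begin = f"<untrusted-{tag}>"
    end = f"</untrusted-{tag}>"
    # Refuse to fence text that (improbably) already contains the markers.
    if begin in output_text or end in output_text:
        raise ValueError("output collides with boundary marker")
    return f"{begin}\n{output_text}\n{end}"
```

A consumer would then instruct its own model to ignore any instructions appearing between the matching markers.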
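The sanitization guardrails (validate `SDK_PROVIDER` and `DEFAULT_MODEL`, normalize provider-specific exceptions) could take a shape like the following sketch; the allow-list values and the `ProviderError` wrapper are assumptions for illustration, not part of the audited skill:

```python
import re

ALLOWED_PROVIDERS = {"openai", "anthropic"}  # assumed allow-list
MODEL_NAME = re.compile(r"^[A-Za-z0-9._:-]{1,128}$")

class ProviderError(Exception):
    """Normalized wrapper for provider-specific SDK exceptions."""
    def __init__(self, provider: str, cause: Exception):
        super().__init__(f"{provider}: {cause}")
        self.provider = provider
        self.cause = cause

def validate_config(provider: str, model: str) -> None:
    """Reject unknown providers and malformed model names up front."""
    if provider not in ALLOWED_PROVIDERS:
        raise ValueError(f"unsupported SDK_PROVIDER: {provider!r}")
    if not MODEL_NAME.match(model):
        raise ValueError(f"invalid DEFAULT_MODEL: {model!r}")

def call_provider(provider: str, model: str) -> str:
    """Validate config, then surface any SDK failure as ProviderError."""
    validate_config(provider, model)
    try:
        # Placeholder for the real provider SDK call.
        raise TimeoutError("upstream timed out")
    except Exception as exc:
        raise ProviderError(provider, exc) from exc
```

Normalizing to a single exception type keeps provider-specific error details out of API responses while preserving them for local logging.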
Audit Metadata