scorable-integration
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS] (SAFE): The skill instructs users to install the
scorable(Python) and@root-signals/scorable(Node.js) packages. These are standard SDK installations for the intended service functionality. - [COMMAND_EXECUTION] (SAFE): The instructions use standard
curlcommands to interact with the Scorable REST API for judge generation and execution. These are appropriate for the developer-centric use case. - [DATA_EXPOSURE] (SAFE): The skill demonstrates high awareness of credential safety. It provides specific instructions for environment variable management and includes a security note: 'Do not ask the user to paste the key into this session.'
- [PROMPT_INJECTION] (LOW): The skill exhibits an Indirect Prompt Injection surface (Category 8) because it processes untrusted LLM inputs and outputs to send them to an external evaluation API.
- Ingestion points: Untrusted data enters the agent context via the
request,response, andcontextsparameters inSKILL.md(Step 4) and all reference files. - Boundary markers: Data is passed within JSON structures, but no specific NL delimiters or safety warnings for the LLM regarding embedded instructions in the evaluation payload are provided.
- Capability inventory: The skill performs network POST requests to
api.scorable.ai(found inSKILL.mdand all reference files). - Sanitization: No explicit sanitization or filtering of the LLM-generated content is performed before transmission.
Audit Metadata