multi-llm-consult
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFEDATA_EXFILTRATIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- Data Exfiltration (LOW): The skill transmits information to external endpoints: api.openai.com, generativelanguage.googleapis.com, and dashscope.aliyuncs.com. The latter is not included in the standard whitelist of trusted domains for data exfiltration analysis. The transmission of local file content (e.g., via --context-file) to these external providers is a core function but constitutes data export.- Indirect Prompt Injection (LOW): The skill processes untrusted external data provided through file arguments, creating a vulnerability surface.
- Ingestion points: Data is ingested via --prompt-file and --context-file arguments in the consult_llm.py script.
- Boundary markers: No programmatic boundary markers or delimiters are specified in the usage examples.
- Capability inventory: The system executes a Python script that performs authenticated network requests to external LLM APIs.
- Sanitization: The workflow instructions rely on manual user sanitization ('sanitize sensitive data before sending it out') rather than automated filtering.- Command Execution (SAFE): The skill executes a local script (scripts/consult_llm.py) to perform its duties. This is the intended behavior for this extension.- Credentials Unsafe (SAFE): The documentation describes storing API keys in environment variables and settings.json. The examples use placeholders ('...') and variable names, which does not constitute a hardcoded credential finding.
Audit Metadata