llm-integration

Pass

Audited by Gen Agent Trust Hub on Mar 1, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION]: The skill is designed to handle LLM tasks and contains no malicious internal instructions that bypass safety guidelines or override agent behavior. Input interpolation follows standard application patterns.
  • [DATA_EXFILTRATION]: Sensitive credentials (ANTHROPIC_API_KEY, OPENAI_API_KEY) are managed securely via environment variables. Network requests are constrained to well-known LLM provider domains and local services.
  • [EXTERNAL_DOWNLOADS]: Python requirements (anthropic, openai, httpx, tenacity) are reputable libraries from official repositories. No evidence of unverified remote code execution or suspicious script downloading was found.
  • [COMMAND_EXECUTION]: Static analysis shows no use of dangerous subprocess calls or shell execution. File system interactions are limited to safe configuration loading and skill structure validation using standard libraries.
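The credential and network findings above describe a common pattern: keys read from the environment at runtime, and outbound requests restricted to known provider hosts. A minimal sketch of that pattern, assuming keys are fetched at client-construction time (`load_api_key`, `is_allowed`, and the `ALLOWED_HOSTS` set are illustrative helpers, not code from the audited skill):

```python
import os
from urllib.parse import urlparse

# Illustrative allowlist of "well-known LLM provider domains and local services".
ALLOWED_HOSTS = {"api.anthropic.com", "api.openai.com", "localhost", "127.0.0.1"}

def load_api_key(var_name: str) -> str:
    """Fetch a credential from the environment; the key never appears in source."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it before running")
    return key

def is_allowed(url: str) -> bool:
    """Constrain outbound requests to allowlisted provider hosts."""
    return urlparse(url).hostname in ALLOWED_HOSTS
```

Under this sketch, `is_allowed("https://api.anthropic.com/v1/messages")` passes while an arbitrary external host does not, which is the behavior the data-exfiltration check looks for.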
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 1, 2026, 01:36 AM