solo-sgr

Pass

Audited by Gen Agent Trust Hub on Apr 9, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill serves as a technical resource and development tool for implementing structured reasoning pipelines. It provides educational content, design patterns, and reference code for developers using LLM APIs.
  • [COMMAND_EXECUTION]: The skill uses platform tools such as Glob, Grep, Read, and Bash to implement an 'audit' mode that scans the local project for schema definitions (Pydantic and Zod) and evaluates them. This behavior is transparently documented and consistent with the skill's stated purpose of helping developers optimize their reasoning schemas.
  • [EXTERNAL_DOWNLOADS]: The documentation references external implementation libraries and community resources, including the openai Python SDK, pydantic, and repositories like vamplabAI/sgr-agent-core and the author's own fortunto2/openai-oxide. These references are provided for development context and do not involve unauthorized or hidden downloads.
  • [DATA_EXFILTRATION]: No evidence of sensitive data collection or exfiltration was found. The included demonstration script (sgr-demo.py) calls the standard OpenAI API, a common practice for AI agents, and requires the user to supply API credentials via environment variables.
  • [DATA_EXPOSURE]: The skill contains logic to read project files to identify schema models. While this involves data ingestion, it is scoped to the developer's project files and intended for architectural review, presenting no inherent security risk under normal use.
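For context, the schema scan described in the findings above could be sketched roughly as follows. This is an illustrative assumption about how such an audit mode might locate Pydantic models with ordinary file globbing and pattern matching, not the skill's actual implementation; the function name `find_schema_models` and the regex are hypothetical.

```python
# Hypothetical sketch of a project scan for Pydantic schema definitions.
# This approximates the audit behavior described in the report; it is not
# the skill's real code.
import re
from pathlib import Path

# Loosely matches "class Foo(BaseModel):" and direct BaseModel subclasses.
PYDANTIC_CLASS = re.compile(r"^class\s+(\w+)\(.*BaseModel.*\):", re.MULTILINE)

def find_schema_models(root: str) -> dict[str, list[str]]:
    """Map each .py file under root to the Pydantic model names it defines."""
    results: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        names = PYDANTIC_CLASS.findall(text)
        if names:
            results[str(path)] = names
    return results
```

A scan of this shape only reads files inside the given project root, which matches the report's assessment that the data ingestion is scoped to the developer's own project.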
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Apr 9, 2026, 02:54 PM