llms-generative-ai

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • OVERALL (SAFE): The overall skill structure and provided documentation show no signs of prompt injection or malicious intent.
  • DATA_EXFILTRATION (SAFE): The scripts/validate.py script only checks for the existence of local files and directories within its own package. No network calls, reads of sensitive paths (such as SSH keys), or environment-variable access were found.
  • REMOTE_CODE_EXECUTION (SAFE): The script uses yaml.safe_load(), which restricts parsing to plain YAML types (scalars, lists, maps) and therefore prevents the execution of arbitrary Python code that could otherwise be embedded in malicious YAML files via constructor tags.
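The safe_load() finding above can be illustrated with a minimal sketch (not taken from the audited package; the document strings below are hypothetical): safe_load() parses plain YAML data normally, but rejects Python constructor tags that an unsafe loader would execute.

```python
import yaml

# A benign document: safe_load handles plain scalars, lists, and maps.
benign = """
name: llms-generative-ai
checks:
  - validate_paths
"""
print(yaml.safe_load(benign))
# {'name': 'llms-generative-ai', 'checks': ['validate_paths']}

# A malicious document using a Python constructor tag. An unsafe loader
# (yaml.load with yaml.UnsafeLoader) would import os and run the command;
# safe_load refuses the tag and raises a ConstructorError instead.
malicious = "!!python/object/apply:os.system ['echo pwned']"
try:
    yaml.safe_load(malicious)
except yaml.YAMLError as exc:
    print("rejected:", type(exc).__name__)
```

The key design point is that safe_load() resolves only the core YAML tag set, so arbitrary-object construction is impossible regardless of the file's contents.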
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 06:07 PM