machine-learning

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • Prompt Injection (SAFE): The skill contains no instructions designed to subvert the agent's safety protocols or override its system prompt.
  • Data Exposure & Exfiltration (SAFE): No hardcoded credentials, API keys, or access to sensitive system files like SSH keys or AWS configs were found.
  • Obfuscation (SAFE): No encoded strings, zero-width characters, or other techniques for hiding malicious code were found.
  • Unverifiable Dependencies & Remote Code Execution (SAFE): References to libraries like pandas and scikit-learn are standard for ML, and no remote script execution (e.g., curl | bash) is present.
  • Privilege Escalation (SAFE): No commands to acquire root privileges or alter file permissions or system configuration (e.g., sudo or chmod) were detected.
  • Persistence Mechanisms (SAFE): No evidence of attempts to install services or cron jobs, or to modify shell profiles.
  • Metadata Poisoning (SAFE): The skill's metadata accurately reflects its content and purpose.
  • Indirect Prompt Injection (SAFE): The skill is a static reference and does not ingest untrusted data in a way that creates an injection vulnerability.
  • Time-Delayed / Conditional Attacks (SAFE): No logic that triggers based on time or on specific environment variables was found.
  • Dynamic Execution (SAFE): The code snippets are static and do not use dynamic evaluation functions like eval() or exec().
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 06:04 PM