model-equivariance-auditor

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • Prompt Injection (SAFE): No instructions found that attempt to override agent behavior, bypass safety filters, or extract system prompts.
  • Data Exposure & Exfiltration (SAFE): No hardcoded credentials, access to sensitive file paths, or unauthorized network operations were identified. The code snippets operate on local model objects.
  • Obfuscation (SAFE): No evidence of Base64 encoding, zero-width characters, homoglyphs, or other obfuscation techniques intended to hide malicious intent.
  • Unverifiable Dependencies & Remote Code Execution (SAFE): While the documentation mentions the 'e3nn' and 'torch' libraries, there are no commands to download or execute remote scripts (e.g., curl | bash). All code is intended for local execution within a controlled ML environment.
  • Privilege Escalation (SAFE): No use of sudo, setuid-bit manipulation via chmod, or other commands that attempt to elevate privileges beyond those of the current user.
  • Persistence Mechanisms (SAFE): No attempts to modify shell profiles, cron jobs, or system services to maintain persistent access.
  • Indirect Prompt Injection (SAFE): The skill provides static resources and does not define a pipeline for processing untrusted external data in a way that could influence agent behavior.
  • Dynamic Execution (SAFE): The use of 'model.eval()' is a standard PyTorch method for switching a model to evaluation mode; it is unrelated to Python's built-in 'eval()' function, which executes arbitrary code from a string.
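The distinction flagged in the Dynamic Execution finding can be illustrated with a minimal sketch (the model below is hypothetical, not taken from the audited skill): torch.nn.Module.eval() merely toggles training-mode flags on layers such as Dropout and BatchNorm, whereas the Python builtin eval() interprets a string as code.

```python
import torch.nn as nn

# Hypothetical minimal model, purely for illustration.
model = nn.Sequential(nn.Linear(4, 2), nn.Dropout(p=0.5))

# nn.Module.eval() only switches the module (and its children)
# out of training mode; it executes no arbitrary code.
model.eval()
print(model.training)  # prints False

# By contrast, the Python builtin eval() runs an arbitrary string
# as code -- the unsafe pattern this audit category checks for:
#   eval("__import__('os').system('...')")  # never with untrusted input
```

This is why an auditor can treat 'model.eval()' as benign on sight: it is a method call on a local model object, not a string-interpreting builtin.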
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:11 PM