deep-learning

Pass

Audited by Gen Agent Trust Hub on Feb 15, 2026

Risk Level: LOW
Full Analysis
  • PROMPT_INJECTION (SAFE): No malicious instructions or bypass attempts were found in the markdown files or metadata. The instructional language is standard for technical documentation.
  • DATA_EXFILTRATION (SAFE): No hardcoded credentials, sensitive file access, or unauthorized network operations were detected. The use of standard ML libraries like Weights & Biases (wandb) for experiment tracking is typical for this domain.
  • REMOTE_CODE_EXECUTION (SAFE): The skill does not contain any patterns for downloading and executing remote scripts. The provided scripts are for local validation of configuration files.
  • COMMAND_EXECUTION (SAFE): No dangerous subprocess calls or shell executions were found. The validation script uses standard file system operations and never passes user-provided strings to a shell.
  • EXTERNAL_DOWNLOADS (SAFE): No external downloads or unverifiable dependencies are referenced. The Python imports (torch, onnxruntime, wandb) are standard packages in the AI/ML ecosystem.
  • INDIRECT_PROMPT_INJECTION (SAFE): The skill's primary function is to provide code templates and validation. It does not ingest untrusted external data (such as web content) that could lead to indirect injection attacks.
  • DYNAMIC_EXECUTION (SAFE): The skill uses yaml.safe_load() in its validation script, which is the secure method for parsing YAML files. It also uses standard PyTorch JIT scripting for model serialization, which is a routine practice in deep learning.
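The safe-parsing pattern cited in the DYNAMIC_EXECUTION finding can be sketched as follows. The config keys below are hypothetical, not taken from the audited skill; the point is that yaml.safe_load() only constructs plain YAML types (mappings, sequences, scalars), while yaml.load() with an unsafe loader can instantiate arbitrary Python objects from a crafted file.

```python
import yaml  # PyYAML

# Hypothetical training config, illustrating ordinary safe parsing.
config_text = """
model:
  name: resnet50
  lr: 0.001
epochs: 10
"""

config = yaml.safe_load(config_text)
print(config["model"]["name"])  # → resnet50

# A payload that would execute code under an unsafe loader is
# rejected by safe_load with a ConstructorError (a YAMLError subclass).
malicious = "!!python/object/apply:os.system ['echo pwned']"
try:
    yaml.safe_load(malicious)
except yaml.YAMLError as exc:
    print("rejected:", type(exc).__name__)  # → rejected: ConstructorError
```

This is why an auditor treats yaml.safe_load() as the secure default for configuration files: untrusted YAML cannot trigger object construction or code execution through it.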
Audit Metadata
Risk Level: LOW
Analyzed: Feb 15, 2026, 11:38 PM