pytorch-lightning
Warn
Audited by Gen Agent Trust Hub on Mar 28, 2026
Risk Level: MEDIUM
Tags: CREDENTIALS_UNSAFE, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [CREDENTIALS_UNSAFE]: The documentation in `references/hyperparameter-tuning.md` includes a PostgreSQL connection string containing embedded credential placeholders: `postgresql://user:pass@localhost/optuna`. While intended as an example, this pattern can lead to accidental exposure of sensitive information if it is adapted for production use.
- [REMOTE_CODE_EXECUTION]: Multiple documents, including `SKILL.md` and `references/callbacks.md`, demonstrate model checkpoint loading via the `load_from_checkpoint` method. This method internally uses PyTorch's `torch.load`, which defaults to Python's `pickle` module for deserialization. This is a known security risk: loading an untrusted or maliciously crafted checkpoint can result in arbitrary code execution on the host machine.
- [COMMAND_EXECUTION]: The hyperparameter-tuning workflows described in `references/hyperparameter-tuning.md` use libraries such as Ray Tune and Optuna to execute arbitrary Python functions (e.g., `train_fn`, `objective`), providing an interface for dynamic code execution within the environment.
- [PROMPT_INJECTION]: The skill exposes an indirect prompt-injection surface by ingesting and processing external model checkpoint files. Ingestion points: `LitModel.load_from_checkpoint`, referenced in `SKILL.md` and `references/callbacks.md`. Boundary markers: none identified; there are no explicit instructions or delimiters to distinguish data from potentially embedded malicious instructions. Capability inventory: the skill uses `L.Trainer`, which can invoke shell commands for distributed operations and performs file-system read/write tasks. Sanitization: no validation or integrity checks (such as cryptographic signatures) are performed on external checkpoint data before loading.
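One way to mitigate the CREDENTIALS_UNSAFE finding is to read the storage URL from the environment rather than embedding it in documentation or code. A minimal sketch; the `OPTUNA_STORAGE_URL` variable name is illustrative, not part of the audited skill:

```python
import os


def get_storage_url() -> str:
    """Read the Optuna storage URL from the environment instead of
    hardcoding credentials in documentation or source code."""
    # OPTUNA_STORAGE_URL is an illustrative name chosen for this sketch.
    url = os.environ.get("OPTUNA_STORAGE_URL")
    if url is None:
        raise RuntimeError("OPTUNA_STORAGE_URL is not set")
    return url
```

The same URL format from the finding (e.g. `postgresql://user:pass@localhost/optuna`) would then live only in the deployment environment, never in version control.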
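The REMOTE_CODE_EXECUTION finding rests on how `pickle` reconstructs objects: an attacker-controlled file can specify any callable to run during deserialization. A harmless, self-contained demonstration of the mechanism (the payload here only calls `print`, but a real one could invoke `os.system`):

```python
import pickle


class MaliciousPayload:
    # __reduce__ tells pickle how to rebuild the object on load;
    # a crafted checkpoint can return any callable plus arguments here.
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))


data = pickle.dumps(MaliciousPayload())
pickle.loads(data)  # the callable runs as a side effect of loading
```

Recent PyTorch versions offer `torch.load(..., weights_only=True)`, which restricts deserialization to tensor data and is the usual mitigation when the checkpoint source is not fully trusted.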
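The COMMAND_EXECUTION finding reflects the tuners' basic contract: whatever callable is passed as the objective runs in-process with the caller's full privileges. A tiny pure-Python stand-in (not the Ray Tune or Optuna API) that mirrors this invocation pattern:

```python
def tune(objective, trials):
    """Stand-in for a tuner loop: invokes the user-supplied callable once
    per trial and returns the best score. Note the tuner never inspects
    what `objective` does -- any code inside it simply executes."""
    return min(objective(t) for t in trials)


# The callable's body is opaque to the tuner; it runs unconditionally.
best = tune(lambda t: (t - 3) ** 2, range(6))
```

This is why the audit treats objective functions as a dynamic-code-execution surface: the library provides no sandboxing around them.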
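The missing-sanitization point could be addressed by checking a checkpoint against a trusted digest before it is ever deserialized. A sketch using the standard library; the expected SHA-256 value would have to arrive over a trusted channel, and `verify_checkpoint` is a hypothetical helper, not part of the audited skill:

```python
import hashlib
from pathlib import Path


def verify_checkpoint(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the trusted
    value; callers should refuse to deserialize on a mismatch."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

A caller would gate loading on the result, e.g. only invoking `LitModel.load_from_checkpoint(path)` after `verify_checkpoint(path, known_digest)` returns True.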
Audit Metadata