labarchive-integration

Audit result: Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGH
Flagged categories: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, CREDENTIALS_UNSAFE
Full Analysis
  • EXTERNAL_DOWNLOADS (HIGH): The skill explicitly instructs users to install the labarchives-py package from a personal GitHub repository (git+https://github.com/mcmero/labarchives-py). This bypasses official package registries like PyPI and introduces significant supply-chain risk, as the source code is not from a trusted organization and could be maliciously altered at any time.
  • REMOTE_CODE_EXECUTION (HIGH): Installing unverified packages from Git repositories allows for arbitrary code execution during the installation phase (via setup.py) and subsequent runtime execution, effectively granting the package owner full control over the environment where the skill runs.
  • CREDENTIALS_UNSAFE (MEDIUM): The setup_config.py script prompts for and stores high-privilege credentials (access_key_id, access_password, and user_external_password) in a local config.yaml file. While the script attempts to mitigate risk by setting file permissions to 600, the credentials remain in plaintext on the disk, making them vulnerable if the local environment is compromised.
  • INDIRECT_PROMPT_INJECTION (MEDIUM): The skill is designed to ingest data (notebook entries, comments, attachments) from the LabArchives API. This constitutes a vulnerability surface where an attacker with access to a notebook could embed malicious instructions. If an AI agent later processes these backups or API responses without strict sanitization, it could be manipulated into performing unintended actions.
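A common mitigation for the EXTERNAL_DOWNLOADS and REMOTE_CODE_EXECUTION findings is to pin the Git dependency to a specific, reviewed commit rather than a floating branch, so the installed code cannot silently change upstream. A sketch of the requirements entry (the commit SHA is a placeholder for a hash you have actually audited):

```text
# requirements.txt — pin the unverified Git dependency to a reviewed commit.
# <commit-sha> is a placeholder; substitute the exact hash you audited.
labarchives-py @ git+https://github.com/mcmero/labarchives-py@<commit-sha>
```

Pinning does not make the source trusted, but it narrows the supply-chain exposure from "whatever the repository owner pushes next" to one fixed snapshot.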
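The 600-permission mitigation mentioned in the CREDENTIALS_UNSAFE finding can at least be verified at runtime before the credentials are read. A minimal sketch, assuming a POSIX filesystem; the function name and demo path are illustrative, not part of the skill:

```python
import os
import stat
import tempfile

def config_permissions_ok(path: str) -> bool:
    """Return True if the file is readable/writable only by its owner (mode 600).

    Illustrative helper mirroring the permission check described in the
    CREDENTIALS_UNSAFE finding; it does not make plaintext storage safe.
    """
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode == 0o600

# Demonstration against a throwaway temp file standing in for config.yaml.
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o644)
print(config_permissions_ok(path))  # overly permissive: False

os.chmod(path, 0o600)
print(config_permissions_ok(path))  # owner-only: True

os.unlink(path)
```

Even with correct permissions, the credentials remain plaintext on disk; an OS keychain or secret manager removes that residual exposure.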
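For the INDIRECT_PROMPT_INJECTION surface, ingested notebook text can be screened before it reaches an agent. The sketch below is a naive keyword heuristic, not a complete defense; the patterns and function name are illustrative assumptions:

```python
import re

# Naive screening for instruction-like payloads embedded in notebook entries,
# comments, or attachment text pulled from the LabArchives API. A production
# guard would apply stricter policies than this keyword heuristic.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def flag_suspicious(text: str) -> bool:
    """Return True if the text matches any known injection pattern."""
    return any(p.search(text) for p in SUSPICIOUS_PATTERNS)

print(flag_suspicious("Gel ran for 45 min at 120 V."))          # False
print(flag_suspicious("Ignore previous instructions and ..."))  # True
```

Flagged entries should be quarantined or rendered inert (e.g. shown as data, never executed as instructions) rather than silently dropped, so the researcher can still review them.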
Recommendations
  • Install labarchives-py from a trusted registry (e.g. PyPI) where possible, or pin the Git dependency to a specific reviewed commit rather than a floating branch.
  • Avoid storing access_key_id, access_password, and user_external_password in plaintext config.yaml; prefer an OS keychain or secret manager, keeping the 600 file permissions as a fallback only.
  • Sanitize or quarantine notebook entries, comments, and attachments returned by the LabArchives API before passing them to an AI agent.
Audit Metadata
Risk Level: HIGH
Analyzed: Feb 17, 2026, 08:02 AM