skills/zircote/.claude/notebooklm/Gen Agent Trust Hub

notebooklm

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGHCREDENTIALS_UNSAFEEXTERNAL_DOWNLOADSREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • Indirect Prompt Injection (HIGH): The skill is designed to query external documents in NotebookLM. The agent is explicitly instructed in SKILL.md to analyze the output from these documents and perform follow-up actions (multi-step chains). This creates a high-risk surface where malicious instructions embedded in a notebook could trigger the agent to execute unauthorized commands or exfiltrate data. Ingestion points: ask_question.py retrieves content from Google NotebookLM. Boundary markers: Absent. Capability inventory: Shell execution via run.py and subprocess calls in management scripts. Sanitization: Absent.
  • Command Execution (HIGH): The run.py wrapper facilitates the execution of various scripts and accepts arbitrary arguments from the agent, providing a direct path for executing commands if the agent is compromised via prompt injection.
  • External Downloads (MEDIUM): scripts/setup_environment.py and scripts/init.py perform runtime installation of the patchright library and download browser binaries (chrome/chromium). This bypasses standard static dependency checks.
  • Credentials Unsafe (MEDIUM): The skill stores and manages persistent Google authentication cookies in data/browser_state/state.json. While necessary for the hybrid authentication described in AUTHENTICATION.md, this creates a high-value target for exfiltration.
  • Prompt Injection (LOW): SKILL.md contains behavioral overrides (Required Claude Behavior) that force the agent into a specific multi-step synthesis loop, potentially interfering with standard safety or instructional guardrails.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 16, 2026, 12:49 PM