notebooklm-second-brain
Fail
Audited by Gen Agent Trust Hub on Apr 13, 2026
Risk Level: HIGHCREDENTIALS_UNSAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [CREDENTIALS_UNSAFE]: The skill documents the use of
nlm loginfor "Browser cookie extraction". This is a highly sensitive operation that attempts to harvest session credentials from the user's browser, posing a significant risk of account takeover or session hijacking if the referenced CLI tool is malicious. - [EXTERNAL_DOWNLOADS]: The skill's setup process involves a bootstrap command (
/notebooklm-bootstrap) that installsnotebooklm-mcp-cli. This is a third-party package of unknown origin, representing a significant supply chain risk as it is not an official tool from a well-known service provider. - [COMMAND_EXECUTION]: The skill uses automated execution hooks and local script execution including:
- A PowerShell sync hook (
.claude/hooks/scripts/notebooklm-sync.ps1) that runs automatically after builds. - Python validation scripts (
python scripts/validate-notebooks.py). - Multiple subprocess calls to the
nlmCLI tool for notebook and source management. - [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection by design, as it establishes a pipeline for ingesting untrusted external data that influences agent behavior.
- Ingestion points: Untrusted data enters the context through the
nlm source addcommand which supports arbitrary URLs and file paths. - Boundary markers: There are no instructions for using delimiters or specific isolation markers for the data stored within the NotebookLM notebooks.
- Capability inventory: The agent possesses capabilities to execute shell commands, PowerShell scripts, and Python scripts, all of which could be targeted by instructions embedded in ingested sources.
- Sanitization: The skill lacks any description of validation, sanitization, or filtering for the content added from URLs or local files.
Recommendations
- AI detected serious security threats
Audit Metadata