notebooklm-second-brain
Fail
Audited by Gen Agent Trust Hub on Feb 28, 2026
Risk Level: HIGHCREDENTIALS_UNSAFEDATA_EXFILTRATIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
- [CREDENTIALS_UNSAFE]: The 'nlm login' command is explicitly documented to perform 'Browser cookie extraction'. This method of authentication is highly invasive as it involves programmatic access to sensitive session data within the user's web browser environment.
- [DATA_EXFILTRATION]: The skill implements a 'Post-Build Sync' mechanism via an automated PowerShell hook ('.claude/hooks/scripts/notebooklm-sync.ps1'). This hook is designed to automatically transmit project data, including commit messages, lists of changed files, and test summaries, to Google's NotebookLM service.
- [COMMAND_EXECUTION]: The skill relies on multiple shell-based execution points, including the 'nlm' CLI for notebook management, a bootstrap command ('/notebooklm-bootstrap') for installation, and both Python and PowerShell scripts for validation and synchronization.
- [EXTERNAL_DOWNLOADS]: The setup process installs the 'notebooklm-mcp-cli' package. This package is not from a recognized trusted vendor list, and the skill's instructions do not provide verification or integrity checks for the tool during the bootstrap process.
- [PROMPT_INJECTION]: The skill enforces a policy where agent decisions (regarding architecture, debugging, and security) are guided by content retrieved from external NotebookLM notebooks. Because these notebooks can ingest data from arbitrary URLs via the 'nlm source add --url' command, it creates a significant surface for indirect prompt injection where untrusted external content could manipulate the agent's logic.
Recommendations
- AI detected serious security threats
Audit Metadata