NotebookLM
Warn
Audited by Gen Agent Trust Hub on Mar 10, 2026
Risk Level: MEDIUMCREDENTIALS_UNSAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [CREDENTIALS_UNSAFE]: The skill manages sensitive authentication credentials for Google accounts. It stores session tokens in ~/.notebooklm/storage_state.json and utilizes the NOTEBOOKLM_AUTH_JSON environment variable for headless authentication.
- [EXTERNAL_DOWNLOADS]: The skill depends on external packages and binary components. It installs the notebooklm-py package from PyPI and uses the Playwright library to download and install the Chromium browser binary.
- [COMMAND_EXECUTION]: The skill's primary functionality is built upon the execution of CLI tools. It invokes the notebooklm utility for all core operations, including notebook management, source ingestion, and content generation. Additionally, it suggests running playwright install-deps chromium on Linux, which typically involves administrative commands to install system-level dependencies.
- [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface through its content ingestion features. 1. Ingestion points: External data enters the context via notebooklm source add (URLs, YouTube links, and various file formats like PDF/MD/Docx) and the notebooklm source add-research capability. 2. Boundary markers: The instructions do not define any delimiters or warnings to ignore instructions embedded within the ingested source materials. 3. Capability inventory: The skill can execute subprocesses via the CLI, perform network operations to fetch source content, and write files during artifact downloads. 4. Sanitization: There is no evidence of content sanitization or validation performed on the data fetched from external URLs or documents before it is processed by the AI.
Audit Metadata