notebooklm
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHPROMPT_INJECTIONCOMMAND_EXECUTIONREMOTE_CODE_EXECUTION
Full Analysis
- Indirect Prompt Injection (HIGH): The skill extracts text from external URLs (Twitter/X) and provides it as a source for LLM reasoning via NotebookLM. An attacker could craft a tweet containing instructions to override the agent's behavior.
- Ingestion points: The
skill_mcp playwright browser_run_codefunction inTWITTER_EXTRACTION_EXAMPLE.mdextracts innerText from the web page. - Boundary markers: Absent. The extracted
$EXTRACTEDcontent is placed directly into a markdown file without delimiters or 'ignore' instructions. - Capability inventory:
notebooklm source add(file ingestion) andnotebooklm ask(LLM reasoning/decision making based on source). - Sanitization: Absent. The skill does not filter or sanitize the extracted text for malicious instructions.
- Dynamic Execution (MEDIUM): The skill relies on executing arbitrary JavaScript code at runtime within a browser context.
- Evidence: Multiple blocks in
TWITTER_EXTRACTION_EXAMPLE.mduseskill_mcp playwright browser_run_codewith complex JS functions to handle selectors and content extraction. - Command Execution (LOW): The provided automation script uses shell commands to handle data flow and external tool calls.
- Evidence: Uses
cat > "$OUTPUT_FILE" << EOFto create local files andjqto parse tool outputs. While standard, this represents a local command execution surface.
Recommendations
- AI detected serious security threats
Audit Metadata