perplexity-web-research
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Findings: PROMPT_INJECTION · EXTERNAL_DOWNLOADS · COMMAND_EXECUTION
Full Analysis
- Indirect Prompt Injection (LOW): The skill instructs the agent to interpolate user queries directly into shell commands (e.g., `llm -m sonar 'your question here'`). This creates a vulnerability where untrusted data could influence the agent's behavior or the resulting research output.
  - Ingestion points: `llm` command arguments in `SKILL.md` and `references/subagent-patterns.md`.
  - Boundary markers: Examples use single quotes for shell arguments, but there are no explicit instructions to the agent to escape or sanitize these inputs.
  - Capability inventory: The skill uses the `Bash` tool to execute the `llm` CLI.
  - Sanitization: No sanitization or validation logic is provided in the instructions.
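The missing sanitization step has a standard fix: quote the untrusted query before it ever reaches the shell. The sketch below is not part of the audited skill — `build_research_command` is a hypothetical helper — but it shows how Python's `shlex.quote` would neutralize the injection path described above.

```python
import shlex

def build_research_command(query: str) -> str:
    """Quote an untrusted user query before interpolating it into the llm CLI call."""
    # shlex.quote wraps the string in single quotes and escapes any embedded
    # quotes, so shell metacharacters (';', '|', '$()', backticks) lose their
    # special meaning and arrive at llm as literal text.
    return f"llm -m sonar {shlex.quote(query)}"

# A hostile query can no longer terminate the argument and chain a command:
print(build_research_command("cats; rm -rf ~"))
# → llm -m sonar 'cats; rm -rf ~'
```

With this in place, the "boundary markers" concern above is addressed mechanically rather than by convention: the quoting is applied by code, not by the agent remembering to use single quotes in its examples.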
- External Downloads (LOW): The `references/setup.md` file encourages installing the `llm` package from PyPI along with the `llm-perplexity` plugin. While `llm` is a reputable tool, these are external dependencies that the user must manage.
- Command Execution (LOW): The skill centers on executing the `llm` binary via a shell. The `allowed-tools` metadata restricts this to `llm:*`, which is a positive security constraint, but the risk of shell metacharacter injection remains if the agent's runtime does not handle quoting properly.
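The residual metacharacter risk disappears entirely if the runtime invokes `llm` without a shell at all. A minimal sketch, assuming a Python-based runtime (the wrapper name `run_sonar_query` is hypothetical, not part of the skill):

```python
import subprocess

def run_sonar_query(query: str) -> subprocess.CompletedProcess:
    # Passing an argument list with the default shell=False hands the
    # untrusted query to the llm binary as a single argv entry; no shell
    # ever parses it, so ';', '|', '$()' and backticks are inert.
    return subprocess.run(
        ["llm", "-m", "sonar", query],
        capture_output=True, text=True, check=True,
    )

# Demonstration with echo standing in for llm: the injected metacharacters
# arrive as literal text, not as a second command.
out = subprocess.run(
    ["echo", "cats; rm -rf ~"], capture_output=True, text=True
).stdout
print(out)  # → cats; rm -rf ~
```

Argument-list invocation is generally preferable to quoting a composed command string, because it removes the shell from the data path rather than trying to escape data for it.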
Audit Metadata