perplexity-web-research

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Findings: PROMPT_INJECTION · EXTERNAL_DOWNLOADS · COMMAND_EXECUTION
Full Analysis
  • Indirect Prompt Injection (LOW): The skill instructs the agent to interpolate user queries directly into shell commands (e.g., llm -m sonar 'your question here'). This creates a vulnerability where untrusted data could influence the agent's behavior or the resulting research output.
  • Ingestion points: llm command arguments in SKILL.md and references/subagent-patterns.md.
  • Boundary markers: Examples wrap shell arguments in single quotes, but the skill gives the agent no explicit instruction to escape or sanitize these inputs.
  • Capability inventory: The skill uses the Bash tool to execute the llm CLI.
  • Sanitization: No sanitization or validation logic is provided in the instructions.
  • External Downloads (LOW): The references/setup.md file encourages the installation of the llm package from PyPI and the llm-perplexity plugin. While llm is a reputable tool, these are external dependencies that must be managed by the user.
  • Command Execution (LOW): The skill is centered around executing the llm binary via a shell. The allowed-tools metadata restricts this to llm:*, which is a positive security constraint, but the risk of shell metacharacter injection remains if the agent's runtime doesn't handle quoting properly.
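The metacharacter risk noted above can be avoided by never building a shell string from the user's question. A minimal sketch (the function name and the example query are hypothetical, not from the skill) shows two safe patterns: passing arguments as a list so no shell is involved, and shlex.quote() for cases where a Bash string must be emitted:

```python
import shlex
import subprocess

def run_research_query(question: str) -> str:
    """Invoke the `llm` CLI without shell interpolation.

    With an argument list and shell=False (the default), metacharacters in
    the question -- quotes, $(...), backticks, semicolons -- reach `llm` as
    literal text instead of being interpreted by a shell.
    """
    result = subprocess.run(
        ["llm", "-m", "sonar", question],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# If the agent can only emit a Bash command string, shlex.quote() produces
# a safely quoted argument even for hostile input:
hostile = "what is 2+2'; rm -rf ~ #"
safe_command = f"llm -m sonar {shlex.quote(hostile)}"
```

Here the embedded quote and semicolon survive as inert characters inside the quoted argument, so the trailing rm never reaches the shell as a command.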
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 06:32 PM