perplexity-search

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE

Full Analysis
  • [EXTERNAL_DOWNLOADS] (SAFE): The skill requires the litellm Python package. This is a reputable and standard library for interacting with multiple LLM providers. The installation instructions use standard package managers.
  • [INDIRECT_PROMPT_INJECTION] (LOW):
    • Ingestion points: scripts/perplexity_search.py accepts a query argument from the user.
    • Boundary markers: Absent. The input is passed directly as the content of a user message to the LLM.
    • Capability inventory: Includes file-writing capability via the --output flag (user-controlled path) and network requests to OpenRouter via the litellm library.
    • Sanitization: No sanitization is performed on the input query.
    • Risk: Although the skill lacks explicit boundary markers, it is a standard utility where LLM interaction is the primary intended function. The impact of a successful injection is limited to the generated search results; it does not grant the model unauthorized access to the underlying system.
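The ingestion path described above can be sketched as follows. This is an illustrative reconstruction, not the skill's actual code: the model name, helper names, and flag layout are assumptions; only the general shape (raw query passed verbatim as a user message, user-controlled --output path, litellm call to OpenRouter) comes from the audit findings.

```python
import argparse
from pathlib import Path


def build_messages(query: str) -> list[dict]:
    # No boundary markers or sanitization: the raw user query becomes
    # the user-message content verbatim -- the injection surface noted above.
    return [{"role": "user", "content": query}]


def main() -> None:
    parser = argparse.ArgumentParser(description="hypothetical search sketch")
    parser.add_argument("query")
    parser.add_argument("--output", type=Path, default=None,
                        help="user-controlled path; results are written here verbatim")
    args = parser.parse_args()

    import litellm  # requires OPENROUTER_API_KEY in the environment

    response = litellm.completion(
        model="openrouter/perplexity/sonar",  # assumed model identifier
        messages=build_messages(args.query),
    )
    text = response.choices[0].message.content
    if args.output:
        args.output.write_text(text)  # the file-writing capability flagged above
    else:
        print(text)


# usage: python perplexity_search.py "my query" --output results.md
```

Because the only capabilities reachable from the model's output are the returned text and the user-chosen output file, a successful injection here degrades results rather than escalating access, which is why the finding is rated LOW.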
  • [CREDENTIALS_UNSAFE] (SAFE): No real credentials are hardcoded. The .env.example file uses standard placeholders (sk-or-v1-your-api-key-here).
  • [DATA_EXPOSURE] (SAFE): The setup_env.py script manages the OPENROUTER_API_KEY. While it allows passing the key as a CLI argument (which can be visible in process lists or shell history), this is a common administrative pattern for setup scripts and is not considered malicious here.
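The setup pattern flagged above can be sketched as follows, assuming a hypothetical layout (the function and flag names are illustrative; only the behavior of accepting the key via argv and persisting it comes from the finding):

```python
import argparse
from pathlib import Path


def format_env_line(api_key: str) -> str:
    # The key is persisted to a plain .env file; when passed via argv,
    # it is also briefly visible in process lists and shell history.
    return f"OPENROUTER_API_KEY={api_key}\n"


def main() -> None:
    parser = argparse.ArgumentParser(description="hypothetical setup sketch")
    parser.add_argument("--api-key",
                        help="convenient, but visible in `ps` output and shell history")
    args = parser.parse_args()
    # Interactive fallback avoids the argv exposure noted in the finding.
    key = args.api_key or input("OpenRouter API key: ")
    Path(".env").write_text(format_env_line(key))
```

The CLI-argument exposure is the reason the finding is mentioned at all; it remains SAFE because it is an operator convenience in a setup script, not a channel the model can read from.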
Audit Metadata

Risk Level: SAFE
Analyzed: Feb 17, 2026, 04:50 PM