perplexity-search
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- [EXTERNAL_DOWNLOADS] (SAFE): The skill requires the `litellm` Python package. This is a reputable and widely used library for interacting with multiple LLM providers, and the installation instructions use standard package managers.
- [INDIRECT_PROMPT_INJECTION] (LOW):
  - Ingestion points: `scripts/perplexity_search.py` accepts a `query` argument from the user.
  - Boundary markers: Absent. The input is passed directly as the content of a user message to the LLM.
  - Capability inventory: Includes file-writing capability via the `--output` flag (user-controlled path) and network requests to OpenRouter via the `litellm` library.
  - Sanitization: No sanitization is performed on the input query.
  - Risk: Although the skill lacks explicit boundary markers, it is a standard utility whose primary intended function is the LLM interaction. The impact of a successful injection is limited to the generated search results and does not grant the model unauthorized access to the underlying system.
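The data flow described above (a user-supplied query forwarded verbatim to OpenRouter via `litellm`, with an optional user-controlled output path) can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the function names, the model identifier, and the flag handling are assumptions.

```python
import argparse
import os


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI mirroring the interface described in the audit:
    # a positional query plus a user-controlled --output path.
    p = argparse.ArgumentParser(description="Search via an LLM on OpenRouter")
    p.add_argument("query", help="passed verbatim as the user message")
    p.add_argument("--output", help="optional file path to write results to")
    return p


def build_messages(query: str) -> list[dict]:
    # The query becomes the user message content directly -- no boundary
    # markers and no sanitization, exactly as the audit notes.
    return [{"role": "user", "content": query}]


def main(argv=None) -> None:
    args = build_parser().parse_args(argv)
    if not os.environ.get("OPENROUTER_API_KEY"):
        raise SystemExit("OPENROUTER_API_KEY is not set")
    # Deferred import: requires `pip install litellm`. The model name is
    # an assumed example; litellm routes "openrouter/..." IDs to OpenRouter.
    from litellm import completion
    resp = completion(
        model="openrouter/perplexity/sonar",
        messages=build_messages(args.query),
    )
    text = resp.choices[0].message.content
    if args.output:
        # User-controlled path: the file-writing capability flagged above.
        with open(args.output, "w") as f:
            f.write(text)
    else:
        print(text)
```

The sketch makes the audit's point concrete: any injected instructions in `query` can only shape the model's reply, which lands in stdout or the chosen output file rather than granting system access.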
- [CREDENTIALS_UNSAFE] (SAFE): No real credentials are hardcoded. The `.env.example` file uses standard placeholders (`sk-or-v1-your-api-key-here`).
- [DATA_EXPOSURE] (SAFE): The `setup_env.py` script manages the `OPENROUTER_API_KEY`. While it allows passing the key as a CLI argument (which can be visible in process lists or shell history), this is a common administrative pattern for setup scripts and is not considered malicious here.
Audit Metadata