portkey-python-sdk

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • [EXTERNAL_DOWNLOADS] (SAFE): The skill references standard Python packages (portkey-ai, langchain-portkey, llama-index-llms-portkey) available on PyPI. These are legitimate dependencies for the intended functionality and are used according to standard development practices.
  • [CREDENTIALS_UNSAFE] (SAFE): All code examples consistently read sensitive values such as Portkey API keys, virtual keys, and AWS/Azure credentials from environment variables (os.environ). No hardcoded secrets, tokens, or private keys were found in the analyzed files.
  • [DATA_EXFILTRATION] (SAFE): Network communication is directed to the Portkey AI Gateway (app.portkey.ai) and to well-known, supported LLM providers (OpenAI, Anthropic, AWS, Google), consistent with the SDK's intended purpose. No unauthorized data transmission to non-whitelisted or suspicious domains was observed.
  • [COMMAND_EXECUTION] (SAFE): The skill contains standard package-management commands (pip, poetry, uv) for environment setup. No suspicious shell commands, piped remote executions, or unauthorized subprocess calls were identified.
  • [PROMPT_INJECTION] (SAFE): As an LLM SDK, the tool inherently processes external model outputs, which represents an indirect prompt-injection surface. This risk is mitigated by relying on structured API responses rather than free-form text.
  • Ingestion points: LLM responses in SKILL.md and references/ADVANCED_FEATURES.md (including tool calls).
  • Boundary markers: Relies on structured API responses provided by the gateway.
  • Capability inventory: File system access for audio processing (speech-to-text/text-to-speech) and network requests to verified AI services.
  • Sanitization: Documentation demonstrates standard JSON parsing for tool arguments.
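
The credential-handling pattern noted above (reading secrets from os.environ rather than hardcoding them) can be sketched as follows. This is a minimal illustration, not code from the audited skill; the environment-variable names PORTKEY_API_KEY and PORTKEY_VIRTUAL_KEY are assumptions:

```python
import os


def load_portkey_credentials() -> dict:
    """Read Portkey gateway credentials from the environment.

    Hypothetical sketch: the API key is required (KeyError if unset),
    the virtual key is optional. Nothing is ever hardcoded, matching
    the practice the audit found in the skill's examples.
    """
    return {
        "api_key": os.environ["PORTKEY_API_KEY"],              # required
        "virtual_key": os.environ.get("PORTKEY_VIRTUAL_KEY"),  # optional
    }
```

A caller would export the variables in the shell (or a secrets manager) and then construct the SDK client from the returned values, so no secret ever appears in source control.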
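
The JSON-parsing sanitization the audit refers to can be illustrated with a short sketch. Tool-call arguments arrive from the model as a JSON string (the OpenAI-style response shape is an assumption here); parsing them defensively with json.loads, instead of eval, is the standard practice the documentation demonstrates:

```python
import json


def parse_tool_arguments(raw_arguments: str) -> dict:
    """Parse a tool call's JSON argument string defensively.

    Model output is untrusted input: reject anything that is not
    valid JSON, or that is valid JSON but not an object, rather
    than evaluating it.
    """
    try:
        parsed = json.loads(raw_arguments)
    except json.JSONDecodeError as exc:
        raise ValueError(f"tool arguments are not valid JSON: {exc}") from exc
    if not isinstance(parsed, dict):
        raise ValueError("tool arguments must be a JSON object")
    return parsed
```

The explicit type check matters because a model can emit syntactically valid JSON (a bare string or list) that would still crash downstream code expecting keyword arguments.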
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 06:34 PM