openai-agents-sdk

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEREMOTE_CODE_EXECUTIONDATA_EXFILTRATIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
  • [REMOTE_CODE_EXECUTION] (LOW): The SDK provides a CodeInterpreterTool (tools.md) for server-side code execution. While documented as a feature, it constitutes a remote code execution vector if the underlying environment is not properly sandboxed.
  • [DATA_EXFILTRATION] (LOW): Example tools such as read_file (tools.md) allow agents to access local file systems, and fetch_data (tools.md) allows outbound HTTP requests. These capabilities could be leveraged to expose sensitive local data or exfiltrate information to external domains.
  • [EXTERNAL_DOWNLOADS] (SAFE): Documentation (models.md) recommends installing standard Python packages like openai-agents and litellm.
  • [PROMPT_INJECTION] (LOW): Indirect Prompt Injection Risk. Ingestion points: WebSearchTool, FileSearchTool, fetch_data (tools.md). Boundary markers: input_guardrail, output_guardrail (guardrails.md), remove_all_tools (handoffs.md). Capability inventory: read_file, CodeInterpreterTool, fetch_data (tools.md). Sanitization: output_type Pydantic validation (agents.md). The library provides tools that ingest untrusted data but also provides robust mitigations via its guardrails system.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:49 PM