ai-orchestration-vercel-ai-sdk

Pass

Audited by Gen Agent Trust Hub on Apr 7, 2026

Risk Level: SAFE
Full Analysis
  • [INDIRECT_PROMPT_INJECTION]: The skill provides patterns for processing external data via RAG and summarization, creating a surface for indirect prompt injection.\n- Ingestion points: Untrusted data enters the context through the ragQuery function in examples/rag.md and the summarizeArticle function in examples/core.md.\n- Boundary markers: The examples use strong system prompts (e.g., 'Only use information from the context to answer') to delimit external data from instructions.\n- Capability inventory: The skill defines tools in examples/tools.md that can perform network operations via fetch.\n- Sanitization: Tool inputs are validated using Zod schemas, providing a layer of protection against malformed or malicious inputs.\n- [DATA_EXPOSURE_AND_EXFILTRATION]: No hardcoded credentials or sensitive file paths were detected. The skill correctly demonstrates using process.env for API keys.\n- [UNVERIFIABLE_DEPENDENCIES_AND_REMOTE_CODE_EXECUTION]: All referenced packages are standard libraries from trusted organizations like Vercel and OpenAI.\n- [PROMPT_INJECTION]: Instructions found in the skill are restricted to guiding code generation and AI SDK usage best practices. No attempts to override safety filters or extract system prompts were detected.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 7, 2026, 01:31 AM