ai-native-development

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [Prompt Injection] (SAFE): The skill demonstrates best practices by using system prompts to constrain LLM behavior and including a checklist that explicitly warns against prompt injection attacks.
  • [Indirect Prompt Injection] (LOW): The RAG implementation in rag-pipeline-template.ts and chatbot-with-rag-example.ts interpolates untrusted data (user input and retrieved context) into LLM prompts, creating a potential surface for indirect injection.
  • Ingestion points: userMessage in chatbot-with-rag-example.ts and contextText in rag-pipeline-template.ts.
  • Boundary markers: The templates use plain text labels ('Context:', 'Question:') rather than strict delimiters like XML or JSON to isolate untrusted content.
  • Capability inventory: The skill is designed to interact with OpenAI APIs and vector databases (Pinecone, Chroma, etc.).
  • Sanitization: No programmatic sanitization of inputs is present in the code, though it is recommended in the accompanying checklist.
  • [Credential Safety] (SAFE): All provided code snippets correctly utilize environment variables for API keys, avoiding hardcoded secrets.
  • [External Downloads] (SAFE): The skill references reputable and standard industry libraries for its operations.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:13 PM