create-agent-with-sanity-context

Pass

Audited by Gen Agent Trust Hub on Apr 3, 2026

Risk Level: SAFEPROMPT_INJECTIONDATA_EXFILTRATION
Full Analysis
  • [PROMPT_INJECTION]: The skill implements an architecture where the AI agent processes untrusted data from a Sanity CMS and the webpage DOM.
  • Ingestion points: The agent ingests external content via the groq_query (CMS data) and get_page_context (HTML to Markdown) tools.
  • Boundary markers: The reference implementation uses XML-style tags (e.g., <page-context>) as delimiters but lacks explicit instructions for the agent to disregard instructions embedded within that data.
  • Capability inventory: The agent has the ability to execute GROQ queries (read access), update UI state via set_product_filters, and store data back to the CMS using saveConversation.
  • Sanitization: While the skill uses turndown to convert HTML to text and structured JSON for CMS data, it lacks mechanisms to sanitize or filter for embedded malicious natural language instructions.
  • [DATA_EXFILTRATION]: The skill provides a client-side tool get_page_screenshot which uses the html2canvas-pro library to capture the current browser viewport. This screenshot is transmitted to the LLM provider as a base64-encoded JPEG image. While documented as a feature for visual context, this functionality poses a risk of inadvertently capturing and transmitting sensitive user data or PII visible on the screen to the AI service provider.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 3, 2026, 06:32 PM