integrate-flowlines-sdk-js

Pass

Audited by Gen Agent Trust Hub on Mar 13, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill defines a workflow for incorporating external memory into LLM prompts, which creates a surface for indirect prompt injection.\n
  • Ingestion points: Data retrieved via Flowlines.getMemory() in SKILL.md is interpolated directly into the system prompt.\n
  • Boundary markers: The provided integration examples use standard string interpolation without explicit delimiters (e.g., XML tags or block quotes) or instructions to the model to ignore embedded commands within the retrieved data.\n
  • Capability inventory: The integration grants the application the ability to fetch and use historical context, which dynamically influences the LLM's behavior based on external API data.\n
  • Sanitization: The guide does not include steps for validating or sanitizing the content returned by the memory API before it is processed by the AI model.\n- [EXTERNAL_DOWNLOADS]: The skill facilitates the installation of the @flowlines/sdk package via npm, which is the official library provided by the authoring organization.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 13, 2026, 04:56 PM