prompt-chaining

Pass

Audited by Gen Agent Trust Hub on May 1, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill documents a prompt chaining implementation that is vulnerable to indirect prompt injection. Malicious instructions embedded in the raw input could be preserved through the extraction step and subsequently executed by the transformation step.
  • Ingestion points: The input_data parameter in the prompt_chain_workflow function within SKILL.md accepts untrusted external data.
  • Boundary markers: None are defined in the implementation pattern to distinguish between instructions and data in the second LLM call.
  • Capability inventory: The workflow utilizes llm_call, allowing injected instructions to influence the model's behavior in the final output step.
  • Sanitization: No validation or escaping of the extracted_data is performed to prevent the propagation of malicious prompts.
Audit Metadata
Risk Level
SAFE
Analyzed
May 1, 2026, 10:50 PM