prompt-chaining
Pass
Audited by Gen Agent Trust Hub on May 1, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The skill documents a prompt chaining implementation that is vulnerable to indirect prompt injection. Malicious instructions embedded in the raw input could be preserved through the extraction step and subsequently executed by the transformation step.
- Ingestion points: The
input_dataparameter in theprompt_chain_workflowfunction withinSKILL.mdaccepts untrusted external data. - Boundary markers: None are defined in the implementation pattern to distinguish between instructions and data in the second LLM call.
- Capability inventory: The workflow utilizes
llm_call, allowing injected instructions to influence the model's behavior in the final output step. - Sanitization: No validation or escaping of the
extracted_datais performed to prevent the propagation of malicious prompts.
Audit Metadata