prompt-engineering

Pass

Audited by Gen Agent Trust Hub on Mar 7, 2026

Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [PROMPT_INJECTION]: Indirect Prompt Injection Surface
  • Ingestion points: User-provided data is interpolated into prompt templates using placeholders like {text}, {input_text}, and {task_description} as seen in SKILL.md usage examples.
  • Boundary markers: The provided examples do not use delimiters or boundary markers (such as XML tags or unique separators) to distinguish untrusted user input from the rest of the prompt instruction.
  • Capability inventory: The skill triggers LLM executions and code generation tasks using the openclaw client and CLI tool.
  • Sanitization: While the documentation mentions validating outputs against schemas, it does not specify or implement input sanitization or instructions to ignore embedded commands within the user-provided text.
  • [EXTERNAL_DOWNLOADS]: Unlisted Dependency and External API Communication
  • The skill references an external Python library openclaw and a CLI tool that are not declared in the dependencies field of the YAML frontmatter.
  • The skill communicates with an external API endpoint (api.openclaw.ai) to execute and manage prompts, which is expected behavior for this tool but relies on the security of that external domain.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 7, 2026, 05:43 PM