prompt-engineering

Pass

Audited by Gen Agent Trust Hub on Mar 28, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION]: The static analysis identified several patterns related to prompt injection and instruction overrides in the reference files (e.g., prompting-risks.md, failure-taxonomy.md). A manual review confirms these are false positives; they are documented as Minimal Reproducible Prompts (MRPs) and educational examples to teach the agent how to perform security audits on user-provided prompts. They do not represent an attempt to hijack the agent's own system instructions.
  • [EXTERNAL_DOWNLOADS]: The README.md file suggests an installation path via npx skills add CodeAlive-AI/prompt-engineering-skill. This is a standard installation procedure for the platform and targets the vendor's own official repository. The documentation also points to well-known technology domains (e.g., anthropic.com, openai.com, google.dev) for official model-specific guidelines.
  • [COMMAND_EXECUTION]: The prompting-techniques.md reference file contains a conceptual example of Program-Aided Language Models (PAL) that demonstrates the use of exec() in Python. However, the skill itself does not implement or invoke any shell commands or dynamic code execution; the content is strictly documentation of existing prompting paradigms.
  • [SAFE]: The skill primarily consists of high-quality technical documentation and prompt templates. It does not contain executable scripts, hardcoded credentials, or exfiltration logic. It reinforces security best practices, such as separating the control plane from the data plane and using XML tags to delimit untrusted inputs.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 28, 2026, 07:15 AM