prompt-engineering
Pass
Audited by Gen Agent Trust Hub on Mar 28, 2026
Risk Level: SAFE
Full Analysis
- [PROMPT_INJECTION]: The static analysis identified several patterns related to prompt injection and instruction overrides in the reference files (e.g.,
prompting-risks.md,failure-taxonomy.md). A manual review confirms these are false positives; they are documented as Minimal Reproducible Prompts (MRPs) and educational examples to teach the agent how to perform security audits on user-provided prompts. They do not represent an attempt to hijack the agent's own system instructions. - [EXTERNAL_DOWNLOADS]: The
README.mdfile suggests an installation path vianpx skills add CodeAlive-AI/prompt-engineering-skill. This is a standard installation procedure for the platform and targets the vendor's own official repository. The documentation also points to well-known technology domains (e.g., anthropic.com, openai.com, google.dev) for official model-specific guidelines. - [COMMAND_EXECUTION]: The
prompting-techniques.mdreference file contains a conceptual example of Program-Aided Language Models (PAL) that demonstrates the use ofexec()in Python. However, the skill itself does not implement or invoke any shell commands or dynamic code execution; the content is strictly documentation of existing prompting paradigms. - [SAFE]: The skill primarily consists of high-quality technical documentation and prompt templates. It does not contain executable scripts, hardcoded credentials, or exfiltration logic. It reinforces security best practices, such as separating the control plane from the data plane and using XML tags to delimit untrusted inputs.
Audit Metadata