designing-genai-patterns

Pass

Audited by Gen Agent Trust Hub on Mar 29, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill is primarily instructional, offering a catalog of design patterns for Generative AI applications. It contains no executable logic that runs automatically without user intervention.
  • [COMMAND_EXECUTION]: Examples in references/AGENTIC-SYSTEMS.md demonstrate subprocess.run for processing Graphviz DOT files and sqlite3 for database queries. These are presented as educational templates for implementing sandboxed code execution in AI agents.
  • [EXTERNAL_DOWNLOADS]: The skill references standard machine learning libraries (e.g., transformers, vllm, langchain, pydantic-ai) and uses well-known services such as OpenAI, Anthropic, and NVIDIA's Triton Inference Server for illustrative purposes.
  • [CREDENTIALS_UNSAFE]: Example code uses placeholders such as your_openai_api_key and recommends environment variables for secret management, adhering to security best practices.
  • [PROMPT_INJECTION]: The content explicitly addresses prompt injection risks and documents mitigation patterns (e.g., Action-Selector, Dual-LLM, Guardrails) to teach developers how to build secure AI systems.
  • [PRIVILEGE_ESCALATION]: references/PERF-SYSTEM-TUNING.md contains shell commands that use sudo for OS-level performance tuning (e.g., disabling swappiness). These are documented steps for human operators optimizing their GPU clusters and do not represent a silent escalation attempt by the agent.
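The [CREDENTIALS_UNSAFE] finding refers to the practice of reading secrets from environment variables instead of hard-coding them. A minimal sketch of that practice follows; the function and variable names here are illustrative, not taken from the audited skill:

```python
import os


def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Fetch a secret from the environment rather than from source code.

    Failing loudly when the variable is unset ensures a placeholder
    such as 'your_openai_api_key' never ships in a deployed application.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it before running the application."
        )
    return key
```

A caller would set the variable in the shell (e.g., `export OPENAI_API_KEY=...`) and invoke `load_api_key()` at startup, keeping the secret out of version control.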
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 29, 2026, 11:13 PM