building-with-llms

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • [Prompt Injection] (SAFE): No malicious override markers or jailbreak attempts detected. The skill provides checklists specifically aimed at mitigating prompt injection in downstream applications.
  • [Data Exposure & Exfiltration] (SAFE): No hardcoded credentials or access to sensitive local paths. The skill explicitly warns against handling secrets or PII in its engineering checklists.
  • [Obfuscation] (SAFE): No encoded strings, zero-width characters, or homoglyphs identified in the content.
  • [Unverifiable Dependencies & Remote Code Execution] (SAFE): No remote scripts or unauthorized package installations. It advises human-in-the-loop validation when using coding agents.
  • [Indirect Prompt Injection] (LOW): The skill is a planning tool that ingests user-provided requirements.
    • Ingestion points: user-provided use cases and constraints (SKILL.md).
    • Boundary markers: deliverables are segmented into clear sections and templates (TEMPLATES.md).
    • Capability inventory: none; the skill is documentation-based and does not execute system commands or network requests.
    • Sanitization: recommends schema validation and automated checks for LLM outputs (CHECKLISTS.md).
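The schema-validation check recommended in CHECKLISTS.md can be sketched as a minimal Python gate that fails closed on malformed model output. The field names, schema, and allowed risk levels below are illustrative assumptions, not taken from the audited skill:

```python
import json

# Hypothetical schema: required keys mapped to expected Python types.
SCHEMA = {"title": str, "risk_level": str, "findings": list}

# Hypothetical closed set of acceptable values for an enum-like field.
ALLOWED_RISK_LEVELS = {"SAFE", "LOW", "MEDIUM", "HIGH"}

def validate_llm_output(raw: str) -> dict:
    """Parse an LLM response as JSON and check it against SCHEMA.

    Raises ValueError on any mismatch so callers fail closed
    rather than acting on malformed or injected model output.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("top-level value must be a JSON object")
    for key, expected_type in SCHEMA.items():
        if key not in data:
            raise ValueError(f"missing required key: {key}")
        if not isinstance(data[key], expected_type):
            raise ValueError(f"wrong type for key: {key}")
    if data["risk_level"] not in ALLOWED_RISK_LEVELS:
        raise ValueError(f"unexpected risk_level: {data['risk_level']}")
    return data
```

Rejecting rather than repairing invalid output keeps the human-in-the-loop step meaningful: the caller sees the failure instead of silently consuming a coerced value.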
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 06:25 PM