security-prompts

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • Prompt Injection (SAFE): The skill contains templates designed to guide AI agents toward secure code generation. No attempts to bypass safety filters or override system instructions for malicious purposes were detected.
  • Data Exposure & Exfiltration (SAFE): No hardcoded credentials or sensitive file paths were found. The templates reference standard environment variables for configuration and use established services such as Clerk, Stripe, and Convex.
  • Indirect Prompt Injection (SAFE): While the skill defines endpoints that ingest untrusted user data (e.g., 01_secure_form.md), it explicitly mandates robust mitigations:
    ◦ Ingestion points: Public form submissions and file upload endpoints in 01_secure_form.md and 05_file_upload.md.
    ◦ Boundary markers: Templates include explicit 'SECURITY REQUIREMENTS' and 'VERIFICATION' sections to define safety boundaries.
    ◦ Capability inventory: Resulting implementations perform database writes (Convex), file management (S3/Uploadthing), and payment processing (Stripe).
    ◦ Sanitization: Templates require safeTextSchema, validateRequest(), and Zod validation to sanitize all incoming user input.
  • Unverifiable Dependencies (SAFE): The skill references standard, reputable libraries for the Next.js ecosystem (e.g., Zod, Clerk, Stripe). No suspicious or unversioned remote package installations were found.
  • Privilege Escalation (SAFE): No use of sudo, chmod 777, or other privilege escalation techniques was identified. Commands are restricted to standard development and testing workflows.
  • Dynamic Execution (SAFE): No patterns of runtime code generation or unsafe deserialization were found. The skill generates static source code for API routes via prompting.
Audit Metadata
Risk Level: SAFE
Analyzed: Feb 17, 2026, 06:31 PM