secure-ai
π Skill: Secure AI (v1.1.0)
Executive Summary
The secure-ai architect is the primary defender of the AI integration layer. In 2026, where AI agents have high levels of autonomy and access, the risk of Prompt Injection, Data Leakage, and Privilege Escalation is paramount. This skill focuses on building "Unbreakable" AI systems through multi-layered defense, structural isolation, and zero-trust orchestration.
π Table of Contents
- Core Security Philosophies
- The "Do Not" List (Anti-Patterns)
- Prompt Injection Defense
- Zero-Trust for AI Agents
- Secure Server Action Patterns
- Audit and Compliance Monitoring
- Reference Library
ποΈ Core Security Philosophies
- Isolation is Absolute: User data must never be treated as system instruction.
- Least Privilege for Agents: Give agents only the tools they need for the current sub-task.
- Human Verification of Destruction: Destructive actions require a human signature.
- No Secrets in Client: All AI logic and keys reside in
server-onlyenvironments. - Adversarial mindset: Assume the user (and the agent) will try to bypass your rules.
π« The "Do Not" List (Anti-Patterns)
| Anti-Pattern | Why it fails in 2026 | Modern Alternative |
|---|---|---|
| Instruction Mixing | Prone to prompt injection. | Use Structural Roles (System/User). |
| Thin System Prompts | Easily bypassed via roleplay. | Use Hierarchical Guardrails. |
| Unlimited Tool Use | Risk of massive data exfiltration. | Use Capability-Based Scopes. |
| Static API Keys | Leaks result in total system breach. | Use OIDC & Dynamic Rotation. |
| Unvalidated URLs | Direct path for indirect injection. | Use Sandboxed Content Fetching. |
π‘οΈ Prompt Injection Defense
We use a "Defense-in-Depth" strategy:
- Input Boundaries:
--- USER DATA START ---. - Guardian Models: Fast pre-scanners for malicious patterns.
- Content Filtering: Built-in safety settings on Gemini 3 Pro.
See References: Prompt Injection for blueprints.
π€ Zero-Trust for AI Agents
- Non-Human Identity (NHI): Verifiable identities for every agent.
- WASM Sandboxing: Running generated code in isolated runtimes.
- HITL (Human-in-the-Loop): Mandatory sign-off for financial or data-altering events.
π Reference Library
Detailed deep-dives into AI Security:
- Prompt Injection Defense: Multi-layered isolation.
- Agentic Zero-Trust: Managing autonomous actors.
- Secure Server Actions: Bridging the frontend safely.
- Audit Protocols: Monitoring agent behavior.
Updated: January 22, 2026 - 20:50
More from yuniorglez/gemini-elite-core
filament-pro
Master of Filament v4 (2026), specialized in Custom Data Sources, Nested Resources, and AI-Augmented Admin Panels.
80remotion-expert
Senior Specialist in Remotion v4.0+, React 19, and Next.js 16. Expert in programmatic video generation, sub-frame animation precision, and AI-driven video workflows for 2026.
58tailwind4-expert
Senior expert in Tailwind CSS 4.0+, CSS-First architecture, and modern Design Systems. Use when configuring themes, migrating from v3, or implementing native container queries.
49pdf-pro
Master of PDF engineering, specialized in AI-driven extraction, high-fidelity Generation (Puppeteer), and PDF 2.0 Security.
46threejs-expert
Senior WebGPU & 3D Graphics Architect for 2026. Specialized in Three.js v172+, WebGPU-first rendering, TSL (Three Shader Language), and high-performance React 19 integration via `@react-three/fiber` and `@react-three/drei`. Expert in building immersive, low-latency, and accessible 3D experiences for the modern web.
37ui-ux-specialist
Senior Accessibility & Frontend Engineer. Expert in WCAG 2.2 standards, Semantic HTML, and Inclusive Design for 2026.
37