gcp-agent-safety-gatekeeper
This skill implements the Python integration layer for Model Armor. Based on the guidance in security_blog.md, it provides the safety_util helper functions needed to intercept prompts, sanitize them against your security policy, and handle safety triggers in your FastAPI backend.
Usage
Ask Antigravity to:
- "Add a safety gatekeeper to my agent backend"
- "Implement Model Armor prompt sanitization in Python"
- "Create a safety utility to parse Model Armor findings"
- "Handle prompt injection errors in my FastAPI app"
Integration Pattern
- Client Initialization: Configures the `ModelArmorClient` with the correct regional endpoint.
- `safety_util.py`: A robust parser that converts `SanitizeUserPromptResponse` into a list of human-readable security triggers (e.g., "Prompt Injection", "PII: Person names").
- Application Interception: Logic to block or sanitize prompts before they reach the GenAI model or agent orchestrator.
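The parsing step above can be sketched as a small pure function. This is a hypothetical illustration, not the actual `scripts/safety_util.py`: the dict keys and the `match_state` field below approximate the shape of Model Armor filter results, and the label map is an assumption you would adapt to the real `SanitizeUserPromptResponse` object.

```python
# Hypothetical sketch of the findings parser. The filter keys and the
# "match_state" field are assumptions modeled on Model Armor's filter
# categories; adapt them to the real SanitizeUserPromptResponse.

FILTER_LABELS = {
    "pi_and_jailbreak": "Prompt Injection / Jailbreak",
    "sdp": "Sensitive Data (PII)",
    "malicious_uris": "Malicious URI",
    "rai": "Responsible AI policy",
}


def extract_triggers(filter_results: dict) -> list[str]:
    """Return a human-readable label for every filter that matched."""
    triggers = []
    for key, result in filter_results.items():
        if result.get("match_state") == "MATCH_FOUND":
            # Fall back to the raw filter key for categories we don't map.
            triggers.append(FILTER_LABELS.get(key, key))
    return triggers
```

Keeping the parser a pure function over the response data makes it easy to unit-test without calling the Model Armor API.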
Boilerplate Implementation
Refer to scripts/safety_util.py for the core parsing logic.
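The interception step can be sketched independently of the web framework. In this hypothetical sketch, `sanitize_prompt` stands in for the real Model Armor call (it returns the trigger list produced by safety_util), and `PromptBlockedError` is an illustrative exception; inside a FastAPI route you would catch it and raise an `HTTPException` with a 4xx status instead.

```python
# Hypothetical interception sketch: block a prompt before it reaches
# the GenAI model. `sanitize_prompt` is injected so the gatekeeper can
# be tested without the Model Armor client.

class PromptBlockedError(Exception):
    """Raised when Model Armor reports one or more safety triggers."""

    def __init__(self, triggers: list[str]):
        self.triggers = triggers
        super().__init__(f"Prompt blocked: {', '.join(triggers)}")


def gatekeep(prompt: str, sanitize_prompt) -> str:
    """Return the prompt unchanged if clean, else raise PromptBlockedError."""
    triggers = sanitize_prompt(prompt)  # list of human-readable triggers
    if triggers:
        raise PromptBlockedError(triggers)
    return prompt
```

In a FastAPI handler this would typically run before the agent orchestrator is invoked, translating `PromptBlockedError` into a 400-level response so the client sees which policy was triggered.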