# gcp-agent-safety-gatekeeper
This skill implements the Python integration layer for Google Cloud Model Armor. Grounded in `security_blog.md`, it provides the `safety_util` functions needed to intercept prompts, sanitize them against your security policy, and handle safety triggers in your FastAPI backend.
## Usage
Ask Antigravity to:
- "Add a safety gatekeeper to my agent backend"
- "Implement Model Armor prompt sanitization in Python"
- "Create a safety utility to parse Model Armor findings"
- "Handle prompt injection errors in my FastAPI app"
## Integration Pattern
- Client Initialization: Configures the `ModelArmorClient` with the correct regional endpoint.
- `safety_util.py`: A robust parser that converts `SanitizeUserPromptResponse` into a list of human-readable security triggers (e.g., "Prompt Injection", "PII: Person names").
- Application Interception: Logic to block or sanitize prompts before they reach the GenAI model or agent orchestrator.
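The client-initialization step above might look like the following. This is a minimal sketch assuming the `google-cloud-modelarmor` Python library; the endpoint format, location, and template path are assumptions to verify against your own project:

```python
from google.api_core.client_options import ClientOptions
from google.cloud import modelarmor_v1

LOCATION = "us-central1"  # assumed region; use the region hosting your template

# Model Armor is served from regional endpoints, so the client must be
# pointed at the endpoint matching the template's region.
client = modelarmor_v1.ModelArmorClient(
    client_options=ClientOptions(
        api_endpoint=f"modelarmor.{LOCATION}.rep.googleapis.com"
    )
)

def sanitize_prompt(project_id: str, template_id: str, prompt: str):
    """Run a user prompt through a Model Armor template and return the
    raw SanitizeUserPromptResponse."""
    request = modelarmor_v1.SanitizeUserPromptRequest(
        name=f"projects/{project_id}/locations/{LOCATION}/templates/{template_id}",
        user_prompt_data=modelarmor_v1.DataItem(text=prompt),
    )
    return client.sanitize_user_prompt(request=request)
```

Calling the client requires Application Default Credentials and an existing Model Armor template; the response's `sanitization_result` is what a `safety_util`-style parser then consumes.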
## Boilerplate Implementation
Refer to `scripts/safety_util.py` for the core parsing logic.
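The script itself is not reproduced here. As an illustration of the idea, a parser over a plain-dict stand-in for `SanitizeUserPromptResponse.sanitization_result` might look like this; the filter keys, display labels, and dict shape are assumptions to adapt to the real proto fields:

```python
# Hypothetical display names for Model Armor filter keys; adjust to
# match the filters actually enabled in your template.
FILTER_LABELS = {
    "pi_and_jailbreak": "Prompt Injection",
    "sdp": "Sensitive Data Protection",
    "rai": "Responsible AI",
    "malicious_uris": "Malicious URLs",
}

def parse_findings(sanitization_result: dict) -> list[str]:
    """Convert a sanitization result (dict stand-in for the proto) into
    human-readable security triggers, one per matched filter."""
    triggers = []
    for key, result in sanitization_result.get("filter_results", {}).items():
        if result.get("match_state") == "MATCH_FOUND":
            triggers.append(FILTER_LABELS.get(key, key))
    return triggers

def gatekeep(sanitization_result: dict) -> tuple[bool, list[str]]:
    """Application interception: return (allowed, triggers). A prompt is
    blocked whenever any filter matched."""
    triggers = parse_findings(sanitization_result)
    return (len(triggers) == 0, triggers)
```

For example, `gatekeep({"filter_results": {"pi_and_jailbreak": {"match_state": "MATCH_FOUND"}}})` returns `(False, ["Prompt Injection"])`, which a FastAPI handler can translate into an HTTP 400 carrying the trigger list.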