gcp-agent-eval-metric-configurator
This skill helps you configure sophisticated automated evaluation metrics. Grounded in evaluation_blog.md, it supports computation-based, rubric-based, and managed Vertex AI metrics.
Usage
Ask Antigravity to:
- "Configure Grounding metrics for my researcher agent"
- "Add a Tool Use Quality evaluator to my pipeline"
- "Set up a ResponseMatch check against my reference answers"
- "Configure an adaptive rubric for style alignment"
Metric Taxonomy
- Computation-Based: JSON validity checks, execution trajectory matching.
- Managed Rubric-Based (Vertex AI):
  - GROUNDING: Ensures responses are fully supported by the retrieved context (RAG).
  - TOOL_USE_QUALITY: Checks whether the right tool was called with the correct parameters (no reference needed).
- Adaptive Rubrics: Use LLM-as-a-judge to grade responses based on unique criteria generated for each prompt.
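To make the computation-based category concrete, here is a minimal sketch of a JSON-validity metric as a plain function. The names (`MetricResult`, `json_validity_metric`) are illustrative only, not part of any SDK; a real pipeline would plug a function like this into its evaluation harness.

```python
import json
from dataclasses import dataclass

@dataclass
class MetricResult:
    """Illustrative result container: 1.0 = pass, 0.0 = fail."""
    name: str
    score: float
    detail: str = ""

def json_validity_metric(response: str) -> MetricResult:
    """Computation-based check: does the agent's response parse as JSON?"""
    try:
        json.loads(response)
        return MetricResult(name="json_validity", score=1.0)
    except json.JSONDecodeError as exc:
        return MetricResult(name="json_validity", score=0.0, detail=str(exc))

print(json_validity_metric('{"answer": 42}').score)  # 1.0
print(json_validity_metric("not json").score)        # 0.0
```

Unlike rubric-based metrics, this kind of check is deterministic and needs no LLM judge, which makes it cheap to run on every evaluation pass.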
Metric Templates
Refer to resources/metric_templates.json for standard definitions.
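As a rough sketch of how such templates might be consumed, the snippet below inlines a hypothetical set of entries and looks one up by name. The schema shown here is an assumption for illustration; the actual structure is defined in resources/metric_templates.json and may differ.

```python
import json

# Hypothetical template entries; the real schema lives in
# resources/metric_templates.json and may differ.
TEMPLATES_JSON = """
{
  "grounding":        {"type": "managed_rubric", "metric": "GROUNDING"},
  "tool_use_quality": {"type": "managed_rubric", "metric": "TOOL_USE_QUALITY"},
  "json_validity":    {"type": "computation"}
}
"""

def load_template(name: str) -> dict:
    """Look up a metric template by name, failing loudly on typos."""
    templates = json.loads(TEMPLATES_JSON)
    if name not in templates:
        raise KeyError(f"Unknown metric template: {name}")
    return templates[name]

print(load_template("grounding"))  # {'type': 'managed_rubric', 'metric': 'GROUNDING'}
```

Failing loudly on an unknown name is a deliberate choice: a silently skipped metric is easy to miss in a long evaluation run.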