query-budget-enforcer
Query Budget Enforcer Protocol
This skill acts as the financial and operational conscience of the database. It reviews queries not just for speed, but for computational cost, lock duration, and hardware limits.
Core assumption: In cloud data warehouses (Snowflake, BigQuery), bad queries cost actual money. In OLTP (PostgreSQL), bad queries crash the server.
1. Budget Categories (Static vs Dynamic)
- Default (Static): Analyze queries textually based on known engine constraints (e.g. flagging `SELECT *` without a `WHERE` clause for BigQuery, or a missing `LIMIT`).
- Dynamic (On-Demand): Only if requested, connect to the database or a metrics API (e.g. `pg_stat_statements`) to read historical query execution costs or dry-run estimates.
- When evaluating a query, enforce limits based on the context:
A. Data Warehouse Constraints (BigQuery / Snowflake)
- Terabytes Scanned: Did they use `SELECT *` without partition filters (`WHERE date = ...`)?
- Cost Estimation: "This query scans 4TB. At $5/TB, this single run costs $20."
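The cost estimation above is simple arithmetic over a dry-run byte count. A minimal sketch, assuming the $5/TB on-demand rate used in the example (actual rates vary by region and pricing model):

```python
TB = 1024 ** 4  # bytes per terabyte (BigQuery bills scanned bytes in binary TB)

def estimate_scan_cost(bytes_scanned: int, price_per_tb: float = 5.0) -> float:
    """Estimate on-demand query cost in dollars from bytes scanned,
    e.g. the totalBytesProcessed figure returned by a dry run."""
    return (bytes_scanned / TB) * price_per_tb

# The example's 4TB scan at $5/TB:
print(estimate_scan_cost(4 * TB))  # 20.0
```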
B. OLTP Constraints (PostgreSQL / MySQL)
- Rows Examined vs Returned: If a query examines 100,000 rows (`Rows Removed by Filter: 99990`) to return 10 rows, it violates the efficiency budget.
- Locking Time: Does the query hold an `ACCESS EXCLUSIVE` lock or a lengthy `SELECT ... FOR UPDATE` that blocks concurrent transactions?
- Memory Spillage: Does a sort operation exceed `work_mem` and spill to temporary disk?
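The examined-vs-returned rule can be expressed as a ratio check. A minimal sketch, assuming a 1000:1 threshold as the budget default (the threshold is illustrative, not a PostgreSQL setting):

```python
def violates_efficiency_budget(rows_examined: int, rows_returned: int,
                               max_ratio: int = 1000) -> bool:
    """True if the query examined disproportionately more rows than it returned.

    The 1000:1 default threshold is an assumed budget, not an engine constant.
    max(rows_returned, 1) avoids division issues for zero-row results.
    """
    return rows_examined > max_ratio * max(rows_returned, 1)

# The example: 100,000 rows examined to return 10 (a 10,000:1 ratio).
print(violates_efficiency_budget(100_000, 10))  # True
```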
2. Enforcing Limits (The Output)
Review the query and provide a Pass/Fail budget assessment.
Required Outputs (must write BOTH to `docs/database-report/`):
1. **Human-Readable Markdown (`docs/database-report/query-budget-report.md`)**
### 💳 Query Budget Assessment: FAILED ❌
**Query Analyzed:**
`SELECT * FROM events WHERE event_type = 'click';`
**Violations Detected:**
1. **[BigQuery] Partition Ignored:** The table is partitioned by `event_date`, but no date filter was provided. This will scan the entire 50TB table.
2. **[OLTP] Over-fetching:** `SELECT *` pulls down 140 columns when the application likely only needs 3.
### 💡 Remediation (How to get under budget)
Rewrite the query to respect system limits:
```sql
-- ✅ Budget-friendly rewrite
-- Added mandatory partition filter limit and explicit columns
SELECT user_id, event_payload, created_at
FROM events
WHERE event_type = 'click'
  AND event_date >= CURRENT_DATE - INTERVAL '7 days';
```
2. **Machine-Readable JSON (`docs/database-report/query-budget-output.json`)**
```json
{
"skill": "query-budget-enforcer",
"assessment": "FAILED",
"violations": [
{"type": "Partition Ignored", "engine": "BigQuery", "severity": "High"},
{"type": "Over-fetching", "engine": "OLTP", "severity": "Medium"}
],
"suggested_sql": "SELECT user_id, event_payload..."
}
```
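Emitting the machine-readable file is straightforward; a minimal sketch, assuming the JSON schema shown above (the function name and signature are illustrative):

```python
import json
from pathlib import Path

def write_budget_output(assessment: str, violations: list, suggested_sql: str,
                        out_dir: str = "docs/database-report") -> Path:
    """Write the machine-readable budget assessment to query-budget-output.json."""
    report = {
        "skill": "query-budget-enforcer",
        "assessment": assessment,
        "violations": violations,
        "suggested_sql": suggested_sql,
    }
    path = Path(out_dir) / "query-budget-output.json"
    path.parent.mkdir(parents=True, exist_ok=True)  # create the report dir if absent
    path.write_text(json.dumps(report, indent=2))
    return path
```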
Guardrails
- Engine Awareness: BigQuery hates `SELECT *` but loves denormalization. PostgreSQL hates denormalization, but `SELECT *` is less deadly if the table is properly indexed. Adjust budgets based on the target engine.
- Hard Limits: Recommend configuring server-side limits like `statement_timeout` (PostgreSQL) or Maximum Bytes Billed (BigQuery) to definitively enforce budgets at the infrastructure layer.
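Both hard limits can be wired in programmatically. A minimal sketch, assuming PostgreSQL's `statement_timeout` setting (milliseconds) and BigQuery's `maximum_bytes_billed` job option, both real settings; the helper functions and default values are illustrative:

```python
def pg_statement_timeout(timeout_ms: int = 5000) -> str:
    """SQL that caps statement runtime for the current PostgreSQL session;
    queries exceeding it are cancelled instead of running up the budget."""
    return f"SET statement_timeout = {timeout_ms};"

def bq_bytes_cap(max_bytes: int) -> dict:
    """Config dict mirroring BigQuery's maximum_bytes_billed job option;
    jobs that would scan more than the cap fail instead of billing."""
    return {"maximum_bytes_billed": max_bytes}

print(pg_statement_timeout())  # SET statement_timeout = 5000;
```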