optimization-mastery
<domain_overview>
⚡ OPTIMIZATION MASTERY: THE VELOCITY CORE
Philosophy: Efficiency is the highest form of quality. Minimal overhead, maximum impact. Performance-First is the only law. INTERACTION HYGIENE MANDATE (CRITICAL): Never prioritize synthetic benchmarks over real-world interaction smoothness. AI-generated code often misses Interaction to Next Paint (INP) bottlenecks caused by synchronous main-thread blocking. You MUST use `scheduler.yield()` or `requestAnimationFrame` for any complex DOM or state updates triggered by user events. Any implementation that risks "Layout Thrashing" or exceeds the 200ms INP threshold must be rejected. </domain_overview> <frontend_velocity>
🎨 PROTOCOL 1: FRONTEND PRECISION (INP & BUNDLE)
Aesthetics must be fast. Refer to frontend-design for visuals, but enforce these for speed.
- The INP Threshold:
- Core Metric: Interaction to Next Paint (INP) MUST be < 200ms.
- Action: Yield to the main thread for heavy logic. Use `scheduler.yield()` or `requestIdleCallback`.
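The yield pattern above can be sketched as follows (a minimal sketch: `scheduler.yield()` is feature-detected because it is not available in every runtime, and the 50ms slice budget is an illustrative default, not a spec value):

```javascript
// Yield control back to the event loop so pending input events can be
// handled and painted. Uses scheduler.yield() where available, otherwise
// falls back to a zero-delay timeout.
function yieldToMain() {
  if (globalThis.scheduler && typeof globalThis.scheduler.yield === "function") {
    return globalThis.scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large batch without holding the main thread in one long task.
async function processChunked(items, handle, budgetMs = 50) {
  let sliceStart = Date.now();
  for (const item of items) {
    handle(item);
    if (Date.now() - sliceStart > budgetMs) {
      await yieldToMain(); // break the work into sub-budget tasks
      sliceStart = Date.now();
    }
  }
}
```

Splitting event-driven work this way is what keeps the "next paint" in INP from waiting on a single monolithic task.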
- Hydration Strategies:
- Mandatory: Use Partial Hydration or Resumability (e.g. Qwik/Astro patterns).
- Forbidden: Massive "Full Hydration" of static content.
- Asset Governance:
- Images: Modern formats (AVIF/WebP) with `srcset` are mandatory.
- Fonts: Only subsetted `wght` variable fonts. </frontend_velocity> <backend_velocity>
🏗️ PROTOCOL 2: BACKEND VELOCITY (QUERY & DATA)
The backend must be a fortress of speed. Refer to backend-design for architecture.
- Identifier Strategy:
- Mandatory: Use UUIDv7 for all primary keys in high-insert tables.
- Rationale: Time-sortable IDs prevent B-tree fragmentation and boost insert speed by ~30%.
- Query Budget:
- Max Latency: Sub-100ms for OLTP queries.
- Action: Every index MUST be a "Covering Index" for critical read paths.
- Edge Compute:
- Offload logic to Edge Functions (Vercel/Cloudflare) to reduce Time-to-First-Byte (TTFB). </backend_velocity> <ai_token_stewardship>
🤖 PROTOCOL 3: AI TOKEN STEWARDSHIP (RESOURCE OPS)
AI inference is expensive and slow. Optimize the "thought" itself.
- Context Window Management:
- Action: Use "Context Folding" (summarizing history) to keep prompts under 4k tokens if possible.
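The folding idea above could be sketched like this (a sketch under stated assumptions: ~4 characters per token as a rough estimate, a placeholder string summary instead of an LLM-generated one, and the last two turns always kept verbatim):

```javascript
// Rough token estimate: ~4 characters per token for English text.
const estimateTokens = (text) => Math.ceil(text.length / 4);

// Fold the oldest turns into a single summary message once the history
// exceeds the budget, keeping the most recent turns verbatim.
function foldContext(messages, budgetTokens = 4000) {
  const recent = [...messages];
  const old = [];
  const total = (list) => list.reduce((n, m) => n + estimateTokens(m.content), 0);
  // Move oldest turns out until the verbatim tail fits the budget
  // (always keep at least the last two turns intact).
  while (total(recent) > budgetTokens && recent.length > 2) {
    old.push(recent.shift());
  }
  if (old.length === 0) return recent;
  // Placeholder summary; in practice, ask the LLM to summarize `old`.
  const summary = old.map((m) => m.content.slice(0, 40)).join(" / ");
  return [{ role: "system", content: `[folded history] ${summary}` }, ...recent];
}
```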
- Credit-Based Execution:
- Assign a "Token Budget" to complex tool calling phases.
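One way such a budget could be enforced (the `TokenBudget` class and its numbers are illustrative, not from any particular framework):

```javascript
// Tracks token spend across one tool-calling phase and refuses calls
// that would exceed the assigned budget.
class TokenBudget {
  constructor(limit) {
    this.limit = limit;
    this.spent = 0;
  }
  canAfford(tokens) {
    return this.spent + tokens <= this.limit;
  }
  charge(tokens) {
    if (!this.canAfford(tokens)) {
      throw new Error(`Token budget exceeded: ${this.spent}+${tokens} > ${this.limit}`);
    }
    this.spent += tokens;
    return this.limit - this.spent; // remaining credit
  }
}
```

Checking `canAfford` before each tool call lets the agent degrade gracefully (e.g. skip an optional lookup) instead of failing mid-phase.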
- Caching:
- Implement Semantic Caching for repetitive LLM queries. </ai_token_stewardship> <audit_and_reference>
📂 COGNITIVE AUDIT CYCLE
- Is INP < 200ms?
- Are primary keys UUIDv7?
- Is hydration partial/resumable?
- Is the token budget justified for this request? </audit_and_reference>