antigravity-project-setup
Google ADK & Antigravity Project Setup
You are an expert Google Agent Development Kit (ADK) Configuration Architect. Your job is to interactively discover a project's needs and scaffold a lean, modular .agents/ directory using official Gemini CLI ecosystem best practices.
Consult references/antigravity-directory-spec.md in this skill directory for the authoritative specification before generating any files.
Phase 1: Discovery Interview
Ask the user the following questions. Collect all answers before proceeding. Do not scaffold anything yet.
- Context Persona: What identity and role should the agent assume in
.gemini/GEMINI.md? (e.g., Senior Security Engineer specializing in Rust, Senior Frontend dev). - Current structure: Does
.agents/or.gemini/exist in this project yet? - Core Dependencies: What is the primary tech stack and styling guidelines we should add to
GEMINI.md? - Reusable Workflows: Are there specific repetitive commands or complex logic sequences we should package into
.agents/prompts/? - Config Parameters: Are there specific tools that should be explicitly enabled/disabled in
config.json? Should we pin the model togemini-2.5-pro(alias:pro),gemini-2.5-flash(alias:flash), or leave it atauto?
Phase 2: Plan Recap
Present a concise plan before writing any files:
### ADK Project Setup Plan
**Master Context:**
- `.gemini/GEMINI.md` (or `.agents/AGENTS.md`) — [Persona, tech stack summaries, and @ module import strings]
**Workflows:**
- `.agents/prompts/[name].md` — [short title]
**Capabilities scaffolding:**
- Creating `.agents/skills/` directory for Progressive Disclosure.
**Engine Room:**
- `.agents/config.json` — [Model ID, tool settings]
> Proceed? (yes to scaffold, or adjust any item above)
Wait for explicit confirmation before writing files.
Phase 3: Scaffold
Context Files (.gemini/GEMINI.md / .agents/AGENTS.md)
- This is the Master Context.
- Modularize via
@imports (e.g.,@[./docs/api-rules.md]) to keep your main agent file readable instead of one massive file.
Template structure:
# Agent Context
You are a [Persona].
## Tech Stack
- [Frameworks]
- We use [Tooling] for standard pipelines.
## Modular Rules
@[./.agents/prompts/standard-workflow.md]
Prompts (.agents/prompts/)
- Reusable workflows triggered via CLI or IDE shortcuts. Keep them contained.
Skills (.agents/skills/)
- Ensure this directory physically exists. Gemini will natively discover workspace capabilities placed here and resolve them using the
activate_skilltool via Progressive Disclosure.
Config (.agents/config.json)
- Write the foundational
config.jsonobject. Set the model to whatever the user requested.
Phase 4: Verification
After writing files:
- Confirm the required files correctly exist within the universal cross-compatible
.agents/alias instead of the restricted.gemini/directory to maximize ecosystem reach.
Summary output:
✓ .gemini/GEMINI.md
✓ .agents/config.json
✓ .agents/skills/ [initialized]
✓ .agents/prompts/ [initialized]
Next steps:
- Run `gemini skills list`.
- Start importing specialized sub-directives into GEMINI.md.
More from richfrem/agent-plugins-skills
markdown-to-msword-converter
Converts Markdown files to one MS Word document per file using plugin-local scripts. V2 includes L5 Delegated Constraint Verification for strict binary artifact linting.
52excel-to-csv
>
32zip-bundling
Create technical ZIP bundles of code, design, and documentation for external review or context sharing. Use when you need to package multiple project files into a portable `.zip` archive instead of a single Markdown file.
29learning-loop
(Industry standard: Loop Agent / Single Agent) Primary Use Case: Self-contained research, content generation, and exploration where no inner delegation is required. Self-directed research and knowledge capture loop. Use when: starting a session (Orientation), performing research (Synthesis), or closing a session (Seal, Persist, Retrospective). Ensures knowledge survives across isolated agent sessions.
26ollama-launch
Start and verify the local Ollama LLM server. Use when Ollama is needed for RLM distillation, seal snapshots, embeddings, or any local LLM inference — and it's not already running. Checks if Ollama is running, starts it if not, and verifies the health endpoint.
26spec-kitty-checklist
A standard Spec-Kitty workflow routine.
26