gemini-cli-agent
Dependencies
This skill requires Python 3.8+ and the standard library only; no external packages are needed.
To install this skill's dependencies:
pip-compile ./requirements.in
pip install -r ./requirements.txt
See ./requirements.txt for the dependency lockfile (currently empty — standard library only).
Ecosystem Role: Inner Loop Specialist
This skill provides specialized Inner Loop Execution for the dual-loop skill.
- Orchestrated by: the agent-orchestrator skill (see the dual-loop plugin)
- Use Case: When "generic coding" is insufficient and specialized expertise (Security, QA, Architecture) is required.
- Why: The CLI context is naturally isolated (no git, no tools), making it the perfect "Safe Inner Loop".
Identity: The Sub-Agent Dispatcher 🎭
You, the Antigravity agent, dispatch specialized analysis tasks to Gemini CLI sub-agents.
🛠️ Core Pattern
cat <PERSONA_PROMPT> <INPUT> | gemini -p "<INSTRUCTION>" > <OUTPUT>
Note: Gemini uses -p (or --prompt) for headless execution, returning output directly without entering an interactive session. The persona prompt and the input file are concatenated on stdin so the sub-agent receives both in one stream.
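Assuming `gemini` is on `PATH`, the pattern above can be wrapped in a small agent-side helper that builds the shell pipeline. The persona file name and paths here are illustrative, not part of the skill:

```python
import shlex

def build_dispatch_command(persona_path: str, instruction: str,
                           input_path: str, output_path: str) -> str:
    """Build the shell pipeline that feeds a persona prompt plus the
    input file to a headless `gemini -p` run, capturing stdout to a file."""
    # Concatenate persona and input so the sub-agent sees both on stdin;
    # a separate `< input` redirect would override the pipe.
    return (
        f"cat {shlex.quote(persona_path)} {shlex.quote(input_path)} "
        f"| gemini -p {shlex.quote(instruction)} "
        f"> {shlex.quote(output_path)}"
    )

cmd = build_dispatch_command(
    "agents/security-auditor.md",          # hypothetical persona file
    "Audit this code. Do NOT use tools.",
    "src/auth.py",                         # hypothetical input
    "findings.md",
)
print(cmd)
```

Quoting with `shlex.quote` keeps instructions containing spaces or shell metacharacters intact when the command is handed to `run_command`.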
⚠️ CLI Best Practices
1. Token Efficiency — PIPE, Don't Load
Bad — loads file into agent memory just to pass it:
content = read_file("large.log")
run_command(f"gemini -p 'Analyze: {content}'")
Good — direct shell piping:
gemini -p "Analyze this log" < large.log > analysis.md
2. Self-Contained Prompts
The CLI runs in a separate context — no access to agent tools or memory.
- Add: "Do NOT use tools. Do NOT search filesystem."
- Ensure prompt + piped input contain 100% of necessary context.
- Model Selection: Gemini supports the -m <model> flag (e.g., -m gemini-3.1-pro-preview, -m gemini-2.5-pro, or the alias -m flash-lite).
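A minimal sketch of assembling a self-contained instruction, assuming nothing about the CLI beyond the flags documented above; the guardrail sentences are the ones this section recommends:

```python
GUARDRAILS = "Do NOT use tools. Do NOT search the filesystem."

def self_contained_prompt(task: str, schema_hint: str = "") -> str:
    """Prefix every dispatched instruction with the tool-prohibition
    guardrails so the isolated CLI context never tries to reach back
    into agent tools or memory."""
    parts = [GUARDRAILS, task]
    if schema_hint:
        parts.append(schema_hint)
    return "\n\n".join(parts)

prompt = self_contained_prompt(
    "Review the piped log for anomalies.",
    "Format findings as 🔴 CRITICAL, 🟡 MODERATE, 🟢 MINOR.",
)
print(prompt)
```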
3. Output to File
Always redirect output to a file (> output.md), then review with view_file.
4. Severity-Stratified Constraints
When dispatching code-review, architecture, or security analysis, explicitly instruct the CLI sub-agent to use the Severity-Stratified Output Schema. This ensures the Outer Loop can parse the results deterministically:
"Format all findings using the strict Severity taxonomy: 🔴 CRITICAL, 🟡 MODERATE, 🟢 MINOR."
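On the Outer Loop side, output emitted under this schema can be parsed deterministically. A sketch, assuming one finding per line prefixed by its severity tag (written for Python 3.8+ compatibility, per the Dependencies section):

```python
from collections import defaultdict

SEVERITIES = ("🔴 CRITICAL", "🟡 MODERATE", "🟢 MINOR")

def parse_findings(report: str) -> dict:
    """Group findings by severity tag so the Outer Loop can gate on
    CRITICAL items without re-reading the whole report."""
    grouped = defaultdict(list)
    for raw in report.splitlines():
        line = raw.strip()
        for tag in SEVERITIES:
            if line.startswith(tag):
                # Drop the tag plus any ": " or " - " separator after it.
                grouped[tag].append(line[len(tag):].strip(" :-"))
                break
    return dict(grouped)

sample = """\
🔴 CRITICAL: SQL injection in login handler
🟡 MODERATE: missing rate limiting
🟢 MINOR: inconsistent log format
"""
findings = parse_findings(sample)
```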
🎭 Persona Categories
| Category | Personas | Use For |
|---|---|---|
| Security | security-auditor | Red team, vulnerability scanning |
| Development | 14 personas | Backend, frontend, React, Python, Go, etc. |
| Quality | architect-review, code-reviewer, qa-expert, test-automator, debugger | Design validation, test planning |
| Data/AI | 8 personas | ML, data engineering, DB optimization |
| Infrastructure | 5 personas | Cloud, CI/CD, incident response |
| Business | product-manager | Product strategy |
| Specialization | api-documenter, documentation-expert | Technical writing |
Persona categories and representative personas are summarized in the table above. Load each persona's prompt file from your CLI plugin's agents/ directory.
🔄 Recommended Audit Loop
- Red Team (Security Auditor) → find exploits
- Architect → validate design didn't add complexity
- QA Expert → find untested edge cases
Run architect AFTER red team to catch security-fix side effects.
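The three-pass loop above can be expressed as an ordered dispatch plan; the persona file names are illustrative and should match whatever exists in your plugin's agents/ directory:

```python
# Ordered so the architect pass runs AFTER red team, catching
# complexity introduced by security fixes.
AUDIT_LOOP = [
    ("security-auditor", "Red-team the change; list exploits."),
    ("architect-review", "Check the design for added complexity."),
    ("qa-expert", "Enumerate untested edge cases."),
]

def dispatch_plan(input_path):
    """Yield one shell pipeline per audit pass, writing each
    persona's findings to its own report file."""
    for persona, instruction in AUDIT_LOOP:
        yield (
            f"cat agents/{persona}.md {input_path} "
            f'| gemini -p "{instruction}" > {persona}-report.md'
        )

plan = list(dispatch_plan("src/payment.py"))
```

Each pass writes to its own report file, so the Outer Loop can review the red-team findings before the architect pass even finishes.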