# do-setup: Project Setup
## Role
You are a senior developer advocate responsible for project initialization, tooling configuration, and ensuring the agent-assisted development environment is correctly set up.
## Autonomous Execution Policy
CRITICAL: NEVER pause, stop, or wait for user input during execution. Proceed through ALL steps autonomously without asking the user to "continue", "proceed", or confirm intermediate results. The ONLY acceptable reason to stop and ask the user is when there is a genuine doubt or ambiguity that cannot be resolved by reading the project files.
## Execution Constraints
CRITICAL: This skill MUST NOT execute the application, run tests, start servers, compile code, or perform any runtime validation. Its sole purpose is to analyze the project structure and produce the configuration document. All analysis must be done by reading files and inspecting the directory structure — never by running the application.
## Procedures
### Preamble: Parse Invocation Argument
Check if the user invoked this skill with the argument `agents` (e.g., `/do-setup agents`).
- If the argument is `agents`: set mode = `agents-only`. Skip Steps 1–4 and 6. Execute only Step 0 (AI tool detection) and Step 5 (install agents).
- If no argument or any other argument: set mode = `full`. Execute all steps in order.
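The argument-to-mode mapping above can be sketched as a tiny shell helper (illustrative only; the skill parses its invocation itself, and `parse_mode` is a hypothetical name):

```shell
parse_mode() {   # $1 = raw invocation argument (may be empty)
  case "$1" in
    agents) echo "agents-only" ;;   # /do-setup agents → Steps 0 and 5 only
    *)      echo "full" ;;          # anything else → all steps in order
  esac
}
```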
### Step 0: Detect AI Tool Environment

Before doing anything else, determine which AI tool is executing this skill:

- Check for a `.claude/` directory in the project root → Claude Code → config file: `CLAUDE.md`
- Check if `.github/copilot-instructions.md` already exists → GitHub Copilot → config file: `.github/copilot-instructions.md`
- Check if the `.github/` directory exists but `copilot-instructions.md` does not → likely GitHub Copilot → config file: `.github/copilot-instructions.md`
- Check for any of the following Cursor AI indicators → Cursor AI:
  - `.cursor/rules/` directory exists → config file: `.cursor/rules/project.mdc`, skills dirs: `.cursor/rules/`
  - `.cursor/mcp.json` exists → confirms Cursor AI (use together with rules detection)
  - `.cursorrules` file exists → config file: `.cursorrules`, skills dirs: none (legacy format)
- Check for Opencode indicators → Opencode → config file: `AGENTS.md`:
  - `opencode.json` exists in the project root, OR
  - `.opencode/` directory exists in the project root, OR
  - `AGENTS.md` already exists in the project root, OR
  - The current tool context identifies itself as Opencode
- If none of the above, infer from the current tool context. When in doubt, default to `CLAUDE.md`.
Store the resolved config file path, the detected AI tool name, and the skills directories internally. Use them consistently throughout all remaining steps.
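The detection precedence above can be sketched in shell (an illustrative sketch only; the skill performs these checks with its own file tools, and `detect_tool` is a hypothetical helper name):

```shell
detect_tool() {   # $1 = project root; echoes the resolved config file path
  if [ -d "$1/.claude" ]; then
    echo "CLAUDE.md"                          # Claude Code
  elif [ -f "$1/.github/copilot-instructions.md" ] || [ -d "$1/.github" ]; then
    echo ".github/copilot-instructions.md"    # GitHub Copilot (existing or likely)
  elif [ -d "$1/.cursor/rules" ]; then
    echo ".cursor/rules/project.mdc"          # Cursor AI (modern rules format)
  elif [ -f "$1/.cursorrules" ]; then
    echo ".cursorrules"                       # Cursor AI (legacy)
  elif [ -f "$1/opencode.json" ] || [ -d "$1/.opencode" ] || [ -f "$1/AGENTS.md" ]; then
    echo "AGENTS.md"                          # Opencode
  else
    echo "CLAUDE.md"                          # default when nothing matches
  fi
}
```

Note how the ordering encodes the precedence: a `.claude/` directory wins over everything, and Opencode markers are only consulted after Claude Code, Copilot, and Cursor indicators have been ruled out.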
### Step 1: Initialize Project Configuration

The initialization strategy depends on the detected AI tool:

- Claude Code: Execute the `/init` skill/command to generate the initial `CLAUDE.md`. Wait for it to complete before proceeding.
- Opencode: No bash CLI is available for initialization (`opencode init` is not a valid command — Opencode's CLI interprets the first positional argument as a project path, so `opencode init` would try to `cd` into a directory named `init` and fail). Create `AGENTS.md` at the project root directly using the `Write` tool if it doesn't exist. Do not invoke `opencode init` via Bash under any circumstances.
- GitHub Copilot: No built-in init command. Create `.github/copilot-instructions.md` if it doesn't exist.
- Cursor AI: No built-in init command. Create the config file as determined in Step 0:
  - If using `.cursor/rules/project.mdc`: create the `.cursor/rules/` directory if needed, then create the file with the following frontmatter header:

        ---
        description: Project instructions and conventions for the development orchestrator
        globs:
        alwaysApply: true
        ---

  - If using `.cursorrules` (legacy): create the file at the project root.

For all tools: if the config file already exists, proceed to Step 2 without overwriting it.
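For the Cursor case, the guarded creation might look like the following shell sketch (illustrative; the skill would normally use its own `Write` tool, and the guard enforces the "never overwrite an existing config file" rule):

```shell
# Create the Cursor rules file only if it does not already exist.
mkdir -p .cursor/rules
if [ ! -f .cursor/rules/project.mdc ]; then
  cat > .cursor/rules/project.mdc <<'EOF'
---
description: Project instructions and conventions for the development orchestrator
globs:
alwaysApply: true
---
EOF
fi
```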
### Step 2: Deep Project Analysis

- Read the project configuration file at the path determined in Step 0, and `README.md` if it exists.
- Read root config files if they exist: `package.json`, `go.mod`, `pom.xml`, `build.gradle`, `build.gradle.kts`, `docker-compose.yml`, `tsconfig.json`, `settings.gradle`, `.nvmrc`, `Makefile`, `Dockerfile`.
- Scan the directory structure recursively, ignoring:
  - Dependencies: `node_modules/`, `.venv/`, `venv/`, `vendor/`, `.gradle/`, `.m2/`
  - Build: `target/`, `build/`, `dist/`, `out/`, `.next/`, `__pycache__/`
  - Hidden: any path starting with `.` (except `.claude/`, `.github/`, and `.cursor/`)
  - Binaries/media: `*.jar`, `*.class`, `*.png`, `*.jpg`, `*.pdf`
- Read representative files from each layer (e.g., a controller, a use case, a repository) to understand adopted patterns.
- Build an internal summary with:
  - Main stack and versions
  - Adopted architecture (Clean Arch, MVC, DDD, etc.)
  - Naming and organization patterns
  - System purpose
  - External integrations (queues, databases, APIs)
- Check test infrastructure:
  - Look for a `test` script in `package.json` (or the equivalent for the stack).
  - Scan for test files (`*.test.*`, `*.spec.*`, `__tests__/`, `test/`, `tests/`).
  - If neither is found, include the following warning in the project configuration file output (kept in PT-BR, per the Output Language section): "⚠️ AVISO: Nenhuma infraestrutura de testes detectada. O DO Framework exige que testes passem antes de marcar tasks como concluídas. Configure um test runner antes de usar `do-execute-task`."
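An ignore-aware scan along these lines could be expressed with `find` (a sketch only; the skill normally uses its own Glob/List tools rather than Bash, `scan_tree` is a hypothetical helper, and the hidden-path rule is simplified here to the prune list):

```shell
scan_tree() {   # $1 = project root; prints files worth analyzing
  find "$1" \
    \( -name node_modules -o -name .venv -o -name venv -o -name vendor \
       -o -name .gradle -o -name .m2 \
       -o -name target -o -name build -o -name dist -o -name out \
       -o -name .next -o -name __pycache__ \) -prune -o \
    -type f ! -name '*.jar' ! -name '*.class' \
    ! -name '*.png' ! -name '*.jpg' ! -name '*.pdf' -print
}
```

The `-prune` action stops `find` from descending into the ignored directories at all, which matters for `node_modules/` and similar trees that can contain hundreds of thousands of files.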
### Step 3: Identify Relevant Skills

- List all available skills by scanning the AI tool's skills directories:
  - Claude Code: `.claude/skills/`
  - Opencode: `.agents/skills/`
  - GitHub Copilot: `.github/` (look for instruction files)
  - Cursor AI: `.cursor/rules/` (scan all `.mdc` files)

  For each directory, list every skill/rule file found and read its content.
- EXCLUDE all `do-*` skills entirely — they are internal workflow skills and must NOT appear anywhere in the output artifact. Only evaluate technology/library skills (e.g., `claude-api`, `find-skills`).
- For each remaining (non-`do-*`) skill, read its descriptor file (`SKILL.md` for Claude Code, the `.mdc` content for Cursor AI).
- Based on the Step 2 summary, evaluate whether the skill is relevant to the project.
- A skill is relevant if it covers at least one of:
  - The project's primary language or framework
  - The adopted architecture
  - An identified pattern or integration (queues, database, API, etc.)
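The `do-*` exclusion rule can be illustrated with a small shell helper (hypothetical; shown for the Claude Code layout, where each skill lives in its own directory containing a `SKILL.md`):

```shell
list_candidate_skills() {   # $1 = skills dir, e.g. .claude/skills
  # Every skill subdirectory with a SKILL.md, minus internal do-* workflow skills.
  for d in "$1"/*/; do
    name=$(basename "$d")
    case "$name" in do-*) continue ;; esac   # never surface workflow skills
    [ -f "$d/SKILL.md" ] && echo "$name"
  done
}
```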
### Step 4: Update the Project Configuration File

Merge the following sections into the project configuration file at the path determined in Step 0. Preserve all existing content and append or update only the sections below:
## Project Summary
- **Purpose:** [system description]
- **Stack:** [main technologies and versions]
- **Architecture:** [adopted pattern]
- **Integrations:** [external services]
## Available Skills
**NOTE: Only list non-`do-*` skills here. Never include workflow skills (do-setup, do-create-prd, do-create-techspec, do-create-tasks, do-execute-task, do-execute-review, do-execute-qa, etc.) in this table.**
| Skill | Path | When to use |
|-------|------|-------------|
| [name] | [skills-dir]/[skill]/SKILL.md | [usage context] |
## Uncovered Skills
| Technology | Note |
|------------|------|
| [e.g., Java/Quarkus] | No skill available locally — add manually to the skills directory |
## Project Conventions
- **Naming:** [file and folder naming patterns]
- **Directory structure:** [relevant paths per layer]
- **Output patterns:** [where to generate files, templates used]
### Step 5: Install Orchestration Agents & Commands
Based on the AI tool detected in Step 0, install the `agent-execute-task` worker subagent and the slash command for orchestration. The orchestration logic for the task queue lives in different places depending on the AI tool's subagent nesting policy:

- Claude Code, Cursor, GitHub Copilot: subagents cannot spawn other subagents by default (per the Claude Code docs; Copilot requires `chat.subagents.allowInvocationsFromSubagents=true`, which is off by default). The orchestration loop lives inside the slash command/prompt itself (it runs on the main agent), and only `agent-execute-task` is installed as a subagent. Each task is delegated via the platform's task tool (one nesting level — allowed).
- Opencode: subagent nesting is allowed natively. Both `agent-execute-all-tasks` (orchestrator) and `agent-execute-task` (worker) are installed as subagents.

Naming convention: skills and commands keep the `do-` prefix; agents use the `agent-` prefix to avoid name collisions with the underlying skill.
- Locate the `agents/` subdirectory inside the `do-setup` skill directory by searching for `**/do-setup/agents` using Glob. This directory was copied alongside the `SKILL.md` when the user ran `npx skills add`.
- For each detected AI tool, create the target directories if they don't exist and copy the corresponding files:

  Claude Code (if `.claude/` was detected):
  - Run `mkdir -p .claude/agents .claude/commands`
  - Copy `<skill-dir>/agents/claude/agents/agent-execute-task.md` → `.claude/agents/agent-execute-task.md`
  - Copy `<skill-dir>/agents/claude/commands/do-execute-all-tasks.md` → `.claude/commands/do-execute-all-tasks.md`
  - Do NOT copy any `agent-execute-all-tasks.md` — orchestration is embedded in the slash command (Claude Code does not allow subagents to spawn other subagents).
  - If a previous install left a stale `.claude/agents/agent-execute-all-tasks.md` in the project, delete it (`rm -f .claude/agents/agent-execute-all-tasks.md`).

  Cursor AI (if `.cursor/` was detected):
  - Run `mkdir -p .cursor/agents .cursor/commands`
  - Copy `<skill-dir>/agents/cursor/agents/agent-execute-task.md` → `.cursor/agents/agent-execute-task.md`
  - Copy `<skill-dir>/agents/cursor/commands/do-execute-all-tasks.md` → `.cursor/commands/do-execute-all-tasks.md`
  - Do NOT copy any `agent-execute-all-tasks.md` — orchestration is embedded in the slash command.
  - If a previous install left a stale `.cursor/agents/agent-execute-all-tasks.md`, delete it (`rm -f .cursor/agents/agent-execute-all-tasks.md`).

  GitHub Copilot (if `.github/` was detected):
  - Run `mkdir -p .github/agents .github/prompts`
  - Copy `<skill-dir>/agents/github/agents/agent-execute-task.agent.md` → `.github/agents/agent-execute-task.agent.md`
  - Copy `<skill-dir>/agents/github/prompts/do-execute-all-tasks.prompt.md` → `.github/prompts/do-execute-all-tasks.prompt.md`
  - Do NOT copy any `agent-execute-all-tasks.agent.md` — orchestration is embedded in the prompt.
  - If a previous install left a stale `.github/agents/agent-execute-all-tasks.agent.md`, delete it (`rm -f .github/agents/agent-execute-all-tasks.agent.md`).

  Opencode (if Opencode was detected in Step 0):
  - Run `mkdir -p .opencode/agents .opencode/commands`
  - Copy `<skill-dir>/agents/opencode/agents/agent-execute-all-tasks.md` → `.opencode/agents/agent-execute-all-tasks.md`
  - Copy `<skill-dir>/agents/opencode/agents/agent-execute-task.md` → `.opencode/agents/agent-execute-task.md`
  - Copy all `<skill-dir>/agents/opencode/commands/*.md` → `.opencode/commands/`
- Confirm to the user which files were installed and for which tools.

Use Bash to run the `cp` commands. `<skill-dir>` is the path returned by the Glob search for `**/do-setup/agents` (without the trailing `/agents`).
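Condensed into one shell sketch, the Claude Code branch might look as follows (`install_claude_agents` is a hypothetical helper; the resolved `<skill-dir>` is passed in as an argument rather than hard-coded):

```shell
install_claude_agents() {   # $1 = resolved <skill-dir> from the Glob search
  mkdir -p .claude/agents .claude/commands
  cp "$1/agents/claude/agents/agent-execute-task.md" .claude/agents/
  cp "$1/agents/claude/commands/do-execute-all-tasks.md" .claude/commands/
  # Orchestration lives in the slash command, so remove any stale
  # orchestrator subagent left behind by an older install.
  rm -f .claude/agents/agent-execute-all-tasks.md
}
```

The Cursor, Copilot, and Opencode branches follow the same shape with their respective source and target paths.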
### Step 6: Report Results & Sync Progress (Mandatory)

- SYNC INTERNAL PROGRESS: Once the project configuration file is updated, use the `TaskUpdate` tool to mark all corresponding items in your internal task tracking as `completed`.
- ARTIFACT PATH VERIFICATION: Before reporting, confirm the config file was written to the exact path resolved in Step 0. Read the file back to verify it exists and contains the expected content.
- Provide a summary of the setup performed.
- COMPLIANCE CHECK: Before responding to the user, verify:
  - Is the project configuration file saved at the correct path (resolved in Step 0)?
  - Did you accurately identify the project stack and skills?
## Output Language

All generated artifacts (project configuration file sections, summaries) must be written in Brazilian Portuguese (PT-BR). Only code examples, variable names, and file paths remain in English.
## Error Handling

- If no config files are found (no `package.json`, `go.mod`, etc.), warn the user that the project may not be initialized and ask for clarification about the stack.
- If the project configuration file does not exist, create it from scratch.
- If the project configuration file already exists, merge new sections without overwriting user-written content — append or update only the sections defined in Step 4.
- If the skills directory is empty or missing, report that no skills are available and suggest the user install skills.
- If the directory scan reveals an unrecognizable project structure, document what was found and ask the user for guidance.
## References

- Output: Project configuration file (e.g., `CLAUDE.md` for Claude Code, `AGENTS.md` for Opencode, `.github/copilot-instructions.md` for GitHub Copilot, `.cursor/rules/project.mdc` or `.cursorrules` for Cursor AI)
- Skills directories:
  - Claude Code: `.claude/skills/` (each skill has a `SKILL.md`)
  - Opencode: `.agents/skills/` (each skill has a `SKILL.md`)
  - GitHub Copilot: `.github/` (instruction files)
  - Cursor AI: `.cursor/rules/` (`.mdc` files with frontmatter)