ralph-prd
Ralph PRD Generation
Generate prd.json files that define scoped work items for autonomous agent execution. Each item has explicit completion criteria and verification steps.
When to Use
- Batch migrations (API changes, library upgrades, lint fixes)
- Large-scale refactoring across many files
- Any task decomposable into independent, verifiable units
- Work that benefits from "done" being explicitly defined
PRD Structure
```json
{
  "instructions": "<markdown with context, examples, constraints>",
  "items": [
    {
      "id": "<unique identifier>",
      "category": "<task category>",
      "description": "<what needs to be done>",
      "file": "<target file path>",
      "steps": [
        "<action step>",
        "<verification step>"
      ],
      "passes": false,
      "skipped": null
    }
  ]
}
```
Field Reference
| Field | Purpose |
|---|---|
| instructions | Markdown embedded in the PRD - transformation examples, docs links, constraints |
| id | Unique identifier (typically file path or task name) |
| category | Groups related items |
| description | Human-readable summary |
| steps | Actions + verification commands |
| passes | false initially, true when complete |
| skipped | null, or "<reason>" if the task cannot be completed |
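For reference, a hypothetical item after completion, following the structure above for a lint-fix migration (the path, count, and rule name are illustrative, not from this document):

```json
{
  "id": "src/components/Button.tsx",
  "category": "migration",
  "description": "Fix violations in Button.tsx",
  "file": "src/components/Button.tsx",
  "steps": [
    "Fix all 3 lint errors for rule-name",
    "Run yarn type-check:go - must pass",
    "Run yarn test src/components/__tests__/Button.test.tsx"
  ],
  "passes": true,
  "skipped": null
}
```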
Generation Workflow
PRD Generation Progress:
- [ ] Step 1: Define scope (what files/items are affected?)
- [ ] Step 2: Gather input data (lint output, file list, API changes)
- [ ] Step 3: Design item granularity (per-file, per-error, per-component?)
- [ ] Step 4: Define verification steps (type-check, tests, lint)
- [ ] Step 5: Write instructions (examples, constraints, skip conditions)
- [ ] Step 6: Generate items (script or manual)
- [ ] Step 7: Review sample items
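For step 2, lint output can be gathered programmatically; a minimal sketch using ESLint's Node API, assuming an ESLint-based migration (the glob and the grouped output shape are assumptions, not part of the PRD format):

```js
const { ESLint } = require('eslint');

// Collect per-file error counts to feed item generation (step 6).
async function gatherLintErrors() {
  const eslint = new ESLint();
  const results = await eslint.lintFiles(['src/**/*.{ts,tsx}']);
  return results
    .filter(result => result.errorCount > 0)
    .map(result => ({ filePath: result.filePath, errorCount: result.errorCount }));
}
```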
Clarifying Questions
Before generating, resolve these with the user:
Granularity
- Per-file? Per-error? Per-component?
- Trade-off: fewer items = less overhead, more items = finer progress tracking
Verification Steps
- What commands confirm completion?
- Type-check? Tests? Lint? Build?
- Which tests - related test file only, or broader?
Instructions Content
- What context does the executing agent need?
- Before/after examples?
- Links to documentation?
- Type casting or naming conventions?
Skip Conditions
- What should cause an item to be skipped rather than fixed?
- Example: "class component requires manual refactor"
Path Format
- Relative or absolute paths?
- ID format (filename only risks collisions)
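A quick illustration of the collision risk with filename-only IDs (hypothetical paths):

```js
const path = require('node:path');

// Two different files produce the same filename-only id:
path.basename('src/components/Button.tsx'); // 'Button.tsx'
path.basename('src/legacy/Button.tsx');     // 'Button.tsx' (collision)

// Repo-relative paths such as 'src/components/Button.tsx' stay unique.
```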
Instructions Section Best Practices
The instructions field is markdown that the executing agent reads. Include:
- Violation/task types with before/after examples
- Scope rules - what's in bounds, what's out
- Skip conditions - when to mark skipped: "<reason>" instead of fixing
- Links to relevant documentation
- Type/naming conventions specific to the codebase
Keep instructions focused. The agent discovers patterns; instructions provide guardrails.
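A minimal sketch of what the instructions markdown might contain for a hypothetical lint migration (the rule name, API names, and import path are illustrative):

```markdown
## Migration Instructions

Replace `legacyFetch()` calls flagged by `no-legacy-fetch` with `apiClient.get()`.

### Example

Before: `const data = await legacyFetch('/users');`
After:  `const data = await apiClient.get('/users');`

### Scope

- Only modify the file named in the item; do not refactor call sites elsewhere.

### Skip conditions

- Mark `skipped: "class component requires manual refactor"` if the file is a class component.

### Conventions

- Import `apiClient` from `src/lib/api`.
```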
Verification Steps
Each item should have at least one verification step. Common patterns:
"steps": [
"Fix all N lint errors for rule-name",
"Run yarn type-check:go - must pass",
"Run yarn test <path> - if test exists"
]
For test detection, check:
- __tests__/<filename>.test.{ts,tsx,js,jsx}
- Sibling <filename>.test.{ts,tsx,js,jsx}
- __tests__/integration/<filename>.test.*
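A sketch of that detection in Node (the helper name and extension handling are assumptions):

```js
const fs = require('node:fs');
const path = require('node:path');

// Returns the first matching test file for a source file, or null if none exists.
function findTestFile(filePath) {
  const dir = path.dirname(filePath);
  const base = path.basename(filePath, path.extname(filePath));
  const exts = ['ts', 'tsx', 'js', 'jsx'];
  const candidates = exts.flatMap(ext => [
    path.join(dir, '__tests__', `${base}.test.${ext}`),                // __tests__/
    path.join(dir, `${base}.test.${ext}`),                             // sibling
    path.join(dir, '__tests__', 'integration', `${base}.test.${ext}`)  // integration
  ]);
  return candidates.find(candidate => fs.existsSync(candidate)) ?? null;
}
```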
Example: Generating from Lint Output
Input: JSON array of lint errors grouped by file
```js
const path = require('node:path');

// Assumes: lintErrors is the parsed lint output ({ filePath, errorCount } per file),
// REPO_ROOT is the absolute repo path, and testExists/testPath come from the
// test-detection check above, resolved per file.
const prd = {
  instructions: `## Migration Instructions...`,
  items: lintErrors.map(entry => ({
    id: entry.filePath.replace(REPO_ROOT + '/', ''),
    category: 'migration',
    description: `Fix violations in ${path.basename(entry.filePath)}`,
    file: entry.filePath,
    errorCount: entry.errorCount,
    steps: [
      `Fix all ${entry.errorCount} lint errors`,
      'Run yarn type-check:go - must pass',
      ...(testExists ? [`Run yarn test ${testPath}`] : [])
    ],
    passes: false,
    skipped: null
  }))
};
```
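The resulting object is then written to disk under the prd.json name this document uses; a minimal sketch:

```js
const fs = require('node:fs');

fs.writeFileSync('prd.json', JSON.stringify(prd, null, 2) + '\n');
```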
Anti-Patterns
Vague verification
```
// Bad
"steps": ["Fix the issue", "Make sure it works"]

// Good
"steps": ["Fix lint error on line 42", "Run yarn type-check:go - must pass"]
```
Missing skip conditions
If some items can't be completed (e.g., they require a larger refactor), define skip conditions in instructions so agents mark them skipped instead of attempting impossible fixes.
Over-scoped items
Items that touch many files are harder to verify and resume. Prefer one file per item for file-based migrations.
Under-specified instructions
The executing agent shouldn't have to guess conventions. Specify type casting, naming patterns, import sources.