daily-releases
<release_args>$ARGUMENTS</release_args>
Daily Releases
Create GitHub Releases with AI-categorized changelogs for every day that had commits. Uses the same pipeline as /create-merge-request-changelog — real AI analysis, not template substitution.
Automatic Invocation
When this skill is activated, immediately begin processing without asking the user. Parse any arguments from <release_args/>:
--start-date YYYY-MM-DD Only process days on or after this date
--end-date YYYY-MM-DD Only process days on or before this date (default: today)
--branch BRANCH Git branch (default: origin/main)
--dry-run Preview without creating releases
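For example, assuming the skill is invoked as a slash command like /create-merge-request-changelog above (dates are illustrative):

/daily-releases --start-date 2026-02-01 --end-date 2026-02-07 --dry-run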
Process
Requires GITHUB_TOKEN for the release status checks in Step 1 and for publishing.
Working directory: Run all commands from the repository root. Paths below assume cwd is the repo root.
Step 1: List days to process
uv run .claude/skills/daily-releases/scripts/list_daily_ranges.py [--branch BRANCH] [--start-date ...] [--end-date ...] [-R OWNER/REPO]
This outputs a JSON array. Each entry has:
{
"date": "2026-02-21",
"tag": "v2026.02.21",
"base_ref": "<parent-commit-hash>",
"head_ref": "<last-commit-hash-of-day>",
"commit_count": 12,
"release_exists": true,
"needs_update": false
}
Skip entries where release_exists: true and needs_update: false — those are up to date.
For --dry-run, print the list and stop.
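For reference, a minimal sketch of that skip/filter logic, assuming the Step 1 output has been saved to a file named daily_ranges.json (the filename is illustrative; the entry keys match the example above):

import json

with open("daily_ranges.json") as f:
    entries = json.load(f)

# Keep only days that still need work: missing releases or stale existing ones.
pending = [e for e in entries if not e["release_exists"] or e["needs_update"]]

for e in sorted(pending, key=lambda e: e["date"]):
    print(e["date"], e["tag"], f'{e["commit_count"]} commits')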
Step 2: For each day that needs a release
Work through days chronologically. For each day, the pipeline collects data, buckets it by token budget, analyses each bucket with a Haiku subagent, synthesises the results, then formats and publishes. Days with few commits pass through a single bucket with no synthesis overhead.
2a. Collect dataset
uv run .claude/skills/daily-releases/scripts/collect_day_dataset.py \
<base_ref> <head_ref> ./daily-releases/<date>/ [-R OWNER/REPO]
Writes ./daily-releases/<date>/dataset/:
files.json — changed source files with status and line counts
commits.json — commits with SHA, message, files touched
issues.json — GitHub issues/PRs referenced or closed (empty if no token)
diffs/<sanitized_path>.diff — per-file unified diff for each source file
Source files: *.py *.js *.cjs *.mjs *.ts *.tsx *.sh *.md *.json *.yaml *.yml
Excluded: dist/ build/ node_modules/ vendor/ .venv/ and similar build outputs.
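As an illustration only (the collector script defines the actual filter), a sketch of how such a source-file check might look:

from pathlib import Path

SOURCE_SUFFIXES = {".py", ".js", ".cjs", ".mjs", ".ts", ".tsx",
                   ".sh", ".md", ".json", ".yaml", ".yml"}
EXCLUDED_DIRS = {"dist", "build", "node_modules", "vendor", ".venv"}

def is_source_file(path: str) -> bool:
    p = Path(path)
    # Recognised extension, and not inside a build/vendor directory.
    return p.suffix in SOURCE_SUFFIXES and not (EXCLUDED_DIRS & set(p.parts))

print(is_source_file("src/app/main.ts"))          # True
print(is_source_file("node_modules/x/index.js"))  # False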
2b. Create token-bounded buckets
uv run .claude/skills/daily-releases/scripts/bucket_day_data.py \
./daily-releases/<date>/ [--token-limit 100000]
Token limit defaults to env var DAILY_RELEASES_TOKEN_LIMIT or 100000.
Groups source files by directory module and fills buckets greedily, keeping each under the token limit (token counts measured with tiktoken's cl100k_base encoding as a proxy).
Writes ./daily-releases/<date>/buckets/bucket_NNN/:
manifest.json — {bucket_id, files, token_count, commit_shas}
content.txt — file diffs followed by commit messages for this bucket
Prints a summary listing bucket count and token sizes.
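A minimal sketch of the greedy fill, assuming tiktoken is installed; the real script also groups files by directory module and writes manifest.json/content.txt, which this omits:

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def bucket_items(items, token_limit=100_000):
    """Greedily pack (name, text) pairs into buckets, each under token_limit."""
    buckets, current, current_tokens = [], [], 0
    for name, text in items:
        n = len(enc.encode(text))
        # Start a new bucket when the next item would overflow the current one.
        if current and current_tokens + n > token_limit:
            buckets.append(current)
            current, current_tokens = [], 0
        current.append(name)
        current_tokens += n
    if current:
        buckets.append(current)
    return buckets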
2c. Analyse each bucket (delegate — do NOT read bucket files yourself)
For each bucket_NNN/ directory found under ./daily-releases/<date>/buckets/:
Agent(
subagent_type="general-purpose",
model="claude-haiku-4-5-20251001",
prompt="""
Read: ./daily-releases/<date>/buckets/bucket_NNN/content.txt
Apply the Per-Bucket Analysis Prompt from:
.claude/skills/daily-releases/references/synthesis_prompt.md
Write the structured JSON output to:
./daily-releases/<date>/summaries/bucket_NNN.json
Report "bucket_NNN.json written" when done.
"""
)
Replace <date> and NNN with actual values before emitting each Agent() call.
Buckets may be processed in parallel — each writes to its own summary file.
After all agents return, verify each summaries/bucket_NNN.json exists. Stop with
an error if any is missing.
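A sketch of that completeness check, assuming a date variable holding the day being processed:

from pathlib import Path

buckets_dir = Path(f"./daily-releases/{date}/buckets")
summaries_dir = Path(f"./daily-releases/{date}/summaries")

# Every bucket_NNN directory must have a matching summaries/bucket_NNN.json.
missing = [b.name for b in sorted(buckets_dir.glob("bucket_*"))
           if not (summaries_dir / f"{b.name}.json").exists()]
if missing:
    raise SystemExit(f"Missing bucket summaries: {', '.join(missing)}")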
2d. Synthesise summaries into analysis.json
If exactly one bucket exists: promote its JSON directly — copy
summaries/bucket_001.json to analysis.json, adding a statistics block from
dataset/files.json counts (commit_count, files_changed, lines_added,
lines_deleted). No synthesis agent needed.
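A sketch of that promotion, assuming a date variable and illustrative field names (use whatever keys the dataset actually provides):

import json
from pathlib import Path

day = Path(f"./daily-releases/{date}")
analysis = json.loads((day / "summaries" / "bucket_001.json").read_text())
files = json.loads((day / "dataset" / "files.json").read_text())
commits = json.loads((day / "dataset" / "commits.json").read_text())

# Field names below are assumed, not confirmed by the dataset schema.
analysis["statistics"] = {
    "commit_count": len(commits),
    "files_changed": len(files),
    "lines_added": sum(f.get("lines_added", 0) for f in files),
    "lines_deleted": sum(f.get("lines_deleted", 0) for f in files),
}
(day / "analysis.json").write_text(json.dumps(analysis, indent=2))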
If two or more buckets exist:
Agent(
subagent_type="general-purpose",
model="claude-haiku-4-5-20251001",
prompt="""
Apply the Day Synthesis Prompt from:
.claude/skills/daily-releases/references/synthesis_prompt.md
Read all bucket summary files:
./daily-releases/<date>/summaries/bucket_001.json
./daily-releases/<date>/summaries/bucket_002.json
... (list all that exist)
Also read ./daily-releases/<date>/dataset/files.json for statistics counts.
Write the merged analysis JSON to: ./daily-releases/<date>/analysis.json
Report "analysis.json written" when done.
"""
)
After the agent returns, verify ./daily-releases/<date>/analysis.json exists.
Stop with an error if missing.
2e. Format into release notes
uv run .claude/skills/create-merge-request-changelog/scripts/format_mr_description.py \
./daily-releases/<date>/analysis.json \
--no-preview \
--output ./daily-releases/<date>/description.md
2f. Publish the release
uv run .claude/skills/daily-releases/scripts/publish_daily_release.py \
--date <date> \
--tag <tag> \
--head-ref <head_ref> \
--notes-file ./daily-releases/<date>/description.md
Add --keep-existing-tag=false if updating a release that already has the correct
tag commit.
Step 3: Report
After processing all days, print a summary:
Processed N days:
- Created: X new releases
- Updated: Y existing releases
- Skipped: Z already up to date
Reference files
- ./scripts/list_daily_ranges.py — list days + commit ranges
- ./scripts/collect_day_dataset.py — per-file diff + commit + issues extraction into dataset/
- ./scripts/bucket_day_data.py — token-bounded semantic bucketing into buckets/
- ./scripts/publish_daily_release.py — create/update git tag + GitHub release
- ./references/synthesis_prompt.md — per-bucket analysis prompt + day synthesis prompt
- ../create-merge-request-changelog/scripts/format_mr_description.py — render analysis.json to markdown
Reference paths above are relative to this skill directory; CLI commands use repo-root paths.