# Schedule Task Review (`schedule-review`)
Review a scheduled task by running two parallel sub-agent checks via `TaskCreate`:

- Structure check — validates directory layout, file formats, and schema
- Quality check — assesses whether the task meets the user's intent
## Workflow
1. Ask the user which task to review (or infer from context)
2. Locate the task directory:
   - If inside a workspace: check `{workspace}/schedules/{task-name}/`
   - Otherwise: check `~/.tdx/schedule-tasks/{task-name}/`
   - Use `schedule_get` to find the task if the path is unclear
3. Read the task's TASK.md and schedule.yaml
4. Launch both checks in parallel using `TaskCreate`
5. Wait for results using `TaskGet`
6. Present a unified report with pass/fail and actionable suggestions
## Step 1: Read Task Files
- Read `{task-dir}/TASK.md`
- Read `{task-dir}/schedule.yaml`
- List the files in `scripts/`, `reference/`, and `data/`
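The checks below assume a task directory laid out roughly as follows. This tree is inferred from the file references in the check lists; individual tasks may omit subdirectories they don't use:

```
{task-dir}/
├── TASK.md          # task definition with YAML frontmatter
├── schedule.yaml    # schedule, permissions, and notification config
├── scripts/         # scripts referenced in TASK.md steps
├── reference/       # reference files mentioned in TASK.md
└── data/            # prior state, if the task expects any
```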
## Step 2: Launch Parallel Checks
Use `TaskCreate` to spawn two sub-agents simultaneously.
### Sub-agent 1: Structure Check
`TaskCreate` with prompt:
"Review this scheduled task for structural correctness.
TASK.md content:
{paste TASK.md content}
schedule.yaml content:
{paste schedule.yaml content}
Files in task directory:
{list of files}
Check the following:
1. TASK.md has valid YAML frontmatter with `name` and `description`
2. schedule.yaml has required fields (name, schedule, enabled) and only valid optional fields (status, catch_up, skills, permissions, notify, context, goal, skill, output). Note: `skills` (list of capability packs/MCP tools) and `skill` (workspace skill to invoke) serve different purposes and can coexist.
2a. If status field is present, it must be either 'configured' or 'template'. Warn if missing (defaults to 'configured').
3. Task name in TASK.md matches schedule.yaml name
4. Cron expression is valid and not too frequent (minimum 5 minutes)
5. permissions.allow lists all tools referenced in TASK.md steps (Bash for scripts, Write for output, slack_* for notifications)
6. Scripts referenced in TASK.md steps exist in scripts/
7. Reference files mentioned in TASK.md exist in reference/
8. data/ files described in TASK.md exist if the task expects prior state
9. No Slack channels or notification targets hardcoded in TASK.md (should be in schedule.yaml notify section only)
10. output.md is mentioned as a required output in the TASK.md Steps section
11. If `goal` is set, verify the goal file exists in the workspace (workspace tasks only)
12. If `output.note: true` is set, verify this is a workspace task (inside schedules/ directory)
Report: PASS/FAIL for each item, with specific fix instructions for failures."
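For reference, a minimal `schedule.yaml` that would pass the schema checks above might look like the sketch below. The field names come from check 2; the values, the cron expression, and the `notify` sub-structure are illustrative assumptions, not a canonical schema:

```yaml
name: daily-sales-report     # must match `name` in TASK.md frontmatter (check 3)
schedule: "0 9 * * 1-5"      # valid cron, well above the 5-minute minimum (check 4)
enabled: true
status: configured           # 'configured' or 'template' (check 2a)
permissions:
  allow:
    - Bash                   # for scripts referenced in TASK.md steps (check 5)
    - Write                  # for writing output.md
notify:
  # Notification targets belong here, never hardcoded in TASK.md (check 9).
  # The key below is hypothetical; use whatever your notify schema defines.
  slack_channel: "#sales-reports"
```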
### Sub-agent 2: Quality Check
`TaskCreate` with prompt:
"Review this scheduled task for quality and fitness for purpose.
TASK.md content:
{paste TASK.md content}
schedule.yaml content:
{paste schedule.yaml content}
The user's original request was: {describe what the user asked for}
Check the following:
1. Steps are clear and unambiguous — could another agent execute them without guessing?
2. Steps are in a logical order (fetch → process → analyze → output → notify)
3. Error handling is addressed (what to do if a script fails, API is down, etc.)
4. output.md requirements are clear (what should be in the summary?)
5. The schedule frequency matches the task's purpose (e.g., not checking hourly for a daily report)
6. Skills listed are appropriate for the task
7. If data/ is used, the update cycle is clear (when to save, when to compare)
8. The task is self-contained — no implicit dependencies on external state
9. For workspace tasks: does the goal/skill configuration make sense for the intended purpose?
Report: Rate each item as GOOD/NEEDS IMPROVEMENT/MISSING, with specific suggestions."
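As a companion illustration, a TASK.md that would score well on these checks might be shaped like the sketch below; the section names, step wording, and task details are illustrative, not a required template:

```markdown
---
name: daily-sales-report   # matches schedule.yaml `name`
description: Summarize yesterday's sales and write a summary to output.md
---

## Steps
1. Run `scripts/fetch.sh` to pull yesterday's sales data
2. If the fetch fails, retry once; if it fails again, record the error in output.md and stop
3. Compare the results against the previous snapshot in `data/`, then save the new snapshot
4. Write the summary to output.md
5. Send the notification (targets are configured in schedule.yaml's notify section, not here)

## Output Format
output.md must contain a one-paragraph summary followed by a table of totals.
```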
## Step 3: Collect and Report
Wait for both tasks to complete using `TaskGet`, then present a unified report:
## Task Review: {task-name}
### Structure: {PASS/FAIL}
- [x] Valid TASK.md frontmatter
- [x] Valid schedule.yaml schema
- [ ] Missing permission: Bash (needed for scripts/fetch.sh)
...
### Quality: {GOOD/NEEDS IMPROVEMENT/MISSING}
- [x] Steps are clear
- [ ] No error handling for script failure
- [ ] output.md format not specified
...
### Recommended Fixes
1. Add `Bash` to permissions.allow
2. Add an error handling step: "If the fetch script fails, retry once..."
3. Specify the output.md format in a `## Output Format` section