
AIHub Workflow Agent

Use the bundled CLI at scripts/aihub_cli.py instead of hand-writing HTTP calls whenever possible.

Base URL

https://bv.new.ndhy.com/api/agent/aihub

Authentication

The CLI reads its token from one of these sources, in order of precedence:

  1. AIHUB_AGENT_TOKEN environment variable
  2. ~/.aihub-agent/config.json written by configure

Initialize once:

python scripts/aihub_cli.py configure --token "bvk_your_token_here"

Do not commit tokens to git.
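As an alternative to configure, the token can be supplied through the environment variable instead (the token value below is a placeholder, not a real credential):

```shell
# Export the agent token so the CLI picks it up from the environment.
# "bvk_your_token_here" is a placeholder; substitute your real token,
# e.g. from a secrets manager, never from a committed file.
export AIHUB_AGENT_TOKEN="bvk_your_token_here"
# python scripts/aihub_cli.py presets   # would now authenticate via the env var
```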

Core usage

List presets

python scripts/aihub_cli.py presets

Trigger a workflow

python scripts/aihub_cli.py run \
  --type "其他:AI自生成互动" \
  --inputs '{"query":"请生成一个最简单的互动组件配置:一个只有标题、说明文字和单个确认按钮的教学互动卡片,主题是牛顿第二定律。"}'

Query status

python scripts/aihub_cli.py status <runId>

Fetch outputs

python scripts/aihub_cli.py outputs <runId>

Long-running workflows: prefer script polling, not LLM polling

For Dify/self-hosted AIHub, a workflow may run for minutes or more than an hour. Do not keep an LLM session alive just to wait.

Preferred pattern:

  1. Use run to trigger the workflow and capture runId
  2. Return/store the runId immediately
  3. Hand off waiting to the CLI/script layer
  4. Only use an LLM again when you need to interpret or summarize the final outputs
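Steps 1–3 can be sketched as a small shell wrapper around the documented run and watch commands. Everything beyond those two commands is an assumption: the run output format (JSON containing a "runId" field), the pending-runs file, and the log path are all hypothetical and may need adjusting to the real CLI output.

```shell
#!/bin/sh
# Hypothetical hand-off wrapper: trigger, persist the runId, then let a
# background `watch` do the waiting instead of an LLM session.
AIHUB="${AIHUB:-python scripts/aihub_cli.py}"
PENDING="${PENDING:-$HOME/.aihub-agent/pending-runs.txt}"
WATCH_LOG="${WATCH_LOG:-$HOME/.aihub-agent/watch.log}"

trigger_and_handoff() {
  wf_type="$1"; wf_inputs="$2"
  out=$($AIHUB run --type "$wf_type" --inputs "$wf_inputs") || return 1
  # Crude runId extraction; assumes JSON output. Use jq if available.
  run_id=$(printf '%s\n' "$out" | sed -n 's/.*"runId" *: *"\([^"]*\)".*/\1/p')
  [ -n "$run_id" ] || return 1
  mkdir -p "$(dirname "$PENDING")"
  printf '%s\n' "$run_id" >> "$PENDING"           # store for later lookup
  nohup $AIHUB watch "$run_id" --schedule 300,600,1200,2400,3600 \
    >> "$WATCH_LOG" 2>&1 &                        # the script waits, not the LLM
  printf '%s\n' "$run_id"                         # return the runId immediately
}
```

A caller invokes `trigger_and_handoff "<type>" '<inputs-json>'`, records the printed runId, and exits; interpretation of the final outputs is deferred until the watcher reports completion.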

Recommended polling patterns

Short synchronous polling

Use this only for workflows expected to finish within the timeout (five minutes in the example below).

python scripts/aihub_cli.py poll \
  --type "计算题" \
  --inputs '{"node_list":"[]","style_render":"","global_context":"{}","style_theme":""}' \
  --interval 5 \
  --timeout 300

Long asynchronous polling

For long workflows, use watch with a sparse backoff schedule so the script does the waiting instead of the LLM.

Recommended schedules:

  • Balanced: 120,300,600,1200,2400,3600
  • Long-job conservative: 300,600,1200,2400,3600
  • Very long jobs: 300,600,1200,2400,3600,3600,3600

Example:

python scripts/aihub_cli.py watch <runId> --schedule 300,600,1200,2400,3600

With this schedule, the script waits 5 minutes before the first status check, then 10, 20, and 40 minutes between subsequent checks, then hourly.
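The schedule arithmetic can be made concrete with a short loop (plain shell, illustrative only; the real watch implementation may differ):

```shell
# Each schedule entry is the wait in seconds before the NEXT status check,
# so checks land at the cumulative elapsed times printed below.
schedule="300,600,1200,2400,3600"
elapsed=0
for gap in $(printf '%s\n' "$schedule" | tr ',' ' '); do
  elapsed=$((elapsed + gap))
  echo "status check at +$((elapsed / 60))m"
done
# prints checks at +5m, +15m, +35m, +75m, +135m
```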

Background watcher pattern

When a caller needs non-blocking execution:

  1. Trigger with run
  2. Save runId plus caller context (issue/task/document/message target)
  3. Start a background watcher process using watch, or enqueue the run into scripts/aihub_watcher.py
  4. When the watcher sees succeeded, fetch outputs and hand them to the next system
  5. When the watcher sees failed, report the error and stop
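Steps 4–5 amount to a dispatch on the run's status. A minimal sketch, assuming the status command prints text containing one of the words succeeded, failed, or running (the real output format may differ; adjust the pattern matches accordingly):

```shell
#!/bin/sh
# Hypothetical status dispatch for one watcher tick.
AIHUB="${AIHUB:-python scripts/aihub_cli.py}"

handle_run() {
  run_id="$1"
  st=$($AIHUB status "$run_id") || return 1
  case "$st" in
    *succeeded*) $AIHUB outputs "$run_id" ;;             # hand outputs onward
    *failed*)    echo "run $run_id failed: $st" >&2
                 return 1 ;;                             # report and stop
    *)           echo "run $run_id still in progress" ;; # keep polling later
  esac
}
```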

This pattern is reusable for:

  • Paperclip
  • cron jobs
  • local automation
  • future non-Paperclip agents

Generic watcher queue

Use the bundled watcher script when you need a reusable, non-LLM polling queue:

python scripts/aihub_watcher.py enqueue <runId> --label "job name" --schedule 300,600,1200,2400,3600
python scripts/aihub_watcher.py once
python scripts/aihub_watcher.py daemon --interval 60

The watcher stores state in ~/.aihub-agent/watch-jobs.json and is intentionally upstream-agnostic: it tracks run status and outputs, but does not assume Paperclip or any specific callback target.
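For cron-driven setups, a periodic once invocation can drain the queue instead of a long-lived daemon. A hedged example crontab entry (the skill path and log location are hypothetical):

```
# Drain the watcher queue every 5 minutes.
*/5 * * * * cd /path/to/skill && python scripts/aihub_watcher.py once >> "$HOME/.aihub-agent/watcher.log" 2>&1
```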

Workflow selection

Pick the reference document that matches your question:

  • references/workflow-presets.md — the exact public workflowType input fields required to call /workflows/run correctly
  • references/workflow-inventory.md — the broader local workflow archive, the orchestration chain, and registry/local snapshot differences
  • references/api-endpoints.md — the raw HTTP contract

Quick rule:

  • Free-form interactive generation → 其他:AI自生成互动
  • Structured quiz/data tasks → pick the specific preset
  • Full orchestration entrypoints like 互动游戏生产总线 and 互动游戏生产基座 are documented in workflow-presets.md as advanced appId-only workflows, not public workflowType presets.
  • For 其他:AI自生成互动, note that the registry snapshot maps it to Web互动生产-Agent版, while the local workflow archive currently contains Web互动生产-互动组件v2-workfolw.yml as the closest artifact.
  • If unsure, inspect presets first and choose the most specific matching workflow

Failure handling

If a workflow does not finish in the current polling window:

  • do not automatically treat it as a failure
  • report or persist the runId
  • let the watcher continue polling later

Treat these separately:

  • token/auth failure
  • trigger failure
  • run failed
  • run still running
  • outputs not ready yet

Expected outputs

Most workflows return an outputs.config object or equivalent structured payload. For large outputs:

  • summarize the result for humans
  • store or hand off the raw JSON separately
  • avoid dumping huge JSON blobs into normal chat replies unless explicitly asked
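The store-and-summarize hand-off above can be sketched as follows. The JSON-on-stdout output format and the use of python3 for summarizing are assumptions, not documented behavior:

```shell
#!/bin/sh
# Hypothetical helper: persist the raw payload to disk and print only a
# short human-readable summary (top-level keys), never the full blob.
AIHUB="${AIHUB:-python scripts/aihub_cli.py}"

save_and_summarize() {
  run_id="$1"
  $AIHUB outputs "$run_id" > "run-$run_id.json" || return 1
  python3 - "run-$run_id.json" <<'EOF'
import json, sys
data = json.load(open(sys.argv[1]))
print("top-level keys:", ", ".join(sorted(data)))
EOF
}
```

The raw file (`run-<runId>.json`) can then be attached or archived separately, while only the one-line summary goes into a chat reply.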