# AIHub Workflow Agent
Use the bundled CLI at `scripts/aihub_cli.py` instead of hand-writing HTTP calls whenever possible.
## Base URL

`https://bv.new.ndhy.com/api/agent/aihub`
## Authentication

Prefer one of these token sources, in order:

1. `AIHUB_AGENT_TOKEN` environment variable
2. `~/.aihub-agent/config.json`, written by `configure`

Initialize once:

```shell
python scripts/aihub_cli.py configure --token "bvk_your_token_here"
```

Do not commit tokens to git.
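The lookup order above can be sketched as a small helper. Note that the config file's `token` key is an assumption about what `configure` writes, not a documented schema:

```python
import json
import os
from pathlib import Path

def resolve_token():
    """Resolve the AIHub token in the documented order."""
    # 1. Environment variable takes precedence
    token = os.environ.get("AIHUB_AGENT_TOKEN")
    if token:
        return token
    # 2. Fall back to the config file written by `configure`
    # (the "token" key is a hypothetical schema, check the real file)
    config_path = Path.home() / ".aihub-agent" / "config.json"
    if config_path.is_file():
        return json.loads(config_path.read_text()).get("token")
    return None
```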
## Core usage

### List presets

```shell
python scripts/aihub_cli.py presets
```

### Trigger a workflow

```shell
python scripts/aihub_cli.py run \
  --type "其他:AI自生成互动" \
  --inputs '{"query":"请生成一个最简单的互动组件配置:一个只有标题、说明文字和单个确认按钮的教学互动卡片,主题是牛顿第二定律。"}'
```

### Query status

```shell
python scripts/aihub_cli.py status <runId>
```

### Fetch outputs

```shell
python scripts/aihub_cli.py outputs <runId>
```
## Long-running workflows: prefer script polling, not LLM polling
For Dify/self-hosted AIHub, a workflow may run for minutes or more than an hour. Do not keep an LLM session alive just to wait.
Preferred pattern:

- Use `run` to trigger the workflow and capture the `runId`
- Return/store the `runId` immediately
- Hand off waiting to the CLI/script layer
- Only use an LLM again when you need to interpret or summarize the final outputs
## Recommended polling patterns
### Short synchronous polling

Use only for short workflows.

```shell
python scripts/aihub_cli.py poll \
  --type "计算题" \
  --inputs '{"node_list":"[]","style_render":"","global_context":"{}","style_theme":""}' \
  --interval 5 \
  --timeout 300
```
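A minimal sketch of what fixed-interval polling with a hard timeout does, assuming `succeeded` and `failed` are the terminal run states:

```python
import time

def poll_until_done(get_status, interval=5, timeout=300):
    """Fixed-interval polling with a hard timeout.

    `get_status` is any callable returning a status string; the
    terminal states assumed here are "succeeded" and "failed".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval)
    # Timing out is NOT a run failure; hand off to a watcher instead.
    return "timeout"
```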
### Long asynchronous polling

For long workflows, use `watch` with a sparse backoff schedule so the script does the waiting instead of the LLM.

Recommended schedules:

- Balanced: `120,300,600,1200,2400,3600`
- Long-job conservative: `300,600,1200,2400,3600`
- Very long jobs: `300,600,1200,2400,3600,3600,3600`

Example:

```shell
python scripts/aihub_cli.py watch <runId> --schedule 300,600,1200,2400,3600
```

This means: poll after 5 minutes, then 10, then 20, then 40, then hourly.
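Reading each schedule entry as the wait before the next poll, a small helper (illustrative, not part of the CLI) shows when polls actually land relative to trigger time:

```python
def schedule_offsets(schedule):
    """Convert a comma-separated backoff schedule (seconds to wait
    between polls) into cumulative offsets from trigger time."""
    waits = [int(s) for s in schedule.split(",")]
    offsets, total = [], 0
    for wait in waits:
        total += wait
        offsets.append(total)
    return offsets
```

For `300,600,1200,2400,3600` this yields polls at 5, 15, 35, 75, and 135 minutes after trigger: six polls or fewer over two hours, instead of hundreds at a fixed short interval.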
## Background watcher pattern

When a caller needs non-blocking execution:

- Trigger with `run`
- Save the `runId` plus caller context (issue/task/document/message target)
- Start a background watcher process using `watch`, or enqueue the run into `scripts/aihub_watcher.py`
- When the watcher sees `succeeded`, fetch outputs and hand them to the next system
- When the watcher sees `failed`, report the error and stop
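The per-tick decision logic above can be sketched as a tiny state machine; the callback names are placeholders for whatever system consumes the results:

```python
def handle_watch_tick(status, fetch_outputs, deliver, report_error):
    """One watcher tick: act on a run's status.

    Returns True when the job is finished and can leave the queue.
    """
    if status == "succeeded":
        deliver(fetch_outputs())  # hand outputs to the next system
        return True
    if status == "failed":
        report_error()            # report the error and stop
        return True
    return False                  # still running: keep polling
```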
This pattern is reusable for:
- Paperclip
- cron jobs
- local automation
- future non-Paperclip agents
## Generic watcher queue

Use the bundled watcher script when you need a reusable, non-LLM polling queue:

```shell
python scripts/aihub_watcher.py enqueue <runId> --label "job name" --schedule 300,600,1200,2400,3600
python scripts/aihub_watcher.py once
python scripts/aihub_watcher.py daemon --interval 60
```

The watcher stores state in `~/.aihub-agent/watch-jobs.json` and is intentionally upstream-agnostic: it tracks run status and outputs, but does not assume Paperclip or any specific callback target.
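If another tool needs to interoperate with the queue, an enqueue helper might look like the sketch below. The record layout is a plausible guess, not the watcher's actual on-disk schema; inspect `watch-jobs.json` before relying on it:

```python
import json
from pathlib import Path

def enqueue_job(state_path, run_id, label, schedule):
    """Append a job record to a watcher-style state file.

    Field names ("runId", "schedule", "nextPollIndex", "status") are
    illustrative, not the real aihub_watcher.py schema.
    """
    path = Path(state_path)
    jobs = json.loads(path.read_text()) if path.is_file() else []
    jobs.append({
        "runId": run_id,
        "label": label,
        "schedule": [int(s) for s in schedule.split(",")],
        "nextPollIndex": 0,
        "status": "pending",
    })
    path.write_text(json.dumps(jobs, indent=2))
    return jobs
```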
## Workflow selection

Use `references/workflow-presets.md` when you need the public `workflowType` values and the exact input fields required to call `/workflows/run` correctly.

Use `references/workflow-inventory.md` when you need the broader local workflow archive, the orchestration chain, or registry/local snapshot differences.

Use `references/api-endpoints.md` when you need the raw HTTP contract.
Quick rule:

- Free-form interactive generation → `其他:AI自生成互动`
- Structured quiz/data tasks → pick the specific preset
- Full orchestration entrypoints like `互动游戏生产总线` and `互动游戏生产基座` are documented in `workflow-presets.md` as advanced `appId`-only workflows, not public `workflowType` presets.
- For `其他:AI自生成互动`, note that the registry snapshot maps it to `Web互动生产-Agent版`, while the local workflow archive currently contains `Web互动生产-互动组件v2-workfolw.yml` as the closest artifact.
- If unsure, inspect the presets first and choose the most specific matching workflow.
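The quick rule can be expressed as a lookup table; the keys here are illustrative labels of my own, and only the preset names themselves come from this document:

```python
def choose_workflow(task_kind):
    """Map a coarse task kind to a workflow preset.

    Keys are hypothetical labels; returning None means "inspect the
    `presets` output and pick the most specific match by hand".
    """
    presets = {
        "free_form_interactive": "其他:AI自生成互动",  # free-form generation
        "calculation_quiz": "计算题",                  # structured quiz task
    }
    return presets.get(task_kind)
```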
## Failure handling

If a workflow does not finish in the current polling window:

- do not call it a failure automatically
- report/store the `runId`
- leave it to the watcher to continue polling later
Treat these separately:
- token/auth failure
- trigger failure
- run failed
- run still running
- outputs not ready yet
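A sketch that keeps those five situations distinct; the parameter names and values are illustrative, not the CLI's real response fields:

```python
def classify_outcome(http_status, run_status, outputs_ready):
    """Separate the failure modes listed above.

    `http_status` models the trigger call; `run_status` and
    `outputs_ready` model the run itself (assumed field shapes).
    """
    if http_status == 401:
        return "auth_failure"        # token/auth failure
    if http_status is not None and http_status >= 400:
        return "trigger_failure"     # the run never started
    if run_status == "failed":
        return "run_failed"
    if run_status == "running":
        return "still_running"       # not a failure: keep watching
    if run_status == "succeeded" and not outputs_ready:
        return "outputs_pending"     # succeeded, outputs not ready yet
    return "done"
```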
## Expected outputs

Most workflows return an `outputs.config` object or an equivalent structured payload.
For large outputs:
- summarize the result for humans
- store or hand off the raw JSON separately
- avoid dumping huge JSON blobs into normal chat replies unless explicitly asked
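One way to follow that guidance, assuming the outputs arrive as a dict with a `config` key as described above:

```python
import json
from pathlib import Path

def store_and_summarize(outputs, dest):
    """Persist the raw outputs to a file and return a short human
    summary instead of dumping the full JSON into a chat reply."""
    path = Path(dest)
    raw = json.dumps(outputs, ensure_ascii=False, indent=2)
    path.write_text(raw)
    config = outputs.get("config", {})
    return (f"Stored {len(raw)} bytes of raw output at {path} "
            f"({len(config)} top-level config keys).")
```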