agently-triggerflow-model-integration
# Agently TriggerFlow Model Integration
This skill covers how TriggerFlow chunks integrate with Agently model requests. It focuses on async-first request execution inside flow handlers, request isolation per step or per item, multiple concurrent model requests, and using delta or instant streaming inside the flow. It does not cover provider setup, prompt-config files, or the standalone details of output-schema design.
Prerequisite: Agently >= 4.0.8.5.
## Scope
Use this skill for:
- creating Agently model requests inside TriggerFlow chunks
- choosing between `Agently.create_request()`, `agent.create_request()`, and `agent.create_temp_request()`
- async-first response handling inside flow handlers
- single model request per step
- multiple model requests in one workflow through `batch(...)`, `for_each(...)`, or controlled `asyncio.gather(...)`
- reusing one response inside a flow step through `get_response()`
- using `delta` or `instant`/`streaming_parse` inside flow logic
- using structured streaming to emit downstream flow events or runtime-stream items earlier
Do not use this skill for:
- provider setup, auth, proxy, timeout, or `client_options`
- detailed `.output(...)` schema design rules and `ensure_keys` behavior
- runtime-stream lifecycle or interrupt mechanics as the primary topic
- flow config export/import or execution save/load
## Workflow
- Start with `references/request-lifecycle-in-flow.md` to choose the right request object and async response shape.
- If the task involves several model requests in one workflow, read `references/multi-request-patterns.md`.
- If the task uses `delta` or `instant` inside the flow, read `references/streaming-and-dispatch.md`.
- If the task is an end-to-end recipe such as a planning loop, SSE endpoint, or fan-out summarization flow, read `references/integration-recipes.md`.
- If behavior still looks wrong, use `references/troubleshooting.md`.
## Core Mental Model
TriggerFlow does not replace Agently model requests. It orchestrates when and how they run.
The normal model-integration pattern is:
- a chunk decides that model work should happen
- the chunk creates or prepares a request object
- the chunk consumes the response as final data, `delta`, or `instant`
- the chunk either returns data, emits flow events, or writes into the runtime stream
Agently guidance for this skill should remain async-first:
- prefer `async_start()`, `async_get_data()`, `async_get_text()`, and `get_async_generator(...)`
- prefer async chunk handlers
- use sync wrappers only for sync-only demos or scripts
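The decide -> request -> consume -> return shape of an async chunk handler can be sketched as below. Note that `fake_model_request` and `summarize_chunk` are hypothetical stand-ins for illustration; a real handler would await the Agently request's `async_start()` or `async_get_data()` instead of the stub.

```python
import asyncio

# Hypothetical stand-in for an Agently model request; a real handler would
# await request.async_start() / request.async_get_data() here instead.
async def fake_model_request(text: str) -> dict:
    await asyncio.sleep(0)  # stands in for network latency
    return {"summary": f"summary of: {text}"}

# Async-first chunk handler shape: decide -> request -> consume -> return.
async def summarize_chunk(event_data: dict) -> dict:
    text = event_data["text"]
    if len(text) < 5:             # the chunk decides whether model work is needed
        return {"summary": text}  # trivial input: skip the model call
    return await fake_model_request(text)  # returned data flows downstream

print(asyncio.run(summarize_chunk({"text": "a long input document"})))
```

The same handler body works whether the chunk returns data directly or emits flow events; only the consumption step changes.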
## Selection Rules
- one simple model step that only needs a final parsed result -> request `async_start()` / `async_get_data()`
- one model step that needs text plus metadata or streaming plus final data -> `get_response()` first
- model request should inherit agent role or stable settings -> `agent.create_request()`
- model request should not inherit agent prompt or extension handlers -> `agent.create_temp_request()`
- one input must fan out into several model calls with clear orchestration structure -> `batch(...)`
- a list of items should each trigger a model call -> `for_each(concurrency=...)`
- several independent model calls belong to one chunk and do not need their own flow routing -> controlled `asyncio.gather(...)`
- plain text stream should drive UI or logs inside the flow -> `delta`
- structured output should drive field-level updates or early downstream work -> `instant`/`streaming_parse`
- model stream items should fan out into signal-driven downstream work -> consume the stream in one chunk, `async_emit(...)` custom events, then route with `when(...)`
- runtime-stream lifecycle itself is the main topic -> also use `agently-triggerflow-interrupts-and-stream`
- output schema shape, field order, or `ensure_keys` is the main topic -> also use `agently-output-control`
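The "controlled `asyncio.gather(...)`" rule above means bounding concurrency rather than launching every call at once. A minimal sketch, with `fake_model_call` and `research_chunk` as hypothetical stand-ins (a real chunk would await one Agently request per sub-task):

```python
import asyncio

# Hypothetical stand-in for one Agently model request per sub-task.
async def fake_model_call(topic: str) -> str:
    await asyncio.sleep(0)  # stands in for network latency
    return f"notes on {topic}"

# Controlled gather: several independent model calls inside one chunk,
# with a semaphore bounding how many run at the same time.
async def research_chunk(topics: list[str], max_concurrency: int = 3) -> list[str]:
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(topic: str) -> str:
        async with sem:  # at most max_concurrency calls in flight
            return await fake_model_call(topic)

    return await asyncio.gather(*(bounded(t) for t in topics))

print(asyncio.run(research_chunk(["db", "cache", "queue"])))
```

`asyncio.gather` preserves input order in its result list, so the chunk can zip results back onto the original topics. Reach for `batch(...)` or `for_each(concurrency=...)` instead when each call deserves its own flow routing.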
## Important Boundaries
- `instant` does not create more model requests by itself; it only exposes structured nodes earlier
- if `instant` output should trigger more model work, route completed nodes into controlled TriggerFlow events, `for_each(concurrency=...)`, or other bounded orchestration
- avoid unbounded task spawning directly inside a stream consumer loop
- provider setup belongs in `agently-model-setup`
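The "bounded orchestration instead of unbounded spawning" boundary can be sketched with a worker pool: completed stream nodes are handed to a fixed number of workers rather than spawning one task per node. `fake_instant_stream` is a hypothetical stand-in for the structured nodes an `instant` stream would yield; the pool mirrors `for_each(concurrency=...)`-style control.

```python
import asyncio

# Hypothetical stand-in for structured nodes yielded by an instant stream.
async def fake_instant_stream():
    for i in range(5):
        yield {"path": f"items[{i}]", "value": f"node-{i}"}

async def consume_with_bounded_workers(num_workers: int = 2) -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()
    results: list[str] = []

    async def worker():
        while True:
            node = await queue.get()
            if node is None:        # sentinel: stream is finished
                return
            results.append(f"processed {node['value']}")

    workers = [asyncio.create_task(worker()) for _ in range(num_workers)]
    async for node in fake_instant_stream():
        await queue.put(node)       # hand off; never create_task per node
    for _ in workers:
        await queue.put(None)       # one sentinel per worker
    await asyncio.gather(*workers)
    return results

print(asyncio.run(consume_with_bounded_workers()))
```

The consumer loop itself stays cheap: it only enqueues, so a fast stream cannot flood the event loop with thousands of concurrent model calls.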
## References
- `references/source-map.md`
- `references/request-lifecycle-in-flow.md`
- `references/multi-request-patterns.md`
- `references/streaming-and-dispatch.md`
- `references/integration-recipes.md`
- `references/troubleshooting.md`