agently-input-composition
Agently Input Composition
This skill covers how to compose model input in Agently before the request is sent. It focuses on prompt slots, prompt layering, quick prompt methods, placeholder mappings, serializable prompt data, low-level chat_history, and attachments. It does not cover model setup, output schema control, YAML/JSON prompt template files, or session lifecycle management.
Prerequisite: Agently >= 4.0.8.5.
Scope
Use this skill for:
- choosing between agent-level and request-level prompt state
- using `set_agent_prompt(...)` and `set_request_prompt(...)`
- using quick prompt methods such as `system()`, `role()`, `rule()`, `user_info()`, `input()`, `info()`, `instruct()`, `examples()`, and `attachment()`
- deciding when to use `always=True`
- composing input with standard prompt slots
- using placeholder mappings in prompt keys and values
- passing lists, dicts, and other serializable data as prompt content
- using low-level `chat_history` as input-side context
- using `attachment()` for rich-content input
- inspecting prompt materialization with `to_text()` or `to_messages(...)`
Do not use this skill for:
- provider, endpoint, auth, proxy, or timeout setup
- `.output(...)`, `ensure_keys`, response streaming, or result parsing
- YAML/JSON prompt file loading or prompt round-tripping
- session activation, session resizing, or long-lived memory management
- TriggerFlow runtime stream composition
Workflow
- If the task is about persistent versus one-request prompt state, read references/prompt-layers.md.
- If the task is about what each slot means or which slot to use, read references/input-slots.md.
- If the task is about convenience methods such as `role()` or `input()`, read references/quick-methods.md.
- If the task is about variable substitution, nested data, or custom prompt keys, read references/mappings-and-serialization.md.
- If the task is about low-level `chat_history`, rich content, message rendering, or attachments, read references/chat-history-and-attachments.md.
- If the behavior still looks wrong, use references/troubleshooting.md.
Core Mental Model
Agently input composition has two layers:
- persistent agent prompt state
- one-request prompt state
`get_response()` snapshots both layers into a single request and then clears the request-level prompt. That is why some prompt content persists across turns and some does not.
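The snapshot-and-clear behavior can be illustrated with a toy model. This is plain Python with hypothetical names, not the real Agently API; it only demonstrates why request-level content disappears after one turn:

```python
# Toy model of the two prompt layers (illustrative only, not Agently's API).
class ToyAgent:
    def __init__(self):
        self.agent_prompt = {}    # persistent: survives across requests
        self.request_prompt = {}  # transient: cleared after each request

    def get_response(self):
        # Snapshot both layers into one request (request-level keys win)...
        merged = {**self.agent_prompt, **self.request_prompt}
        # ...then clear only the one-request layer.
        self.request_prompt = {}
        return merged

agent = ToyAgent()
agent.agent_prompt["system"] = "You are concise."   # persists across turns
agent.request_prompt["input"] = "First question"    # one turn only

first = agent.get_response()
second = agent.get_response()
print(first)   # contains both "system" and "input"
print(second)  # only "system" remains
```

The same intuition applies to the real layers: anything set at the agent level reappears in every request, while request-level state must be set again for each turn.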
Prompt generation then materializes that combined state into either:
- plain prompt text through `to_text()`
- chat-style messages through `to_messages(...)`
This matters most when low-level `chat_history` or `attachment` is involved, because rich content is only faithfully represented in message mode.
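Why message mode preserves rich content while text mode does not can be sketched with a toy renderer. This is a plain-Python illustration with hypothetical names, not Agently's actual implementation:

```python
# Toy renderer: text mode flattens rich content; message mode preserves it.
content = [
    {"type": "text", "text": "Describe this image."},
    {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
]

def to_text(parts):
    # Text mode can only keep the textual parts; the image part is dropped.
    return " ".join(p["text"] for p in parts if p["type"] == "text")

def to_messages(parts, rich_content=True):
    if rich_content:
        # Message mode keeps the structured parts intact.
        return [{"role": "user", "content": parts}]
    return [{"role": "user", "content": to_text(parts)}]

print(to_text(content))      # only the text survives
print(to_messages(content))  # the image part is still present
```

This is why inspection of multimodal payloads should go through message rendering rather than flat text rendering.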
Selection Rules
- long-lived baseline instructions -> agent-level prompt
- one-turn question or transient context -> request-level prompt
- explicit `system` or `developer` message control -> `set_agent_prompt(...)` or `set_request_prompt(...)`
- stable assistant identity or persona -> usually `system()` or `role(..., always=True)`
- supporting facts or retrieved context -> `info(...)`
- explicit behavioral constraints -> `instruct(...)` or `rule(...)`
- few-shot demonstrations -> `examples(...)`
- manual multi-turn conversation context without session lifecycle management -> `chat_history`
- image or rich-content input -> `attachment(...)`
- repeated prompt templates with variable substitution -> placeholder mappings
- exact inspection of multimodal payloads -> `to_messages(rich_content=True)`
- activated session, session-backed `chat_history`, or automatic turn recording -> agently-session-memo
Minimal Valid Prompt Rule
At least one of these must be present for a normal prompt:
`input`, `info`, `instruct`, `output`, `attachment`
If all of them are empty and no custom extra prompt keys are present, prompt generation fails.
`system`, `developer`, and `chat_history` alone do not satisfy this rule.
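The rule can be expressed as a small predicate. This is a sketch of the rule as stated above, not Agently's actual validation code:

```python
# Slots that can, on their own, make a prompt generable.
CONTENT_SLOTS = {"input", "info", "instruct", "output", "attachment"}
# Slots that never satisfy the rule by themselves.
CONTEXT_ONLY_SLOTS = {"system", "developer", "chat_history"}

def is_generable(prompt: dict) -> bool:
    """True if prompt generation would succeed under the minimal-prompt rule."""
    for key, value in prompt.items():
        if not value:
            continue  # empty slots never count
        if key in CONTENT_SLOTS:
            return True
        if key not in CONTEXT_ONLY_SLOTS:
            return True  # a non-empty custom extra prompt key also qualifies
    return False

# A bare question is enough:
print(is_generable({"input": "Hi"}))
# system + chat_history alone are not:
print(is_generable({
    "system": "You are terse.",
    "chat_history": [{"role": "user", "content": "hello"}],
}))
```

In practice this means a request built only from persona and conversation context still needs at least an `input()` (or another content slot, or a custom key) before `get_response()` is called.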
References
- references/source-map.md
- references/prompt-layers.md
- references/input-slots.md
- references/quick-methods.md
- references/mappings-and-serialization.md
- references/chat-history-and-attachments.md
- references/troubleshooting.md