Agently Prompt Management
Use this skill when the core problem is how prompt state should be structured before one request or request family runs.
Native-First Rules
- prefer `input(...)`, `instruct(...)`, `info(...)`, and `output(...)` over concatenated prompt strings
- move reusable prompt structure into prompt config or YAML instead of ad hoc literals
- keep runtime variables as `${...}` placeholders in prompt files and inject them through mappings at load time
- keep task-specific request contracts in prompt config, and keep only widely reused persona setup in small code-side factories
- when the output contract is stable and shared across a request family, keep it in prompt config such as `.request.output` instead of rebuilding it ad hoc in Python
- keep prompt composition separate from transport and orchestration
- use config files as an editable bridge when UI or product teams need to adjust prompt-driven behavior without rewriting workflow code
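As a sketch of the prompt-config side of these rules, a YAML prompt file can keep the `${...}` runtime placeholders and the output contract together in one editable place. The file name, field names, and exact schema below are illustrative assumptions, not Agently's canonical layout:

```yaml
# hypothetical prompt config, e.g. prompts/summarize_ticket.yaml
# runtime values stay as ${...} placeholders and are injected
# through mappings at load time, not rebuilt in Python
input: |
  Ticket title: ${ticket_title}
  Ticket body: ${ticket_body}
instruct:
  - summarize the ticket for a support engineer
  - keep the summary under three sentences
output:
  summary: (str) short summary of the ticket
  priority: (str) one of low | medium | high
```

Because the contract lives in the file rather than in workflow code, UI or product teams can adjust wording and output fields without touching orchestration.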
Anti-Patterns
- do not flatten business context into one opaque string unless the task is trivial
- do not rebuild prompt templates through ad hoc `.format(...)` or string concatenation when prompt mappings already fit
- do not scatter stable prompt or output contracts across multiple Python helpers when one prompt config can own them
- do not use prompt config files as a substitute for workflow state
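To make the `.format(...)` anti-pattern concrete, here is a minimal stdlib sketch of the contrast. Agently resolves `${...}` mappings through its own prompt loading; `string.Template` below only stands in for that behavior, and the prompt text and function names are hypothetical:

```python
from string import Template

# anti-pattern: ad hoc concatenation rebuilds the template inside workflow code
def flattened_prompt(name: str, question: str) -> str:
    return "You are helping {}. Question: {}".format(name, question)

# closer to the native shape: the template lives in one place (a prompt file in
# practice) and runtime variables stay as ${...} placeholders until injection
PROMPT_TEMPLATE = Template("You are helping ${name}. Question: ${question}")

def mapped_prompt(mapping: dict) -> str:
    # substitute raises KeyError for missing keys, surfacing an incomplete
    # mapping instead of silently shipping a broken prompt
    return PROMPT_TEMPLATE.substitute(mapping)

rendered = mapped_prompt({"name": "Ada", "question": "How do I reset my key?"})
```

The mapped form keeps prompt composition out of orchestration code: editing the template never requires touching the function that injects variables.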
Read Next
references/overview.md
More from agentera/agently-skills
agently-playbook
Use when the user wants to build, initialize, validate, optimize, or refactor a model-powered assistant, internal tool, automation, evaluator, or workflow from a business scenario or common problem statement, including project-structure refactors or starter skeletons that may separate model setup, prompt config, and orchestration, even if the request also mentions a UI, app shell, or local model service such as Ollama, and it is still unclear whether the solution should stay a single request, add supporting capabilities, or become orchestration. The user does not need to mention Agently explicitly.
agently-model-setup
Use when the request is already narrowed to wiring a model endpoint, env vars, settings-file-based model config, `${ENV.xxx}` placeholders, `auto_load_env=True`, or connectivity check for a model-powered feature, including local Ollama, dotenv-loaded DeepSeek or other OpenAI-compatible settings, plugin namespace placement, auth, request options, and minimal verification.
agently-langchain-to-agently
Use when a migration is already known to stay on the LangChain agent side, including agent setup, tools, structured output, retrieval, and short-term memory.
agently-triggerflow
Use when the user needs workflow orchestration such as branching, concurrency, approvals, waiting and resume, runtime stream, restart-safe execution, mixed sync/async function or module orchestration, event-driven fan-out, process-clarity refactors that make stages explicit, performance-oriented refactors that collapse split requests, or workflow definitions and chunk-level runtime metadata that must stay visible for debugging and visualization. The user does not need to say TriggerFlow explicitly.
agently-output-control
Use when the user wants stable structured fields, required keys, reliable machine-readable sections, or downstream-consumable output from one model request, including prompt-config-owned output contracts, `.output(...)`, field ordering, `ensure_keys`, and structured streaming.
agently-agent-extensions
Use when the user wants tool use, MCP access, HTTP or streaming API exposure, auto-function helpers, wait-for-key behavior, or optional `agently-devtools` observation, evaluation, and playground integration through Agently-native extension surfaces rather than custom wrappers first.