agently-model-setup
Agently Model Setup
Use this skill for provider wiring and transport setup before request logic is discussed.
Native-First Rules
- default to async-first guidance when the configured model will be used from services, streaming paths, or concurrent workflows
- when settings live in files, prefer `Agently.load_settings("yaml_file", path, auto_load_env=True)`
- use `Agently.set_settings(...)` or `agent.set_settings(...)` for inline mappings or host-owned overrides
- prefer settings files with `${ENV.xxx}` placeholders for base URL, model, and auth
- put provider settings under the namespace read by the owning plugin. For `OpenAICompatible`, prefer `plugins.ModelRequester.OpenAICompatible.*`
- call the matching settings loader with `auto_load_env=True` when the payload may rely on `.env`
- if the app must fail fast, validate required env names in the integration layer before calling Agently
- after loading, verify the effective provider activation, base URL, model, and auth presence instead of assuming the file shape was correct
- keep provider setup outside business workflow logic and prompt files
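As a concrete sketch of the file-based rules above, a settings file keyed under the `OpenAICompatible` plugin namespace might look like the following. The leaf key names (`base_url`, `model`, `auth`) and the env variable names are illustrative assumptions, not a documented schema — check the active plugin's expected keys.

```yaml
# settings.yaml -- sketch only; leaf keys and env names are assumptions
plugins:
  ModelRequester:
    OpenAICompatible:
      base_url: "${ENV.MODEL_BASE_URL}"
      model: "${ENV.MODEL_NAME}"
      auth: "${ENV.MODEL_API_KEY}"
```

A file shaped like this would be loaded with `Agently.load_settings("yaml_file", "settings.yaml", auto_load_env=True)` so the `${ENV.xxx}` placeholders can resolve from the process environment or a `.env` file.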
Anti-Patterns
- do not hardcode provider-specific parsing into request code
- do not bake secrets or environment-specific endpoints into committed Python code when settings plus env placeholders fit
- do not leave provider config at a root-level namespace that the active plugin will not read
- do not let sync-only samples dictate the architecture of async-capable services
- do not mix model setup with output parsing or workflow design
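The fail-fast rule from the setup list can be sketched as a small integration-layer guard using only the standard library; the env variable names shown in the comment are placeholders for whatever names your settings file references, not names Agently itself requires.

```python
import os


def require_env(*names: str) -> dict[str, str]:
    """Fail fast: collect required env vars, raising before any Agently call.

    Call this in the integration layer, before loading settings, so a
    missing credential surfaces immediately instead of as a provider error.
    """
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"missing required environment variables: {missing}")
    return {n: os.environ[n] for n in names}


# Example (placeholder names): validate, then hand off to settings loading.
# require_env("MODEL_BASE_URL", "MODEL_NAME", "MODEL_API_KEY")
# Agently.load_settings("yaml_file", "settings.yaml", auto_load_env=True)
```

This keeps secret handling and endpoint validation in the host application, so the Agently settings file can stay committed with `${ENV.xxx}` placeholders only.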
Read Next
references/overview.md