agently-embeddings
Agently Embeddings
This skill covers embedding requests in Agently. It focuses on OpenAICompatible embeddings setup, request shape, input batching, async usage, parsed vector results, and the embedding-agent handoff used by vector-store integrations. It does not cover general chat/completions setup, structured output control, prompt-template management, or full retrieval pipeline design.
Prerequisite: Agently >= 4.0.8.5.
Agently is async-first at the runtime layer. Prefer async_start() or async_get_data() when the caller can use async APIs. Use batching first for texts that belong to one embeddings job, then use async concurrency for overlapping embedding jobs.
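For overlapping jobs, a minimal async sketch (assuming each job gets its own embeddings-configured agent from a hypothetical `make_agent` factory; `input(...)` and `async_get_data()` are the calls named in this skill, everything else is a placeholder):

```python
import asyncio

async def embed_job(make_agent, texts):
    # One batched request per job: all of the job's texts go through a single input(...) call.
    agent = make_agent()  # hypothetical factory returning an embeddings-configured Agently agent
    return await agent.input(texts).async_get_data()

async def main(make_agent):
    # Overlapping jobs run concurrently; texts inside each job stay in one batch.
    doc_vectors, query_vectors = await asyncio.gather(
        embed_job(make_agent, ["doc one", "doc two"]),
        embed_job(make_agent, ["user query"]),
    )
    return doc_vectors, query_vectors
```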
Scope
Use this skill for:
- configuring `OpenAICompatible` with `model_type="embeddings"` (see the sketch after this list)
- choosing between `base_url`, `full_url`, auth, proxy, timeout, `client_options`, and `request_options` for embeddings
- understanding that embeddings requests are built from `input`, not from chat-style prompt assembly
- sending one text or a batch of texts through `input(...)`
- understanding how non-scalar input is serialized before it is sent
- consuming parsed embedding vectors through `start()`, `get_data()`, `async_start()`, or `async_get_data()`
- understanding the parsed result shape for single-input and batch-input requests
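A minimal setup-and-request sketch. Only `OpenAICompatible`, `model_type="embeddings"`, `base_url`/`full_url`, `input(...)`, and `get_data()` come from this skill's scope; the import path, `create_agent()` call, settings key, model name, and env placeholder are assumptions meant to illustrate the wiring, not confirmed API:

```python
from agently import Agently  # import path assumed for Agently >= 4.x

# Assumed wiring: the settings key and its structure below are illustrative only.
agent = Agently.create_agent()
agent.set_settings("OpenAICompatible", {
    "base_url": "https://api.example.com/v1",   # or full_url to pin the exact embeddings endpoint
    "model_type": "embeddings",
    "model": "text-embedding-3-small",          # hypothetical model name
    "auth": "${ENV.EMBEDDINGS_API_KEY}",        # hypothetical env placeholder
})

# One text gives a single-input result; a list of texts becomes one batched embeddings request.
single = agent.input("hello embeddings").get_data()
batch = agent.input(["first text", "second text"]).get_data()
```

The parsed result shape differs between single-input and batch-input requests, so confirm which shape the downstream vector-store integration expects before indexing.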