# Together Code Interpreter
## Overview
Use Together Code Interpreter when the user wants to execute Python remotely in a managed sandbox.
Typical fits:
- stateful Python sessions
- data analysis and chart generation
- agent-generated code execution
- file uploads into a remote runtime
## When This Skill Wins
- The user wants remote execution rather than local shell execution
- Session state needs to persist across multiple calls
- The result may include display outputs such as charts
- A lightweight managed runtime is enough; no custom infra is required
## Hand Off To Another Skill
- Use `together-gpu-clusters` for full infrastructure control or larger distributed jobs
- Use `together-dedicated-containers` for custom containerized runtime logic
- Use `together-chat-completions` if the user only wants generated code, not executed code
## Quick Routing
- Remote execution with session reuse
- Response schema and session listing
- MCP-style access for agent workflows
## Workflow
- Decide whether the task needs code execution or only code generation.
- Start a session with `client.code_interpreter.execute()`.
- Reuse `session_id` when the workflow depends on prior state.
- Inspect `stdout`, `stderr`, structured outputs, and display outputs separately (a minimal sketch follows this list).
- List sessions only when the user needs operational visibility or cleanup.
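A minimal sketch of this workflow, assuming the v2 SDK's `client.code_interpreter.execute()` accepts `code`, `language`, and `session_id` arguments and returns an object exposing `session_id`, `errors`, and a list of typed `outputs`. The exact request and response fields are assumptions here; confirm them against references/api-reference.md and scripts/execute_with_session.py.

```python
# Sketch of a stateful run (assumed request/response shape; verify against
# references/api-reference.md before relying on these field names).
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# First call: start a fresh session and define some state.
first = client.code_interpreter.execute(
    code="x = 21",
    language="python",
)
session_id = first.session_id  # keep this; it is part of the workflow state

# Second call: reuse the session so `x` is still defined.
second = client.code_interpreter.execute(
    code="print(x * 2)",
    language="python",
    session_id=session_id,
)

# Check errors before trusting the run, then inspect outputs by type.
if second.errors:
    raise RuntimeError(f"Sandbox run failed: {second.errors}")
for output in second.outputs:
    if output.type == "stdout":
        print("stdout:", output.data)  # expected: "42"
    elif output.type == "stderr":
        print("stderr:", output.data)
```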
## High-Signal Rules
- Python scripts require the Together v2 SDK (`together>=2.0.0`). If the user is on an older version, they must upgrade first: `uv pip install --upgrade "together>=2.0.0"`.
- Treat `session_id` as part of the workflow state.
- Inspect `response.errors` before assuming a run succeeded.
- `plt.show()` with the Agg backend does not reliably produce `display_data` outputs. To retrieve charts, save the figure to a `BytesIO` buffer with `fig.savefig()`, base64-encode it, and print the encoded string to stdout. Parse it from the `stdout` output on the client side; a sketch follows this list, and see the chart example in scripts/execute_with_session.py.
- Use this skill when the user benefits from remote stateful execution, not just because Python is involved.
- If the task outgrows the sandbox model, hand off to GPU clusters or dedicated containers.
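A sketch of the chart workaround described above: the sandbox-side snippet is the string passed as `code`, and the client side pulls the base64 PNG out of `stdout`. The response field names (`errors`, `outputs`, `type`, `data`) follow the same assumed shape as the workflow sketch; the canonical version lives in scripts/execute_with_session.py.

```python
import base64
from together import Together

# Code to run inside the sandbox: render a chart, then print it to stdout as a
# base64 string instead of relying on plt.show() / display_data outputs.
REMOTE_CODE = """
import base64, io
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [2, 4, 9])

buf = io.BytesIO()
fig.savefig(buf, format="png")
print(base64.b64encode(buf.getvalue()).decode("ascii"))
"""

client = Together()
response = client.code_interpreter.execute(code=REMOTE_CODE, language="python")

# Client side: fail fast on errors, then decode the base64 string from stdout.
if response.errors:
    raise RuntimeError(f"Sandbox run failed: {response.errors}")

encoded = "".join(o.data for o in response.outputs if o.type == "stdout").strip()
with open("chart.png", "wb") as f:
    f.write(base64.b64decode(encoded))
```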
## Resource Map
- API reference: references/api-reference.md
- Alternative access patterns: references/api-reference.md
- Python workflow: scripts/execute_with_session.py
- TypeScript workflow: scripts/execute_with_session.ts
## Official Docs