# Speech-to-Text — Saaras
> [!IMPORTANT]
> Auth: `api-subscription-key` header — NOT `Authorization: Bearer`. Base URL: `https://api.sarvam.ai/v1`
## Model

`saaras:v3` — 23 languages, 5 output modes (`transcribe`, `translate`, `verbatim`, `translit`, `codemix`), auto language detection.
## Quick Start (Python)

```python
from sarvamai import SarvamAI

client = SarvamAI()
response = client.speech_to_text.transcribe(
    file=open("audio.wav", "rb"),
    model="saaras:v3",
    mode="transcribe",
)
print(response.transcript)
```
## Quick Start (JavaScript/TypeScript)

```typescript
import { SarvamAIClient } from "sarvamai";
import * as fs from "fs";

const client = new SarvamAIClient({ apiSubscriptionKey: "YOUR_SARVAM_API_KEY" });
const response = await client.speechToText.transcribe({
    file: fs.createReadStream("audio.wav"),
    model: "saaras:v3",
    mode: "transcribe",
});
console.log(response.transcript);
```
## Batch API (Long Audio + Diarization)

```python
job = client.speech_to_text_job.create_job(
    model="saaras:v3",
    mode="transcribe",
    language_code="hi-IN",
    with_diarization=True,
    num_speakers=2,
)
job.upload_files(file_paths=["meeting.mp3"])
job.start()
job.wait_until_complete()
job.download_outputs(output_dir="./output")
```

Supports audio up to 1 hour, up to 8 speakers, and all 5 output modes.
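Once `download_outputs` has written the job's transcript files, you will often want to regroup diarized segments by speaker. A minimal sketch of that post-processing step — note that the segment field names (`speaker`, `text`) are assumptions for illustration; check the actual batch output schema before relying on them:

```python
from collections import defaultdict

def group_by_speaker(segments):
    """Concatenate diarized transcript segments per speaker label.

    Assumes each segment is a dict with "speaker" and "text" keys
    (hypothetical field names; the real batch output may differ).
    """
    grouped = defaultdict(list)
    for seg in segments:
        grouped[seg["speaker"]].append(seg["text"])
    return {speaker: " ".join(texts) for speaker, texts in grouped.items()}

# Example with made-up segments shaped like a two-speaker transcript:
segments = [
    {"speaker": "SPEAKER_0", "text": "Hello."},
    {"speaker": "SPEAKER_1", "text": "Hi there."},
    {"speaker": "SPEAKER_0", "text": "How are you?"},
]
print(group_by_speaker(segments))
# {'SPEAKER_0': 'Hello. How are you?', 'SPEAKER_1': 'Hi there.'}
```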
## WebSocket Streaming

```python
import asyncio
import base64

from sarvamai import AsyncSarvamAI

async def stream_audio():
    client = AsyncSarvamAI()
    async with client.speech_to_text_streaming.connect(
        model="saaras:v3",
        high_vad_sensitivity=True,
        flush_signal=True,
    ) as ws:
        with open("audio.wav", "rb") as f:
            audio_base64 = base64.b64encode(f.read()).decode("utf-8")
        await ws.transcribe(audio=audio_base64, encoding="audio/wav", sample_rate=16000)
        await ws.flush()
        response = await ws.recv()
        print(response)

asyncio.run(stream_audio())
```

Supports sessions up to 8 hours. Use `sample_rate=8000` for telephony audio.
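The example above sends the whole file in a single message; for live audio you would normally send fixed-size chunks. A sketch of a chunking helper — the 100 ms chunk size is an arbitrary choice for illustration, not a documented requirement of the streaming API:

```python
import base64

def pcm_chunks(pcm_bytes, sample_rate=16000, chunk_ms=100, sample_width=2):
    """Yield base64-encoded chunks of raw 16-bit mono PCM.

    chunk_ms=100 means one message per 100 ms of audio; tune to taste.
    """
    chunk_size = sample_rate * sample_width * chunk_ms // 1000
    for i in range(0, len(pcm_bytes), chunk_size):
        yield base64.b64encode(pcm_bytes[i:i + chunk_size]).decode("utf-8")

# One second of 16 kHz silence splits into ten 100 ms chunks:
chunks = list(pcm_chunks(b"\x00" * 32000))
print(len(chunks))  # 10
```

Inside the `async with ... as ws:` block you would then call `ws.transcribe` once per chunk instead of once per file. The exact `encoding` value to pass for raw PCM (as opposed to `audio/wav`) should be confirmed against the streaming protocol docs.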
## Gotchas

| Gotcha | Detail |
|---|---|
| REST: 30s limit | Audio >30s fails. Use the Batch API or WebSocket for longer files. |
| JS method name | `client.speechToText.transcribe({...})` — camelCase, NOT `speech_to_text`. Pass the file via `fs.createReadStream()`. |
| WebSocket codecs | Only `wav`, `pcm_s16le`, `pcm_l16`, `pcm_raw`. MP3/AAC/OGG are NOT supported for streaming. |
| WebSocket audio | Must be base64-encoded. Use `sample_rate=8000` for telephony audio. |
| Flush signal | `flush_signal=True` + `await ws.flush()` forces an immediate transcription boundary. |
| Short audio detection | Set `language_code` explicitly for audio <3 seconds — auto-detection needs more signal. |
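Because the REST endpoint rejects audio over 30 seconds, it can help to check the clip's duration up front and route long files to the Batch API. A minimal sketch using the Python standard library's `wave` module (the 30-second threshold comes from the table above):

```python
import wave

REST_LIMIT_SECONDS = 30  # REST endpoint limit noted in the gotchas table

def needs_batch(path):
    """Return True if a WAV file is too long for the REST endpoint."""
    with wave.open(path, "rb") as wav:
        return wav.getnframes() / wav.getframerate() > REST_LIMIT_SECONDS

# Demo: write one second of 16 kHz mono silence, then check it.
with wave.open("clip.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(b"\x00" * 32000)

print(needs_batch("clip.wav"))  # False
```

This only reads the WAV header, so it is cheap even for large files; for MP3 or other compressed formats you would need a different duration probe.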
## Full Docs

Fetch the streaming protocol, batch API SDK examples, and codec details from:

- https://docs.sarvam.ai/llms.txt — comprehensive docs index
- STT Overview
- Streaming API
- Batch API + Diarization
- Rate Limits
## More from sarvamai/skills

### translate
Translate text between English and Indian languages using Sarvam AI (Sarvam-Translate, Mayura). Handles content translation and app localization across 22+ languages with mode control, script options, and numeral formats. Use when translating or localizing content for Indian users.

### text-to-speech
Convert text to natural speech using Sarvam AI's Bulbul v3 model. Handles audio generation, voiceovers, and voice interfaces for 11 Indian languages with 30+ voices. Supports REST, HTTP streaming, WebSocket, and pronunciation dictionaries. Use when generating spoken audio from text.

### chat
Chat completions using Sarvam AI LLMs (Sarvam-105B, Sarvam-30B). Handles AI chat, text generation, reasoning, coding, and multilingual conversations in Indian languages. OpenAI-compatible API. Use when building chatbots, Q&A systems, agents, or any LLM feature targeting Indian users.

### voice-agents
Build conversational voice agents using Sarvam AI with LiveKit or Pipecat. Handles voice assistants, phone bots, IVR, and real-time conversational AI for Indian languages. Integrates Sarvam STT (Saaras v3), TTS (Bulbul v3), and LLM (Sarvam-30B) with low-latency streaming. Use when creating voice-enabled applications or real-time speech pipelines.