
Turing Meeting Summarisation

Browse and search your Turing meeting summaries via the Turing API.

Authentication

Check if <skill_dir>/auth.env exists and contains an API key. If it does, use it. If not, ask the user to provide their Turing API key:

"I need your Turing API key (one-time setup). You can generate one in the Turing platform: Profile menu → API Key Management. Paste it here and I'll save it so you won't need to provide it again."

Once received, save it to <skill_dir>/auth.env:

TURING_API_KEY=<key>

Use these headers on every request:

  • Authorization: Bearer <api_key>
  • client: tcl-aigc-portal
  • environment: live
  • accept: application/json
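As a concrete sketch, the one-time key load plus header construction could look like this in Python. The `load_headers` helper and its `skill_dir` argument are illustrative assumptions; only the file name, env var name, and header values come from this doc:

```python
from pathlib import Path

def load_headers(skill_dir: str) -> dict:
    """Read TURING_API_KEY from <skill_dir>/auth.env and build the standard headers."""
    env_file = Path(skill_dir) / "auth.env"
    key = None
    if env_file.exists():
        for line in env_file.read_text().splitlines():
            if line.startswith("TURING_API_KEY="):
                key = line.split("=", 1)[1].strip()
    if not key:
        # Missing key: trigger the one-time setup prompt described above.
        raise RuntimeError("No API key found - run the one-time setup first")
    return {
        "Authorization": f"Bearer {key}",
        "client": "tcl-aigc-portal",
        "environment": "live",
        "accept": "application/json",
    }
```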

Base URL

https://live-turing.cn.llm.tcljd.com

Endpoints

List meetings

GET /api/v1/portal/me/meeting-summaries

Query parameters:

  • size=20 — always fixed at 20
  • keyword=<term> — when user searches by topic/keyword
  • access_type=owned|shared|participant — optional filter (default: owned)
  • status — optional filter
  • sort_field=created_at — default sort
  • sort_order=desc — newest first

Create transcript from text

POST /api/v1/portal/me/transcripts/from-text

Request body (application/json):

{
  "transcript_text": "<plain text, 10–500,000 chars>"
}

Response (TranscriptFromTextResponse):

  • transcript_id — use this to generate the meeting summary
  • audio_type — will be text_input
  • task_run_status — will be completed (synchronous)
  • created_at — unix timestamp
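The 10–500,000-char bound can be enforced client-side before calling the endpoint. A minimal sketch (the helper name is an illustrative assumption):

```python
import json

def transcript_payload(text: str) -> bytes:
    """Validate the documented length bounds, then encode the request body."""
    n = len(text)
    if not (10 <= n <= 500_000):
        raise ValueError(f"transcript_text must be 10-500,000 chars, got {n}")
    return json.dumps({"transcript_text": text}).encode("utf-8")
```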

Generate meeting summary

POST /api/v1/portal/me/meeting-summaries

Request body (application/json):

{
  "transcript_id": "<from previous step>",
  "meeting_topic": "Weekly sync",
  "meeting_participants": ["Alice", "Bob"],
  "meeting_start_time": "2026-03-24T10:00:00+01:00",
  "timezone": "Europe/Warsaw",
  "meeting_type": "general",
  "meeting_language": "auto",
  "detail_level": "standard"
}

Required fields: transcript_id, meeting_topic, meeting_start_time, timezone, meeting_participants.

⚠️ meeting_participants is required for general meetings despite what the API schema suggests. The API returns a validation error without it.

Optional fields:

  • meeting_type — general (default) | interview | training
  • meeting_language — auto (default) | chinese | english
  • detail_level — concise | standard (default) | detailed
  • meeting_location — free-text string
  • meeting_extra_info — additional context for the AI
  • meeting_summary_id — if updating an existing summary
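Because meeting_participants is required despite the schema, it helps to validate the payload before sending it. A hedged sketch (the `summary_payload` helper is an assumption for illustration, not part of the API):

```python
import json

REQUIRED_FIELDS = ("transcript_id", "meeting_topic", "meeting_start_time",
                   "timezone", "meeting_participants")

def summary_payload(**fields) -> bytes:
    """Check required fields (incl. meeting_participants) and apply documented defaults."""
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {', '.join(missing)}")
    fields.setdefault("meeting_type", "general")
    fields.setdefault("detail_level", "standard")
    return json.dumps(fields).encode("utf-8")
```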

Response: streaming events (MeetingSummaryEvent), final event contains:

  • meeting_summary_id — the created summary ID
  • ai_summary — the generated markdown summary

Get raw transcript

GET /api/v1/portal/transcripts/runs/{transcript_id}

Where {transcript_id} is from the meeting detail response (e.g. transcript_32nXILck0UdP90AHCHM2o).

Returns the original transcript text:

  • transcript — raw text (can be 10K–500K+ chars), includes speaker labels and timestamps
  • task_run_status — completed when ready
  • audio_type — text_input (imported) or audio source type
  • segments — structured segments (may be null for text imports)
  • edited_segments — user-edited segments (use if not null)

Meeting detail

GET /api/v1/portal/me/meeting-summaries/{id}

Where {id} is meeting_summary.id from the list response (e.g. meeting-summary_32Lf09Ajhq3vsL0cALkm1).

Returns the full AI-generated summary:

  • ai_summary — markdown text generated by AI
  • edited_summary — user-edited version, use this if not null, otherwise use ai_summary

Only available for meetings with meeting_status: summary_finished.
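The edited-over-AI preference can be captured in one small helper (illustrative; the field names come from this doc):

```python
def effective_summary(detail: dict) -> str:
    """Prefer the user-edited summary; fall back to the AI-generated one."""
    return detail.get("edited_summary") or detail.get("ai_summary") or ""
```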

Response structure (list)

Each item in data.items[] contains:

  • meeting_summary.id — ID for detail endpoint calls
  • meeting_summary.meeting_topic — the meeting title
  • meeting_summary.meeting_start_time — unix timestamp
  • meeting_summary.meeting_end_time — unix timestamp
  • meeting_summary.timezone — e.g. Europe/Warsaw
  • meeting_summary.meeting_participants[] — list of participant objects or email strings
  • transcript.duration_seconds — meeting duration in seconds
  • meeting_status — e.g. summary_finished, ready_for_summary
  • access_type — owned, shared, or participant
  • sharer — name/info of person who shared (if access_type is shared)

Behaviour

  1. Load the API key from <skill_dir>/auth.env. If missing, follow the Authentication setup above before proceeding.

  2. Map user intent:

    • Browse ("show my recent meetings") → list only, render as table, no detail calls
    • Search ("find meeting about X", "when did I discuss X", "search meetings for X") → list with keyword=X, render as table
    • Insights ("what were the key takeaways", "action items from meeting X", "what was discussed about Y", "summarise my meetings on Z", or any query requiring understanding of meeting content) → see Insights flow below
    • Import ("import this transcript", "upload meeting transcript", "import meeting from file") → see Import flow below
  3. Import flow

    Determine the source type from the user's input and follow the matching path, then proceed to Common steps.

    3a. Local file

    User provides a file path on disk.

    Text extraction:

    • If PDF: run python3 <skill_dir>/scripts/extract_pdf_text.py <input.pdf> <output.txt>. This handles font warnings, cleans artifacts, and outputs clean text. Do NOT use the pdf tool for extraction — it's slower and may truncate.
    • If plain text (.txt, .srt, .vtt, etc.): read the file directly.

    Clean the extracted text: remove PDF artifacts (font warnings, parser errors, blank line runs). Verify length is 10–500,000 chars; if too large, inform the user and suggest truncating or splitting.

    Metadata extraction:

    • Participants: scan for repeating name + timestamp patterns (e.g. SomeName HH:MM:SS). Strip prefixes like @.
    • Topic, time, timezone: parse any structured header in the file. Use best-effort; formats vary.
    • Defaults when not inferable: timezone from USER.md, start time = now, topic = filename without extension, detail_level = standard, meeting_type = general.

    → Proceed to 3d. Common steps.
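The participant scan described above might be sketched like this. The regex and the two-occurrence threshold are assumptions about typical `Name HH:MM:SS` transcript layouts, not a prescribed implementation:

```python
import re
from collections import Counter

# A speaker line starts with an optional @, a name, then an HH:MM:SS timestamp.
SPEAKER_RE = re.compile(r"^@?([A-Za-z][\w .\-]{0,40}?)\s+\d{2}:\d{2}:\d{2}\b",
                        re.MULTILINE)

def infer_participants(text: str) -> list[str]:
    """Treat names that open a line with a timestamp at least twice as speakers."""
    counts = Counter(m.group(1).strip() for m in SPEAKER_RE.finditer(text))
    return sorted(name for name, n in counts.items() if n >= 2)
```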


    3b. Pasted transcript text

    User pastes raw transcript text directly into chat. No extraction needed — use the text as-is. Verify length is 10–500,000 chars.

    Metadata extraction:

    • Participants: scan for repeating name + timestamp patterns. Strip prefixes like @.
    • Topic, time, timezone: parse any structured header in the text. Use best-effort.
    • Defaults when not inferable: timezone from USER.md, start time = now, topic = "Pasted transcript", detail_level = standard, meeting_type = general.

    → Proceed to 3d. Common steps.


    3c. Feishu doc URL

    User provides a Feishu docx URL (e.g. https://xxx.feishu.cn/docx/TOKEN). Extract the doc_token from the URL.

    Finding the transcript:

Call feishu_doc with action: "read". Then call action: "list_blocks" and scan all bullet blocks (block type 12) for embedded link.url values pointing to a feishu.cn/docx/ URL. Feishu AI meeting notes typically include a "Links" section at the bottom with a bullet linking to a separate verbatim transcript document; this link is invisible in the plain-text output and only appears in the block data. If found, read that linked doc: it contains the full transcript with speaker labels and timestamps, and is what should be imported. If no linked transcript is found, fall back to the plain text of the original doc.

    Verify length is 10–500,000 chars.

    Metadata extraction:

    • Participants: scan for @Name HH:MM:SS patterns. Strip @ prefixes.
    • Topic, time, timezone: Feishu meeting notes typically include Title:, Time:, Participants: fields, or Chinese equivalents (会议主题:, 会议时间:). Prefer these over inferred values.
    • Defaults when not inferable: timezone from USER.md, start time = now, topic = doc title, detail_level = standard, meeting_type = general.

    → Proceed to 3d. Common steps.


    3d. Common steps

    Confirm metadata with the user before submitting. Present inferred values (topic, start time, timezone, participants) and ask:

    "I'll use these details. Want to change anything or add a location/extra context?"

    Wait for confirmation or corrections before proceeding.

    For meeting_language: use the language explicitly requested by the user. If not specified, match the language the user is currently writing in. Do NOT default to auto.

    Call POST /api/v1/portal/me/transcripts/from-text with the transcript text → get transcript_id.

    Call POST /api/v1/portal/me/meeting-summaries with transcript_id + confirmed metadata.

    Display the generated ai_summary and confirm success with the meeting_summary_id.

  4. Insights flow:

    • Search list with relevant keyword(s)
    • Filter results to meeting_status: summary_finished only
    • Use judgement based on the user's query to decide how many meeting details to fetch:
      • A question about a specific meeting → fetch that meeting's detail
      • A question spanning a topic or time period → fetch details for all relevant matches and synthesise
    • Fetch detail for each selected meeting: GET .../meeting-summaries/{meeting_summary.id}
    • Use edited_summary if not null, otherwise ai_summary
    • If the summary lacks sufficient detail to answer the query (e.g. user asks for specific quotes, individual arguments, exact wording, who said what, or any question the AI summary is too high-level to answer):
      • Get transcript_id from the meeting detail response
      • Fetch raw transcript: GET /api/v1/portal/transcripts/runs/{transcript_id}
      • Use transcript field (raw text with speaker labels and timestamps)
      • Search the transcript for relevant sections and quote directly
      • Attribute statements to speakers using the speaker labels in the transcript
      • Note: transcripts can be very large — search for keywords rather than processing the entire text
    • Answer the user's query directly using the summary (and transcript where needed), in markdown
    • Always cite which meeting(s) the answer draws from (topic + date)
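For the large-transcript case, a keyword-window scan avoids pushing the whole text through at once. A minimal sketch (function name and window size are illustrative assumptions):

```python
def transcript_hits(transcript: str, keyword: str, context: int = 200) -> list[str]:
    """Return short text windows around each case-insensitive keyword hit."""
    low, k = transcript.lower(), keyword.lower()
    hits, i = [], low.find(k)
    while i != -1:
        # Slice a window of surrounding text so speaker labels stay attached.
        hits.append(transcript[max(0, i - context): i + len(k) + context])
        i = low.find(k, i + len(k))
    return hits
```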

Output format

Browse / Search — markdown table

| # | Topic | Date | Duration | Status | Access |
|---|-------|------|----------|--------|--------|
| 1 | ADR-147 review | 2026-03-15 14:00 | 82 min | Done | Owned |
  • Topic — from meeting_summary.meeting_topic
  • Date — format meeting_summary.meeting_start_time as YYYY-MM-DD HH:mm using meeting_summary.timezone
  • Duration — transcript.duration_seconds / 60, rounded, shown as X min
  • Status — summary_finished → Done, ready_for_summary → Ready, processing → Processing, others as-is
  • Access — owned → Owned, shared → Shared by [sharer name], participant → Participant

Follow the table with a short context line, e.g.:

4 meetings found (the list shows at most 20). Use "search meetings for X" to filter.
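One way the per-row mapping above could be implemented. Field paths follow the list response documented earlier; the helper itself is an illustrative sketch and omits the Access column for brevity:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

STATUS_LABELS = {"summary_finished": "Done", "ready_for_summary": "Ready",
                 "processing": "Processing"}

def format_row(index: int, item: dict) -> str:
    """Render one markdown table row from a list-response item."""
    ms = item["meeting_summary"]
    # meeting_start_time is a unix timestamp; render it in the meeting's timezone.
    start = datetime.fromtimestamp(ms["meeting_start_time"],
                                   ZoneInfo(ms["timezone"]))
    minutes = round(item["transcript"]["duration_seconds"] / 60)
    status = STATUS_LABELS.get(item["meeting_status"], item["meeting_status"])
    return (f"| {index} | {ms['meeting_topic']} | {start:%Y-%m-%d %H:%M} "
            f"| {minutes} min | {status} |")
```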

Insights — free-form markdown

Answer the user's question directly. Structure the response to match the query (e.g. bullet points for action items, pros/cons lists for decisions). Always include a citation block at the end:


Sources: ADR-147 review (2026-01-30), Kick-start technical handover (2026-01-29)

Import — confirmation message

After a successful import, display:

Meeting imported: {topic}
Summary ID: {meeting_summary_id}
Transcript source: text import ({char_count} chars)

{ai_summary — first ~500 chars or a concise excerpt}

Full summary is now available in your Turing meeting list.

Scope

  • ✅ Browse recent meetings (top 20)
  • ✅ Keyword search
  • ✅ Meeting summaries and insights (with raw transcript drill-down)
  • ✅ Persistent API key (one-time setup, stored in auth.env)
  • ✅ Import text transcripts from file
  • 💡 Upload local audio/video recordings — requires presigned S3 upload flow, not yet supported
  • 💡 Summarise with custom agenda — not yet supported
  • 💡 Share meeting summaries with other users — not yet supported
  • 💡 Export summary to Feishu — not yet supported
  • 💡 Export summary to file (Word, PDF, Markdown) — not yet supported
  • 💡 Browsing pagination — not yet supported