lark-event
# Lark Events

Prerequisite: Read `../lark-shared/SKILL.md` first for authentication, `--as user/bot` switching, `Permission denied` handling, and safety rules.
## Core commands

| Command | Purpose |
|---|---|
| `lark-cli event list [--json]` | List all subscribable EventKeys |
| `lark-cli event schema <EventKey> [--json]` | Show an EventKey's params and output schema |
| `lark-cli event consume <EventKey> [flags]` | Blocking consume; events → stdout NDJSON |
| `lark-cli event status [--json] [--fail-on-orphan]` | Inspect the local bus daemon status |
| `lark-cli event stop [--all] [--force]` | Stop the bus daemon |
## Common flags

| Flag | Description |
|---|---|
| `--param key=value` / `-p` | Business params (repeatable; comma-separated for multi-value). Unknown keys fail with valid names listed inline |
| `--jq <expr>` | jq expression to filter / transform each event; empty output skips the event |
| `--max-events N` | Exit after N events. Default 0 = unlimited |
| `--timeout D` | Exit after duration D (e.g. 30s, 2m). Default 0 = no timeout. Whichever of `--max-events` / `--timeout` fires first wins |
| `--output-dir <dir>` | Write each event as a file (relative paths only; prevents traversal) |
| `--quiet` | Suppress stderr diagnostics. AI should not use this; it silences the ready marker |
| `--as user\|bot\|auto` | Identity for the session (see lark-shared) |
## Examples

```shell
# Default: stream every event for the key (no filter, no projection)
lark-cli event consume im.message.receive_v1 --as bot

# Grab one sample event to inspect payload shape
lark-cli event consume im.message.receive_v1 --max-events 1 --timeout 30s --as bot

# Run for 10 minutes then auto-exit
lark-cli event consume im.message.receive_v1 --timeout 10m --as bot

# Consume multiple EventKeys concurrently (one shape per process, no dispatcher)
lark-cli event consume im.message.receive_v1 --as bot > receive.ndjson &
lark-cli event consume im.message.reaction.created_v1 --as bot > reaction.ndjson &
wait
```
## Call flow

1. `lark-cli event list --json` → pick a legal key
2. `lark-cli event schema <key> --json` → read `resolved_output_schema` + `jq_root_path` to determine field paths
3. `lark-cli event consume <key> [--jq '<expr>']` → consume
## Subprocess contract

### Ready marker

`event consume` emits a fixed line on stderr: `[event] ready event_key=<key>`. Parent processes should block on stderr until this line appears, then start reading stdout. Do not fall back to `sleep`.
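The wait-for-ready pattern can be sketched as below. `fake_consume` is a stand-in for `lark-cli event consume` so the sketch runs anywhere; the real command drops in unchanged, since the marker format is fixed.

```shell
# Stand-in for `lark-cli event consume`: prints the fixed ready marker on
# stderr, then one NDJSON event on stdout (payload is a made-up sample).
fake_consume() {
  echo '[event] ready event_key=im.message.receive_v1' >&2
  echo '{"chat_id":"oc_xxx","message_type":"text"}'
}

fake_consume 2>consumer.err >events.ndjson &
consumer_pid=$!

# Block on the marker (polling the stderr log, never a blind fixed sleep);
# only after it appears is it safe to start reading stdout / trigger events.
until grep -q '\[event\] ready' consumer.err 2>/dev/null; do sleep 0.1; done

wait "$consumer_pid"
```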
### stdin EOF = graceful exit

`event consume` treats stdin close as a shutdown signal (wired for AI subprocess callers). `< /dev/null`, `nohup`, or systemd's default `StandardInput=null` causes an immediate graceful exit (stderr reason: `signal`). To keep it running:

- Feed stdin a source that never EOFs: `< <(tail -f /dev/null)`
- Or run bounded: `--max-events N` / `--timeout D`
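The EOF contract can be reproduced with a stub (the stub, not the real CLI, so this runs anywhere; its abbreviated exit line is illustrative):

```shell
# Stub mimicking `event consume`'s stdin contract: it runs until stdin
# reaches EOF, then exits gracefully with a reason: signal diagnostic.
stub_consume() {
  while IFS= read -r _; do :; done   # consume stdin until EOF
  echo '[event] exited (reason: signal)' >&2
}

# `< /dev/null` EOFs immediately, so the stub exits right away:
stub_consume < /dev/null 2>eof.err
cat eof.err   # prints: [event] exited (reason: signal)
```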
### Exit codes & reason

On exit, the last stderr line is `[event] exited — received N event(s) in Xs (reason: ...)`.

| Exit code | Reason | Trigger |
|---|---|---|
| 0 | `reason: limit` | `--max-events` reached |
| 0 | `reason: timeout` | `--timeout` reached |
| 0 | `reason: signal` | Ctrl+C / SIGTERM / stdin EOF |
| non-0 | `Error: ...` (no exited line) | Startup / runtime failure (permissions, network, params, config) |
Orchestrators should treat `reason: limit` / `timeout` / `signal` (all exit 0) as "business completion" and any non-zero exit as "failure".
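A minimal classification sketch, using a stand-in `fake_consume` that mimics a timeout exit (the exited-line format is the documented one; the payload is made up):

```shell
# Stand-in for a consumer run that hit --timeout: one event, exit 0,
# documented exited line as the last stderr line.
fake_consume() {
  echo '{"chat_id":"oc_xxx"}'
  echo '[event] exited — received 1 event(s) in 30s (reason: timeout)' >&2
}

if fake_consume >events.ndjson 2>run.err; then
  # Exit 0: limit / timeout / signal all count as business completion.
  reason=$(tail -n 1 run.err | grep -o 'reason: [a-z]*')
  echo "completed ($reason)" >status.txt
else
  echo "failed: $(tail -n 1 run.err)" >status.txt
fi
cat status.txt   # prints: completed (reason: timeout)
```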
### Never kill -9

Avoid `kill -9` on consume processes: for EventKeys with a PreConsume hook (those that register server-side subscriptions via OAPI), `kill -9` skips the OAPI unsubscribe and leaks server-side subscriptions (symptoms: "subscription already exists" on restart, duplicate event delivery). Prefer SIGTERM or closing stdin.
### One consume, one EventKey (multi-key = multi-shell)

The command takes exactly one positional argument; `k1,k2` and wildcards are unsupported. Listening to N keys means N subprocesses — this is intentional:

- One shape per process stdout; no dispatcher logic required in the AI
- Fault isolation (one key failing doesn't affect others)
- Independent `--as` / `--jq` / `--max-events` / `--timeout` per key

All N consumers share a single bus daemon (UDS local IPC), so the overhead is small.
## Writing jq via schema

`event schema <key> --json` is the source of truth for writing `--jq`. Four things to look at:
**(1) Where fields start — see `jq_root_path`**

- Value `"."` → fields are at the top level; write `.chat_id`
- Value `".event"` → fields are inside a V2 envelope; write `.event.chat_id`
**(2) Field list and types — see `resolved_output_schema.properties.<name>`**

Each field carries `type` / `description`, and some also have `format`. Snippet (from `event schema im.message.receive_v1 --json`):

```json
{
  "chat_id": {"type": "string", "format": "chat_id", "description": "Chat ID, prefixed with oc_"},
  "sender_id": {"type": "string", "format": "open_id", "description": "Sender open_id, prefixed with ou_"},
  "create_time": {"type": "string", "format": "timestamp_ms", "description": "Send time as ms-epoch string"}
}
```
**(3) Field semantics — see the `format` tag**

These are Lark-defined semantic tags (not JSON Schema's standard `format`). Common values: `open_id` / `chat_id` / `message_id` / `timestamp_ms` / `email`. Their purpose is to distinguish fields that share the same string type but carry different meanings, so you can reverse-lookup via API or convert formats.
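For example, a `timestamp_ms` field like `create_time` is an ms-epoch *string*, so convert before doing time arithmetic (the sample value below is made up; the field name and format come from the schema snippet above):

```shell
# create_time has format timestamp_ms: a string of milliseconds since epoch.
ev='{"create_time":"1700000000000","sender_id":"ou_abc"}'
echo "$ev" | jq -r '.create_time | tonumber / 1000'   # seconds: 1700000000
```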
**(4) Decoded state — read the field's `description`**

`event consume` runs Process hooks that may pre-decode some payload fields (flattening V2 envelopes, rendering `.content` to plain text, etc.), so behavior differs from raw OAPI. Always read the field's `description` before writing jq, especially for generic field names like `content` / `data` / `body` / `payload`.
Why it matters: blindly applying `fromjson` to an already-decoded text field makes jq error on every event and silently drop it — the consumer looks alive but emits nothing, with only a single WARN line buried on stderr. (This is the general behavior: any jq runtime error skips the event with a one-line WARN; the loop does not abort.)
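The pitfall can be reproduced with `jq` directly (hypothetical payload; `.content` here is already plain text, as a decoding hook would leave it):

```shell
ev='{"content":"hello world"}'

# Wrong: fromjson on already-decoded text errors; inside the consumer loop
# this event would be skipped with a one-line WARN.
echo "$ev" | jq '.content | fromjson' 2>jq.err || echo 'jq errored, event would be skipped'

# Right: the field is already text, just project it.
echo "$ev" | jq -r '.content'   # prints: hello world
```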
Don't shortcut the schema: when projecting `event schema --json` with jq, do not strip `.description` from `properties` — that's the field that tells you whether a field is already decoded. Dump the full property objects, not just keys.
Aside: `--param`'s valid parameters also live in the schema — the `params` section lists `name` / `type` / `required` / `enum` / `default` / `description`. If the section is missing, the key accepts no `--param`.
## Topic index

| Topic | Reference | Coverage |
|---|---|---|
| IM | references/lark-event-im.md | Catalog of 11 IM EventKeys + shape notes (flat vs V2 envelope) + `im.message.receive_v1` field gotchas (`sender_id` is open_id only; `.content` is plain text except for interactive cards) + common jq recipes (filter by `chat_type` / `message_type` / sender) |