# memories-mcp

Connect AI agents to the memories.sh memory layer via MCP (Model Context Protocol).

The CLI is the primary interface for memories.sh — use `memories generate` to create native config files for each tool. The MCP server is a fallback for real-time access when static configs aren't enough. It's also the best choice for browser-based agents (v0, bolt.new, Lovable) where the CLI can't run.

## Quick Start

```bash
# Local stdio transport (most reliable)
memories serve

# HTTP/SSE transport (for web clients like v0)
memories serve --sse --port 3030

# Cloud-hosted (no local install needed)
# Endpoint: https://memories.sh/api/mcp
# Header: Authorization: Bearer YOUR_KEY
```
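For clients that read a standard `mcpServers` JSON block, a stdio setup typically looks like the following. The exact file name and location vary by client, so treat this as a sketch and prefer the copy-paste configs in `references/setup.md`:

```json
{
  "mcpServers": {
    "memories": {
      "command": "memories",
      "args": ["serve"]
    }
  }
}
```

The `"memories"` key is an arbitrary server name; only the `command`/`args` pair matters.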

## Primary Tool: `get_context`

Always start with `get_context` — it returns active rules + relevant memories in one call:

```
get_context({ query: "authentication flow" })
→ ## Active Rules
→ - Always use TypeScript strict mode
→ ## Relevant to: "authentication flow"
→ 💡 DECISION (P) abc123: Chose JWT for stateless auth
```

Leave `query` empty to get just rules. Use `limit` to control the memory count (default: 10).

For lifecycle-aware callers on the local CLI MCP, pass compaction/session hints:

```
get_context({
  query: "checkout timeout",
  session_id: "sess_123",
  budget_tokens: 6000,
  turn_count: 6,
  turn_budget: 24,
  last_activity_at: "2026-02-26T23:00:00.000Z",
  inactivity_threshold_minutes: 45
})
```

These hints let the server trigger write-ahead checkpointing before destructive compaction.
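A caller can also estimate locally when a session is drifting toward compaction. The sketch below assembles the same hints and applies illustrative thresholds; the 75% turn-budget cutoff is our assumption, not the server's documented policy:

```typescript
// Decide client-side whether a session looks "at risk" of destructive
// compaction, using the same hints get_context accepts.
// The thresholds are illustrative assumptions, not server policy.
interface LifecycleHints {
  session_id: string;
  turn_count: number;
  turn_budget: number;
  last_activity_at: string; // ISO 8601 timestamp
  inactivity_threshold_minutes: number;
}

function sessionAtRisk(h: LifecycleHints, now: Date = new Date()): boolean {
  const idleMinutes =
    (now.getTime() - Date.parse(h.last_activity_at)) / 60_000;
  const nearTurnBudget = h.turn_count / h.turn_budget >= 0.75; // assumption
  return idleMinutes >= h.inactivity_threshold_minutes || nearTurnBudget;
}

const hints: LifecycleHints = {
  session_id: "sess_123",
  turn_count: 6,
  turn_budget: 24,
  last_activity_at: "2026-02-26T23:00:00.000Z",
  inactivity_threshold_minutes: 45,
};

// 6/24 turns used and only 10 minutes idle: not at risk yet.
console.log(sessionAtRisk(hints, new Date("2026-02-26T23:10:00.000Z"))); // → false
```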

When relationship extraction is enabled server-side, `get_context` may also return `conflicts[]` for contradiction-linked memories. Treat these as clarification prompts before taking irreversible actions.

## Tool Selection Guide

| Goal | Tool | When |
|---|---|---|
| Start a task | `get_context` | Beginning of any task — gets rules + relevant context |
| Save knowledge | `add_memory` | After learning something worth persisting |
| Resolve contradictory context | `get_context` | If `conflicts[]` is present, ask a disambiguating question and persist the answer |
| Find specific info | `search_memories` | Full-text search with prefix matching |
| Browse recent | `list_memories` | Explore what's stored, filter by type/tags |
| Get coding standards | `get_rules` | When you only need rules, not memories |
| Update a memory | `edit_memory` | Fix content, change type, update tags |
| Remove a memory | `forget_memory` | Soft-delete (recoverable) |
| Bulk remove memories | `bulk_forget_memories` | Filtered mass soft-delete by type, tags, age, pattern |
| Reclaim storage | `vacuum_memories` | Permanently purge all soft-deleted records |
| Start lifecycle session (local) | `start_session` | Begin explicit session tracking |
| Persist turn checkpoint (local) | `checkpoint_session` | Save meaningful event/checkpoint |
| End session (local) | `end_session` | Close or compact active session |
| Read/create session snapshot (local) | `snapshot_session` | Capture raw transcript snapshot |
| Run consolidation (local) | `consolidate_memories` | Merge duplicates and supersede stale truths |
| Add reminder (local) | `add_reminder` | Create cron-based reminder in local CLI DB |
| Run reminders (local) | `run_due_reminders` | Emit due reminders and advance schedule |
| Manage reminders (local) | `list_reminders`, `enable_reminder`, `disable_reminder`, `delete_reminder` | Inspect and control reminder lifecycle |

## Memory Types

When using `add_memory`, pick the right type:

- `rule` — Coding standards, preferences, constraints (always returned by `get_context`)
- `decision` — Architectural choices with rationale
- `fact` — Project-specific knowledge (API limits, env vars, etc.)
- `note` — General notes (default)
- `skill` — Reusable agent workflows (use with `category` and `metadata`)
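For example, a `decision` entry from the list above might be saved with an `add_memory` payload like this; the content and tag values are invented for illustration:

```typescript
// Illustrative add_memory arguments for recording an architectural
// decision. Field names follow the tool guide above; the values are ours.
const addMemoryArgs = {
  content: "Chose JWT for stateless auth; keeps session state off the server.",
  type: "decision",
  tags: ["auth", "architecture"],
};

console.log(addMemoryArgs.type); // → decision
```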

## Scopes

- `project` (default) — Scoped to current git repo, detected automatically
- `global` — Applies everywhere; set `global: true` in `add_memory`
- project override — Set `project_id: "github.com/org/repo"` in `add_memory` (or `start_memory_stream`) to force project scope when the MCP process is running outside that repo

Do not send both `global: true` and `project_id` in the same call.
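The mutual-exclusion rule can be enforced before the call ever reaches the server. A minimal sketch, assuming you want a fail-fast guard (the error message is ours, not the server's):

```typescript
// Scope arguments for add_memory-style calls:
// "global: true" and an explicit project_id are mutually exclusive.
interface ScopeArgs {
  global?: boolean;
  project_id?: string;
}

function validateScope(args: ScopeArgs): ScopeArgs {
  if (args.global && args.project_id !== undefined) {
    throw new Error("Pass either global: true or project_id, not both");
  }
  return args;
}

validateScope({ project_id: "github.com/org/repo" }); // ok: project override
validateScope({ global: true });                      // ok: global scope
```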

## Streaming Memory Tools

For collecting content from SSE sources (v0 artifacts, streaming responses):

1. `start_memory_stream({ type?, tags?, global?, project_id? })` → returns `stream_id`
2. `append_memory_chunk({ stream_id, chunk })` (repeat for each piece)
3. `finalize_memory_stream({ stream_id })` → creates memory + triggers embedding
4. `cancel_memory_stream({ stream_id })` → discard if aborted
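The four-step flow can be sketched with a toy in-memory stand-in for the server. Real calls would go through your MCP client as tool invocations; nothing below touches memories.sh:

```typescript
// Toy model of the streaming flow: start -> append* -> finalize.
const streams = new Map<string, string[]>();
let nextId = 0;

function startMemoryStream(): { stream_id: string } {
  const stream_id = `stream_${nextId++}`;
  streams.set(stream_id, []);
  return { stream_id };
}

function appendMemoryChunk(stream_id: string, chunk: string): void {
  const buf = streams.get(stream_id);
  if (!buf) throw new Error(`unknown stream: ${stream_id}`);
  buf.push(chunk);
}

function finalizeMemoryStream(stream_id: string): string {
  const buf = streams.get(stream_id);
  if (!buf) throw new Error(`unknown stream: ${stream_id}`);
  streams.delete(stream_id); // stream id is single-use
  return buf.join("");       // server would create the memory + embedding here
}

const { stream_id } = startMemoryStream();
appendMemoryChunk(stream_id, "Rate limit is ");
appendMemoryChunk(stream_id, "100 req/min.");
console.log(finalizeMemoryStream(stream_id)); // → Rate limit is 100 req/min.
```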

## Lifecycle Tools (Local CLI MCP)

These tools are currently available when running `memories serve` locally:

1. `start_session({ title?, client?, user_id?, metadata?, global?, project_id? })`
2. `checkpoint_session({ session_id, content, role?, kind?, token_count?, turn_index?, is_meaningful? })`
3. `end_session({ session_id, status? })`
4. `snapshot_session({ session_id, source_trigger?, slug?, transcript_md?, message_count?, meaningful_only? })`
5. `consolidate_memories({ types?, include_global?, global_only?, project_id?, dry_run?, model? })`
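A typical session chains the first three tools in order. The `callTool` helper below is a hypothetical stand-in for your MCP client's tool-invocation method, stubbed so the sketch runs on its own:

```typescript
// Hypothetical callTool stands in for an MCP client's tool invocation;
// the stub just records call order and fakes a session id.
type ToolArgs = Record<string, unknown>;
const calls: string[] = [];

function callTool(name: string, _args: ToolArgs): ToolArgs {
  calls.push(name);
  return name === "start_session" ? { session_id: "sess_demo" } : {};
}

const { session_id } = callTool("start_session", {
  title: "checkout bugfix",
}) as { session_id: string };

callTool("checkpoint_session", {
  session_id,
  content: "Found the timeout: retry loop never backs off.",
  is_meaningful: true,
});

callTool("end_session", { session_id, status: "completed" });

console.log(calls.join(" -> ")); // → start_session -> checkpoint_session -> end_session
```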

## MCP Resources

For clients that support MCP resources:

| URI | Content |
|---|---|
| `memories://rules` | All active rules as markdown |
| `memories://recent` | 20 most recent memories |
| `memories://project/{id}` | Memories for a specific project |

## Transport Options

| Transport | Use Case | Command |
|---|---|---|
| stdio | Claude Code, Cursor, local tools | `memories serve` |
| HTTP/SSE | v0, web-based agents, remote | `memories serve --sse --port 3030` |
| Cloud | No local install, cross-device | `https://memories.sh/api/mcp` + `Authorization: Bearer KEY` |

The lifecycle, reminder, and streaming tools are local-only; the hosted MCP focuses on the tenant-routed core memory tools.
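Over the cloud transport, tool calls travel as JSON-RPC 2.0 `tools/call` messages per the MCP specification. The sketch below only builds the request; whether the hosted endpoint needs additional session initialization is not covered here, so treat the envelope as an assumption to verify against the hosted docs:

```typescript
// Build (but do not send) a JSON-RPC 2.0 "tools/call" request for the
// hosted endpoint. The envelope follows the MCP spec; the hosted
// endpoint's exact requirements may differ.
function buildToolCall(
  tool: string,
  args: Record<string, unknown>,
  apiKey: string,
) {
  return {
    url: "https://memories.sh/api/mcp",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call",
      params: { name: tool, arguments: args },
    }),
  };
}

const req = buildToolCall("get_context", { query: "authentication flow" }, "YOUR_KEY");
console.log(JSON.parse(req.body).method); // → tools/call
```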

## Reference Files

- Client setup configs: see `references/setup.md` for copy-paste configs for every supported client
- Full tool reference: see `references/tools.md` for all parameters, return formats, and examples