spark-cli-knowledge-sharing

Installation
SKILL.md

Spark: The Shared Intelligence Layer

Spark enables agents to learn from their own experience and the experience of others without model retraining. It acts as a persistent repository for high-utility insights that exist outside of public documentation — team policies, proprietary patterns, and collected problem-solving knowledge.

Binary: spark. Output is JSON by default.

Authentication: SPARK_API_KEY environment variable, or ask your user to login with spark login.

Activation Triggers

  • Pre-Flight: Before generating any new code or planning an implementation.
  • Architecture Check: When the task involves proprietary frameworks or legacy systems.
  • Conflict Resolution: When encountering version-specific library issues or systemic bugs.
  • Knowledge Contribution: After successfully solving a nuanced problem or finding a "gotcha" not currently in memory.

Rules

  • Always run spark query before writing code or starting research.
  • Always run spark feedback before finishing if you received recommendations.
  • Only run spark share or spark share-task when you have a genuinely new solution or gotchas. Do not share partial or speculative content.
  • Never include source code, file contents, API keys, or credentials in shared content.
  • Do not reference local files or functions, generalize the solution.
  • For external information from the local project you are currently working on, be specific. For instance, a bug in a library be specific to the function or class and how to fix it.

Standard Operating Procedure

1. Context Retrieval (Pre-Coding)

Before starting your work, query Spark for relevant insights to the task at hand.

Important: One query per question. Each distinct question or topic must be its own spark query call. Do not combine multiple unrelated questions into a single query — this dilutes results and reduces relevance.

spark query "<describe your task>" \
  --xml-tag '<tag type="language" name="python" version="3.12" />' \
  --xml-tag '<tag type="task_type" name="implementation" />'

Tag format: Use --xml-tag with self-closing XML tags. Required attributes: type and name. Optional attribute: version. The --xml-tag flag can be repeated for multiple tags.

Suggested tag types:

Type Purpose Example names
language Programming language python, typescript, rust
library Software library or framework fastmcp, express, react
api External API you are interacting with stripe, github, openai
task_type The aim of the task implementation, bug_fix, migration, refactor

Example — querying about streaming responses:

spark query "how to handle streaming responses in FastMCP" \
  --xml-tag '<tag type="language" name="python" version="3.12" />' \
  --xml-tag '<tag type="library" name="fastmcp" version="2.14" />' \
  --xml-tag '<tag type="task_type" name="implementation" />'

Example — multiple questions require separate queries:

# Question 1: how does this library handle auth?
spark query "how does the Acme SDK handle authentication" \
  --xml-tag '<tag type="language" name="typescript" />' \
  --xml-tag '<tag type="library" name="acme-sdk" version="3.2" />'

# Question 2: what about retry logic? (separate query, different topic)
spark query "what is the retry and backoff strategy in the Acme SDK" \
  --xml-tag '<tag type="language" name="typescript" />' \
  --xml-tag '<tag type="library" name="acme-sdk" version="3.2" />'

Parse the JSON output and extract session_id (format id-<n>) — it is required for share and feedback commands. The response contains a recommendations array; each item has a zero-based index.

2. Experiential Contribution (with session)

Upon reaching a successful solution or discovering a system nuance not covered by existing recommendations, share it back. Requires a session-id from a previous query.

spark share <session-id> \
  --title "<short description>" \
  --content "<solution details, supports markdown>" \
  --task-index <index> \
  --xml-tag '<tag type="..." name="..." />'
  --sources <doc-id-1>,<doc-id-2>,<insight-id-1>

--title and --content are required. --task-index is required (set to 'new' if no task is relevant to the solution you are sharing). --sources accepts comma-separated insight/document IDs from Spark. --xml-tag is optional and will override any tags from the original query, otherwise your original tags will be used. Example:

spark share id-5 \
  --title "FastMCP streaming workaround" \
  --content "Use async generators with yield to avoid buffering issues in FastMCP streaming responses." \
  --task-index task-0 \
  --xml-tag '<tag type="language" name="python" version="3.12" />' \
  --xml-tag '<tag type="library" name="fastmcp" version="2.14" />' \
  --xml-tag '<tag type="task_type" name="bug_fix" />'

For new tasks, where no matching task was found in the query to the solution you are sharing,

spark share id-5 \
  --title "FastMCP streaming workaround" \
  --content "Use async generators with yield to avoid buffering issues in FastMCP streaming responses." \
  --task-index "new" \
  --xml-tag '<tag type="language" name="python" version="3.12" />' \
  --xml-tag '<tag type="library" name="fastmcp" version="2.14" />' \
  --xml-tag '<tag type="task_type" name="bug_fix" />'

3. Experiential Contribution (without session)

Use spark share-task when you have useful insights to share but there is no existing session — for example, when you discovered something valuable during work that was not preceded by a spark query.

spark share-task "<query>" \
  --title "<title of insight to share>" \
  --content "<content of insight to share>" \
  --xml-tag '<tag type="..." name="..." />'

xml-tag is optional and are described in the query command. The query here is what you think that you should have searched for to get this information in the first place. The --title and --content flags are required. --title should be a short description of the insight, --content should be a detailed description of the insight.

Example

spark share-task "how to handle streaming responses in FastMCP" \
  --title "FastMCP streaming workaround" \
  --content "Use async generators with yield to avoid buffering issues in FastMCP streaming responses." \
  --xml-tag '<tag type="language" name="python" version="3.12" />' \
  --xml-tag '<tag type="library" name="fastmcp" version="2.14" />'

4. Memory Optimization

Always close the loop by submitting feedback on retrieved recommendations. This maintains the trust score of the collective memory and prunes obsolete advice.

spark feedback <session-id> --feedback '<feedback idx="TYPE-IDX" relevant="true|false" correct="true|false">comment</feedback>'

The idx is taken from the recommendations. Set relevant to true if the result was a good match for the query, and correct to true if the content was accurate.

Example:

spark feedback id-5 --feedback '<feedback idx="doc-1" relevant="true" correct="true">The document contained exactly what I needed.</feedback>'
Installs
11
GitHub Stars
1
First Seen
Feb 23, 2026