mission-statement
<invocation_args>$ARGUMENTS</invocation_args>
# Mission Statement

## What It Is (and Isn't)
A plugin mission statement is a decision-making anchor, not a feature description. It states what the plugin values, what it refuses to do, and how it resolves trade-offs — enabling AI agents to evaluate alignment without asking the human each time.
| Mission Statement Is | Mission Statement Is Not |
|---|---|
| A decision-making anchor | A feature list or capability overview |
| An explicit anti-pattern registry | A marketing or sales description |
| A trade-off resolution guide | A roadmap or version history |
| A verifiable alignment reference | A technical specification |
## The `mission.json` Format

The file lives at the plugin root (not in `.claude/plan/`).
```json
{
  "status": "draft",
  "mission": "One sentence. What this plugin is trying to be — not what it does.",
  "values": [
    "Value statement 1 — concrete, verifiable",
    "Value statement 2"
  ],
  "anti_patterns": [
    "Specific behavior this plugin refuses to do",
    "Another explicit refusal"
  ],
  "escalation_triggers": [
    "keyword", "another keyword phrase"
  ],
  "trade_offs": {
    "correctness_vs_speed": "correctness",
    "breadth_vs_depth": "depth",
    "explicit_vs_implicit": "explicit"
  },
  "out_of_scope": [
    "Thing that looks related but belongs elsewhere"
  ],
  "interview_backlog_item": "#NNN",
  "validated_scenarios": []
}
```
Field definitions:

- `status` — `"draft"` until the interview completes and the human approves; then `"active"`
- `mission` — single sentence; must pass the "bad twin" test (a bad version of this plugin could not claim the same statement)
- `values` — observable principles that guide decisions; must be verifiable against behavior
- `anti_patterns` — explicit refusals; what this plugin will not do even when asked
- `escalation_triggers` — keyword list for fast string-match in alignment checks; no LLM needed
- `trade_offs` — when forced to choose, which side this plugin takes
- `out_of_scope` — things that look adjacent but belong in other plugins
- `interview_backlog_item` — GitHub issue number of the interview task (added after backlog creation)
- `validated_scenarios` — known past decisions that this statement correctly predicts
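A minimal loader sketch for the format above. The field names come from the schema; the helper name, path handling, and error messages are illustrative assumptions, not part of the spec:

```python
import json
from pathlib import Path

# Fields every mission.json should carry, per the format above.
REQUIRED_FIELDS = {
    "status", "mission", "values", "anti_patterns",
    "escalation_triggers", "trade_offs", "out_of_scope",
}

def load_mission(plugin_root):
    """Load mission.json from the plugin root and check required fields."""
    path = Path(plugin_root) / "mission.json"
    mission = json.loads(path.read_text())
    missing = REQUIRED_FIELDS - mission.keys()
    if missing:
        raise ValueError(f"mission.json missing fields: {sorted(missing)}")
    if mission["status"] not in ("draft", "active"):
        raise ValueError(f"unexpected status: {mission['status']}")
    return mission
```

A check like this can run before any alignment logic, so a half-written draft fails loudly instead of silently passing keyword checks.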
## Development Process

Three phases:

1. **AI Draft** (immediate, this session) — source: discussion context plus plugin files. Output: `mission.json` with `status: "draft"`. The `[draft]` tag signals this is a hypothesis, not a decision.
2. **Interview** (async, via backlog task) — five structured questions asked of the human. Raw answers are captured in the backlog item. Output: updated `mission.json` with human-verified values.
3. **Validation** (after interview) — run 3 known past decisions through the statement. Does it predict the right choice? Output: `validated_scenarios` populated; `status: "active"`.
## The Five Interview Questions

These questions surface actual values, not stated ones. Ask them in order.
### Q1 — The Non-Negotiable

"What is the one thing this plugin must never sacrifice, even to ship faster?"

Anchors `values[0]` — the primary principle.
### Q2 — The Bad Twin

"What would a superficially similar but wrong version of this plugin do? What makes it wrong?"

Populates `anti_patterns`. A good mission statement is one the bad twin cannot also claim.
### Q3 — The Trade-off

"When forced to choose between [breadth vs depth / correctness vs speed / explicit vs implicit], which does this plugin choose, and why?"

Ask all three. If the human says "both" — ask "if you could only have one." Answers populate `trade_offs`.
### Q4 — The Removal Trigger

"What would make you remove this plugin from the marketplace entirely?"

Populates the most severe `anti_patterns` and `escalation_triggers`.
### Q5 — The Anti-Pattern Example

"Give me a specific example of a 'fix' or 'improvement' this plugin should refuse to make, even if asked."

Most useful for alignment checks. Concrete refusals become `escalation_triggers` keywords.
## AI Draft Procedure

When invoked (during Phase 0.6 of the plugin lifecycle, or standalone):

1. Read the plugin's existing files: `plugin.json` or `.claude-plugin/plugin.json`, `CLAUDE.md` if present, `SKILL.md` files
2. Read `discuss-CONTEXT.md` if this is a new plugin creation
3. Draft `mission.json` with `status: "draft"`. Populate all fields from observed design choices and stated preferences.
4. Write `mission.json` to the plugin root directory
5. Create a backlog interview task via `mcp__plugin_dh_backlog__backlog_add` with title `"Mission interview: {plugin-name}"` and a body containing the 5 questions and the current draft mission field
6. Update `mission.json` with `"interview_backlog_item": "#NNN"` using the created issue number
7. Report: the path of the draft written, the backlog item number, and a 2-3 sentence summary of the draft mission
## Validation Scenario Format

After the interview, validate by running known decisions through the statement. Add each to `validated_scenarios`:
```json
{
  "validated_scenarios": [
    {
      "decision": "Refused to add blanket noqa suppression to a linting plugin",
      "predicted_by": "anti_patterns[0] + escalation_triggers",
      "outcome": "correct"
    }
  ]
}
```
Status becomes `"active"` when the statement correctly predicts at least 3 known decisions.
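The promotion rule can be sketched as a small helper. The scenario shape matches the example above; the function name and the default threshold of 3 are illustrative:

```python
def promote_if_validated(mission, min_correct=3):
    """Flip a draft mission to "active" once enough scenarios predicted correctly."""
    correct = sum(
        1 for s in mission.get("validated_scenarios", [])
        if s.get("outcome") == "correct"
    )
    if mission["status"] == "draft" and correct >= min_correct:
        mission["status"] = "active"
    return mission
```

Counting only `"outcome": "correct"` entries keeps the bar honest: a scenario recorded with a wrong prediction never contributes toward activation.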
## Standalone Invocation

Arguments: <invocation_args/>

- `<plugin-path>` — Draft a mission for an existing plugin. Read plugin files, draft `mission.json`, create a backlog interview item.
- `<plugin-path> --interview` — Conduct the interview synchronously in this session. Ask Q1-Q5, update `mission.json` from the answers, move to validation.
- `<plugin-path> --validate` — Run validation scenarios. Ask the human to confirm 3 past decisions, check the predictions.
## Relationship to Alignment Check

The `escalation_triggers` list is the fast path for alignment checking — pure string matching, no LLM:
1. Check the proposed action text against `escalation_triggers` — if any keyword matches, escalate immediately
2. If there is no keyword match, check `anti_patterns` with LLM reasoning against the mission and values
3. If the action contradicts `anti_patterns` or moves away from `values`, return `alignment: LOW` with the specific violated principle
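The fast path above can be sketched as plain substring matching. This is a hedged sketch: the function name, the return shape, and the `llm_check` placeholder standing in for the slow LLM path are assumptions, not a defined API:

```python
def check_alignment(action_text, mission, llm_check=None):
    """Fast keyword path first; fall back to LLM reasoning only if provided."""
    text = action_text.lower()
    # Fast path: pure string matching against escalation_triggers, no LLM.
    for trigger in mission.get("escalation_triggers", []):
        if trigger.lower() in text:
            return {"alignment": "ESCALATE", "matched": trigger}
    # Slow path: LLM reasoning against anti_patterns and values (step 2 above),
    # represented here by an injected callable.
    if llm_check is not None:
        return llm_check(action_text, mission)
    return {"alignment": "OK", "matched": None}
```

Keeping the keyword pass as a standalone function means most actions never pay for an LLM call; only the ambiguous remainder reaches the `anti_patterns` reasoning step.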
The mission statement answers "what would the human say if they were watching?" — codified in advance.