# /dm:check — Unified Pre-Publish Quality Gate

This skill is the canonical pre-publish gate for marketing content. It wraps the evaluation suite (`scripts/eval-runner.py`) and produces a single pass/fail decision with actionable issues.
Use this skill before publishing any marketing content — blog posts, ad copy, emails, social posts, landing pages, press releases, or any branded copy.
## Why this skill exists
In v3.0 and earlier, a global PreToolUse hook auto-ran a hallucination + brand-compliance check on every Write/Edit operation in every project. v3.1 removed that hook because it fired globally across all plugins and projects (Slack writes, GitHub PRs, code edits — all of it), causing friction in non-marketing work.
/dm:check replaces that automatic gate with an explicit user-invoked gate. The work is the same; the trigger is intentional.
## What the check evaluates

The check delegates to `scripts/eval-runner.py` (the master eval orchestrator), which calls four sibling scripts:

| Dimension | Script | What it checks |
|---|---|---|
| Hallucination | `hallucination-detector.py` | Unattributed statistics, placeholder URLs (example.com / your-site.com), unsupported superlatives ("best", "#1", "leading"), fabricated citations |
| Claims | `claim-verifier.py` (when `--evidence` provided) | Cross-checks specific claims against a user-provided evidence file |
| Brand voice | `brand-voice-scorer.py` (when `--brand` provided) | Scores content against the active brand's voice profile (formality, energy, humor, authority, prefer/avoid words) |
| Structure | `output-validator.py` (when `--schema` provided) | Validates that content matches the expected schema (blog_post, email, ad_copy, social_post, landing_page, press_release, content_brief, campaign_plan) |
Plus content quality and readability scoring (always run).
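For intuition on the always-on readability dimension, here is a minimal Flesch reading-ease sketch in Python. It is an illustration only: the actual heuristics in `readability-analyzer.py` are not documented here, and the syllable counter below is a deliberate simplification.

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count vowel groups (a simplification,
    not the real heuristic in readability-analyzer.py)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

# Short, plain sentences score high (roughly 60-70 reads as plain English).
print(round(flesch_reading_ease("Our launch doubled signups. Read the full report."), 1))
```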
## Subcommands and modes

### Default (run-quick)

```
/dm:check <file-path-or-content>
```

Runs the quick eval: hallucination detection + content quality + readability. Fast (~2 seconds), zero external dependencies. Use this for routine checks.
### Full eval (run-full)

```
/dm:check <file-path-or-content> --full
```

Runs all six dimensions: hallucination + claims (if evidence provided) + brand voice (if brand provided) + structure (if schema provided) + content quality + readability. Use this before publishing anything client-facing or external.
### Compliance-focused (run-compliance)

```
/dm:check <file-path-or-content> --compliance --brand <slug> [--evidence <path>] [--schema <name>]
```

Runs hallucination + claims + brand voice + structure. Best for regulated industries (healthcare, financial services, alcohol, cannabis, gambling), where claim substantiation and brand-voice fidelity matter most.
### With evidence file

```
/dm:check <file-path> --evidence <evidence-file.json>
```

When the content makes specific claims you want to substantiate, provide a JSON evidence file:
```json
{
  "evidence": [
    {
      "claim": "50% increase in conversions",
      "source": "GA4 Q4 report",
      "date": "2025-12-31",
      "verified": true
    },
    {
      "claim": "Trusted by Fortune 500 companies",
      "source": "Customer roster (internal)",
      "date": "2026-04-01",
      "verified": true
    }
  ]
}
```
The check will extract every claim from the content and flag any that don't match an evidence entry.
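Conceptually, the matching step looks like the sketch below. Substring matching and the field handling are simplifying assumptions; the real `claim-verifier.py` extraction and matching are presumably more forgiving.

```python
def unverified_claims(claims: list[str], evidence: list[dict]) -> list[str]:
    """Return claims that no verified evidence entry covers.
    Substring matching is an assumption, not the real matcher."""
    verified = [e["claim"].lower() for e in evidence if e.get("verified")]
    return [c for c in claims if not any(v in c.lower() for v in verified)]

evidence = [{"claim": "50% increase in conversions", "verified": True}]
print(unverified_claims(
    ["We saw a 50% increase in conversions", "Trusted by every Fortune 500 company"],
    evidence,
))  # -> ['Trusted by every Fortune 500 company']
```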
### With schema validation

```
/dm:check <file-path> --schema blog_post
```

Validates that the content matches the structural requirements of the named schema. Available schemas: blog_post, email, ad_copy, social_post, landing_page, press_release, content_brief, campaign_plan. Use `--schema list` to see all schemas with their requirements.
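The per-schema requirements live in `output-validator.py` and are not reproduced here, but a validator of roughly this shape is plausible. The section names and limits below are illustrative assumptions.

```python
# Illustrative only: the real schema definitions in output-validator.py may differ.
SCHEMAS = {
    "blog_post": {"required_sections": ["title", "intro", "body", "cta"]},
    "ad_copy": {"required_sections": ["headline", "body", "cta"],
                "max_headline_chars": 30},
}

def validate_structure(sections: dict, schema_name: str) -> list[str]:
    """Return a list of structural problems for the named schema."""
    schema = SCHEMAS[schema_name]
    problems = [f"missing section: {s}"
                for s in schema["required_sections"] if s not in sections]
    limit = schema.get("max_headline_chars")
    if limit and len(sections.get("headline", "")) > limit:
        problems.append(f"headline exceeds {limit} characters")
    return problems

print(validate_structure(
    {"headline": "Big launch", "body": "...", "cta": "Try it"}, "ad_copy"))  # -> []
```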
### With brand voice check

```
/dm:check <file-path> --brand acme
```

Scores the content against the brand voice profile at `~/.claude-marketing/brands/acme/profile.json`. Reports a per-dimension breakdown (formality, energy, humor, authority) plus deviation from the prefer/avoid word lists.
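As a sketch of the prefer/avoid part of that score, something like the following would work. The profile keys and the 10-point penalty per avoid-word hit are assumptions, not the documented behaviour of `brand-voice-scorer.py`.

```python
import re

def prefer_avoid_score(text: str, profile: dict) -> float:
    """Score 0-100: reward 'prefer' words present, dock points per 'avoid' word.
    Keys and penalty size are illustrative assumptions."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    hits = sum(1 for w in profile.get("prefer_words", []) if w in words)
    misses = sum(1 for w in profile.get("avoid_words", []) if w in words)
    base = 100.0 * hits / max(1, len(profile.get("prefer_words", [])))
    return max(0.0, base - 10.0 * misses)

profile = {"prefer_words": ["proven", "measurable"], "avoid_words": ["synergy"]}
print(prefer_avoid_score("A proven, measurable lift in synergy.", profile))  # 90.0
```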
## Output format
The check returns a unified report:
```
DM CHECK REPORT — <file or content snippet>
=============================================
Composite Score: 73.4 / 100 (Grade: B-)
Auto-Reject: NO

Dimensions:
  Hallucination ............ 96/100 PASS (weight 0.40)
  Content Quality .......... 78/100 PASS (weight 0.35)
  Readability .............. 65/100 PASS (weight 0.25)

Issues Found:
  CRITICAL: None
  WARNING (2):
  - Line 14: Unattributed statistic "76% of buyers prefer..."
    Suggestion: cite source or rephrase as observation
  - Line 22: Superlative "best in class" without substantiation
    Suggestion: replace with measurable claim or proof point

Decision: PASS — safe to publish but address WARNINGs first
```
If any CRITICAL issue is found, the decision is BLOCKED and the user is asked to fix the content before publishing.
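Assuming the composite is a weighted average of the dimension scores (the actual formula in `eval-runner.py` may apply further adjustments), the decision logic can be sketched as follows. The 60-point pass threshold is an assumption, not a documented value; only the CRITICAL-blocks rule is stated above.

```python
def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average, using the weights shown in the report above
    (assumed; the real formula may differ)."""
    return sum(scores[dim] * w for dim, w in weights.items())

def decision(score: float, critical_issues: int) -> str:
    """Any CRITICAL issue blocks outright, regardless of score."""
    if critical_issues > 0:
        return "BLOCKED"
    return "PASS" if score >= 60 else "WARN"  # 60 is an assumed threshold

weights = {"hallucination": 0.40, "content_quality": 0.35, "readability": 0.25}
scores = {"hallucination": 90, "content_quality": 70, "readability": 80}
print(decision(composite(scores, weights), critical_issues=0))  # PASS (score 80.5)
```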
## How the skill operates
The skill follows this flow:
1. Resolve the input. If the user passed a file path, read it. If they passed inline content, use it.
2. Resolve options. If `--brand` is not specified, attempt to load the active brand from `~/.claude-marketing/brands/_active-brand.json`. If `--schema` is not specified, infer it from the content type if obvious (blog markdown → `blog_post`, etc.) or skip the structure check.
3. Build the eval-runner command. Choose the action: `run-quick` (default), `run-full` (with `--full`), or `run-compliance` (with `--compliance`).
4. Execute via Bash: `python ${CLAUDE_PLUGIN_ROOT}/scripts/eval-runner.py --action run-quick --file <input> [--brand <slug>] [--evidence <path>] [--schema <name>]`
5. Parse the JSON output. Extract the composite score, grade, dimension scores, alerts, and auto-reject decision (steps 4-5 are sketched after this list).
6. Format for the user. Present the human-readable report shown above. Lead with the decision (PASS / WARN / BLOCKED).
7. If BLOCKED, refuse to recommend publishing. Always require the user to address CRITICAL issues before they proceed.
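In Python terms, steps 4-5 might look like this. The JSON field names are taken from the parsed output shown in Example 1 below, but the exact shape of eval-runner's output is an assumption.

```python
import json
import os
import subprocess

def run_check(file_path: str, brand: str | None = None) -> dict:
    """Invoke the eval runner and parse its JSON report (flow steps 4-5).
    Output field names are assumptions based on Example 1."""
    cmd = ["python",
           os.path.expandvars("${CLAUDE_PLUGIN_ROOT}/scripts/eval-runner.py"),
           "--action", "run-quick", "--file", file_path]
    if brand:
        cmd += ["--brand", brand]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    return {
        "score": report["composite_score"],
        "grade": report["grade"],
        "blocked": report["auto_rejected"],
        "alerts": report.get("alerts", []),
    }
```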
## Scripts called

- `scripts/eval-runner.py` — master orchestrator
- `scripts/hallucination-detector.py` — invoked by eval-runner
- `scripts/claim-verifier.py` — invoked by eval-runner if `--evidence` provided
- `scripts/brand-voice-scorer.py` — invoked by eval-runner if `--brand` provided
- `scripts/output-validator.py` — invoked by eval-runner if `--schema` provided
- `scripts/content-scorer.py` — invoked by eval-runner
- `scripts/readability-analyzer.py` — invoked by eval-runner

All scripts use stdlib only (except `brand-voice-scorer.py`, which optionally uses nltk). No external API calls, no internet required.
## Examples
### Example 1: Quick check on a draft

User: `/dm:check drafts/q2-launch-blog.md`

Skill:

1. Read `drafts/q2-launch-blog.md`.
2. Run `python scripts/eval-runner.py --action run-quick --file drafts/q2-launch-blog.md`.
3. Parse the JSON output: `composite_score: 81.2`, `grade: B+`, `auto_rejected: false`; hallucination 92/100 pass, content_quality 76/100 pass, readability 84/100 pass; alerts: 1 warning ("unattributed stat in line 14").
4. Format the report:

```
DM CHECK REPORT — drafts/q2-launch-blog.md
============================================
Composite Score: 81.2 / 100 (Grade: B+)
Decision: PASS

Dimensions:
  Hallucination ......... 92/100 pass
  Content Quality ....... 76/100 pass
  Readability ........... 84/100 pass

Issues Found:
  WARNING (1):
  - Line 14: Unattributed statistic "76% of marketers say..."
    Suggestion: cite source or rephrase as observation

Decision: PASS — safe to publish; recommend addressing the WARNING first.
```
### Example 2: Full eval with brand + evidence + schema

User: `/dm:check drafts/healthcare-ad.md --full --brand healthfirst --evidence facts/q2-claims.json --schema ad_copy`

Skill:

1. Read `drafts/healthcare-ad.md`.
2. Run `python scripts/eval-runner.py --action run-full --file drafts/healthcare-ad.md --brand healthfirst --evidence facts/q2-claims.json --schema ad_copy`.
3. Parse the JSON output: composite 58.4, grade D+, `auto_rejected: true`.
4. Format the report with CRITICAL issues highlighted.
5. Decision: BLOCKED. Two unattributed health claims need substantiation before this can publish.
### Example 3: Compliance check on regulated content

User: `/dm:check drafts/financial-services-landing.md --compliance --brand finadvisor --evidence facts/finra-disclosures.json`

Skill:

1. Read the content.
2. Run `python scripts/eval-runner.py --action run-compliance --file drafts/financial-services-landing.md --brand finadvisor --evidence facts/finra-disclosures.json`.
3. Output prioritises hallucination + claim verification + brand voice + structure.
4. Return the decision with FINRA-relevant issues highlighted.
### Example 4: Quick check on inline content

User: `/dm:check "Our amazing product boosts conversion by 347% — visit example.com today!"`

Skill:

1. Detect inline content (not a file path).
2. Write the content to a temp file (sketched below).
3. Run the quick eval.
4. Report:

```
CRITICAL: 2
- Placeholder URL "example.com" — replace with real URL before publishing
- Unattributed statistic "347%" — fabricated stat or missing citation

Decision: BLOCKED
```
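Input resolution for this case (flow step 1) could be sketched as:

```python
import os
import tempfile

def resolve_input(arg: str) -> str:
    """Pass file paths through; spill inline content to a temp file so the
    eval runner always receives a --file argument (an assumed convention)."""
    if os.path.isfile(arg):
        return arg
    tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".md", delete=False)
    tmp.write(arg)
    tmp.close()
    return tmp.name

path = resolve_input("Our amazing product boosts conversion by 347%!")
```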
## When to use which mode

| Scenario | Recommended mode |
|---|---|
| Routine content check during drafting | `/dm:check <file>` (quick) |
| Before publishing any external content | `/dm:check <file> --full --brand <slug>` |
| Regulated industry content (healthcare / financial / alcohol / cannabis / gambling) | `/dm:check <file> --compliance --brand <slug> --evidence <facts>` |
| Client-facing deliverable (Growth Plan, Yearly Planner, monthly report) | `/dm:check <file> --full --brand <slug>` |
| Ad copy specifically | `/dm:check <file> --schema ad_copy --brand <slug>` |
| Email specifically | `/dm:check <file> --schema email --brand <slug>` |
| Blog post specifically | `/dm:check <file> --schema blog_post --brand <slug>` |
## Behaviour rules

- Never report PASS if there are CRITICAL issues. Always BLOCKED.
- Always report the composite score and grade. Even on a PASS, surface room for improvement.
- Always include actionable suggestions. Each issue must be paired with a fix recommendation.
- Resolve the active brand if not specified. Check `~/.claude-marketing/brands/_active-brand.json`. If there is no active brand, run without `--brand` (skip the brand voice dimension).
- Never modify the content. This skill only reports — the user (or the agent that produced the content) makes the fix.
- Surface skipped dimensions explicitly. If the user did not provide `--evidence` or `--schema`, note that the corresponding dimensions were skipped.
## Related skills + commands

- `/dm:engagement growth-plan` — produces the Part 8 deliverable; should be checked with `/dm:check --full --schema content_brief` before client delivery
- `/dm:content-engine` — produces marketing content; the recommended workflow is `/dm:content-engine` → review → `/dm:check` → publish
- `/dm:eval-content` — legacy alias that will route to this skill in v3.2+

## Related references

- `scripts/eval-runner.py` — the master orchestrator this skill wraps
- `skills/context-engine/eval-framework-guide.md` — full eval framework documentation
- `skills/context-engine/eval-rubrics.md` — per-dimension scoring rubrics
- `docs/architecture.md` Section 11 — eval framework architecture