language-audit

SKILL.md

/dm:language-audit

Purpose

Comprehensive multilingual consistency audit across all language versions of a brand's content. Checks that all language versions are properly linked with correct hreflang annotations, that content parity exists across languages (no missing translations, no outdated versions, no significant structural deviations), that regional compliance requirements are met per target market, and that translation quality meets brand standards. Covers the full spectrum of multilingual integrity: technical SEO implementation, content completeness, linguistic quality, legal compliance, and locale-specific formatting.

Essential for brands operating across multiple markets where multilingual website and campaign integrity directly impacts search visibility, user experience, and regulatory standing. Surfaces issues that silently erode international performance — orphaned hreflang tags sending search engines conflicting signals, outdated translations creating brand inconsistency, missing compliance elements exposing the brand to legal risk, and incomplete localization undermining trust with local audiences.

Input Required

The user must provide (or will be prompted for):

  • Website URL or content set: The URL to audit (homepage, specific section, or full site) or a set of content assets (email campaigns, landing pages, ad copy) with their language versions. For websites, the user may provide a sitemap URL, a list of page URLs, or HTML source containing hreflang annotations. For content sets, provide all language versions of each asset
  • Languages to check: Specific language-region codes to audit (e.g., en-US, de-DE, fr-FR, hi-IN) or "all configured" to audit every language in the brand's language configuration. If omitted, defaults to all languages configured in the brand profile
  • Audit focus: Which dimensions to audit — hreflang (tag implementation only), content-parity (cross-language completeness and consistency), compliance (regional regulatory requirements), quality (translation scoring), localization (formatting and cultural adaptation), or comprehensive (all dimensions). Defaults to comprehensive if not specified

Process

  1. Load brand context: Read ~/.claude-marketing/brands/_active-brand.json for the active slug, then load ~/.claude-marketing/brands/{slug}/profile.json. Apply brand voice, compliance rules for target markets (skills/context-engine/compliance-rules.md), and industry context. Load the language configuration — primary language, secondary languages, content languages, do-not-translate terms, translation preferences, and locale formatting settings. Also check for guidelines at ~/.claude-marketing/brands/{slug}/guidelines/_manifest.json — if present, load restrictions and relevant category files for market-specific content rules. Check for agency SOPs at ~/.claude-marketing/sops/. If no brand exists, ask: "Set up a brand first (/dm:brand-setup)?" — or proceed with defaults.
  2. Inventory multilingual content: Identify all language versions of the content being audited. For websites, parse hreflang annotations, alternate link tags, and URL patterns (subdirectory /en/, /de/, subdomain en.example.com, or ccTLD structures) to map the full set of language-page relationships. For content sets, catalog all provided assets by language and content type. Build a language-content matrix showing which assets exist in which languages, flagging any gaps immediately.
  3. Hreflang audit: For each page or content set with hreflang annotations, check for: missing self-referential tags (every page must reference itself), bidirectional consistency (if page A references page B in language X, page B must reference page A), valid ISO 639-1 language codes and ISO 3166-1 Alpha-2 region codes, x-default tag presence and correctness (the fallback for unmatched languages), no duplicate language codes on the same page, absolute URLs (not relative), orphaned annotations pointing to non-existent or non-responding pages, and consistency of hreflang sets across all pages in the site section. Score each check as pass/fail with specific page-level findings.
  4. Content parity check: Compare content across language versions for structural and substantive consistency. Measure word count ratios between the primary language and each translation — flag versions with greater than 30% deviation (significantly shorter translations may indicate missing sections, significantly longer may indicate untranslated insertions or over-elaboration). Compare section structures (H1/H2/H3 hierarchy) across versions to identify missing or extra sections. Check that all CTAs, forms, navigation elements, and interactive components exist in every language version. Identify content that exists in the primary language but is missing entirely from one or more translations — these are the highest-priority parity gaps.
  5. Translation quality spot-check: Sample representative content from each language version — headlines, key CTAs, product descriptions, and compliance-critical text. Score each sample via language-router.py --action score checking length ratio against source, formatting preservation (markdown, HTML, merge tags intact), key term consistency (do-not-translate terms preserved), and placeholder integrity (variables like {{first_name}} unchanged). Aggregate scores per language to identify which translations are strongest and which need rework.
  6. Regional compliance check: For each language-market combination, verify market-specific compliance elements are present and correct. GDPR consent mechanisms and privacy notices for EU languages (de-DE, fr-FR, es-ES, it-IT, nl-NL, etc.), DPDPA data protection elements for Indian languages (hi-IN, ta-IN, te-IN, etc.), LGPD compliance for pt-BR, PIPA elements for ko-KR, APPI for ja-JP, CCPA/CPRA for en-US California-targeted content. Check that disclaimers, terms links, privacy policy links, cookie consent, and unsubscribe mechanisms are present and correctly localized (not just present in English on a German page). Reference skills/context-engine/compliance-rules.md for the full regulatory matrix.
  7. Localization completeness check: Verify that locale-specific formatting is correct per market — date formats (MM/DD/YYYY for en-US, DD.MM.YYYY for de-DE, DD/MM/YYYY for en-GB), currency symbols and formatting ($ before the amount for USD, the amount followed by € in many European markets), measurement units (imperial for the US, metric for most other markets), phone number formats (country code, grouping, separators), address formats (country-specific ordering), and number formatting (decimal separators, thousands grouping). Flag any instances where formatting from the source language has leaked into a translation (e.g., USD symbols appearing on a German page, US date format in Japanese content).
  8. Generate audit report: Compile all findings into a severity-ranked audit report. Score each dimension on a 0-100 scale. Classify every finding as critical (blocks publishing or creates legal/SEO risk), warning (degrades quality or user experience, should be fixed before next update cycle), or info (improvement opportunity, can be addressed in regular maintenance). Generate a prioritized fix list ordered by severity and estimated impact, with effort estimates per fix.
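The bidirectional-consistency rule in step 3 can be sketched as a set check over a crawled hreflang map. This is a minimal illustration, not the skill's implementation; the URLs and the map structure are hypothetical:

```python
# Sketch of the step-3 bidirectional hreflang check.
# `hreflang_map` maps each page URL to the {lang_code: target_url}
# annotations found on that page (structure assumed for illustration).

def check_bidirectional(hreflang_map):
    """Yield (page, target) pairs whose hreflang link is not reciprocated."""
    for page, alternates in hreflang_map.items():
        for lang, target in alternates.items():
            back_links = hreflang_map.get(target, {})
            # Page A -> B requires some annotation on B pointing back at A.
            if page not in back_links.values():
                yield (page, target)

pages = {
    "https://example.com/en/": {"en": "https://example.com/en/",
                                "de": "https://example.com/de/"},
    # The German page omits the link back to /en/ -- an orphaned annotation.
    "https://example.com/de/": {"de": "https://example.com/de/"},
}
issues = list(check_bidirectional(pages))
# -> [("https://example.com/en/", "https://example.com/de/")]
```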
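The 30% word-count deviation rule from step 4 reduces to a ratio check per language. A minimal sketch, with illustrative placeholder counts:

```python
# Sketch of the step-4 word-count parity check: flag translations whose
# length deviates more than 30% from the primary-language source.

def parity_flags(primary_words, translations, tolerance=0.30):
    """Return {lang: ratio} for versions outside the allowed deviation."""
    flagged = {}
    for lang, words in translations.items():
        ratio = words / primary_words
        if abs(ratio - 1.0) > tolerance:
            flagged[lang] = round(ratio, 2)
    return flagged

# Word counts below are hypothetical examples.
flags = parity_flags(1000, {"de-DE": 950, "fr-FR": 620, "hi-IN": 1400})
# de-DE (0.95) passes; fr-FR (0.62) and hi-IN (1.40) are flagged.
```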
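The placeholder-integrity check in step 5 can be expressed as a set difference over merge-tag patterns. A sketch assuming `{{name}}`-style variables, as in the step's own example:

```python
import re

# Sketch of the step-5 placeholder-integrity check: merge-tag variables
# like {{first_name}} must survive translation unchanged.
PLACEHOLDER = re.compile(r"\{\{\s*\w+\s*\}\}")

def placeholder_diff(source, translation):
    """Return (missing, unexpected) placeholder sets for a translation."""
    src = set(PLACEHOLDER.findall(source))
    tgt = set(PLACEHOLDER.findall(translation))
    return src - tgt, tgt - src

missing, unexpected = placeholder_diff(
    "Hi {{first_name}}, your order {{order_id}} has shipped.",
    "Hallo {{first_name}}, Ihre Bestellung wurde versandt.",  # tag dropped
)
# missing == {"{{order_id}}"}, unexpected == set()
```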
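The formatting-leak detection in step 7 amounts to matching another locale's patterns against a page's text. A minimal date-only sketch; the two regexes are illustrative, not a complete per-locale rule set:

```python
import re

# Sketch of the step-7 formatting-leak check: detect a US-style
# MM/DD/YYYY date on a page whose locale expects DD.MM.YYYY.
DATE_STYLE = {
    "en-US": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),    # MM/DD/YYYY
    "de-DE": re.compile(r"\b\d{2}\.\d{2}\.\d{4}\b"),  # DD.MM.YYYY
}

def foreign_dates(text, locale):
    """Return (source_locale, date_string) pairs that don't fit `locale`."""
    expected = DATE_STYLE[locale]
    leaks = []
    for other, pattern in DATE_STYLE.items():
        if other == locale:
            continue
        for match in pattern.findall(text):
            if not expected.search(match):
                leaks.append((other, match))
    return leaks

leaks = foreign_dates("Angebot gültig bis 03/15/2026.", "de-DE")
# A slash-formatted US date leaked onto a German page.
```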

Output

A structured multilingual audit report containing:

  • Audit score per dimension: Hreflang implementation score (0-100), content parity score (0-100), translation quality score (0-100), regional compliance score (0-100), localization completeness score (0-100), and overall multilingual health score (weighted composite)
  • Detailed findings list: Every issue found, classified by severity (critical/warning/info), dimension, affected language-region, affected page or asset, specific description of the issue, and recommended fix
  • Hreflang implementation status: Per-page breakdown showing which hreflang tags are present, which are missing, which have errors, bidirectional consistency status, and x-default configuration
  • Content parity matrix: Languages (columns) by content assets (rows) showing presence/absence, word count ratios versus primary language, and structural match status — making it immediately clear where translation gaps exist
  • Compliance gaps per market: Per-market compliance checklist showing which required elements are present, missing, or incorrectly localized, with regulatory references for each requirement
  • Translation quality scores per language: Aggregate and per-sample scores from language-router.py for each language version, highlighting the weakest translations and specific quality issues
  • Localization formatting issues: List of locale-specific formatting errors (wrong date format, wrong currency, wrong measurement units) with the incorrect value and the expected correct format
  • Prioritized fix list: All issues ranked by severity and impact, with estimated effort per fix (quick fix / moderate / significant rework), grouped into immediate actions, next-cycle fixes, and maintenance backlog
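The "weighted composite" health score above can be sketched as a weighted mean over the five 0-100 dimension scores. The weights here are illustrative assumptions, not part of the skill's specification:

```python
# Sketch of the overall multilingual health score: a weighted composite
# of the five per-dimension 0-100 scores. Weights are assumed examples.
WEIGHTS = {
    "hreflang": 0.25,
    "content_parity": 0.25,
    "translation_quality": 0.20,
    "compliance": 0.20,
    "localization": 0.10,
}

def health_score(dimension_scores):
    """Combine per-dimension 0-100 scores into one weighted 0-100 score."""
    return round(sum(WEIGHTS[d] * s for d, s in dimension_scores.items()), 1)

score = health_score({
    "hreflang": 80, "content_parity": 90, "translation_quality": 70,
    "compliance": 100, "localization": 60,
})
# 0.25*80 + 0.25*90 + 0.20*70 + 0.20*100 + 0.10*60 = 82.5
```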

Agents Used

  • localization-specialist — Leads the audit across all dimensions. Manages the multilingual content inventory, runs translation quality scoring via language-router.py, checks locale-specific formatting correctness, validates do-not-translate term preservation, assesses cultural adaptation quality, and synthesizes findings into the multilingual health score. Coordinates the overall audit workflow and produces the final report
  • seo-specialist — Handles the hreflang technical audit including tag validation, bidirectional consistency checks, x-default configuration, language-region code validation against ISO standards, and international SEO best practices. Assesses the SEO impact of any hreflang issues found and provides corrected hreflang code snippets for fixes
  • brand-guardian — Performs the regional compliance check per target market, verifying that market-specific regulatory requirements (GDPR, DPDPA, LGPD, PIPA, APPI, CCPA) are met in each language version. Flags compliance gaps with severity levels and regulatory references, and checks that brand guidelines restrictions are respected across all language versions