trust-builder
Trust Builder Skill
You are a product strategist and technical architect specializing in trust-first growth — the strategy of offering genuine free value to users before asking for commitment. Free value can take many forms: fully client-side tools (WASM, Web Workers, local ML models), free API-backed features, limited free tiers, ungated utilities, downloadable resources, or any experience that lets users see the product's value with minimal friction. The reference patterns in references/trust-patterns.md are drawn from real-world examples in the Mean Weasel / Neonwatty portfolio, but the analysis approach works for any web app.
Task List Integration
CRITICAL: Use TaskCreate, TaskUpdate, and TaskList tools throughout execution.
| Task | Purpose |
|---|---|
| Main task | Trust Builder Audit — tracks overall progress |
| Interview: User Context | Brief interview about audience, monetization, trust goals |
| Explore: Codebase Architecture | Agent: features, tech stack, server vs. client capabilities |
| Explore: Live App Experience | Agent: browser exploration of current UX, free offerings, friction |
| Explore: Technology Opportunities | Agent: cross-reference app domain against known free-value patterns |
| Generate: Opportunity Report | Synthesize findings into prioritized opportunities with mini-specs |
| Verify: Competitive Landscape | Agent: browser exploration of competitor apps (optional) |
| Approval: User Review | User reviews findings before final write |
| Write: Report | Final report output |
Session Recovery
At skill start, call TaskList. If a Trust Builder Audit task exists in_progress, check sub-task states and resume from the appropriate phase.
| Task State | Resume Action |
|---|---|
| No tasks exist | Fresh start (Phase 1) |
| Main in_progress, no explore tasks | Start Phase 2 |
| Some explore tasks complete | Spawn remaining agents |
| All explore complete, no generate | Start Phase 3 |
| Generate complete, no verify | Check Interview metadata for Phase 4 preference — start Phase 4 if opted in, otherwise Phase 5 |
| Verify complete, no approval | Start Phase 5 |
| Approval in_progress | Re-present summary |
| Approval approved, no write | Start Phase 6 |
| Main completed | Show final summary |
Process
Phase 1: Assess Current State & Interview
Create main task and mark in_progress. Create Interview task.
- Identify the app's tech stack, framework, hosting, and deployment surfaces
- Check for existing free offerings, pricing pages, or freemium patterns
- Ask the user for the app's base URL for live browser exploration. If the app isn't deployed yet, skip the Live App Experience agent in Phase 2 and rely on codebase analysis only.
- Read references/interview-questions.md now. Ask the user targeted questions via AskUserQuestion:
- Who is the primary audience for this app?
- What's the current or planned monetization model?
- What does "trust" mean for your users? (e.g., privacy, accuracy, reliability, transparency)
- Is there a specific conversion funnel you're optimizing for?
- Are there any constraints on what you can offer for free? (e.g., API costs, compute limits)
- Ask if the user wants competitive verification (Phase 4) — exploring competitor/comparable apps to validate that proposed free features are differentiated
- Record answers (including base URL and Phase 4 preference) in Interview task metadata
- Mark Interview task completed
Phase 2: Explore the Application [DELEGATE TO AGENTS]
Create exploration tasks and spawn agents in parallel (all in a single message). If no base URL was provided in Phase 1, skip the Live App Experience agent.
| Agent | Focus | Key Outputs |
|---|---|---|
| Codebase Architecture | Map all features, identify which are server-dependent vs. could work client-side, find existing free/ungated features, review business docs (PRD, business-rules, pricing) | Feature map with server/client classification, existing free offerings list, tech stack capabilities |
| Live App Experience | Visit the live app as a first-time user via Chrome MCP — what's free? what requires signup? what friction exists? what trust signals are visible (privacy messaging, open source badges, methodology disclosure)? what's the onboarding experience? (Skip if no base URL) | First-time user experience report, friction map, existing trust signal inventory |
| Technology Opportunities | Explore the codebase (package.json, app routes, imports) and cross-reference the app's domain and feature set against the technology catalog in references/technology-catalog.md — what free-value patterns are feasible given the app's domain? |
Opportunity candidates with feasibility notes |
See references/agent-prompts.md for full agent prompt templates.
After all agents return, synthesize into a trust opportunity map — unified view of what free value is possible, what's already offered, and where the gaps are.
Phase 3: Generate & Prioritize Opportunities
Create Generate task and mark in_progress. Read references/trust-patterns.md now — use these proven patterns to inspire opportunity generation.
For each promising opportunity from the trust opportunity map, give it a concise name and generate a mini-spec with 5 fields:
- Trust rationale — why this builds trust with THIS app's specific audience
- What users get free — the specific experience or value delivered
- Technical approach — specific libraries, APIs, architecture, implementation outline
- Funnel to paid — how this free offering connects to the paid product (if applicable)
- Complexity — Low / Medium / High with brief justification
Then assign each opportunity to a priority tier:
| Priority | Criteria |
|---|---|
| Must-build | High trust impact, low-medium effort, directly serves primary audience |
| Should-build | High impact but higher effort, or medium impact with low effort |
| Nice-to-have | Lower impact or high effort, but strategically interesting |
| Backlog | Good ideas that need more validation or are premature for current app state |
Phase 4: Competitive Verification (Optional — based on Phase 1 preference)
If the user opted into competitive verification in Phase 1, read references/verification-prompts.md and spawn a browser agent to:
- Visit competitor or comparable apps to see what free offerings exist in the same space
- Validate that proposed free features don't already exist elsewhere (avoiding "me too" offerings)
- Test any existing free features on the user's app for quality and friction
Phase 5: Review with User (REQUIRED)
Present summary: total opportunities found, top 3 by priority, coverage of trust dimensions.
Use AskUserQuestion with options: Approve / Deep-dive on specific opportunities / Re-run with different focus / Add ideas I have
If changes requested, iterate. Only write final report after explicit approval.
Phase 6: Write Report and Complete
Read references/report-structure.md and examples/trust-builder-example.md now — follow the template exactly and match the example's depth.
Write the approved report to /reports/trust-builder-audit.md. Mark all tasks completed.
Final summary:
## Trust Builder Audit Complete
**File:** /reports/trust-builder-audit.md
**App:** [App Name]
**Audience:** [Primary audience]
**Opportunities found:** [count] ([must-build] must-build, [should-build] should-build, [nice-to-have] nice-to-have, [backlog] backlog)
### Top Opportunities
[Top 3 by priority with one-line descriptions]
### Current Trust Posture
- Existing free offerings: [count]
- Trust signals present: [count]/5
- Biggest gap: [description]
### Recommended Next Steps
[Implementation order for must-build opportunities]
Reference Materials
Load these files only when you reach the phase that needs them — not all at once.
| Phase | File | What it provides |
|---|---|---|
| Phase 1 | references/interview-questions.md | Full question set with options, rationale, and how answers inform analysis |
| Phase 2 | references/agent-prompts.md | Copy-paste agent prompts for all three exploration agents |
| Phase 2 | references/technology-catalog.md | Loaded by the Technology Opportunities agent, not by you directly |
| Phase 3 | references/trust-patterns.md | Proven patterns to inspire opportunity generation — read before writing mini-specs |
| Phase 4 | references/verification-prompts.md | Competitive verification agent prompt (only if Phase 4 runs) |
| Phase 6 | references/report-structure.md | Full report template — read before writing the final report |
| Phase 6 | examples/trust-builder-example.md | Complete example report showing expected output quality |
More from neonwatty/qa-skills
playwright-runner
Executes workflow markdown files interactively via Playwright CLI, stepping through each workflow action in a real browser. Use when the user says "run workflows", "run playwright", "test workflows", "execute workflows", or wants to interactively test their app against workflow documentation. Supports desktop, mobile, and multi-user workflows with authentication.
5keyword-wedge
Analyzes an app's codebase and cross-references Google Search Console, PostHog, and Google Keyword Planner to identify low-competition keyword footholds and track expansion into adjacent terms. This skill should be used when the user says "keyword wedge", "find keyword opportunities", "seo analysis", "keyword strategy", "find search wedges", "keyword research for my app", "grow organic traffic", "what keywords should I target", "SEO for my app", "organic search strategy", or "how to rank higher". Generates markdown and HTML reports and maintains state across runs for expansion tracking.
4submit-learnings
Filters and submits accumulated QA learnings as a GitHub issue (with optional PR) on the plugin repo. Use when the user says "submit learnings", "share learnings", "report learnings upstream", or "open issue for learnings".
4review-learnings
Synthesizes accumulated QA learnings from .qa-learnings/ledger.md into prioritized, actionable plugin improvements. Use when the user says "review learnings", "what have we learned", "improve the plugin", "learnings report", or "synthesize QA feedback".
4use-profiles
Load saved Playwright storageState authentication profiles before browser automation. Activates when `.playwright/profiles.json` exists and browser work begins on authenticated pages. Trigger phrases include "use profile", "load profile", "browser as [role]", "authenticated browser", "logged in browser session".
4adversarial-audit
Audits SaaS and usage-based web apps for adversarial usage patterns — accidental, opportunistic, and deliberate. Use this when the user says "adversarial audit", "abuse case audit", "idiot-proof this app", "find usage exploits", "business logic audit", or "how could users break this". Explores the codebase to map the economic surface area (pricing tiers, usage limits, free trials, costly resources), then generates abuse cases where user behavior — intentional or not — could break assumptions, bypass limits, amplify costs, or corrupt state. Produces a prioritized markdown report with findings, code locations, and fix recommendations, then optionally verifies findings interactively in a browser.
4