# Security Threat Model

## Overview
Deliver an actionable, AppSec-grade threat model specific to a repository or project path, anchoring every architectural claim to evidence in the codebase and stating all assumptions explicitly.
Core principle: Every component, data store, endpoint, and flow must be derived from actual codebase analysis — not generic assumptions.
## Eight-Step Workflow

### Step 1: Scope and Extract
Identify from the repository:
- Components and services
- Data stores and their sensitivity
- External integrations and dependencies
- Runtime entrypoints (network listeners, APIs, CLI, webhooks)
- Explicit out-of-scope items (document these)
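The extraction step can be partially automated. The following sketch greps a repo for common entrypoint declarations; the pattern list and file extensions are illustrative assumptions, not a complete inventory, and hits still need manual triage:

```python
import re
from pathlib import Path

# Illustrative patterns for common entrypoint declarations.
# These are assumptions for a Python/Go/JS stack; extend per project.
ENTRYPOINT_PATTERNS = {
    "http_route": re.compile(r"@(app|router)\.(get|post|put|delete|route)\("),
    "network_listener": re.compile(r"\.listen\(|bind\(\(|ListenAndServe"),
    "cli_command": re.compile(r"argparse\.ArgumentParser|@click\.command"),
    "webhook": re.compile(r"webhook", re.IGNORECASE),
}

def scan_entrypoints(repo_root: str) -> list[tuple[str, int, str]]:
    """Return (path, line_number, kind) for lines matching entrypoint patterns."""
    hits = []
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".go", ".js", ".ts"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for kind, pattern in ENTRYPOINT_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, kind))
    return hits
```

Each hit becomes an evidence anchor (file path plus line number) for the entrypoint inventory.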
### Step 2: Derive Boundaries, Assets, Entry Points
Map trust boundaries between components, documenting:
- Protocol used
- Authentication mechanism
- Encryption in transit/at rest
- Input validation present
- Rate limiting applied
List assets driving risk. Identify all entry points attackers could reach.
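For example, a single boundary record (component names and findings here are hypothetical) might read:

```markdown
Boundary: web frontend -> billing API
- Protocol: HTTPS (TLS 1.3)
- Authentication: service-to-service JWT
- Encryption: TLS in transit; at-rest encryption unverified (assumption)
- Input validation: JSON schema at the gateway
- Rate limiting: none observed (gap)
```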
### Step 3: Calibrate Assets and Attacker Capabilities
Define realistic attacker goals based on actual exposure. Note non-capabilities explicitly — this prevents inflated severity assessments and keeps the model credible.
### Step 4: Enumerate Threats as Abuse Paths
Map threats to assets and trust boundaries:
- Data exfiltration paths
- Privilege escalation vectors
- Integrity compromise opportunities
- Denial-of-service surfaces
Keep the count small but high quality. Generic threats add noise; specific abuse paths add value.
### Step 5: Prioritize with Likelihood and Impact
Use qualitative reasoning (low / medium / high) with short justifications. Set overall priority (critical / high / medium / low) adjusted for existing controls.
Reference thresholds:
- Critical/High: pre-auth RCE, auth bypass, cross-tenant access, sensitive data exfiltration, key/token theft, model/config integrity compromise, sandbox escape
- Medium: targeted DoS of critical components, partial data exposure, rate-limit bypass with measurable impact, log poisoning affecting detection
- Low: low-sensitivity info leaks, noisy DoS with easy mitigation, issues requiring unlikely preconditions
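The qualitative reasoning above can be sketched as a simple lookup. The matrix values below are one plausible calibration, and the one-level discount for an existing control is an assumption to tune per project:

```python
LEVELS = ["low", "medium", "high", "critical"]

# One plausible likelihood x impact calibration (an assumption, not a standard).
PRIORITY_MATRIX = {
    ("high", "high"): "critical",
    ("high", "medium"): "high",
    ("medium", "high"): "high",
    ("medium", "medium"): "medium",
    ("low", "high"): "medium",
    ("high", "low"): "medium",
    ("low", "medium"): "low",
    ("medium", "low"): "low",
    ("low", "low"): "low",
}

def priority(likelihood: str, impact: str, existing_controls: bool = False) -> str:
    """Map qualitative likelihood/impact to priority, discounted for controls."""
    p = PRIORITY_MATRIX[(likelihood, impact)]
    if existing_controls and p != "low":
        p = LEVELS[LEVELS.index(p) - 1]  # drop one level when a control mitigates
    return p
```

Keeping the matrix explicit makes the "short justification" for each rating auditable.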
### Step 6: Validate with User
Before finalizing:
- Summarize key assumptions made
- Ask 1–3 targeted clarification questions
- Pause for feedback
### Step 7: Recommend Mitigations
Distinguish existing controls from recommended ones. Tie each recommendation to:
- A specific file path or component
- The control type (validation, auth, encryption, rate limiting, etc.)
- An implementation hint
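A recommendation entry meeting all three requirements (paths and component names here are hypothetical) could look like:

```markdown
- Recommended: add per-IP rate limiting to the login handler in `src/api/auth.py`
  (control type: rate limiting; hint: token-bucket middleware at the gateway)
```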
### Step 8: Quality Check
Confirm before delivering:
- All entrypoints enumerated
- All trust boundaries mapped
- Runtime vs CI/build tooling separated
- User clarifications incorporated
- Assumptions documented
- Output matches the required template
## Required Output Format
Write the final threat model to `<repo-or-dir-name>-threat-model.md` with these sections:
- Executive summary — Top risk themes and highest-risk areas
- Scope and assumptions — In-scope paths, explicit assumptions, open questions
- System model — Primary components, data flows, trust boundaries, Mermaid diagram
- Assets and security objectives — Table mapping assets to CIA goals
- Attacker model — Capabilities and non-capabilities
- Entry points and attack surfaces — Table with surfaces, boundaries, evidence
- Top abuse paths — 5–10 numbered attack sequences
- Threat model table — TM-001, TM-002… with source, prereqs, action, impact, controls, gaps, mitigations, detection, priority
- Criticality calibration — Definitions with examples
- Focus paths for security review — Table linking repo paths to threat IDs
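An illustrative threat-table row (all values hypothetical):

```markdown
| ID | Source | Prereqs | Action | Impact | Controls | Gaps | Mitigations | Detection | Priority |
|----|--------|---------|--------|--------|----------|------|-------------|-----------|----------|
| TM-001 | Unauthenticated internet user | Exposed login endpoint | Credential stuffing | Account takeover | Password hashing | No rate limit | Rate limiting, MFA | Auth-failure alerting | High |
```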
## Mermaid Diagram Constraints
- Use `flowchart TD` or `flowchart LR`
- Only `-->` arrows
- No `title` or `style` directives
- Node IDs: letters, numbers, underscores only; labels in quotes
- Edge labels: plain words and spaces only
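A minimal diagram satisfying all of these constraints (components are hypothetical):

```mermaid
flowchart TD
    client["Browser Client"] -->|sends request| api["API Gateway"]
    api -->|reads and writes| db["User Database"]
```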
## Key Constraints
- Evidence anchors are mandatory for every major architectural claim (file path, config key, or code snippet)
- Redact any credentials or tokens encountered — describe only their presence and location
- Separate attacker-controlled inputs from operator-controlled and developer-controlled inputs
- Do not finalize the report before user validates assumptions (Step 6)