API Exploit Prover

When To Use

Use this after discovery identifies candidate API weaknesses.

Inputs

  • candidate_findings
  • target_base_url
  • auth_and_role_context
  • test_data_or_seed_objects
  • constraints (noise limits, forbidden write actions)

Confidence Model

  • C0: hypothesis only
  • C1: suspicious signal
  • C2: reproducible behavior anomaly
  • C3: exploit primitive proven
  • C4: business impact proven

Execution Workflow

Phase 1: Reproduction Baseline

  1. Replay original request as control.
  2. Capture stable baseline across repeated requests.
  3. Validate request preconditions (auth, ownership, object existence).
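The baseline step above can be sketched as a small helper. This is a minimal illustration, not a fixed implementation: `send_request` is a hypothetical callable standing in for an HTTP client call against `target_base_url`, and the stability check is simply "every replay produced the same fingerprint".

```python
import hashlib

def capture_baseline(send_request, runs=3):
    """Replay the control request `runs` times and record a fingerprint
    (status code + body hash) per attempt. The baseline counts as
    'stable' only if every replay produced the same fingerprint.

    `send_request` is a hypothetical callable returning
    (status_code, body_bytes); in practice it wraps an HTTP client call.
    """
    fingerprints = []
    for _ in range(runs):
        status, body = send_request()
        fingerprints.append((status, hashlib.sha256(body).hexdigest()))
    return {
        "attempts": fingerprints,
        "stable": len(set(fingerprints)) == 1,
    }
```

An unstable baseline is a signal to pause: variance here will contaminate every later comparison.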

Phase 2: Alternative Technique Check

  1. Re-test using a different technique than the one that produced the original lead.
  2. Vary payload shape and transport encoding.
  3. Confirm behavior survives minor variance.
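Step 2 above (varying payload shape and transport encoding) can be sketched as a variant generator. The two encodings shown are illustrative assumptions; real coverage would add more (multipart, nested JSON, array wrapping, and so on).

```python
import json
from urllib.parse import urlencode

def transport_variants(payload: dict):
    """Yield (content_type, encoded_body) pairs that carry the same
    semantic payload, so the lead can be re-tested over an encoding
    other than the one that originally produced it."""
    yield "application/json", json.dumps(payload).encode()
    yield "application/x-www-form-urlencoded", urlencode(payload).encode()
```

If the anomaly only reproduces under one encoding, that is itself evidence worth recording: it points at a parser-path difference rather than a core logic flaw.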

Phase 3: Impact Escalation

  1. Attempt controlled state change or unauthorized data access.
  2. Test cross-tenant and cross-role boundaries where legal.
  3. Validate whether impact persists after session/token refresh.
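The persistence check in step 3 reduces to "exploit, rotate credentials, exploit again". A minimal sketch, where `run_exploit` and `refresh_session` are hypothetical callables supplied by the operator:

```python
def impact_persists_after_refresh(run_exploit, refresh_session):
    """Run the exploit, rotate the session/token, and run it again.
    Impact counts as persistent only if both attempts succeed.

    `run_exploit` returns a truthy value on success;
    `refresh_session` rotates credentials in place."""
    first = run_exploit()
    refresh_session()
    second = run_exploit()
    return bool(first and second)
```

If the second attempt fails, downgrade the finding: the impact may depend on a stale permission snapshot rather than a durable flaw.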

Phase 4: Confounder Elimination

  1. Rule out caching and stale object state.
  2. Rule out test-environment race artifacts.
  3. Rule out expected business behavior incorrectly interpreted as vulnerability.
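For step 1, caching is ruled out by making each replay look like a fresh resource to any intermediary. A sketch over a plain request dict (the dict shape here is an assumption, not a schema):

```python
import uuid

def cache_busted(request):
    """Return a copy of a request dict with caching ruled out: a
    no-cache header plus a unique query parameter so intermediary
    caches cannot serve a stale response for the replay."""
    req = dict(request)
    headers = dict(req.get("headers", {}))
    headers["Cache-Control"] = "no-cache"
    req["headers"] = headers
    sep = "&" if "?" in req["path"] else "?"
    req["path"] = f"{req['path']}{sep}cb={uuid.uuid4().hex}"
    return req
```

The original request dict is left untouched, so the control and the cache-busted replay can be compared side by side.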

Phase 5: Classification

  1. Mark confirmed only when both the exploit and its impact are replayable.
  2. Mark disputed when a mitigation or expected business behavior is proven.
  3. Mark inconclusive when blockers prevent a decision.
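The three rules above can be written as a small decision function. A minimal sketch; the boolean inputs are assumptions about what earlier phases established:

```python
def classify(exploit_replayable, impact_replayable,
             mitigation_proven, blocked):
    """Phase 5 rules: 'confirmed' requires both exploit and impact to
    be replayable; 'disputed' requires a proven mitigation or expected
    behavior; anything blocked, or anything short of full replayable
    proof, falls back to 'inconclusive'."""
    if blocked:
        return "inconclusive"
    if mitigation_proven:
        return "disputed"
    if exploit_replayable and impact_replayable:
        return "confirmed"
    return "inconclusive"
```

Note the deliberate asymmetry: a replayable exploit without replayable impact stays inconclusive rather than confirmed.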

Technique Rules by Vulnerability Type

  • BOLA/BFLA: must show an unauthorized object access or action using a foreign identifier.
  • Injection: must show a parser/engine effect beyond literal handling.
  • Mass assignment: must show unauthorized field control and persisted impact.
  • SSRF: must prove an outbound request, control over the target, or metadata access.
  • Rate abuse: must show bypass of the intended limit with practical impact.

Evidence Requirements

  • Exact request and response pairs.
  • Reproduction count and variance notes.
  • Auth role used in each attempt.
  • Clear impact statement tied to observable effect.
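The requirements above suggest one record shape per piece of evidence. A minimal sketch; the field names are illustrative, not a fixed schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class EvidenceRecord:
    """One evidence entry covering the requirements above: the exact
    request/response pair, the auth role used for the attempt, the
    reproduction count with variance notes, and an impact statement
    tied to an observable effect."""
    request: str
    response: str
    auth_role: str
    reproduction_count: int
    variance_notes: str = ""
    impact_statement: str = ""
```

Keeping evidence as structured records makes it trivial to serialize into the `evidence` array of the output contract.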

Output Contract

{
  "confirmed_findings": [],
  "disputed_findings": [],
  "inconclusive_findings": [],
  "evidence": [],
  "confidence": []
}
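A report can be checked against this contract before it is emitted. A minimal sketch, assuming the contract means exactly these five keys, each holding a list:

```python
# The five keys of the output contract above.
REQUIRED_KEYS = {"confirmed_findings", "disputed_findings",
                 "inconclusive_findings", "evidence", "confidence"}

def valid_output(report: dict) -> bool:
    """Check that a report exposes exactly the contract's keys and
    that every value is a list, matching the empty template above."""
    return set(report) == REQUIRED_KEYS and all(
        isinstance(v, list) for v in report.values())
```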

Failure Modes

  • Single-shot confirmation without retest.
  • Treating error differences as exploit proof.
  • Claiming impact without business-context validation.

Exit Criteria

  • Every finding has final status and explicit reason.
  • Confirmed findings include replayable impact proof.
  • Inconclusive findings list their blockers and what would unblock them.

Detailed Operator Notes

Reproducibility Standard

  • Replay each confirmed case in a fresh session.
  • Replay with at least one payload or transport variant.
  • Keep one negative control request for every positive claim.

False-Positive Controls

  • For timing signals, compare against matched control payloads.
  • For authz signals, verify with ownership-correct and ownership-incorrect objects.
  • For parser signals, verify semantic effect, not just error shape changes.
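The timing control above can be sketched as a simple comparison of matched series. This is a heuristic threshold check, not a statistical test, and the 100 ms floor is an illustrative assumption:

```python
import statistics

def timing_signal(control_ms, test_ms, min_gap_ms=100.0):
    """Compare a test timing series against matched control timings.
    Treat the signal as real only when the median gap exceeds both a
    fixed floor and a multiple of the observed control jitter, so
    ordinary network variance cannot masquerade as an oracle."""
    gap = statistics.median(test_ms) - statistics.median(control_ms)
    jitter = max(control_ms) - min(control_ms)
    return gap > max(min_gap_ms, 3 * jitter)
```

In practice, both series should be collected interleaved in the same session so environmental drift affects control and test equally.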

Severity Calibration Inputs

  • Required attacker privilege.
  • Cross-tenant or single-tenant impact.
  • Ability to automate at scale.
  • Degree of data sensitivity.

Reporting Rules

  • Include exact request signatures (method, path, key headers, payload hash).
  • Include verification run count and consistency notes.
  • Include why alternative explanations were rejected.
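The first reporting rule can be sketched directly: a signature built from the method, path, selected key headers, and a payload hash. The output shape is illustrative:

```python
import hashlib

def request_signature(method, path, key_headers, payload: bytes):
    """Build the exact request signature the reporting rules call for:
    normalized method, path, the selected key headers (sorted for
    stable output), and a SHA-256 hash of the payload bytes."""
    return {
        "method": method.upper(),
        "path": path,
        "headers": dict(sorted(key_headers.items())),
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
    }
```

Hashing the payload rather than embedding it keeps sensitive test data out of the report while still letting a reviewer verify an exact replay.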

Conditional Decision Matrix

  • Condition: endpoint undocumented but reachable. Action: add to inventory and prioritize authz checks. Evidence: request/response baseline plus auth behavior.
  • Condition: auth behavior inconsistent across methods. Action: split tests by method and content type. Evidence: per-method status and body signatures.
  • Condition: time-based anomaly only. Action: run a matched control timing series. Evidence: repeated control/test timing traces.
  • Condition: object access differs by role. Action: escalate to cross-tenant/cross-role checks. Evidence: role-tagged replay proof.
  • Condition: validation differs by parser. Action: run semantically equivalent content-type tests. Evidence: parser-path differential evidence.

Advanced Coverage Extensions

  1. Add negative-object tests for soft-deleted or archived resources.
  2. Add replay-window tests for idempotency and duplicate processing.
  3. Add bulk endpoint abuse tests for partial authorization failures.
  4. Add asynchronous job handoff checks for stale permission snapshots.
  5. Add pagination/filter abuse checks for hidden data exposure.