# Coordination
Inline. Holds context. Thinks before every action.
## Workflow
```
P0: Ingest scope
  ↓
P1: Recon + read source code → write attack-chain.md + create experiments.md header
  ↓
┌→ P2:  Think — read chain + experiments.md, dedup, design 1-2 experiments
│  P2b: Research (conditional) — see reference/creative-research.md
│  P3:  Execute — spawn 1-2 executors with CHAIN_CONTEXT [+ RESEARCH_BRIEF]
│  P4:  Integrate — read results, update chain, revise theory
│       No progress 2 batches → P4b
│       Goal → P5
└─ loop (max 30 experiments)

P4b: Reset — re-read all recon + source + chain. Creative Research (MANDATORY). Fresh theory.
P5:  Validate + Report
```
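Read as pseudocode, the loop reduces to roughly this (a sketch only; the helper names are illustrative, the caps and phase labels come from the diagram above):

```python
# Pseudocode sketch of the coordination loop; every helper is a placeholder
ingest(scope)                              # P0
recon_and_read_source()                    # P1: writes attack-chain.md + experiments.md header
stuck_batches = 0
while experiments_run() < 30 and not goal_reached():
    theory = think()                       # P2: read chain + experiments.md, dedup
    brief = do_research() if research_trigger(theory) else None  # P2b (conditional)
    results = run_batch(theory, brief)     # P3: spawn 1-2 executors
    progress = integrate(results)          # P4: update chain, revise theory
    stuck_batches = 0 if progress else stuck_batches + 1
    if stuck_batches >= 2:
        reset_and_research()               # P4b: re-read everything, mandatory research
        stuck_batches = 0
validate_and_report()                      # P5
```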
## Steps
- Recon + Source Code — read all accessible source code (see formats/reconnaissance.md). Create `{OUTPUT_DIR}/experiments.md` with a header row (see format below).
- Think — write theory + next step to attack-chain.md.
- Test — 1-2 executors per batch; integrate before the next.
- Validate — validators per finding (see skills/coordination/reference/VALIDATION.md).
- Report — validated findings in `{OUTPUT_DIR}/artifacts/validated/` → Transilience PDF via formats/transilience-report-style/SKILL.md (MANDATORY).
## attack-chain.md
At {OUTPUT_DIR}/attack-chain.md. Updated every batch. Sections: services, surface, theory, tested, next.
Keep it terse — bullet points, no prose.
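A minimal skeleton using the five sections named above (the bullet contents here are purely illustrative, not prescribed):

```markdown
# Attack Chain
## Services
- 80/tcp nginx, 22/tcp OpenSSH
## Surface
- /search reflects q param
## Theory
- blind SQLi in /search filter
## Tested
- E-001 UNION probe → fail (403)
## Next
- E-002 time-based probe, encoded payload
```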
## experiments.md
At {OUTPUT_DIR}/experiments.md. Append-only table — never prune, never rewrite. Format: formats/logs.md.
- P1: create header. P2: read → dedup → append row (result=pending). Executor updates on completion.
- Dedup: skip if same technique + target exists unless parameters differ meaningfully.
- 3-strike: `count(technique, result=fail) >= 3` → triggers rule 12. A sketch of both checks follows.
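A sketch of the dedup and 3-strike checks, assuming experiments.md is a pipe-delimited markdown table with technique, target, params, and result columns (the authoritative column set lives in formats/logs.md):

```python
def parse_rows(path):
    """Parse the markdown table: header row, |---| separator, then data rows."""
    lines = [ln.strip() for ln in open(path) if ln.strip().startswith("|")]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    return [dict(zip(header, (c.strip() for c in row.strip("|").split("|"))))
            for row in lines[2:]]

def is_duplicate(rows, technique, target, params):
    # Dedup rule: skip if same technique + target already exists,
    # unless parameters differ meaningfully
    return any(r["technique"] == technique and r["target"] == target
               and r["params"] == params for r in rows)

def three_strike(rows, technique):
    # count(technique, result=fail) >= 3 -> triggers rule 12 (stop, rethink)
    return sum(r["technique"] == technique and r["result"] == "fail"
               for r in rows) >= 3
```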
## tools/
Executors log every significant tool invocation to {OUTPUT_DIR}/tools/{NNN}_{tool}.md with input + output. See formats/logs.md.
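A minimal writer for that convention might look like this (the section layout inside the file is an assumption; formats/logs.md is authoritative):

```python
from pathlib import Path

def log_tool_call(output_dir: str, nnn: int, tool: str,
                  tool_input: str, tool_output: str) -> Path:
    """Write one invocation to {OUTPUT_DIR}/tools/{NNN}_{tool}.md."""
    tools_dir = Path(output_dir) / "tools"
    tools_dir.mkdir(parents=True, exist_ok=True)
    path = tools_dir / f"{nnn:03d}_{tool}.md"
    # Section names below are an assumption; only input + output are required
    path.write_text(f"## Input\n{tool_input}\n\n## Output\n{tool_output}\n")
    return path
```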
## Creative Research (P2b)
Triggers — research when ANY of:
- P4b reset (mandatory)
- 3-strike stuck detection fires (rule 12)
- New tech/framework discovered not in mounted skills
- No clear hypothesis at P2
Method: follow reference/creative-research.md. Synthesize model knowledge + online sources + skill cross-reference into a RESEARCH_BRIEF (max 10 lines) appended to executor prompt.
Do NOT research every batch. Most batches skip P2b entirely.
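When P2b does fire, the brief stays short and tagged by source, e.g. (content purely illustrative; the [model]/[web]/[skills] tags mirror the spawning example below):

```
RESEARCH_BRIEF:
- [model] target framework caches objects via pickle; a writable cache store implies RCE
- [web] vendor advisory confirms the deserialization path in the deployed version range
- [skills] payload construction steps: see the mounted deserialization skill
```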
## Spawning
Consult reference/context-injection.md before building any agent prompt.
```python
executor = Read("skills/coordination/reference/executor-role.md")
chain = Read(f"{output_dir}/attack-chain.md")
experiments = Read(f"{output_dir}/experiments.md")

# Optional: set only if P2b produced a brief; otherwise leave empty
research = ""  # e.g. "RESEARCH_BRIEF:\n- [model] ...\n- [web] ...\n- [skills] ..."

# 1-2 executors per batch — pass only the relevant PATT_URL, not the full map
Agent(prompt=f"{executor}\nMISSION_ID: m-001\nEXPERIMENT_ID: E-001\n"
             f"CHAIN_CONTEXT: {chain}\nEXPERIMENTS: {experiments}\n"
             f"OBJECTIVE: ...\nSKILL_FILES: ...\nPATT_URL: ...\nOUTPUT_DIR: {output_dir}\n"
             f"{research}",
      description="Blind SQLi /search", run_in_background=True)

# Wait. Read results. Think. Update attack-chain.md. THEN spawn the next batch.

# Validators — one per finding (BLIND REVIEW — see context-injection.md)
validator = Read("skills/coordination/reference/validator-role.md")
Agent(prompt=f"{validator}\nfinding_id: F-001\n"
             f"FINDING_DIR: {output_dir}/findings/finding-001/\n"
             f"TARGET_URL: ...\nOUTPUT_DIR: {output_dir}/artifacts",
      run_in_background=True)

# After all validators complete:
# 1. Read artifacts/validated/ and artifacts/false-positives/
# 2. Verify each validated finding has findings/{id}/evidence/validation/validation-summary.md
# 3. Flag any finding that passed validation but has no proof
```
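The three-step sweep in the closing comments can be made concrete. A minimal sketch, assuming validated finding IDs appear as entries under artifacts/validated/ and that each entry name maps directly to a findings/{id}/ directory:

```python
from pathlib import Path

def unproven_findings(output_dir: str) -> list[str]:
    """Return validated finding IDs lacking a validation-summary.md proof."""
    out = Path(output_dir)
    flagged = []
    for entry in (out / "artifacts" / "validated").iterdir():
        fid = entry.stem  # assumes one file or dir per validated finding ID
        proof = (out / "findings" / fid / "evidence"
                 / "validation" / "validation-summary.md")
        if not proof.exists():
            flagged.append(fid)  # passed validation but has no proof; flag it
    return flagged
```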
Pass only the relevant PATT_URL for this mission, not the full URL map.
## Roles
| Role | File | Context |
|---|---|---|
| Executor | reference/executor-role.md | Full chain + skills |
| Validator | reference/validator-role.md | Evidence only (blind) |
See reference/context-injection.md for what each role receives and what is withheld.
## Rules
- Autonomous. Never ask user.
- Think before acting. Write reasoning to attack-chain.md before every batch.
- Max 1-2 executors per batch. Recon can use more.
- Source code first. Understanding beats guessing.
- Pass chain context + specific PATT_URL to executors.
- 30 experiment cap.
- Stuck 2 batches → re-read everything, fresh theory.
- All output to OUTPUT_DIR.
- Report gate: validated findings exist → PDF report required. Read formats/transilience-report-style/pentest-report.md.
- After validators complete, verify each validated finding has evidence/validation/validation-summary.md. Flag any that passed without proof.
- Sequential flag progression. In multi-flag challenges (HTB machines), secure each flag before attempting the next. The user-flag path often provides the foothold needed for root.
- 3-strike stuck detection. If experiments.md shows >= 3 fail rows for the same technique, STOP. Write to attack-chain.md: (a) why it's failing, (b) is this path fundamentally blocked, (c) alternative paths. Do NOT continue retrying.
- Read before calling library internals. Before writing Python against any library's internal API (Impacket, ldap3, pyasn1), read the relevant source file first. Never guess function signatures. Prefer CLI tools (secretsdump.py, ticketer.py, getST.py) over raw API calls.
- Background command discipline. Before spawning a background command, state what specific result it will produce. No speculative tunnels, relays, or listeners without a concrete plan to use them.
- Creative Research triggers: P4b (mandatory), 3-strike stuck, new tech discovered, no hypothesis. Follow reference/creative-research.md. Max 3 WebSearch + 2 WebFetch per cycle.
## Token Discipline
- Internal output (chain, logs, reports): terse. Bullets, not paragraphs.
- Executor prompts: include only relevant skill files and PATT URL, not everything.
- Don't inject patt-fetcher/SKILL.md into executor prompts. Pass only the relevant PATT_URL.
- Don't inject skill files the executor won't use. Pick the 1-2 most relevant.
- attack-chain.md: max 50 lines. Prune old tested items to one-liners.
- User-facing output (reports, summaries): detailed and professional.
## References
reference/ATTACK_INDEX.md · reference/OUTPUT_STRUCTURE.md · reference/VALIDATION.md · reference/GIT_CONVENTIONS.md · reference/context-injection.md · reference/creative-research.md · formats/INDEX.md