HackerOne Bug Bounty Hunting
Automates HackerOne workflows: scope parsing → parallel testing → PoC validation → submission reports.
Quick Start
1. Input: HackerOne program URL or CSV file
2. Parse scope and program guidelines
3. Deploy Pentester agents in parallel (one per asset)
4. Validate PoCs (poc.py + poc_output.txt required)
5. Generate HackerOne-formatted reports
Workflows
Option 1: HackerOne URL
- [ ] Fetch program data and guidelines
- [ ] Download scope CSV
- [ ] Parse eligible assets
- [ ] Deploy agents in parallel
- [ ] Validate PoCs
- [ ] Generate submissions
Option 2: CSV File
- [ ] Parse CSV scope file
- [ ] Extract eligible_for_submission=true assets
- [ ] Collect program guidelines
- [ ] Deploy agents
- [ ] Validate and generate reports
Scope CSV Format
Expected columns:
- identifier - Asset URL/domain
- asset_type - URL, WILDCARD, API, CIDR
- eligible_for_submission - Must be "true"
- max_severity - critical, high, medium, low
- instruction - Asset-specific notes
Use tools/csv_parser.py to parse.
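The parser's exact interface isn't documented here; a minimal sketch of the filtering it implies, using only the standard library and the columns listed above (the function name and usage line are illustrative):

```python
import csv

def parse_scope(csv_path: str) -> list[dict]:
    """Return in-scope assets: rows where eligible_for_submission is 'true'."""
    with open(csv_path, newline="") as f:
        rows = csv.DictReader(f)
        return [
            row for row in rows
            if row.get("eligible_for_submission", "").strip().lower() == "true"
        ]

# Example (hypothetical): assets = parse_scope("scope.csv")
# Each dict then exposes identifier, asset_type, max_severity, instruction.
```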
Agent Deployment
Coordinator per asset — spawned inline using role prompts:
```
coordinator_role = Read("skills/coordination/SKILL.md")
Agent(
    prompt=f"{coordinator_role}\n\nTARGET: {asset_url}\nSCOPE: {program_guidelines}\nOUTPUT_DIR: ...",
    run_in_background=True,
)
```
Parallel Execution:
- 10 assets = 10 coordinator agents in parallel
- Each spawns executor agents from skills/coordination/reference/executor-role.md
- Time: 2-4 hours vs 20-40 hours sequential
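A sketch of this fan-out, reusing the Read/Agent primitives from the snippet above; the assets list comes from the scope parser, and the per-asset OUTPUT_DIR naming is an assumed convention:

```
coordinator_role = Read("skills/coordination/SKILL.md")

# assets: in-scope rows from tools/csv_parser.py; output_dir layout below is assumed
for i, asset in enumerate(assets, start=1):
    Agent(
        prompt=(
            f"{coordinator_role}\n\n"
            f"TARGET: {asset['identifier']}\n"
            f"SCOPE: {program_guidelines}\n"
            f"OUTPUT_DIR: {output_dir}/asset-{i:02d}"
        ),
        run_in_background=True,  # every coordinator runs concurrently
    )
```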
PoC Validation (CRITICAL)
Every finding MUST have:
- poc.py - Executable exploit script
- poc_output.txt - Timestamped execution proof
- workflow.md - Manual steps (if applicable)
- Evidence screenshots/videos
Experimentation: Test edge cases, verify impact, document failures.
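The skill doesn't prescribe poc.py internals; a minimal skeleton showing the timestamped-proof convention, assuming an HTTP-based finding (the target URL, endpoint, and the requests dependency are placeholders):

```python
#!/usr/bin/env python3
"""Hypothetical PoC skeleton: run the check, write timestamped proof to poc_output.txt."""
from datetime import datetime, timezone

import requests  # assumed dependency for an HTTP-based PoC

TARGET = "https://target.example"  # placeholder, set per finding

def main() -> None:
    lines = [f"[{datetime.now(timezone.utc).isoformat()}] PoC started against {TARGET}"]
    resp = requests.get(f"{TARGET}/api/debug", timeout=10)  # placeholder request
    lines.append(f"[{datetime.now(timezone.utc).isoformat()}] HTTP {resp.status_code}: {resp.text[:200]}")
    with open("poc_output.txt", "w") as f:
        f.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    main()
```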
Report Format
Required sections (HackerOne standard):
- Summary (2-3 sentences)
- Severity (CVSS + business impact)
- Steps to Reproduce (numbered, clear)
- Visual Evidence (screenshots/video)
- Impact (realistic attack scenario)
- Remediation (actionable fixes)
Use tools/report_validator.py to validate.
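The validator's actual checks aren't reproduced here; a sketch of the section check the standard above implies (the function name is illustrative):

```python
REQUIRED_SECTIONS = [
    "Summary",
    "Severity",
    "Steps to Reproduce",
    "Visual Evidence",
    "Impact",
    "Remediation",
]

def validate_report(markdown: str) -> list[str]:
    """Return the required sections missing from a report's headings."""
    headings = {
        line.lstrip("# ").strip()
        for line in markdown.splitlines()
        if line.startswith("#")
    }
    return [s for s in REQUIRED_SECTIONS if not any(h.startswith(s) for h in headings)]

# Example (hypothetical): an empty list from validate_report(report_text) means all sections are present.
```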
Output Structure
Per OUTPUT.md - Bug Bounty format:
```
{OUTPUT_DIR}/
├── findings/
│   ├── finding-001/
│   │   ├── report.md          # HackerOne report
│   │   ├── poc.py             # Validated PoC
│   │   ├── poc_output.txt     # Proof
│   │   └── workflow.md        # Manual steps
├── reports/
│   ├── submissions/
│   │   ├── H1_CRITICAL_001.md # Ready to submit
│   │   └── H1_HIGH_001.md
│   └── SUBMISSION_GUIDE.md
└── evidence/
    ├── screenshots/
    └── http-logs/
```
Program Selection
High-Value:
- New programs (< 30 days)
- Fast response (< 24 hours)
- High bounties (Critical: $5,000+)
- Large attack surface
Avoid:
- Slow response (> 1 week)
- Low bounties (Critical: < $500)
- Overly restrictive scope
Critical Rules
MUST DO:
- Validate ALL PoCs before reporting
- Sanitize sensitive data
- Test only eligible_for_submission=true assets
- Follow program-specific guidelines
- Generate CVSS scores
NEVER:
- Report without validated PoC
- Test out-of-scope assets
- Include real user data
- Cause service disruption
Quality Checklist
Before submission:
- Working PoC with poc_output.txt
- Accurate CVSS score
- Step-by-step reproduction
- Visual evidence
- Impact analysis
- Remediation guidance
- Sensitive data sanitized
Tools
- tools/csv_parser.py - Parse HackerOne scope CSVs
- tools/report_validator.py - Validate report completeness
- skills/coordination/SKILL.md - Coordinator skill (spawns executors/validators)
Integration
Uses skills/coordination/SKILL.md for coordination workflow. Follows OUTPUT.md for submission format.
Common Rejections
- Out of Scope: Check eligible_for_submission=true
- Cannot Reproduce: Validate PoC, include poc_output.txt
- Duplicate: Search disclosed reports, submit quickly
- Insufficient Impact: Show realistic attack scenario
Usage
```
/hackerone <program_url_or_csv_path>
```