Performing Brand Monitoring for Impersonation

Overview

Brand impersonation attacks exploit consumer trust through lookalike domains, fake social media profiles, counterfeit mobile apps, and phishing sites that mimic legitimate brands. In 2025, brand impersonation remained one of the most costly cyber threats, with AI-generated phishing emails achieving a 54% click-through rate. This skill covers building a comprehensive brand monitoring program that detects domain squatting, social media impersonation, fake mobile apps, unauthorized logo usage, and dark web brand mentions using automated scanning and alerting.

Prerequisites

  • Python 3.9+ with dnstwist, requests, beautifulsoup4, Levenshtein, tweepy libraries
  • API keys: VirusTotal, Google Safe Browsing, Twitter/X API, Shodan
  • List of brand assets: domains, trademarks, logos, executive names
  • Certificate Transparency monitoring (Certstream or crt.sh)
  • Understanding of domain registration and TLD landscape

Key Concepts

Attack Surface

Brand impersonation spans multiple channels: domain squatting (typosquatting, homoglyphs, TLD variations), phishing sites (cloned websites with stolen branding), social media (fake profiles impersonating executives or company), mobile apps (counterfeit apps in app stores), email spoofing (display name and domain impersonation), and dark web (brand mentions in forums, marketplaces).
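Typosquats and many homoglyph variants sit within an edit distance of one or two of the real domain, which gives a cheap first-pass filter before heavier scanning. A minimal sketch (the candidate list and threshold are illustrative; dnstwist, used below, generates and checks permutations at scale):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def lookalikes(legit, domains, max_distance=2):
    """Return (domain, distance) pairs within max_distance edits of legit."""
    return [(d, edit_distance(legit, d))
            for d in domains
            if 0 < edit_distance(legit, d) <= max_distance]

candidates = ["mycompamy.com", "mycornpany.com", "example.org"]
print(lookalikes("mycompany.com", candidates))
# → [('mycompamy.com', 1), ('mycornpany.com', 2)]
```

Edit distance alone misses homoglyphs that substitute visually identical Unicode characters (e.g. Cyrillic "о" for Latin "o"), which is why permutation engines maintain explicit homoglyph tables.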

Detection Approaches

Effective brand monitoring combines proactive scanning (domain permutation with dnstwist, CT log monitoring), web crawling (screenshot comparison, logo detection), social media monitoring (profile name matching, post content analysis), app store monitoring (name and icon similarity detection), and dark web monitoring (forum scraping, marketplace tracking).
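Certificate Transparency monitoring surfaces phishing domains the moment an attacker provisions TLS. A sketch against crt.sh's unofficial JSON interface (the endpoint and response shape are as publicly exposed by crt.sh and may change without notice; parsing is split out so it can be tested offline):

```python
import requests

def extract_names(entries):
    """Collect unique DNS names from crt.sh JSON entries.

    The name_value field may hold several SANs separated by newlines.
    """
    seen = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            seen.add(name.strip().lower())
    return sorted(seen)

def ct_log_search(keyword: str, timeout: int = 30):
    """Query crt.sh for certificates matching %keyword% (SQL-style wildcard)."""
    resp = requests.get("https://crt.sh/",
                        params={"q": f"%{keyword}%", "output": "json"},
                        timeout=timeout)
    resp.raise_for_status()
    return extract_names(resp.json())
```

Names returned that are absent from your own certificate inventory are candidates for risk triage.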

Risk Prioritization

Not all impersonation is malicious. Risk factors include: active web content (especially login pages), SSL certificate present, MX records configured (email receiving capability), visual similarity to legitimate site, recent registration date, and hosting in regions associated with cybercrime.
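These factors can be collapsed into a simple additive score. The weights and threshold below are illustrative assumptions, to be tuned against your own takedown history:

```python
# Illustrative weights, not calibrated values.
RISK_WEIGHTS = {
    "has_login_page": 40,       # credential harvesting is the strongest signal
    "has_ssl_cert": 10,
    "has_mx_records": 25,       # domain can receive replies to spoofed mail
    "visually_similar": 30,
    "recently_registered": 15,
    "high_risk_hosting": 10,
}

def risk_score(finding: dict) -> int:
    """Sum the weight of every factor flagged truthy on a finding."""
    return sum(w for key, w in RISK_WEIGHTS.items() if finding.get(key))

def triage(findings, threshold=50):
    """Partition findings into (high, low) priority lists by score."""
    high = [f for f in findings if risk_score(f) >= threshold]
    low = [f for f in findings if risk_score(f) < threshold]
    return high, low

print(risk_score({"has_login_page": True, "has_mx_records": True}))  # → 65
```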

Practical Steps

Step 1: Multi-Channel Brand Monitoring System

import subprocess
import requests
import json
from datetime import datetime

class BrandMonitor:
    def __init__(self, brand_config):
        self.brand_name = brand_config["name"]
        self.domains = brand_config["domains"]
        self.keywords = brand_config["keywords"]
        self.executive_names = brand_config.get("executives", [])
        self.logo_hash = brand_config.get("logo_hash", "")
        self.findings = []

    def scan_domain_squatting(self):
        """Detect typosquatting and lookalike domains."""
        all_results = []
        for domain in self.domains:
            cmd = ["dnstwist", "--registered", "--format", "json",
                   "--nameservers", "8.8.8.8", "--threads", "30", domain]
            try:
                result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
                if result.returncode == 0:
                    domains = json.loads(result.stdout)
                    registered = [d for d in domains if d.get("dns_a") or d.get("dns_aaaa")]
                    all_results.extend(registered)
                    print(f"[+] Domain squatting scan for {domain}: "
                          f"{len(registered)} registered lookalikes")
            except (subprocess.TimeoutExpired, json.JSONDecodeError, OSError) as e:
                print(f"[-] Error scanning {domain}: {e}")

        for entry in all_results:
            self.findings.append({
                "type": "domain_squatting",
                "indicator": entry.get("domain", ""),
                "fuzzer": entry.get("fuzzer", ""),
                "dns_a": entry.get("dns_a", []),
                "ssdeep_score": entry.get("ssdeep_score", 0),
                "detected_at": datetime.now().isoformat(),
            })
        return all_results

    def check_google_safe_browsing(self, urls, api_key):
        """Check URLs against Google Safe Browsing API."""
        url = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={api_key}"
        body = {
            "client": {"clientId": "brand-monitor", "clientVersion": "1.0"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": u} for u in urls],
            },
        }
        resp = requests.post(url, json=body, timeout=15)
        if resp.status_code == 200:
            matches = resp.json().get("matches", [])
            print(f"[+] Google Safe Browsing: {len(matches)} threats found")
            return matches
        return []

    def monitor_social_media_impersonation(self, platform="twitter"):
        """Detect social media profiles impersonating brand or executives."""
        # Full profile search requires authenticated API access (e.g. tweepy
        # with a Twitter/X bearer token); this records the terms to search.
        suspicious_profiles = []
        for name in self.executive_names + [self.brand_name]:
            suspicious_profiles.append({
                "search_term": name,
                "platform": platform,
                "note": "Requires authenticated API access for full search",
            })
        return suspicious_profiles

    def monitor_app_stores(self):
        """Check for fake mobile apps impersonating the brand."""
        from bs4 import BeautifulSoup  # local import keeps the dependency optional
        fake_apps = []
        for keyword in self.keywords:
            # Google Play Store search (unofficial, may break without notice)
            url = f"https://play.google.com/store/search?q={keyword}&c=apps"
            try:
                resp = requests.get(url, timeout=15, headers={
                    "User-Agent": "Mozilla/5.0"
                })
                if resp.status_code == 200:
                    # Parse results for brand name matches
                    soup = BeautifulSoup(resp.text, "html.parser")
                    app_links = soup.find_all("a", href=lambda h: h and "/store/apps/details" in h)
                    for link in app_links:
                        app_name = link.get_text(strip=True)
                        if any(k.lower() in app_name.lower() for k in self.keywords):
                            fake_apps.append({
                                "name": app_name,
                                "url": f"https://play.google.com{link['href']}",
                                "platform": "google_play",
                                "keyword": keyword,
                            })
            except requests.RequestException as e:
                print(f"[-] App store search error: {e}")
        return fake_apps

    def generate_monitoring_report(self):
        report = {
            "brand": self.brand_name,
            "generated": datetime.now().isoformat(),
            "total_findings": len(self.findings),
            "findings_by_type": {},
            "high_priority": [],
        }
        for finding in self.findings:
            ftype = finding["type"]
            if ftype not in report["findings_by_type"]:
                report["findings_by_type"][ftype] = 0
            report["findings_by_type"][ftype] += 1

            # High priority: strong visual/content similarity to legitimate site
            if finding.get("ssdeep_score", 0) > 50:
                report["high_priority"].append(finding)

        with open(f"brand_monitoring_{self.brand_name.lower()}.json", "w") as f:
            json.dump(report, f, indent=2)
        print(f"[+] Brand monitoring report: {len(self.findings)} findings")
        return report

monitor = BrandMonitor({
    "name": "MyCompany",
    "domains": ["mycompany.com", "mycompany.org"],
    "keywords": ["mycompany", "mybrand", "myproduct"],
    "executives": ["CEO Name", "CTO Name"],
})
monitor.scan_domain_squatting()
report = monitor.generate_monitoring_report()
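The Safe Browsing check is easiest to wire in by expanding each squatted domain into URL candidates. A sketch (GSB_API_KEY is a placeholder; the 500-entry batch size reflects the Lookup API's documented per-request limit, worth re-checking against current documentation):

```python
def build_check_urls(findings):
    """Expand domain-squatting findings into http/https URL candidates."""
    urls = []
    for f in findings:
        if f.get("type") == "domain_squatting" and f.get("indicator"):
            urls += [f"http://{f['indicator']}/", f"https://{f['indicator']}/"]
    return urls

sample = [{"type": "domain_squatting", "indicator": "mycompamy.com"}]
print(build_check_urls(sample))
# → ['http://mycompamy.com/', 'https://mycompamy.com/']

# Wiring it into the monitor above:
# urls = build_check_urls(monitor.findings)
# for i in range(0, len(urls), 500):  # batch within the API's request limit
#     monitor.check_google_safe_browsing(urls[i:i + 500], GSB_API_KEY)
```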

Step 2: Takedown Request Generation

def generate_takedown_request(finding, brand_info):
    """Generate abuse report for domain/site takedown."""
    request = f"""Subject: Abuse Report - Brand Impersonation / Phishing

Dear Abuse Team,

We are writing to report a domain that is impersonating {brand_info['name']}
for apparent phishing/fraud purposes.

Infringing Domain: {finding.get('indicator', '')}
IP Address: {', '.join(finding.get('dns_a', ['Unknown']))}
Detection Method: {finding.get('fuzzer', 'domain similarity analysis')}
Fuzzy-Hash Similarity Score (ssdeep): {finding.get('ssdeep_score', 'N/A')}
Detection Date: {finding.get('detected_at', '')}

Our legitimate domain(s): {', '.join(brand_info['domains'])}

This domain appears to be impersonating our brand through {finding.get('fuzzer', 'typosquatting')}.
We request immediate suspension of this domain.

Evidence of infringement is available upon request.

Regards,
{brand_info['name']} Security Team
"""
    return request
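A takedown request needs a recipient; the registrar's abuse contact can usually be pulled from RDAP. A sketch using the rdap.org bootstrap redirector (a real service, though registries vary in how completely they populate abuse entities; the parsing is separated so it can be tested offline):

```python
import requests

def abuse_emails(rdap_json: dict) -> list:
    """Extract abuse-role email addresses from an RDAP domain response.

    RDAP nests entities (registrar -> abuse contact), so walk them all.
    """
    emails = []
    stack = list(rdap_json.get("entities", []))
    while stack:
        entity = stack.pop()
        if "abuse" in entity.get("roles", []):
            # vcardArray is ["vcard", [[name, params, type, value], ...]]
            for item in entity.get("vcardArray", [None, []])[1]:
                if item[0] == "email":
                    emails.append(item[3])
        stack.extend(entity.get("entities", []))
    return emails

def lookup_abuse_contact(domain: str):
    """rdap.org redirects to the authoritative registry/registrar RDAP server."""
    resp = requests.get(f"https://rdap.org/domain/{domain}", timeout=15)
    resp.raise_for_status()
    return abuse_emails(resp.json())
```

The generated abuse report can then be sent directly to the returned address, with a copy to the hosting provider when the A records resolve elsewhere.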

Validation Criteria

  • Domain squatting detected through dnstwist permutation scanning
  • Google Safe Browsing checks identify known threats
  • Certificate transparency monitoring detects new phishing certificates
  • Social media monitoring identifies impersonation profiles
  • App store monitoring detects counterfeit applications
  • Takedown requests generated with required evidence
