# firecrawl agent

AI-powered autonomous extraction. The agent navigates sites on its own and extracts structured data; a run typically takes 2-5 minutes.
## When to use
- You need structured data from complex multi-page sites
- Manual scraping would require navigating many pages
- You want the AI to figure out where the data lives
## Quick start

```shell
# Extract structured data
firecrawl agent "extract all pricing tiers" --wait -o .firecrawl/pricing.json

# With a JSON schema for structured output
firecrawl agent "extract products" --schema '{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}' --wait -o .firecrawl/products.json

# Focus on specific pages
firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.json
```
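For anything beyond a one-line schema, `--schema-file` (listed under Options) keeps the command readable. A minimal sketch, assuming a hypothetical `product-schema.json` file in the working directory:

```shell
# Save the schema once so repeated runs stay readable
cat > product-schema.json <<'EOF'
{
  "type": "object",
  "properties": {
    "name":  { "type": "string" },
    "price": { "type": "number" }
  },
  "required": ["name", "price"]
}
EOF

# Sanity-check the JSON before spending agent credits on it
python3 -m json.tool product-schema.json > /dev/null && echo "schema OK"
```

Then point the agent at it: `firecrawl agent "extract products" --schema-file product-schema.json --wait -o .firecrawl/products.json`.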
## Options

| Option | Description |
|---|---|
| `--urls <urls>` | Starting URLs for the agent |
| `--model <model>` | Model to use: `spark-1-mini` or `spark-1-pro` |
| `--schema <json>` | JSON schema for structured output |
| `--schema-file <path>` | Path to a JSON schema file |
| `--max-credits <n>` | Credit limit for this agent run |
| `--wait` | Wait for the agent to complete |
| `--pretty` | Pretty-print JSON output |
| `-o, --output <path>` | Output file path |
## Tips

- Always use `--wait` to get results inline. Without it, the command returns a job ID.
- Use `--schema` for predictable, structured output; otherwise the agent returns freeform data.
- Agent runs consume more credits than simple scrapes. Use `--max-credits` to cap spending.
- For simple single-page extraction, prefer `scrape`; it's faster and cheaper.
## See also

- `firecrawl-scrape` – simpler single-page extraction
- `firecrawl-browser` – manual browser automation (more control)
- `firecrawl-crawl` – bulk extraction without AI
Repository: `firecrawl/cli` on GitHub