Agent Reach — Usage Guide
Upstream tools for 13+ platforms. Call them directly.
Run source ~/.agent-reach/venv/bin/activate && agent-reach doctor to check which channels are available.
⚠️ Workspace & Environment Rules
ALWAYS run commands inside the Agent Reach virtual environment.
Before executing any CLI tool (agent-reach, mcporter, xreach, yt-dlp, python scripts), you MUST activate the environment:
source ~/.agent-reach/venv/bin/activate or chain it like source ~/.agent-reach/venv/bin/activate && <command>.
Never create files in the agent workspace. Use /tmp/ for temporary output and ~/.agent-reach/ for persistent data.
Web — Any URL
curl -s "https://r.jina.ai/URL"
Web Search (Exa)
source ~/.agent-reach/venv/bin/activate && mcporter call 'exa.web_search_exa(query: "query", numResults: 5)'
source ~/.agent-reach/venv/bin/activate && mcporter call 'exa.get_code_context_exa(query: "code question", tokensNum: 3000)'
Twitter/X (xreach)
source ~/.agent-reach/venv/bin/activate && xreach search "query" -n 10 --json # search
source ~/.agent-reach/venv/bin/activate && xreach tweet URL_OR_ID --json # read tweet (supports /status/ and /article/ URLs)
source ~/.agent-reach/venv/bin/activate && xreach tweets @username -n 20 --json # user timeline
source ~/.agent-reach/venv/bin/activate && xreach thread URL_OR_ID --json # full thread
YouTube (yt-dlp)
source ~/.agent-reach/venv/bin/activate && yt-dlp --dump-json "URL" # video metadata
source ~/.agent-reach/venv/bin/activate && yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --skip-download -o "/tmp/%(id)s" "URL"
# download subtitles, then read the .vtt file
source ~/.agent-reach/venv/bin/activate && yt-dlp --dump-json "ytsearch5:query" # search
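The subtitle files yt-dlp saves are WebVTT. A minimal sketch for turning a downloaded .vtt into plain text (the output path shown is an assumption based on the -o template above; auto-generated subs often repeat lines, so adjacent duplicates are dropped):

```python
import re

def vtt_to_text(vtt: str) -> str:
    """Strip WEBVTT headers, cue timestamps, and inline tags, keeping caption text."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        # Skip the header block, blank lines, bare cue numbers, and timestamp lines
        if (not line or line.startswith(("WEBVTT", "Kind:", "Language:"))
                or "-->" in line or line.isdigit()):
            continue
        line = re.sub(r"<[^>]+>", "", line)  # drop inline <c>/<i>-style styling tags
        if not lines or lines[-1] != line:   # auto-subs repeat lines; dedupe adjacent
            lines.append(line)
    return "\n".join(lines)

# Usage (path assumed from the -o "/tmp/%(id)s" template above):
# print(vtt_to_text(open("/tmp/VIDEO_ID.en.vtt").read()))
```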
Bilibili (yt-dlp)
source ~/.agent-reach/venv/bin/activate && yt-dlp --dump-json "https://www.bilibili.com/video/BVxxx"
source ~/.agent-reach/venv/bin/activate && yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --convert-subs vtt --skip-download -o "/tmp/%(id)s" "URL"
Server IPs may get 412. Use --cookies-from-browser chrome or configure a proxy.
Reddit
curl -s "https://www.reddit.com/r/SUBREDDIT/hot.json?limit=10" -H "User-Agent: agent-reach/1.0"
curl -s "https://www.reddit.com/search.json?q=QUERY&limit=10" -H "User-Agent: agent-reach/1.0"
Server IPs may get 403. Search via Exa instead, or configure proxy.
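The listings returned by these endpoints follow Reddit's public JSON format (posts live under data.children, each with a data object). A minimal Python sketch to fetch and flatten a hot listing into (title, URL) pairs:

```python
import json
import urllib.request

def parse_listing(data: dict):
    """Extract (title, full URL) pairs from a Reddit listing JSON object."""
    return [
        (child["data"]["title"], "https://www.reddit.com" + child["data"]["permalink"])
        for child in data.get("data", {}).get("children", [])
    ]

def fetch_hot(subreddit: str, limit: int = 10) -> dict:
    """Fetch /r/<subreddit>/hot.json with the User-Agent header Reddit expects."""
    req = urllib.request.Request(
        f"https://www.reddit.com/r/{subreddit}/hot.json?limit={limit}",
        headers={"User-Agent": "agent-reach/1.0"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (network call; may 403 from server IPs, as noted above):
# for title, url in parse_listing(fetch_hot("python", 5)):
#     print(title, url)
```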
GitHub (gh CLI)
gh search repos "query" --sort stars --limit 10
gh repo view owner/repo
gh search code "query" --language python
gh issue list -R owner/repo --state open
gh issue view 123 -R owner/repo
小红书 / XiaoHongShu (mcporter)
source ~/.agent-reach/venv/bin/activate && mcporter call 'xiaohongshu.search_feeds(keyword: "query")'
source ~/.agent-reach/venv/bin/activate && mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy")'
source ~/.agent-reach/venv/bin/activate && mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy", load_all_comments: true)'
source ~/.agent-reach/venv/bin/activate && mcporter call 'xiaohongshu.publish_content(title: "标题", content: "正文", images: ["/path/img.jpg"], tags: ["tag"])'
Requires login. Use Cookie-Editor to import cookies.
抖音 / Douyin (mcporter)
source ~/.agent-reach/venv/bin/activate && mcporter call 'douyin.parse_douyin_video_info(share_link: "https://v.douyin.com/xxx/")'
source ~/.agent-reach/venv/bin/activate && mcporter call 'douyin.get_douyin_download_link(share_link: "https://v.douyin.com/xxx/")'
No login needed.
微信公众号 / WeChat Articles
Search (miku_ai):
source ~/.agent-reach/venv/bin/activate && python3 -c "
import asyncio
from miku_ai import get_wexin_article

async def s():
    for a in await get_wexin_article('query', 5):
        print(f'{a[\"title\"]} | {a[\"url\"]}')

asyncio.run(s())
"
Read (Camoufox — bypasses WeChat anti-bot):
cd ~/.agent-reach/tools/wechat-article-for-ai && source ~/.agent-reach/venv/bin/activate && python3 main.py "https://mp.weixin.qq.com/s/ARTICLE_ID"
WeChat articles cannot be read with Jina Reader or curl. Must use Camoufox.
LinkedIn (mcporter)
source ~/.agent-reach/venv/bin/activate && mcporter call 'linkedin.get_person_profile(linkedin_url: "https://linkedin.com/in/username")'
source ~/.agent-reach/venv/bin/activate && mcporter call 'linkedin.search_people(keyword: "AI engineer", limit: 10)'
Fallback: curl -s "https://r.jina.ai/https://linkedin.com/in/username"
Boss直聘 (mcporter)
source ~/.agent-reach/venv/bin/activate && mcporter call 'bosszhipin.get_recommend_jobs_tool(page: 1)'
source ~/.agent-reach/venv/bin/activate && mcporter call 'bosszhipin.search_jobs_tool(keyword: "Python", city: "北京")'
Fallback: curl -s "https://r.jina.ai/https://www.zhipin.com/job_detail/xxx"
RSS
source ~/.agent-reach/venv/bin/activate && python3 -c "
import feedparser
for e in feedparser.parse('FEED_URL').entries[:5]:
    print(f'{e.title} — {e.link}')
"
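If feedparser is not available in the environment, a stdlib-only fallback sketch using xml.etree works for plain RSS 2.0 feeds (Atom feeds use different element names and are not handled here):

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str, limit: int = 5):
    """Stdlib-only RSS 2.0 parse: return (title, link) for the first `limit` items."""
    root = ET.fromstring(xml_text)
    out = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        out.append((title, link))
        if len(out) >= limit:
            break
    return out

# Usage: fetch FEED_URL with urllib or curl first, then parse the XML string.
```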
Troubleshooting
- Channel not working? Run source ~/.agent-reach/venv/bin/activate && agent-reach doctor — it shows each channel's status and fix instructions.
- Twitter fetch failed? Ensure undici is installed: npm install -g undici. Or configure a proxy: source ~/.agent-reach/venv/bin/activate && agent-reach configure proxy URL.
Setting Up a Channel ("帮我配 XXX")
If a channel needs setup (cookies, Docker, etc.), fetch the install guide: https://raw.githubusercontent.com/Panniantong/agent-reach/main/docs/install.md
The user only provides cookies; everything else is your job.