# wachi
Subscribe to any link and get notified on change. Monitors URLs for new content and pushes notifications to 90+ services via apprise.
## Install

```sh
# npm / bun (no install needed)
npx wachi --help
bunx wachi --help

# global install
npm i -g wachi
bun i -g wachi

# shell script
curl -fsSL https://raw.githubusercontent.com/ysm-dev/wachi/main/install.sh | sh

# homebrew
brew tap ysm-dev/tap && brew install wachi
```
## Quick Start

```sh
# 1. Subscribe to any URL (auto-discovers RSS)
wachi sub "slack://xoxb-token/channel" "https://blog.example.com"

# 2. Check for new content (run on a schedule)
wachi check

# New posts get pushed to Slack. That's it.
```
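
You can confirm the subscription was recorded with `ls` (the exact output format may vary by version):

```sh
# list all channels and their subscriptions
wachi ls

# same data as JSON, for scripting
wachi ls --json
```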

## Commands

```
wachi sub <apprise-url> <url>     Subscribe URL to notification channel
  -e, --send-existing             Send all current items on next check (skip baseline)

wachi unsub <apprise-url> [url]   Unsubscribe URL or remove entire channel

wachi ls                          List all channels and subscriptions

wachi check                       Check all subscriptions for changes
  -c, --channel <apprise-url>     Check specific channel only
  -n, --concurrency <number>      Max concurrent checks (default: 10)
  -d, --dry-run                   Preview without sending or recording

wachi test <apprise-url>          Send test notification

wachi upgrade                     Update wachi to latest version
```
Global flags: `--json` / `-j` for machine-readable output, `--verbose` / `-V` for detailed logs, `--config` / `-C` for a custom config path.
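
The global flags combine with any command; a few sketches (the config file path below is illustrative, not a wachi convention):

```sh
# preview a check with detailed logs, without sending anything
wachi check -d -V

# machine-readable subscription list; jq is assumed to be installed separately
wachi ls -j | jq '.'

# run against an alternate config file (path is illustrative)
wachi check -C ./wachi.test.yml
```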

## How It Works

`wachi sub` checks whether the URL has an RSS feed (auto-discovery via `<link>` tags and common paths):

- If RSS is found: store the feed URL and use RSS for ongoing checks
- If no RSS: use an LLM + agent-browser to identify CSS selectors via accessibility-tree analysis

`wachi check` fetches each subscription, compares items against a dedup table (SHA-256 hashes), and sends new items via apprise.
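
As a rough mental model of the dedup table, not wachi's actual internals (the storage path and hash input here are assumptions):

```sh
# Hypothetical sketch of the dedup model; wachi's real storage format is internal.
# Hash an item's link and skip anything already seen.
seen_db="$HOME/.config/wachi/seen.txt"          # assumed path, for illustration only
item_url="https://blog.example.com/post-42"

# use `shasum -a 256` instead of `sha256sum` on macOS
hash="$(printf '%s' "$item_url" | sha256sum | awk '{print $1}')"

if ! grep -qx "$hash" "$seen_db" 2>/dev/null; then
  echo "new item: $item_url"                    # wachi would notify via apprise here
  echo "$hash" >> "$seen_db"
fi
```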

## Configuration

Config lives at `~/.config/wachi/config.yml` (auto-created on the first `wachi sub`).
```yaml
# LLM config (only needed for non-RSS sites)
# Also settable via WACHI_LLM_API_KEY, WACHI_LLM_MODEL env vars
llm:
  api_key: "sk-..."
  model: "gpt-4.1-mini"

# Optional: summarize articles before sending
summary:
  enabled: true
  language: "en"
  min_reading_time: 3 # minutes

# Channels managed by wachi sub/unsub
channels:
  - apprise_url: "slack://xoxb-token/channel"
    subscriptions:
      - url: "https://blog.example.com"
        rss_url: "https://blog.example.com/feed.xml"
```
All fields optional with sensible defaults. Empty config is valid.
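
For example, a config containing nothing but a `summary` block is valid; this sketch writes one by hand, though `wachi sub` normally creates the file for you:

```sh
# hand-write a minimal config; all omitted fields fall back to defaults
mkdir -p ~/.config/wachi
cat > ~/.config/wachi/config.yml <<'EOF'
summary:
  enabled: true
EOF
```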

## Environment Variables

| Variable | Purpose |
|---|---|
| `WACHI_LLM_API_KEY` | LLM API key |
| `WACHI_LLM_MODEL` | LLM model name |
| `WACHI_LLM_BASE_URL` | LLM API base URL (default: OpenAI) |
| `WACHI_NO_AUTO_UPDATE` | Set to `1` to disable auto-update |
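
A common pattern is keeping the API key in the environment instead of `config.yml`; the values below are placeholders, and the base URL is the standard OpenAI endpoint (assumed, since the default is only documented as "OpenAI"):

```sh
# placeholders only; keeps the key out of config.yml
export WACHI_LLM_API_KEY="sk-..."
export WACHI_LLM_MODEL="gpt-4.1-mini"
export WACHI_LLM_BASE_URL="https://api.openai.com/v1"   # assumed default; swap for a compatible provider

wachi check
```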

## Notification Channels

Uses the apprise URL format. Examples:

```sh
# Slack
wachi sub "slack://xoxb-token/channel" "https://example.com"

# Discord
wachi sub "discord://webhook-id/token" "https://example.com"

# Telegram
wachi sub "tgram://bot-token/chat-id" "https://example.com"

# Test that a channel works
wachi test "slack://xoxb-token/channel"
```

Full list of supported services: https://github.com/caronc/apprise/wiki

## Scheduling

`wachi check` is stateless and one-shot. Use any scheduler:

```sh
# crnd (recommended)
crnd "*/5 * * * *" wachi check

# system cron
crontab -e
# */5 * * * * wachi check
```
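
Cron runs with a minimal environment and discards output by default, so a more defensive entry can pin the binary path, pass any needed variables, and log; all paths below are illustrative:

```sh
# illustrative crontab entry (via crontab -e); adjust paths for your system
*/5 * * * * WACHI_LLM_API_KEY=sk-... /usr/local/bin/wachi check >> "$HOME/.wachi-check.log" 2>&1
```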

## Examples

```sh
# Blog (auto-discovers RSS)
wachi sub "slack://xoxb-token/channel" "https://blog.example.com"

# Hacker News (LLM identifies selectors)
wachi sub "discord://webhook-id/token" "https://news.ycombinator.com"

# YouTube channel
wachi sub "tgram://bot-token/chat-id" "https://youtube.com/@channel"

# URL without https:// (auto-prepended)
wachi sub "slack://token/channel" "blog.example.com"

# Send existing items on next check
wachi sub -e "discord://webhook-id/token" "https://news.ycombinator.com"

# Dry-run check
wachi check -d

# Check specific channel
wachi check -c "slack://xoxb-token/channel"
```
For detailed behavior (dedup model, error patterns, notification format, config schema), see `references/spec.md`.