# RL Execution Optimization
Reinforcement learning (RL) for trade execution teaches an agent to split and time large orders so that total market impact is minimized. Instead of following a fixed schedule (TWAP, VWAP), an RL agent observes real-time market state and adapts its trading rate on the fly.
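One way to picture this is as a small Markov decision process. The sketch below is illustrative, not the skill's actual implementation: the state, linear temporary-impact term, and reward shaping (negative implementation shortfall per slice) are all assumptions chosen for clarity.

```python
import numpy as np

class ExecutionEnv:
    """Minimal order-execution MDP sketch (hypothetical parameters).

    State:  (fraction of horizon elapsed, fraction of inventory remaining)
    Action: fraction of remaining inventory to trade this step
    Reward: negative implementation shortfall for the slice
    """
    def __init__(self, total_qty=100.0, n_steps=10,
                 temp_impact=0.01, sigma=0.002, seed=0):
        self.total_qty = total_qty
        self.n_steps = n_steps
        self.temp_impact = temp_impact   # assumed linear price concession per unit
        self.sigma = sigma               # per-step relative price volatility
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.t = 0
        self.remaining = self.total_qty
        self.price = 100.0               # arrival price, the shortfall benchmark
        self.arrival = self.price
        return self._obs()

    def _obs(self):
        return np.array([self.t / self.n_steps,
                         self.remaining / self.total_qty])

    def step(self, action):
        qty = float(np.clip(action, 0.0, 1.0)) * self.remaining
        if self.t == self.n_steps - 1:
            qty = self.remaining         # force completion on the last step
        fill_price = self.price - self.temp_impact * qty   # temporary impact
        reward = -(self.arrival - fill_price) * qty        # negative shortfall
        self.remaining -= qty
        # exogenous price drift between slices (the "timing risk" term)
        self.price += self.rng.normal(0.0, self.sigma) * self.price
        self.t += 1
        done = self.t >= self.n_steps or self.remaining <= 1e-9
        return self._obs(), reward, done

# Roll out a naive fixed-rate policy; an RL agent would adapt this rate.
env = ExecutionEnv()
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    obs, r, done = env.step(0.2)
    total_reward += r
```

A fixed action of 0.2 is essentially a geometric TWAP; the point of RL here is to condition that fraction on market state instead.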
## Why Execution Optimization Matters
Every trade has a cost beyond the quoted spread:
| Cost Component | Cause | Typical Magnitude |
|---|---|---|
| Spread cost | Crossing the bid-ask | 5-50 bps on DEXs |
| Temporary impact | Consuming liquidity | Scales with trade rate |
| Permanent impact | Information leakage | Scales with total size |
| Timing risk | Price drifts while waiting | Scales with volatility and time |
A 100 SOL market buy on a thin pool can move the price 2-5%. Splitting it into smaller child orders worked over time cuts that impact substantially, at the cost of exposure to price drift while the order completes.
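The arithmetic behind the splitting benefit can be sketched with a stylized square-root impact model (an assumed functional form; the pool depth and impact coefficient below are hypothetical, not measured):

```python
import math

def impact_bps(qty, pool_depth=2000.0, k=50.0):
    """Stylized square-root temporary impact in basis points (assumed model)."""
    return k * math.sqrt(qty / pool_depth)

total = 100.0                     # 100 SOL parent order
single_shot = impact_bps(total)   # impact if sent as one market order

n = 10
per_slice = impact_bps(total / n)  # each child order pays impact on its own size

# Under a square-root law, per-slice impact is 1/sqrt(n) of the single-shot cost.
ratio = per_slice / single_shot
```

With these assumptions, ten slices pay roughly 32% of the single-shot impact per SOL, which is exactly the savings that the timing-risk row of the table trades off against.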