AISmush is a drop-in proxy that makes Claude Code 90% cheaper — without changing how you work.
$0.03 sessions that used to cost $30.
Route across Claude, DeepSeek, OpenRouter (290+ models), and local servers. File caching saves 99% on re-reads. Command-aware compression shrinks cargo test output from 50 lines to 2. All compression now works in every mode — including Claude-only.
A heavy coding session burns through $20-50 in API costs. Most of those tokens are spent on mechanical tasks — reading files, processing tool results, making simple edits — that don't need Claude's $15/M token brain.
One command scans your codebase, sends it to an AI for deep analysis, and generates Claude Code agents customized to YOUR project — your patterns, your frameworks, your architecture.
Not generic templates. Agents that know your specific file structure, your naming conventions, your test framework, your build commands.
AISmush automatically detects what kind of work each turn requires and routes it to the cheapest model that can handle it — across every provider you have configured.
Routes across Claude, DeepSeek, OpenRouter (290+ models), and local servers: Ollama, LM Studio, llama.cpp, vLLM, Jan. Planning and architecture? Claude. Tool results and edits? Local Ollama — free.
The biggest single improvement to token usage. Older tool results in your conversation get replaced with compact structural summaries — just function signatures, type definitions, and imports.
Your last 4 messages stay fully intact. Only older code results get summarized. JSON, YAML, and error results are never touched.
Three layers of compression that work together — and now active in ALL modes, including Claude-only.
Every developer's frustration: "I already told you this yesterday."
Other tools remember tool names. AISmush captures entire conversations — your questions, the AI's answers, the reasoning, the decisions. Searchable by meaning, not just keywords.
Claude handles 200K tokens. DeepSeek handles 64K. Long sessions blow past DeepSeek's limit, causing failures and lost work.
AISmush automatically manages the mismatch. Old tool results get trimmed, large contexts route to Claude, and your work is never blocked.
See exactly what you're saving. Every request tracked: which provider, how many tokens, what it cost, what it would have cost on Claude alone.
Ask Claude to make a plan, then say "run plan". AISmush builds a dependency graph, maps each step to a specialized agent, and executes with maximum parallelism. Steps unblock individually — no waiting for entire waves to finish.
One command. Single binary. No dependencies.
aismush --scan generates agents for your project.
aismush-start launches Claude Code. You save 90%.
aismush-start — Start proxy + Claude Code (recommended)
aismush-start --direct — Claude only, no other providers
aismush — Start proxy server only
aismush --direct — Proxy in Claude-only mode
aismush --setup — Interactive provider setup with testing
aismush --providers — List providers + health status
aismush --config — Show current configuration
aismush --scan — Generate project agents + CLAUDE.md
aismush --search "query" — Semantic search past sessions
aismush --embeddings — Enable semantic search model
aismush --status — Check if proxy is running + stats
aismush --help — Show all options
aismush --upgrade — Upgrade to latest version
aismush --uninstall — Remove AISmush completely
aismush --version — Show version
DEEPSEEK_API_KEY — DeepSeek provider
OPENROUTER_API_KEY — OpenRouter provider
LOCAL_MODEL_URL — Local server URL
LOCAL_MODEL_NAME — Local model name
PROXY_PORT — Listen port (default: 1849)
FORCE_PROVIDER — Force a specific provider
AISMUSH_BLAST_THRESHOLD — Blast-radius threshold
AISMUSH_AUTO_DISCOVER — Auto-find local models
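As a sketch of how these variables fit together, a shell profile might set them before launching. The variable names and the default port come from the list above; the key values are placeholders, and the `1` used for AISMUSH_AUTO_DISCOVER is an assumed truthy format:

```shell
# Hypothetical configuration sketch — variable names from the docs above,
# values are placeholders you would replace with your own.
export DEEPSEEK_API_KEY="sk-..."        # enables the DeepSeek provider
export OPENROUTER_API_KEY="sk-or-..."   # enables OpenRouter (290+ models)
export PROXY_PORT=1849                  # listen port (1849 is the default)
export AISMUSH_AUTO_DISCOVER=1          # assumed truthy value: auto-find local models

# aismush-start   # uncomment to launch the proxy + Claude Code
```

Running aismush --setup instead walks through the same configuration interactively.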
Dashboard at http://localhost:1849/dashboard — Full documentation
Routes across Claude, DeepSeek, and OpenRouter's 290+ models. Max cloud savings (~90%). Configure providers with aismush --setup.
aismush-start
Local models (Ollama, LM Studio, llama.cpp, vLLM) handle tool results and edits for free. Cloud only when the task needs it. Just start Ollama and AISmush auto-detects it.
aismush-start (with Ollama running)
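If auto-detection doesn't pick up your local server, the LOCAL_MODEL_URL and LOCAL_MODEL_NAME variables documented above can point at it explicitly. A minimal sketch, assuming Ollama on its standard port and an example model name you would substitute:

```shell
# Hypothetical sketch: explicitly target a local Ollama server.
# 11434 is Ollama's standard port; the model name is an example —
# use any model you have pulled locally.
export LOCAL_MODEL_URL="http://localhost:11434"
export LOCAL_MODEL_NAME="qwen2.5-coder"

# ollama serve &   # start Ollama if it isn't already running
# aismush-start    # tool results and edits now route to the local model for free
```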
No secondary provider needed. Full compression (file caching + command patterns + structural summaries), memory, agents, and cost tracking. No DeepSeek key required.
aismush-start --direct
Pure Rust. No C dependencies. Native builds for every platform.
Then run aismush --setup for interactive provider configuration — connects and tests each provider (Claude, DeepSeek, OpenRouter, Ollama, and more).
Works on Debian 12+, Ubuntu 22.04+, any modern Linux, macOS (Intel & ARM), and Windows.
Shared savings dashboard for engineering teams. Track ROI across developers.
Set hourly and daily spend limits per provider. Auto-switch to cheaper models when budgets are hit.
Auto-test model quality per task type against your own codebase. Know which model is actually best for your work.