Stop Burning Money on AI Coding

AISmush is a drop-in proxy that makes Claude Code 90% cheaper — without changing how you work.

Real users. Real savings. $3 sessions that used to cost $30.

Get AISmush Free · See How It Works


Claude Code is incredible. The bill isn't.

A heavy coding session burns through $20-50 in API costs. Most of those tokens are spent on mechanical tasks — reading files, processing tool results, making simple edits — that don't need Claude's $15/M token brain.

$30+: typical session, 100% Claude
$3: the same session with AISmush

Six Weapons Against Token Waste

Game Changer

AI-Generated Project Agents

One command scans your codebase, sends it to AI for deep analysis, and generates Claude Code agents customized to YOUR project — your patterns, your frameworks, your architecture.

Not generic templates. Agents that know your specific file structure, your naming conventions, your test framework, your build commands.

  • Scans your codebase in seconds
  • 5-7 AI calls for deep analysis (~$0.03)
  • Generates agents, skills, and CLAUDE.md
  • Each agent assigned the cheapest model that can do the job
  • Resumes where it left off if interrupted
  • Incremental re-scans — only re-analyzes changed files (~$0.003)

$ aismush --scan


# Analyzing your codebase...
Detected: Rust + TypeScript + React
Type: fullstack web app (complex)

# Generating project-specific agents:
├─ rust-expert (sonnet) ✓
├─ frontend-engineer (sonnet) ✓
├─ test-runner (haiku) ✓
├─ debugger (sonnet) ✓
└─ explorer (haiku) ✓

Created 5 agents, 8 skills, CLAUDE.md

Core Feature

Smart Model Routing + Blast-Radius Analysis

AISmush automatically detects what kind of work each turn requires and routes it to the cheapest model that can handle it.

Planning and architecture? Claude ($15/M). Reading files and making edits? DeepSeek ($0.27/M). That's a 55x cost difference on the turns that matter most.

  • Zero latency overhead — pure heuristic routing
  • Claude for reasoning, DeepSeek for execution
  • Blast-radius aware — parses imports to know which files are critical
  • Editing a shared type? Claude. Editing a leaf file? DeepSeek.
  • Automatic failover between providers
  • Error recovery detection (3+ errors → Claude)

# What happens behind the scenes:

"Plan the auth system" → Claude ($0.45)
Tool result: Read file → DeepSeek ($0.001)
Tool result: Edit file → DeepSeek ($0.001)
Tool result: Run tests → DeepSeek ($0.001)
"Debug this error" → Claude ($0.12)
Tool result: Grep → DeepSeek ($0.001)

Session: $0.58 instead of $12.40
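
The routing policy above boils down to a few branches. Here is an illustrative sketch only; `route_turn` and its parameters are invented names for this page, not AISmush's actual internals:

```python
# Illustrative sketch of heuristic model routing, based on the rules
# described above. NOT AISmush's actual code; all names are invented.

CLAUDE = "claude"      # expensive, strong reasoning ($15/M tokens)
DEEPSEEK = "deepseek"  # cheap execution ($0.27/M tokens)

def route_turn(is_tool_result: bool, recent_errors: int,
               touches_shared_file: bool) -> str:
    """Pick the cheapest model that can handle this turn."""
    if recent_errors >= 3:       # error recovery: escalate to Claude
        return CLAUDE
    if touches_shared_file:      # blast radius: shared types are critical
        return CLAUDE
    if is_tool_result:           # reads, edits, test output: mechanical work
        return DEEPSEEK
    return CLAUDE                # fresh user prompts get full reasoning

print(route_turn(is_tool_result=True, recent_errors=0, touches_shared_file=False))
# -> deepseek
print(route_turn(is_tool_result=True, recent_errors=3, touches_shared_file=False))
# -> claude
```

Because the decision is pure branching on facts already in hand, no extra model call is needed to pick a route, which is why the text can claim zero latency overhead.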

Biggest Token Saver

Structural Summarization — 3-5x Fewer Tokens

The biggest single improvement to token usage. Older tool results in your conversation get replaced with compact structural summaries — just function signatures, type definitions, and imports.

Your last 4 messages stay fully intact. Only older code results get summarized. JSON, YAML, and error results are never touched.

  • 200-line file shrinks to ~30 lines of signatures (3-5x token reduction)
  • Saves thousands of tokens per request in long sessions
  • Content-type aware — only summarizes code, never data
  • Supports Rust, TypeScript, Python, Go, and more
  • Combined with standard compression: 60-80% total reduction

# Old message tool_result (6,000 tokens):
use std::collections::HashMap;
use crate::db::Db;
// ... 180 lines of implementation ...
// comments, function bodies, tests ...

# After structural summary (1,200 tokens):
[Structural summary (200 lines -> 28 lines)]
use std::collections::HashMap;
use crate::db::Db;
pub struct ProxyState { ... }
impl ProxyState { ... }
pub async fn handle() -> Response { ... }
fn compress_text() -> String { ... }

5x reduction. API surface preserved.
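
A minimal regex-based sketch of the idea (AISmush's real summarizer is not shown here; `summarize` and the signature pattern are illustrative assumptions, and a production version would use a proper parser):

```python
import re

# Illustrative sketch of structural summarization for Rust source.
# Keeps imports and signatures, replaces bodies with "{ ... }".
# NOT AISmush's implementation.
SIG = re.compile(r"^\s*(use |pub |impl |fn |async fn |struct |enum |trait )")

def summarize(source: str) -> str:
    kept = []
    for line in source.splitlines():
        if SIG.match(line):
            head = line.split("{")[0].rstrip()
            kept.append(head + (" { ... }" if "{" in line else ""))
    return "\n".join(kept)

src = """use std::collections::HashMap;
use crate::db::Db;

pub struct ProxyState {
    sessions: HashMap<String, Db>,
}

pub async fn handle() -> Response {
    // ... 180 lines of implementation ...
    todo!()
}
"""
print(summarize(src))
```

Bodies, comments, and tests are what dominate token counts, so discarding them while keeping the API surface is where the 3-5x reduction comes from.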

Nobody Has Solved This

Deep Memory — Full Conversation Capture

Every developer's frustration: "I already told you this yesterday."

Other tools remember tool names. AISmush captures entire conversations — your questions, the AI's answers, the reasoning, the decisions. Searchable by meaning, not just keywords.

  • Full conversation capture (not just tool names)
  • Keyword search always on (FTS5, zero overhead)
  • Semantic search opt-in (--embeddings) — finds "JWT validation" when you search "auth bug"
  • Search via dashboard, CLI, or API
  • Auto-injects relevant past context into new sessions
  • Lightweight by default — no 90MB model unless you want it

$ aismush --search "auth token bug"

# Found 3 relevant conversations:

1. [3 days ago] score: 0.92
  You: "Fix the JWT token expiry bug"
  AI: "The issue is in validate_token().
      The expiry check uses > instead of >=..."
  Tools: Read(jwt.rs), Edit(jwt.rs), Bash(cargo test)

2. [5 days ago] score: 0.87
  You: "Add refresh token support"
  AI: "I'll create a refresh endpoint that..."
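
The always-on keyword search maps directly onto SQLite's FTS5 extension, which the feature list names. A minimal sketch (the `memory` table and its columns are assumptions for illustration, not AISmush's actual schema):

```python
import sqlite3

# Minimal FTS5 keyword search over captured conversations.
# Table and column names are illustrative assumptions.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memory USING fts5(question, answer)")
db.executemany("INSERT INTO memory VALUES (?, ?)", [
    ("Fix the JWT token expiry bug",
     "The issue is in validate_token(): the expiry check uses > instead of >="),
    ("Add refresh token support",
     "I'll create a refresh endpoint that rotates tokens"),
])

# MATCH runs a ranked keyword query with no external services running.
rows = db.execute(
    "SELECT question FROM memory WHERE memory MATCH ? ORDER BY rank",
    ("token expiry",),
).fetchall()
print(rows)  # [('Fix the JWT token expiry bug',)]
```

Semantic search (the opt-in `--embeddings` mode) would layer vector similarity on top; keyword FTS5 is the zero-overhead default the text describes.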

Reliability

Context Window Management

Claude handles 200K tokens. DeepSeek handles 64K. Long sessions blow past DeepSeek's limit, causing failures and lost work.

AISmush automatically manages the mismatch. Old tool results get trimmed, large contexts route to Claude, and your work is never blocked.

  • Under 55K: both providers work fine
  • 55-64K: trim old tool results for DeepSeek
  • Over 64K: auto-route to Claude (200K window)
  • Never breaks tool_use/tool_result pairing

# Context growing during long session:

Turn 1: 5K tokens → DeepSeek
Turn 10: 25K tokens → DeepSeek
Turn 20: 48K tokens → DeepSeek
Turn 25: 58K tokens → compress + DeepSeek
Turn 30: 72K tokens → auto-route to Claude

# Without AISmush: DeepSeek fails at 64K.
# With AISmush: seamless handoff.
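
The turn log above reduces to a small policy function over the stated thresholds (`plan_request` is an invented name for illustration, not AISmush's API):

```python
# Context-window policy sketch using the thresholds from the text.
# Invented names; NOT AISmush's actual code.
DEEPSEEK_LIMIT = 64_000   # DeepSeek's context window
TRIM_THRESHOLD = 55_000   # start trimming old tool results here

def plan_request(context_tokens: int) -> str:
    if context_tokens < TRIM_THRESHOLD:
        return "deepseek"             # fits both providers comfortably
    if context_tokens <= DEEPSEEK_LIMIT:
        return "trim-then-deepseek"   # drop old tool results, stay cheap
    return "claude"                   # only the 200K window fits now

for tokens in (48_000, 58_000, 72_000):
    print(tokens, "->", plan_request(tokens))
```

The handoff at 64K is what keeps long sessions from failing: past that point every request simply pays Claude prices instead of erroring out.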

Transparency

Real-Time Cost Dashboard

See exactly what you're saving. Every request tracked: which provider, how many tokens, what it cost, what it would have cost on Claude alone.

  • Live dashboard at localhost:1849/dashboard
  • Per-request cost breakdown
  • Savings percentage with all-Claude comparison
  • Request history with full detail
  • Memory viewer
  • Stats persist across sessions in SQLite

Session Stats

Requests: 142
Claude turns: 12 (planning/debugging)
DeepSeek turns: 130 (execution)

Actual cost: $1.82
All-Claude cost: $18.40
Saved: $16.58 (90.1%)
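
The savings line is plain arithmetic on the two totals, which is all the dashboard needs per session:

```python
# Savings math behind the dashboard figures above.
actual_cost = 1.82
all_claude_cost = 18.40

saved = all_claude_cost - actual_cost
pct = 100 * saved / all_claude_cost
print(f"Saved: ${saved:.2f} ({pct:.1f}%)")  # Saved: $16.58 (90.1%)
```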

Three Steps. That's It.

1

Install

One command. ~15MB RAM. Won't slow your machine.

2

Scan

aismush --scan generates agents for your project.

3

Code

aismush-start launches Claude Code. You save 90%.

Two Ways to Run

Smart Routing (Default)

Routes between Claude + DeepSeek. Max savings (~90%). Requires a DeepSeek API key.

aismush-start

Direct Mode (Claude Only)

No DeepSeek needed. Still get compression, memory, agents, and tracking. Dashboard shows potential savings.

aismush-start --direct

Install in 10 Seconds

One command on any platform. Works with or without a DeepSeek key.

Linux & macOS:

curl -fsSL https://raw.githubusercontent.com/Skunk-Tech/aismush/main/install.sh | bash

Windows (PowerShell):

irm https://raw.githubusercontent.com/Skunk-Tech/aismush/main/install.ps1 | iex

All Releases · GitHub

What's Coming Next

Local Models

Route to Ollama/llama.cpp for zero-cost execution on your own hardware.

Multi-Provider

Add Gemini, GPT-4, Mistral as routing targets. Best model for each task.

Team Dashboard

Shared savings dashboard for engineering teams. Track ROI across developers.