WashedMCP · token-optimized
Stop feeding your agent the whole repo.
Get the MCP
70k → 7k tokens · $1.40 → $0.40 (3.5× cheaper)
One-query code intel
Semantic retrieval + relationship expansion in one pass.
Context graph expansion
Pull callers, callees, and related helpers automatically.
Costs drop hard
Smaller contexts mean cheaper + faster agent loops.
Proof (at a glance)
Less context. Less cost. Same answers.
90% fewer tokens · 3.5× cheaper · 71% lower cost
Context size: 70k → 7k tokens (90% fewer)
Cost per run: $1.40 → $0.40 baseline vs. WashedMCP (3.5× cheaper)
Two products
MCP • LeanMCP deploy
Chroma-backed code memory
  • Store your repo as embeddings (semantic index)
  • Retrieve only relevant functions instead of entire files
  • Expand context via callers, callees, and related code edges
  • One query returns code + relationships, not 10+ redundant searches
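To make function-level retrieval concrete, here is a minimal sketch of the idea behind a semantic index. It uses a stdlib bag-of-words similarity as a stand-in for a Chroma collection with real embeddings; the function names and docstrings are hypothetical, not WashedMCP's actual API.

```python
# Toy semantic index: a stand-in for a Chroma collection.
# Real embeddings would come from an embedding model; here we use
# token-frequency vectors and cosine similarity for illustration.
from collections import Counter
import math


def embed(text):
    # Hypothetical "embedding": a token-frequency vector.
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# The "repo" is indexed at the function level, not the file level.
index = {
    "parse_config": "read the yaml config file and return settings",
    "connect_db": "open a database connection using settings",
    "render_page": "render the html template for the page",
}
vectors = {name: embed(doc) for name, doc in index.items()}


def retrieve(query, k=1):
    """Return the k functions most similar to the query."""
    q = embed(query)
    ranked = sorted(vectors, key=lambda n: cosine(q, vectors[n]), reverse=True)
    return ranked[:k]


print(retrieve("load settings from the config file"))  # -> ['parse_config']
```

The point of the sketch: the agent asks one question and gets back the single relevant function, not every file that happens to contain the word "config".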
Backed by research
Prompt compression: LLMLingua (EMNLP 2023)
Graph retrieval: Graph RAG for QFS (2024)
How it works
1
Embed the repo
Index code in Chroma so retrieval is semantic, not file-based.
2
Retrieve the core
Pull only the functions/classes that answer the question.
3
Expand the edges
Attach callers/callees/helpers so the agent has *enough* context, not *all* context.
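The three steps above can be sketched end-to-end for step 3, the edge expansion. This is an illustrative one-hop walk over a toy call graph, assuming hypothetical function names; it is not WashedMCP's actual implementation.

```python
# Step 3 sketch: expand a retrieved "core" function with its one-hop
# neighborhood (callers and callees) from a call graph.
# The graph below is illustrative, not from a real repo.
call_graph = {
    "parse_config": {"callees": ["read_file"], "callers": ["main", "reload"]},
    "read_file": {"callees": [], "callers": ["parse_config"]},
    "main": {"callees": ["parse_config"], "callers": []},
    "reload": {"callees": ["parse_config"], "callers": []},
}


def expand(seeds, graph):
    """Return the seed functions plus their direct callers and callees."""
    context = set(seeds)
    for fn in seeds:
        edges = graph.get(fn, {})
        context.update(edges.get("callers", []))
        context.update(edges.get("callees", []))
    return sorted(context)


print(expand(["parse_config"], call_graph))
# -> ['main', 'parse_config', 'read_file', 'reload']
```

One hop is usually the sweet spot: the agent sees how the core function is called and what it depends on, without dragging in the transitive closure of the whole repo.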