One-query code intel
Semantic retrieval + relationship expansion in one pass.
Context graph expansion
Pull callers, callees, and related helpers automatically.
Costs drop hard
Smaller contexts mean cheaper + faster agent loops.
Proof (at a glance)
Less context. Less cost. Same answers.
90% fewer tokens • 3.5× cheaper • 71% lower cost
[Chart: Context size (tokens), before vs. after — 90% fewer]
[Chart: Cost per run (USD), baseline vs. with WashedMCP — 3.5× cheaper]
Two products
MCP • LeanMCP deploy
Chroma-backed code memory
• Store your repo as embeddings (semantic index)
• Retrieve only relevant functions instead of entire files
• Expand context via callers, callees, and related code edges
• One query returns code + relationships, not 10+ redundant searches
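The "code + relationships in one query" idea above can be sketched as a response shape. The field names here are illustrative assumptions, not the product's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class CodeHit:
    """One retrieved definition plus its graph neighborhood (illustrative shape)."""
    path: str                                          # file containing the definition
    symbol: str                                        # function/class name
    source: str                                        # the definition's source text
    callers: list[str] = field(default_factory=list)   # symbols that call it
    callees: list[str] = field(default_factory=list)   # symbols it calls

# A single query returns the code *and* its relationships in one pass,
# instead of the agent issuing follow-up searches for each caller/callee:
hit = CodeHit(
    path="auth/session.py",
    symbol="refresh_token",
    source="def refresh_token(session): ...",
    callers=["login", "middleware_check"],
    callees=["decode_jwt"],
)
```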
Backed by research
Prompt compression: LLMLingua (Jiang et al., EMNLP 2023)
Graph retrieval: Graph RAG for query-focused summarization (Edge et al., 2024)
How it works
1
Embed the repo
Index code in Chroma so retrieval is semantic, not file-based.
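The real index is Chroma-backed with learned embeddings; as a dependency-free stand-in, a hashed bag-of-words vector shows the shape of this step — each function, not each file, gets its own vector (the symbol names and tiny dimension are illustrative):

```python
import math
import re
from collections import Counter

DIM = 64  # toy dimension; learned code embeddings are far larger

def embed(text: str) -> list[float]:
    """Hashed bag-of-words vector -- a stand-in for a learned code embedding."""
    vec = [0.0] * DIM
    for token, count in Counter(re.findall(r"\w+", text.lower())).items():
        vec[hash(token) % DIM] += count
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]          # unit-normalize for cosine search

# Index each function/class individually, not whole files:
index = {
    "parse_config": embed("def parse_config(path): read yaml config file"),
    "send_email":   embed("def send_email(to, body): smtp send message"),
}
```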
2
Retrieve the core
Pull only the functions/classes that answer the question.
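Retrieval at this step is plain top-k nearest-neighbor search over the embedded symbols. A minimal sketch, assuming unit-length vectors and toy 2-D embeddings in place of real ones:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k symbols whose vectors best match the query."""
    ranked = sorted(index, key=lambda sym: cosine(query_vec, index[sym]), reverse=True)
    return ranked[:k]

# Toy 2-D vectors standing in for real embeddings:
index = {
    "parse_config": [1.0, 0.0],
    "send_email":   [0.0, 1.0],
    "load_yaml":    [0.9, 0.1],
}
print(retrieve([1.0, 0.2], index, k=2))  # → ['load_yaml', 'parse_config']
```

Only the config-related symbols come back; `send_email` never enters the context window.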
3
Expand the edges
Attach callers/callees/helpers so the agent has *enough* context, not *all* context.
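Edge expansion is a short walk over the call graph from the retrieved seeds. A minimal sketch with a hypothetical repo's caller→callee edges; unrelated code (`send_email`) stays out, which is the "enough, not all" property:

```python
from collections import defaultdict

# Call-graph edges, caller -> callee (hypothetical repo):
CALLS = [
    ("login", "refresh_token"),
    ("refresh_token", "decode_jwt"),
    ("middleware_check", "refresh_token"),
    ("send_email", "smtp_connect"),
]

def expand(seeds: set[str], hops: int = 1) -> set[str]:
    """Grow the retrieved set by following caller/callee edges `hops` times."""
    callees: defaultdict[str, set[str]] = defaultdict(set)
    callers: defaultdict[str, set[str]] = defaultdict(set)
    for src, dst in CALLS:
        callees[src].add(dst)
        callers[dst].add(src)
    ctx = set(seeds)
    for _ in range(hops):
        ctx |= {n for s in list(ctx) for n in callees[s] | callers[s]}
    return ctx

print(sorted(expand({"refresh_token"})))
# → ['decode_jwt', 'login', 'middleware_check', 'refresh_token']
```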