Monorepos with multiple Python projects, each with its own .venv, are fully supported. typemux-cc maintains a backend pool that routes LSP requests to the correct backend based on file location.
The Problem
With Claude Code’s official pyright plugin:
Monorepo has 3 projects: project-a/, project-b/, project-c/
Each project has its own .venv with different dependencies
Opening project-a/main.py → pyright uses project-a/.venv
Opening project-b/main.py → pyright still uses project-a/.venv
Type checking fails because project-b imports aren’t available
Must restart Claude Code to switch to project-b/.venv
With typemux-cc, switching is automatic and instant.
Monorepo Structure Example
From the README:
my-monorepo/
├── project-a/
│   ├── .venv/          # project-a specific virtual environment
│   └── src/main.py
├── project-b/
│   ├── .venv/          # project-b specific virtual environment
│   └── src/main.py
└── project-c/
    ├── .venv/          # project-c specific virtual environment
    └── src/main.py
Each project:
Has independent dependencies (different package versions)
Has its own .venv/pyvenv.cfg
Gets its own backend process in the pool
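Per-file .venv discovery can be pictured as a walk up the directory tree from the opened file, stopping at the first `.venv` that contains a `pyvenv.cfg`. The sketch below is illustrative only: `find_venv_for` and its stop-directory parameter are assumptions, not the real typemux-cc API.

```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch: walk up from `file` toward `stop` and return the
// first `.venv` directory that contains a `pyvenv.cfg`.
fn find_venv_for(file: &Path, stop: &Path) -> Option<PathBuf> {
    let mut dir = file.parent()?;
    loop {
        let candidate = dir.join(".venv");
        if candidate.join("pyvenv.cfg").is_file() {
            return Some(candidate);
        }
        if dir == stop {
            return None; // reached the monorepo root without finding a venv
        }
        dir = dir.parent()?;
    }
}

fn main() {
    // Demonstrate against a throwaway directory tree mirroring the layout above.
    let root = std::env::temp_dir().join("typemux_venv_demo");
    std::fs::create_dir_all(root.join("project-a/.venv")).unwrap();
    std::fs::write(root.join("project-a/.venv/pyvenv.cfg"), "").unwrap();
    let venv = find_venv_for(&root.join("project-a/src/main.py"), &root);
    assert_eq!(venv, Some(root.join("project-a/.venv")));
    println!("resolved venv: {:?}", venv);
}
```

Because `pyvenv.cfg` is the marker, a bare `.venv` directory without it would be skipped, which matches the checklist item above.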
Backend Pool Routing
Pool Architecture
typemux-cc maintains a pool of backend processes, one per .venv:
// From src/backend_pool.rs:105-113
pub struct BackendPool {
    backends: HashMap<PathBuf, BackendInstance>, // Key: venv path
    pub backend_msg_tx: mpsc::Sender<BackendMessage>,
    pub backend_msg_rx: mpsc::Receiver<BackendMessage>,
    max_backends: usize,
    backend_ttl: Option<Duration>,
    next_session: u64,
}
Each backend instance tracks:
// From src/backend_pool.rs:43-55
pub struct BackendInstance {
    pub writer: LspFrameWriter<ChildStdin>,
    pub child: Child,
    pub venv_path: PathBuf,      // Which .venv this backend uses
    pub session: u64,            // Unique session ID
    pub last_used: Instant,      // For LRU eviction
    pub reader_task: JoinHandle<()>,
    pub next_id: u64,
    pub warmup_state: WarmupState,
    pub warmup_deadline: Instant,
    pub warmup_queue: Vec<RpcMessage>,
}
Routing Logic
When Claude Code sends an LSP request:
Extract document URI
// From src/proxy/document.rs:10-15
pub(crate) fn extract_text_document_uri(msg: &RpcMessage) -> Option<url::Url> {
    let params = msg.params.as_ref()?;
    let text_document = params.get("textDocument")?;
    let uri_str = text_document.get("uri")?.as_str()?;
    url::Url::parse(uri_str).ok()
}
Look up cached venv for document
// From src/proxy/document.rs:17-23
pub(crate) fn venv_for_uri(&self, url: &url::Url) -> Option<PathBuf> {
    self.state
        .open_documents
        .get(url)
        .and_then(|doc| doc.venv.clone())
}
Get backend from pool
// From src/backend_pool.rs:128-136
pub fn get_mut(&mut self, venv_path: &PathBuf) -> Option<&mut BackendInstance> {
    self.backends.get_mut(venv_path)
}
Forward request to correct backend
If backend exists in pool, forward the request. If not, spawn a new backend (see “Operation Sequence” below).
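Taken together, the four steps reduce to a get-or-spawn lookup keyed by venv path. The following is a minimal in-memory model of that decision, not the real typemux-cc API: `Pool`, `route`, and the integer session stand in for `BackendPool`, process spawning, and `BackendInstance`.

```rust
use std::collections::HashMap;

// Simplified routing model: map a venv path to a backend session,
// creating a new session only on a pool miss.
struct Pool {
    backends: HashMap<String, u64>, // venv path -> session id
    next_session: u64,
}

impl Pool {
    fn new() -> Self {
        Pool { backends: HashMap::new(), next_session: 1 }
    }

    // Returns (session, was_spawned).
    fn route(&mut self, venv: &str) -> (u64, bool) {
        if let Some(&session) = self.backends.get(venv) {
            return (session, false); // pool hit: reuse the existing backend
        }
        let session = self.next_session;
        self.next_session += 1;
        self.backends.insert(venv.to_string(), session);
        (session, true) // pool miss: the real proxy spawns a process here
    }
}

fn main() {
    let mut pool = Pool::new();
    assert_eq!(pool.route("project-a/.venv"), (1, true));  // spawn session 1
    assert_eq!(pool.route("project-b/.venv"), (2, true));  // spawn session 2
    assert_eq!(pool.route("project-a/.venv"), (1, false)); // reuse session 1
}
```

The key property is that returning to a previously seen venv never spawns anything, which is what makes switching back instant.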
Operation Sequence
From the README:
| Claude Code Action | Proxy Behavior |
| --- | --- |
| 1. Session starts | Search for fallback .venv (start without venv if not found) |
| 2. Opens project-a/src/main.py | Detect project-a/.venv → spawn backend (session 1), add to pool |
| 3. Opens project-b/src/main.py | Detect project-b/.venv → spawn backend (session 2), add to pool |
| 4. Returns to project-a/src/main.py | project-a/.venv already in pool → route to session 1 (no restart) |
Detailed Flow
Session starts
typemux-cc searches for a fallback .venv at startup:

// From src/proxy/mod.rs:51-65
let fallback_venv = venv::find_fallback_venv(&cwd).await?;
let mut pending_initial_backend: Option<(LspBackend, PathBuf)> =
    if let Some(venv) = fallback_venv {
        tracing::info!(venv = %venv.display(),
            "Using fallback .venv, pre-spawning backend");
        let backend = LspBackend::spawn(
            self.state.backend_kind,
            Some(&venv),
        ).await?;
        Some((backend, venv))
    } else {
        tracing::warn!("No fallback .venv found, starting with empty pool");
        None
    };
If found, pre-spawns a backend. Otherwise starts with empty pool.
Open project-a/src/main.py
// From src/proxy/document.rs:65-119
pub(crate) async fn handle_did_open(...) {
    // Search for .venv
    let found_venv = venv::find_venv(
        &file_path,
        self.state.git_toplevel.as_deref(),
    ).await?;

    // Cache document
    let doc = crate::state::OpenDocument {
        venv: found_venv.clone(), // Cache: project-a/.venv
        // ...
    };
    self.state.open_documents.insert(url.clone(), doc);

    // Ensure backend in pool
    if !self.state.pool.contains(venv_path) {
        if self.state.pool.is_full() {
            self.evict_lru_backend(client_writer).await?;
        }
        self.create_backend_instance(venv_path, client_writer).await?;
    }
}
Result: Backend spawned with VIRTUAL_ENV=project-a/.venv, session=1 added to pool.
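Selecting the interpreter comes down to exporting VIRTUAL_ENV into the child's environment before exec. A minimal sketch using `std::process::Command` follows; the shell `echo` child is a stand-in for the real language-server process, and the exact spawn arguments are an assumption.

```rust
use std::process::Command;

fn main() {
    // Launch a child with VIRTUAL_ENV set, the same way a per-project
    // backend would be started (echo stands in for the real backend).
    let output = Command::new("sh")
        .arg("-c")
        .arg("echo \"$VIRTUAL_ENV\"")
        .env("VIRTUAL_ENV", "project-a/.venv")
        .output()
        .expect("failed to spawn child");
    let seen = String::from_utf8_lossy(&output.stdout);
    assert_eq!(seen.trim(), "project-a/.venv");
    println!("child saw VIRTUAL_ENV={}", seen.trim());
}
```

Tools that honor VIRTUAL_ENV (pyright among them) then resolve imports against that environment's site-packages.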
Open project-b/src/main.py
Same process, but:
.venv search finds project-b/.venv
Not in pool → spawn new backend
Backend with VIRTUAL_ENV=project-b/.venv, session=2 added to pool
Pool state now:

backends:
  project-a/.venv → session=1
  project-b/.venv → session=2
Return to project-a/src/main.py
// Lookup cached venv
let venv_path = self.venv_for_uri(&url); // Returns: project-a/.venv

// Get backend from pool
if let Some(inst) = self.state.pool.get_mut(&venv_path) {
    inst.last_used = Instant::now();        // Update LRU timestamp
    inst.writer.write_message(&msg).await?; // Forward to session 1
}
Result: Request routed to session 1 (already in pool), zero restart overhead.
What Actually Happens
From the README:
When Claude Code moves from project-a/main.py to project-b/main.py:
Proxy detects different .venv (project-a/.venv → project-b/.venv)
Checks the backend pool — project-b/.venv not found
Spawns new backend with VIRTUAL_ENV=project-b/.venv (session 2)
Session 1 (project-a) stays alive in the pool — no restart
Restores open documents under project-b/ to session 2
Clears diagnostics for documents outside project-b/
All LSP requests for project-b files now use project-b dependencies
When Claude Code returns to project-a/main.py later, session 1 is still in the pool — zero restart overhead.
Document Restoration
When a new backend spawns, typemux-cc restores already-open documents:
// From src/proxy/initialization.rs:179-266
pub(crate) async fn restore_documents_to_backend(
    &self,
    backend: &mut LspBackend,
    venv: &Path,
    session: u64,
    _client_writer: &mut LspFrameWriter<tokio::io::Stdout>,
) -> Result<(), ProxyError> {
    for (url, doc) in &self.state.open_documents {
        // Only restore documents matching this venv
        // (venv_parent is derived from venv earlier in the function; elided here)
        let should_restore = doc.venv.as_deref() == Some(venv)
            || match (url.to_file_path().ok(), &venv_parent) {
                (Some(file_path), Some(vp)) => file_path.starts_with(vp),
                _ => false,
            };
        if !should_restore {
            skipped += 1;
            continue;
        }

        // Resend didOpen with cached text
        let didopen_msg = RpcMessage {
            method: Some("textDocument/didOpen".to_string()),
            params: Some(serde_json::json!({
                "textDocument": {
                    "uri": url.to_string(),
                    "languageId": doc.language_id,
                    "version": doc.version,
                    "text": doc.text,
                }
            })),
            // ...
        };
        backend.send_message(&didopen_msg).await?;
    }
    // ...
}
Selective restoration: Only documents belonging to project-b/ are restored to the project-b/.venv backend; project-a/ documents are skipped.
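The matching rule above can be isolated as a pure predicate: restore when the document's cached venv matches the backend's venv, or when the file lives under the venv's parent project directory. A standalone sketch of that rule (`should_restore` is an illustrative name, not the real function signature):

```rust
use std::path::Path;

// A document is restored to a backend when its cached venv matches,
// or when the file path sits under the venv's parent project directory.
fn should_restore(doc_venv: Option<&Path>, file_path: &Path, venv: &Path) -> bool {
    if doc_venv == Some(venv) {
        return true;
    }
    match venv.parent() {
        Some(project_dir) => file_path.starts_with(project_dir),
        None => false,
    }
}

fn main() {
    let venv_b = Path::new("my-monorepo/project-b/.venv");
    // project-b documents are restored to the project-b backend...
    assert!(should_restore(None, Path::new("my-monorepo/project-b/src/main.py"), venv_b));
    // ...while project-a documents are skipped.
    assert!(!should_restore(None, Path::new("my-monorepo/project-a/src/main.py"), venv_b));
}
```

The path-prefix fallback covers documents that were opened before their venv was cached.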
Pool Management
Maximum Backends
Default: 8 concurrent backends
// From src/main.rs:26-29
#[arg(long, env = "TYPEMUX_CC_MAX_BACKENDS", default_value = "8",
      value_parser = clap::value_parser!(u64).range(1..))]
max_backends: u64,
Configurable via:
# In ~/.config/typemux-cc/config
export TYPEMUX_CC_MAX_BACKENDS=16
LRU Eviction
When the pool is full and a new backend is needed:
// From src/backend_pool.rs:156-174
pub fn lru_venv(&self, pending_count_fn: impl Fn(&PathBuf, u64) -> usize) -> Option<PathBuf> {
    // First try: find LRU among backends with 0 pending requests
    let no_pending_lru = self
        .backends
        .iter()
        .filter(|(venv, inst)| pending_count_fn(venv, inst.session) == 0)
        .min_by_key(|(_, inst)| inst.last_used)
        .map(|(venv, _)| venv.clone());
    if no_pending_lru.is_some() {
        return no_pending_lru;
    }

    // Fallback: LRU among all backends
    self.backends
        .iter()
        .min_by_key(|(_, inst)| inst.last_used)
        .map(|(venv, _)| venv.clone())
}
Strategy:
Prefer backends with no pending requests (safe to evict)
Among those, pick the least recently used (oldest last_used timestamp)
If all have pending requests, fall back to global LRU
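This two-phase choice can be modeled with plain tuples; the sketch below is an illustrative simplification of `lru_venv`, where `(venv, tick, pending)` stands in for a `BackendInstance` and a smaller tick means an older `last_used`.

```rust
// Prefer idle backends (no pending requests) for eviction;
// fall back to the globally least-recently-used backend otherwise.
fn lru_victim<'a>(backends: &[(&'a str, u64, usize)]) -> Option<&'a str> {
    let idle = backends
        .iter()
        .filter(|(_, _, pending)| *pending == 0)
        .min_by_key(|(_, tick, _)| *tick)
        .map(|(venv, _, _)| *venv);
    if idle.is_some() {
        return idle; // safe eviction: nothing in flight
    }
    backends
        .iter()
        .min_by_key(|(_, tick, _)| *tick)
        .map(|(venv, _, _)| *venv)
}

fn main() {
    // a is the oldest but busy; b is idle, so b is evicted first.
    let pool = [("a/.venv", 1, 3), ("b/.venv", 2, 0), ("c/.venv", 3, 0)];
    assert_eq!(lru_victim(&pool), Some("b/.venv"));
    // Everything busy: fall back to the globally oldest (a).
    let busy = [("a/.venv", 1, 1), ("b/.venv", 2, 2)];
    assert_eq!(lru_victim(&busy), Some("a/.venv"));
}
```

Preferring idle backends means eviction normally never drops an in-flight request; the global-LRU fallback only triggers when every backend is busy.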
TTL-Based Eviction
Default: 1800 seconds (30 minutes) of inactivity
// From src/main.rs:31-34
#[arg(long, env = "TYPEMUX_CC_BACKEND_TTL", default_value = "1800")]
backend_ttl: u64,
Backends idle longer than TTL are automatically evicted:
// From src/backend_pool.rs:204-216
pub fn expired_venvs(&self) -> Vec<PathBuf> {
    let ttl = match self.backend_ttl {
        Some(ttl) => ttl,
        None => return Vec::new(),
    };
    let now = Instant::now();
    self.backends
        .iter()
        .filter(|(_, inst)| now.duration_since(inst.last_used) >= ttl)
        .map(|(venv, _)| venv.clone())
        .collect()
}
Disable TTL eviction:
# In ~/.config/typemux-cc/config
export TYPEMUX_CC_BACKEND_TTL=0
Switching Between Projects
Switching is instant and automatic:

project-a/src/main.py:

import pandas as pd  # Uses project-a/.venv dependencies

def process_data():
    df = pd.DataFrame()  # Hover, completion work
    return df

project-b/src/main.py:

import numpy as np  # Uses project-b/.venv dependencies
What happens internally:
Open project-a/src/main.py → routes to session 1
Hover over pd.DataFrame → request forwarded to session 1
Open project-b/src/main.py → routes to session 2
Hover over np.array → request forwarded to session 2
Return to project-a/src/main.py → routes to session 1 (still in pool)
No visible delay. From the user's perspective: LSP just works.
Real Monorepo Example
Here’s a realistic monorepo structure:
my-company-monorepo/
├── services/
│   ├── api/
│   │   ├── .venv/            # FastAPI, SQLAlchemy
│   │   ├── pyproject.toml
│   │   └── src/main.py
│   ├── worker/
│   │   ├── .venv/            # Celery, Redis
│   │   ├── pyproject.toml
│   │   └── src/tasks.py
│   └── scheduler/
│       ├── .venv/            # APScheduler
│       ├── pyproject.toml
│       └── src/jobs.py
├── libs/
│   └── common/
│       ├── .venv/            # Shared utilities
│       ├── pyproject.toml
│       └── src/utils.py
└── scripts/
    ├── .venv/                # Admin scripts
    ├── requirements.txt
    └── deploy.py
Each project has:
Independent dependencies (e.g., api uses FastAPI, worker uses Celery)
Own .venv with different package versions
Own backend in the pool (up to 5 concurrent in this example)
Workflow:
Open api/src/main.py
Backend spawned: VIRTUAL_ENV=services/api/.venv (session 1)
Open worker/src/tasks.py
Backend spawned: VIRTUAL_ENV=services/worker/.venv (session 2)
Open libs/common/src/utils.py
Backend spawned: VIRTUAL_ENV=libs/common/.venv (session 3)
Return to api/src/main.py
Routes to session 1 (no spawn, instant)
Pool state:

backends:
  services/api/.venv → session=1
  services/worker/.venv → session=2
  libs/common/.venv → session=3
Configuration for Monorepos
Increase max_backends for large monorepos
If you have more than 8 projects:
# In ~/.config/typemux-cc/config
export TYPEMUX_CC_MAX_BACKENDS=16
Disable TTL for active development
To keep all backends alive indefinitely:
export TYPEMUX_CC_BACKEND_TTL=0
Enable detailed logging
export TYPEMUX_CC_LOG_FILE="/tmp/typemux-cc.log"
export RUST_LOG="typemux_cc=debug"
Monitor backend pool activity:
tail -f /tmp/typemux-cc.log | grep -E "session=|Creating new backend|Evicting"
Troubleshooting
Wrong dependencies being used
Symptoms:
Import errors for packages that exist in the project’s venv
Type checking fails with “module not found”
Causes:
Wrong venv cached: Document opened before the correct .venv existed
Shared venv: Multiple projects using the same .venv path
Fix:
# 1. Verify each project has its own .venv
find . -name "pyvenv.cfg" -exec dirname {} \;
# 2. Close and reopen files to refresh cache
Pool eviction too aggressive
Symptoms:
Backends being evicted while still needed
Frequent “Creating new backend” in logs
Causes:
TYPEMUX_CC_MAX_BACKENDS too low
TYPEMUX_CC_BACKEND_TTL too short
Fix:
# Increase pool size
export TYPEMUX_CC_MAX_BACKENDS=16
# Increase or disable TTL
export TYPEMUX_CC_BACKEND_TTL=3600  # 1 hour
# or
export TYPEMUX_CC_BACKEND_TTL=0     # Disable TTL
Memory usage high with many backends
Each backend process uses ~200-500MB. With 16 backends: ~3-8GB total.
Solutions:
Reduce max_backends:
export TYPEMUX_CC_MAX_BACKENDS=8
Enable TTL to evict idle backends:
export TYPEMUX_CC_BACKEND_TTL=1800  # 30 min
Close unused projects in Claude Code to reduce active backends
Summary
Monorepo checklist
✅ Each project has its own .venv/pyvenv.cfg
✅ Configure TYPEMUX_CC_MAX_BACKENDS to match your project count
✅ Open files from different projects freely
✅ typemux-cc automatically routes to correct backends
✅ No restarts, no manual switching
Best practice: For large monorepos (>8 projects), increase TYPEMUX_CC_MAX_BACKENDS and monitor memory usage.