Will It Fit?
Estimate text inference VRAM across Transformers and vLLM with a compact, explainable breakdown.
Estimates are cross-checked against the official model pages and release notes.
Qwen 3.5 27B · vLLM · Official BF16 checkpoint
Required GPU VRAM (0.9 budget)
68.4 GB
Core estimate: 61.6 GB. Against an RTX 4090's nominal 24 GiB (≈ 25.8 GB), the 68.4 GB requirement leaves a 42.7 GB deficit.
At a 4,096-token context, the estimated maximum concurrency is 0: even a single request does not fit, since the weights alone exceed the usable VRAM.
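A minimal sketch of the headline math, assuming the tool converts the card's nominal 24 GiB to decimal gigabytes and divides the core estimate by the utilization budget; this is an assumed reading that reproduces the figures above, not the tool's actual code.

```python
# Headline math (assumed reading): GiB-to-GB conversion plus 0.9 budget.
GIB = 1024**3

core_estimate_gb = 61.6          # weights + KV cache + linear state + overhead
utilization = 0.9                # fraction of VRAM the runtime may claim
gpu_vram_gb = 24 * GIB / 1e9     # RTX 4090: 24 GiB ≈ 25.8 GB

required_gb = core_estimate_gb / utilization   # ≈ 68.4 GB
deficit_gb = required_gb - gpu_vram_gb         # ≈ 42.7 GB
print(f"required {required_gb:.1f} GB, deficit {deficit_gb:.1f} GB")
```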
Quick read
Selected GPU: RTX 4090 (24 GB)
Runtime: vLLM
Load dtype: BF16
GPU utilization: 0.9
GPU count: 1
KV cache dtype: BF16
Context length: 4,096
Current concurrency: 1
Max concurrency @ context: 0
Class: Consumer
Bandwidth: ~1,008 GB/s
Nominal VRAM: 24 GB
Core estimate: 61.6 GB
Required GPU VRAM (0.9 budget): 68.4 GB
Deficit: 42.7 GB
Breakdown
Where the memory goes
Weights
27B resident parameters at 2.06 bytes each. Calibrated from the official checkpoint profile.
55.6 GB
KV cache
Concurrency 1, context 4,096, 16 KV-bearing layers, 4 KV heads, BF16 cache storage.
0.3 GB
Linear attention state
Concurrency 1, 48 linear-attention layers, static recurrent state, and short-convolution buffers. This term stays flat as context grows.
0.2 GB
Runtime / safety overhead
Conservative buffer for allocator fragmentation, kernels, and runtime scratch space.
5.6 GB
Weights = parameter count × bytes per parameter.
KV cache grows with context length, KV-bearing layers, concurrent requests, and the selected KV cache dtype.
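To make that growth concrete, here is a short sketch of the per-request KV cost at a few context lengths, using the per-token bytes implied by the substitution below (16 KV-bearing layers, 4 KV heads, head dim 256, BF16); the loop values are illustrative.

```python
# Per-request KV cache vs. context length:
# kv_layers * 2 (K and V) * kv_heads * head_dim * bytes_per_element.
per_token_bytes = 16 * 2 * 4 * 256 * 2        # 65,536 bytes per token
for context in (4_096, 65_536, 262_144):
    print(f"{context:>7} tokens -> {context * per_token_bytes / 1e9:.2f} GB")
```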
Substituted formulas
Weights
parameter count × bytes per weight
27B × 2.06 = 55.6 GB
The official Qwen3.5-27B checkpoint totals about 55.56 GB on Hugging Face, and Qwen documents Transformers, vLLM, and related serving stacks for the release.
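The weights term as a one-line sketch; the 2.06 bytes/parameter is the calibrated average quoted above, not a standard dtype width.

```python
# Weights term: parameter count × calibrated bytes per parameter.
params = 27e9
bytes_per_param = 2.06         # calibrated from the checkpoint profile
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.1f} GB")  # 55.6 GB
```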
KV cache
batch × effective KV tokens across attention layers × 2 × KV heads × head dim × bytes per KV element
1 × 65,536 × 2 × 4 × 256 × 2 = 0.3 GB
Only the attention-bearing layers contribute KV cache in this hybrid stack, and BF16 controls the bytes per stored KV element.
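The same substitution as straightforward Python, assuming only the attention-bearing layers allocate KV and BF16 gives 2 bytes per stored element.

```python
# KV cache: batch × context × KV-bearing layers × 2 (K and V)
# × KV heads × head dim × bytes per element.
batch, context = 1, 4_096
kv_layers, kv_heads, head_dim, elem_bytes = 16, 4, 256, 2

kv_tokens = batch * context * kv_layers              # 65,536
kv_gb = kv_tokens * 2 * kv_heads * head_dim * elem_bytes / 1e9
print(f"{kv_gb:.2f} GB")                             # 0.27 GB, shown as 0.3
```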
Linear state
batch × linear layers × (recurrent state + short-conv buffers) × state bytes
1 × 48 × 827,392 × 4.00 = 0.2 GB
Hybrid Qwen3.5 layers keep a static recurrent state plus q/k/v short-convolution buffers. The published configs keep that state in float32, so it is modeled separately from the BF16 weight dtype.
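And the linear-state term, assuming the per-layer element count above covers the recurrent state plus the q/k/v short-convolution buffers, stored in float32 per the published configs.

```python
# Linear-attention state: static per layer, independent of context length.
batch, linear_layers = 1, 48
elems_per_layer = 827_392      # recurrent state + short-conv buffers
state_bytes = 4                # float32

linear_gb = batch * linear_layers * elems_per_layer * state_bytes / 1e9
print(f"{linear_gb:.2f} GB")   # 0.16 GB, shown as 0.2
```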
Overhead
max(1.5 GB, 10% of (weights + KV cache + linear state))
max(1.5 GB, 10% of 56.0 GB) = 5.6 GB
This leaves room for runtime buffers instead of claiming an unrealistically exact fit.
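Putting the four terms together reproduces the headline numbers; the inputs here are the figures from the substitutions above, rounded to two decimals.

```python
# Core estimate and required VRAM from the four terms above.
weights_gb, kv_gb, linear_gb = 55.56, 0.27, 0.16

overhead_gb = max(1.5, 0.10 * (weights_gb + kv_gb + linear_gb))  # ≈ 5.6 GB
core_gb = weights_gb + kv_gb + linear_gb + overhead_gb           # ≈ 61.6 GB
required_gb = core_gb / 0.9                                      # ≈ 68.4 GB
print(f"core {core_gb:.1f} GB, required {required_gb:.1f} GB")
```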
Model
Selected model
Qwen 3.5 27B
27B dense • 262,144 context • 4 KV heads
Total params: 27B
Active params: 27B (dense)
Layers: 64 (16 attention + 48 linear)
Hidden size
Attention heads
KV heads: 4
KV-bearing layers: 16