Will It Fit?
Estimate single-GPU text inference VRAM across Transformers and vLLM with a compact, explainable breakdown.
OpenReasoning Nemotron 1.5B · vLLM · Official BF16 checkpoint
Required GPU VRAM (0.9 budget)
5.2 GB
Core estimate: 4.7 GB. Against an RTX 4090 (24 GB), the 5.2 GB requirement leaves 20.6 GB of headroom.
At a 4,096-token context, the estimated maximum is 153 concurrent requests.
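As a sketch of how these headline numbers can be reproduced — assuming a calibrated parameter count of about 1.54B (the card rounds it to 1.5B) and the 24 GB card size interpreted as GiB in raw bytes; neither assumption is confirmed by the calculator itself:

```python
GB = 1e9
weights = 1.54e9 * 2.0                      # bytes; ~1.54B params at BF16 (assumed count)
kv_per_req = 4096 * 28 * 2 * 2 * 128 * 2    # KV cache bytes per request at 4,096 context
budget = 0.9 * (24 * 2**30)                 # 0.9 of a 24 GiB card, in raw bytes

def fits(n):
    # Tensors plus the max(1.5 GB, 10% of tensors) runtime overhead must fit the budget.
    tensors = weights + n * kv_per_req
    return tensors + max(1.5 * GB, 0.10 * tensors) <= budget

n = 0
while fits(n + 1):
    n += 1
print(n)  # 153
```

Note the overhead term grows with the KV cache, which is why the answer is below a naive "free bytes ÷ KV per request" division.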
Quick read
Selected GPU: RTX 4090 (24 GB)
Runtime: vLLM
KV cache dtype: BF16
Context length: 4,096
Current concurrency: 1
Max concurrency @ context: 153
Class: Consumer
Bandwidth: 1,008 GB/s
Nominal VRAM: 24 GB
Core estimate: 4.7 GB
Required GPU VRAM (0.9 budget): 5.2 GB
Headroom: 20.6 GB
Breakdown
Where the memory goes
Weights
1.5B resident parameters at 2.00 bytes each. Calibrated from the official checkpoint profile.
3.1 GB
KV cache
Concurrency 1, context 4,096, 28 KV-bearing layers, 2 KV heads, BF16 cache storage.
0.1 GB
Runtime / safety overhead
Conservative buffer for allocator fragmentation, kernels, and runtime scratch space.
1.5 GB
Weights = parameter count × bytes per parameter.
KV cache grows with context length, KV-bearing layers, concurrent requests, and the selected KV cache dtype.
Hybrid models (such as those with Qwen3.5-style linear-attention layers) also keep a static linear-attention state, and runtime presets can inflate the required card VRAM beyond the core estimate.
Show the substituted formulas
Weights
parameter count × bytes per weight
1.5B × 2.00 = 3.1 GB
NVIDIA ships OpenReasoning-Nemotron-1.5B in Hugging Face Transformers format, and v1 of this calculator models it as a standard dense Qwen2.5-derived checkpoint across the supported runtimes.
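The substitution checks out if the calibrated count is taken as roughly 1.54B parameters — an assumption here, since the card only displays the rounded "1.5B":

```python
params = 1.54e9          # assumed calibrated parameter count ("1.5B" on the card)
bytes_per_weight = 2.0   # BF16
weights_gb = params * bytes_per_weight / 1e9
print(round(weights_gb, 1))  # 3.1
```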
KV cache
batch × effective KV tokens across attention layers × 2 × KV heads × head dim × bytes per KV element
1 × 114,688 (4,096 tokens × 28 layers) × 2 × 2 × 128 × 2 = 0.1 GB
Longer context, deeper models, and larger batches all expand the cache linearly. BF16 controls the bytes per stored KV element.
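The substituted KV figure can be checked term by term; the 114,688 "effective KV tokens" is just 4,096 context tokens spread across 28 KV-bearing layers:

```python
batch = 1
ctx, layers = 4096, 28
kv_heads, head_dim, dtype_bytes = 2, 128, 2   # GQA with 2 KV heads, BF16 cache

effective_tokens = ctx * layers               # 114,688 token-layer slots
# ×2 accounts for storing both the K and the V tensor per token.
kv_bytes = batch * effective_tokens * 2 * kv_heads * head_dim * dtype_bytes
print(kv_bytes / 1e9)  # ≈ 0.117 GB, displayed as 0.1
```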
Overhead
max(1.5 GB, 10% of (weights + KV cache + linear state))
max(1.5 GB, 10% of 3.2 GB) = 1.5 GB
This leaves room for runtime buffers instead of claiming an unrealistically exact fit.
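Putting the three terms together reproduces the card's totals (values in decimal GB, carried at slightly higher precision than the rounded display):

```python
weights, kv = 3.08, 0.117                  # GB, from the two formulas above
overhead = max(1.5, 0.10 * (weights + kv)) # floor wins here: 10% of 3.2 GB < 1.5 GB
core = weights + kv + overhead             # core tensor-plus-buffer estimate
required = core / 0.9                      # apply the 0.9 VRAM budget
print(round(core, 1), round(required, 1))  # 4.7 5.2
```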
Tips
What to change next
Keep some spare VRAM on the RTX 4090 (24 GB) for runtime overhead instead of targeting a zero-margin fit.
Model
Selected model
OpenReasoning Nemotron 1.5B
1.5B dense • 32,768 context • 2 KV heads
Architecture: dense Qwen2.5-derived transformer
Total params: 1.5B
Active params: 1.5B (dense)
Layers: 28
Hidden size: 1,536
Attention heads: 12
KV heads: 2
KV-bearing layers: 28
Context length: 32,768
Modality: text
License: CC-BY-4.0
Assumptions
The calculator works in raw bytes, displays decimal GB, and keeps both the core tensor footprint and the runtime-adjusted card requirement explicit instead of pretending every engine uses the full card the same way.
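One consequence of working in raw bytes but displaying decimal GB: a nominal "24 GB" card, if its capacity is interpreted as 24 GiB (an assumption here), holds about 25.8 decimal GB — which is how the headroom figure can exceed 24 − 5.2:

```python
nominal_gib = 24
decimal_gb = nominal_gib * 2**30 / 1e9   # GiB → decimal GB
print(round(decimal_gb, 2))              # 25.77
print(round(decimal_gb - 5.2, 1))        # 20.6 GB headroom past the 5.2 GB requirement
```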