FitMyGPU

Will It Fit?

Estimate text inference VRAM across Transformers and vLLM with a compact, explainable breakdown.

Model pages and release notes are included here too.

1.5B dense • 131,072 context • 2 KV heads

vLLM estimates use the selected GPU memory utilization; the Transformers estimate uses a fixed 4K-context, single-request baseline.

Fits on selected GPU

DeepSeek R1 Distill Qwen 1.5B · vLLM · Official BF16 checkpoint

Required GPU VRAM (0.9 budget)

3.8 GB

Core estimate: 3.4 GB. Against the RTX 4090 24GB, this leaves 22.0 GB of headroom.

At a 4,096-token context, the estimated maximum is 164 concurrent requests.
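As a sketch, the two headline numbers follow from the breakdown further down the page (core estimate 3.4 GB, weights 1.8 GB, overhead 1.5 GB, and roughly 0.117 GB of KV cache per 4,096-token request). The function names here are illustrative, and the tool's own rounding and unit conventions may differ slightly:

```python
def required_vram_gb(core_estimate_gb: float, gpu_utilization: float) -> float:
    # The core estimate must fit inside the utilization budget, so the
    # raw VRAM requirement is the estimate scaled up by 1 / utilization.
    return core_estimate_gb / gpu_utilization

def max_concurrency(vram_gb: float, gpu_utilization: float,
                    weights_gb: float, overhead_gb: float,
                    kv_per_request_gb: float) -> int:
    # Whatever is left of the usable budget after weights and overhead
    # is shared among per-request KV caches.
    usable_gb = vram_gb * gpu_utilization
    return int((usable_gb - weights_gb - overhead_gb) / kv_per_request_gb)

required = required_vram_gb(3.4, 0.9)                    # ~3.78, shown as 3.8 GB
concurrency = max_concurrency(24, 0.9, 1.8, 1.5, 0.117)  # same ballpark as the reported 164
```

The small gap between this sketch's concurrency and the reported 164 presumably comes from the estimator's internal rounding and GB-vs-GiB conventions, which are not spelled out on the page.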

Quick read

Selected GPU

RTX 4090 24GB

Runtime

vLLM

Load dtype

BF16

GPU utilization

0.9

GPU count

1

KV cache dtype

BF16

Context length

4,096

Current concurrency

1

Max concurrency @ context

164

Class

Consumer

Bandwidth

1,008 GB/s

Nominal VRAM

24 GB

Core estimate

3.4 GB

Required GPU VRAM (0.9 budget)

3.8 GB

Headroom

22.0 GB

GPU compare

What fits this setup

GPU                   | Status | Headroom | Max conc.
GB200 NVL72 GPU 186GB | Fits   | 195.9 GB | 1,376
B200 180GB            | Fits   | 189.5 GB | 1,331
H200 141GB            | Fits   | 147.6 GB | 1,039
RTX PRO 6000 96GB     | Fits   | 99.3 GB  | 702
A100 80GB             | Fits   | 82.1 GB  | 583
H100 80GB             | Fits   | 82.1 GB  | 583
L40 48GB              | Fits   | 47.8 GB  | 343
A100 40GB             | Fits   | 39.2 GB  | 284
RTX 5090 32GB         | Fits   | 30.6 GB  | 224
RTX 3090 24GB         | Fits   | 22.0 GB  | 164
RTX 4090 24GB         | Fits   | 22.0 GB  | 164
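The Status column reduces to a simple comparison: a GPU fits when the required VRAM (core estimate divided by the utilization budget) is no larger than its nominal capacity. A minimal sketch, with a few capacities copied from the table:

```python
# Nominal VRAM in GB for a few GPUs from the table above.
gpus = {"RTX 4090": 24, "RTX 5090": 32, "H100": 80}

core_estimate_gb = 3.4
gpu_utilization = 0.9
required_gb = core_estimate_gb / gpu_utilization  # ~3.78 GB

fits = {name: vram >= required_gb for name, vram in gpus.items()}
# A 1.5B BF16 model fits comfortably on every card listed.
```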

Breakdown

Where the memory goes

Weights

1.5B resident parameters at 1.18 bytes each. Calibrated from the official checkpoint profile.

1.8 GB

KV cache

Concurrency 1, context 4,096, 28 KV-bearing layers, 2 KV heads, BF16 cache storage.

0.1 GB

Runtime / safety overhead

Conservative buffer for allocator fragmentation, kernels, and runtime scratch space.

1.5 GB

Weights = parameter count × bytes per parameter.

KV cache grows with context length, KV-bearing layers, concurrent requests, and the selected KV cache dtype.

Show the substituted formulas

Weights

parameter count × bytes per weight

1.5B × 1.18 bytes ≈ 1.8 GB

The official DeepSeek-R1-Distill-Qwen-1.5B checkpoint totals about 1.78 GB on Hugging Face.
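Numerically, with the 1.18 bytes-per-parameter figure quoted above (a plain BF16 load would be 2 bytes per parameter, so this reflects the tool's calibration rather than the raw dtype):

```python
params = 1.5e9          # parameter count
bytes_per_param = 1.18  # calibrated bytes per resident parameter
weights_gb = params * bytes_per_param / 1e9  # decimal GB
# 1.77 GB, displayed as 1.8 GB
```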

KV cache

batch × effective KV tokens across attention layers × 2 × KV heads × head dim × bytes per KV element

1 × 114,688 × 2 × 2 × 128 × 2 bytes = 117,440,512 bytes ≈ 0.1 GB

Longer context, deeper models, and larger batches all expand the cache linearly. BF16 controls the bytes per stored KV element.
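Substituting the model shape from the About model panel (28 KV-bearing layers, 2 KV heads, head dim 128, BF16 cache), a minimal sketch of the same arithmetic:

```python
batch = 1
context_tokens = 4096
kv_layers = 28      # KV-bearing layers
kv_heads = 2
head_dim = 128
bytes_per_elem = 2  # BF16 KV cache

# Effective KV tokens: every KV-bearing layer stores the full context.
effective_kv_tokens = context_tokens * kv_layers  # 114,688

# The factor of 2 covers the separate K and V tensors.
kv_bytes = batch * effective_kv_tokens * 2 * kv_heads * head_dim * bytes_per_elem
kv_gb = kv_bytes / 1e9  # ~0.117 GB, displayed as 0.1 GB
```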

Overhead

max(1.5 GB, 10% of (weights + KV cache + linear state))

max(1.5 GB, 10% of 1.9 GB) = 1.5 GB

This leaves room for runtime buffers instead of claiming an unrealistically exact fit.
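Put together, the overhead floor and the resulting core estimate look like this (illustrative variable names; values from the breakdown above):

```python
weights_gb = 1.8
kv_gb = 0.1
overhead_floor_gb = 1.5

# Overhead is the fixed floor or 10% of live state, whichever is larger.
overhead_gb = max(overhead_floor_gb, 0.10 * (weights_gb + kv_gb))  # 1.5 GB

core_estimate_gb = weights_gb + kv_gb + overhead_gb  # 3.4 GB
required_gb = core_estimate_gb / 0.9                 # ~3.78, shown as 3.8 GB
```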

Model

Selected model

DeepSeek R1 Distill Qwen 1.5B

1.5B dense • 131,072 context • 2 KV heads

About model

Total params

1.5B

Active params

Dense model (all parameters active)

Layers

28

Hidden size

1,536

Attention heads

12

KV heads

2

KV-bearing layers

28