Will It Fit?

Estimate single-GPU text inference VRAM across Transformers and vLLM with a compact, explainable breakdown.

14.7B dense • 16,384 context • 10 KV heads

Official profile: BF16 checkpoint. Microsoft's phi-4 repository on Hugging Face is about 29.3 GB.

Quick mental model: Transformers assumes a fixed single-request baseline, while vLLM exposes serving context and concurrency. Runtime presets still change the required card VRAM, and an FP8 KV cache cuts the KV term roughly in half versus BF16.
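The FP8-halving claim follows directly from the per-element storage size. A minimal sketch, using the KV-head and head-dim values from this page's profile (the helper name is illustrative, not part of any runtime's API):

```python
# Bytes of KV cache per token per attention layer.
# The factor of 2 stores both the K and the V tensor.
def kv_bytes_per_token_layer(kv_heads, head_dim, bytes_per_elem):
    return 2 * kv_heads * head_dim * bytes_per_elem

bf16 = kv_bytes_per_token_layer(10, 128, 2)  # BF16: 2 bytes per element
fp8 = kv_bytes_per_token_layer(10, 128, 1)   # FP8: 1 byte per element
print(bf16, fp8)  # -> 5120 2560: FP8 halves the KV term
```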

Does not fit on selected GPU

Phi-4 14B · vLLM · Official BF16 checkpoint

Required GPU VRAM (0.9 budget)

36.8 GB

Core estimate: 33.2 GB. Against an RTX 4090 (24 GB), this leaves a deficit of 11.1 GB.

At a 4,096-token context, the estimated maximum is 0 concurrent requests.
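A hedged sketch of how a max-concurrency figure like this can be derived: count how many per-request KV caches fit into the budgeted VRAM after weights and overhead are resident. The function below is an assumption about the calculator's logic, not its actual implementation:

```python
GB = 1e9  # the calculator reports decimal GB

def max_concurrency(card_gb, budget, weights_gb, overhead_gb,
                    context, kv_layers, kv_heads, head_dim, kv_bytes):
    # VRAM left for KV caches once weights and overhead are placed.
    usable = card_gb * budget - weights_gb - overhead_gb
    # KV footprint of one request at the given context length.
    per_request = context * kv_layers * 2 * kv_heads * head_dim * kv_bytes / GB
    return max(0, int(usable // per_request))

# RTX 4090 (24 GB, 0.9 budget) with the BF16 Phi-4 profile on this page:
print(max_concurrency(24, 0.9, 29.3, 3.0, 4096, 40, 10, 128, 2))  # -> 0
```

The weights alone (29.3 GB) already exceed the 21.6 GB budgeted on a 24 GB card, so the result is clamped to zero before any KV cache is considered.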

Quick read

Selected GPU

RTX 4090 24GB

Runtime

vLLM

KV cache dtype

BF16

Context length

4,096

Current concurrency

1

Max concurrency @ context

0

Class

Consumer

Bandwidth

1,008 GB/s

Nominal VRAM

24 GB

Core estimate

33.2 GB

Required GPU VRAM (0.9 budget)

36.8 GB

Deficit

11.1 GB

Breakdown

Where the memory goes

Weights

14.7B resident parameters at 1.99 bytes each. Calibrated from the official checkpoint profile.

29.3 GB

KV cache

Concurrency 1, context 4,096, 40 KV-bearing layers, 10 KV heads, BF16 cache storage.

0.8 GB

Runtime / safety overhead

Conservative buffer for allocator fragmentation, kernels, and runtime scratch space.

3.0 GB

Weights = parameter count × bytes per parameter.

KV cache grows with context length, KV-bearing layers, concurrent requests, and the selected KV cache dtype.

Hybrid architectures such as Qwen3.5 also keep a static linear-attention state per layer, and runtime presets can inflate the required card VRAM beyond the core estimate.

Show the substituted formulas

Weights

parameter count × bytes per weight

14.7B × 1.99 = 29.3 GB

Calibration: Microsoft's phi-4 repository on Hugging Face is about 29.3 GB.
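The substitution above, written out in a few lines of Python (the 1.99 bytes/param figure is the page's calibration against the checkpoint size, not a dtype width):

```python
# Weights term: parameter count x bytes per weight, reported in decimal GB.
params = 14.7e9          # 14.7B resident parameters
bytes_per_weight = 1.99  # calibrated from the official BF16 checkpoint
weights_gb = params * bytes_per_weight / 1e9
print(round(weights_gb, 1))  # -> 29.3
```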

KV cache

batch × effective KV tokens across attention layers × 2 × KV heads × head dim × bytes per KV element

1 × 163,840 × 2 × 10 × 128 × 2 = 0.8 GB

Longer context, deeper models, and larger batches all expand the cache linearly. The selected KV cache dtype (here BF16) sets the bytes per stored element.
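The same substitution as a runnable check, with the effective-token count made explicit (context length times KV-bearing layers):

```python
# KV-cache term with the substituted values from the formula above.
batch, context, kv_layers = 1, 4096, 40
kv_heads, head_dim, bytes_per_elem = 10, 128, 2  # BF16 storage

effective_tokens = context * kv_layers  # 163,840 token-layer slots
kv_bytes = batch * effective_tokens * 2 * kv_heads * head_dim * bytes_per_elem
print(round(kv_bytes / 1e9, 1))  # -> 0.8 (decimal GB)
```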

Overhead

max(1.5 GB, 10% of (weights + KV cache + linear state))

max(1.5 GB, 10% of 30.1 GB) = 3.0 GB

This leaves room for runtime buffers instead of claiming an unrealistically exact fit.
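Putting the overhead term together with the 0.9 budget gives the headline numbers, sketched below from the rounded figures shown on this page (which is why the core lands at 33.1 GB here versus the page's 33.2 GB):

```python
# Overhead floor and the 0.9-budget requirement, mirroring the formulas above.
weights_gb, kv_gb = 29.3, 0.8
overhead_gb = max(1.5, 0.10 * (weights_gb + kv_gb))  # max(1.5, 10% of 30.1)
core_gb = weights_gb + kv_gb + overhead_gb           # ~33.1 with rounded inputs
required_gb = core_gb / 0.9  # only 90% of the card is treated as usable
print(round(overhead_gb, 1), round(required_gb, 1))  # -> 3.0 36.8
```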

Tips

What to change next

Try 4-bit weights to cut the resident model footprint before touching anything else.

Reduce context length if KV cache is the fastest-growing term in the estimate.

Model

Selected model

Phi-4 14B

14.7B dense • 16,384 context • 10 KV heads

About model

Architecture

Dense decoder-only transformer

Total params

14.7B

Active params

Dense model

Layers

40

Hidden size

5,120

Attention heads

40

KV heads

10

KV-bearing layers

40

Context length

16,384

Modality

Text

License

MIT

Assumptions

The calculator works in raw bytes and displays decimal GB. It reports both the core tensor footprint and the runtime-adjusted card requirement explicitly, rather than pretending every engine uses the full card the same way.
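One practical consequence of the decimal-GB convention: tools such as nvidia-smi report binary GiB, so the same byte count reads smaller there. A minimal conversion sketch (the byte count is the approximate checkpoint size from this page):

```python
raw_bytes = 29_300_000_000       # ~the BF16 checkpoint size in bytes
print(round(raw_bytes / 1e9, 1))    # -> 29.3 decimal GB (this calculator)
print(round(raw_bytes / 2**30, 1))  # -> 27.3 binary GiB (nvidia-smi style)
```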