Will It Fit?

Estimate single-GPU text inference VRAM across Transformers and vLLM with a compact, explainable breakdown.

2B dense • 262,144 context • 2 KV heads

Profile: Official BF16 checkpoint. Qwen documents Qwen3.5-2B in Hugging Face Transformers format with official Transformers and vLLM serving guidance.

Text-only estimate

In v1, this multimodal checkpoint is estimated for text requests only. The resident vision-encoder and projector weights are still counted, but memory for image and video tokens is excluded.

Quick mental model: Transformers is modeled as a fixed single-request baseline, while vLLM exposes serving context and concurrency. Runtime presets still change the required card VRAM, and an FP8 KV cache cuts the KV term roughly in half versus BF16.

Fits on selected GPU · Text-only estimate

Qwen 3.5 2B · vLLM · Official BF16 checkpoint

Required GPU VRAM (0.9 budget)

6.2 GB

Core estimate: 5.6 GB. Against the RTX 4090's 24 GB, this leaves 19.6 GB of headroom.

At a 4,096-token context, the estimated maximum concurrency is 240 requests.
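The arithmetic behind the headline numbers can be sketched in a few lines. This is a simplified reproduction under rounded assumptions (exactly 2.0B parameters, the per-request KV and linear-state terms shown in the breakdown below, a 0.9 memory budget); the calculator's internal inputs differ slightly from these rounded figures, so the concurrency result lands near, not exactly at, the reported 240.

```python
# Sketch of the core-estimate, required-VRAM, and max-concurrency arithmetic.
# Assumed rounded inputs: 2.0B params, BF16 weights and KV cache, float32 linear state.
GB = 1e9  # decimal gigabyte

weights = 2e9 * 2.0                       # params x bytes per weight = 4.0 GB
kv_per_req = 4096 * 6 * 2 * 2 * 256 * 2   # tokens x KV layers x (K,V) x KV heads x head_dim x bytes
linear_per_req = 18 * 286_720 * 4.0       # linear layers x state elements x float32 bytes
overhead = max(1.5 * GB, 0.10 * (weights + kv_per_req + linear_per_req))

core = weights + kv_per_req + linear_per_req + overhead
required = core / 0.9                     # 0.9 memory-utilization budget

budget = 0.9 * 24 * GB                    # usable share of a 24 GB card
max_concurrency = int((budget - weights - overhead) // (kv_per_req + linear_per_req))

print(round(core / GB, 1), round(required / GB, 1), max_concurrency)
```

With these rounded inputs the core estimate comes out at 5.6 GB and the required VRAM at 6.2 GB, matching the report; the concurrency lands in the low 200s rather than exactly 240.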

Quick read

Selected GPU: RTX 4090 24GB
Runtime: vLLM
KV cache dtype: BF16
Context length: 4,096
Current concurrency: 1
Max concurrency @ context: 240
Class: Consumer
Bandwidth: 1,008 GB/s
Nominal VRAM: 24 GB
Core estimate: 5.6 GB
Required GPU VRAM (0.9 budget): 6.2 GB
Headroom: 19.6 GB

Breakdown

Where the memory goes

Weights: 4.0 GB
2B resident parameters at 2.00 bytes each. Calibrated from the official checkpoint profile.

KV cache: 0.1 GB
Concurrency 1, context 4,096, 6 KV-bearing layers, 2 KV heads, BF16 cache storage.

Linear attention state: 0.0 GB
Concurrency 1, 18 linear-attention layers, static recurrent state, and short-convolution buffers. This term stays flat as context grows.

Runtime / safety overhead: 1.5 GB
Conservative buffer for allocator fragmentation, kernels, and runtime scratch space.

Weights = parameter count × bytes per parameter.

KV cache grows with context length, KV-bearing layers, concurrent requests, and the selected KV cache dtype.

Hybrid Qwen3.5 layers also keep a static linear-attention state, and runtime presets can inflate the required card VRAM beyond the core estimate.

Show the substituted formulas

Weights

parameter count × bytes per weight

2B × 2.00 = 4.0 GB

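As a one-line sketch, the weights term is a single multiplication over the rounded parameter count:

```python
params = 2e9            # rounded 2B parameter count
bytes_per_weight = 2.0  # BF16
weights_gb = params * bytes_per_weight / 1e9  # decimal GB
print(weights_gb)  # 4.0
```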

KV cache

batch × effective KV tokens across attention layers × 2 × KV heads × head dim × bytes per KV element

1 × 24,576 × 2 × 2 × 256 × 2 = 0.1 GB (effective KV tokens 24,576 = 4,096 context × 6 attention layers)

Only the attention-bearing layers contribute KV cache in this hybrid stack, and BF16 controls the bytes per stored KV element.
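A minimal sketch of the same substitution, also showing the FP8-halves-the-KV-term claim from the summary (FP8 stores one byte per element versus two for BF16):

```python
def kv_cache_bytes(batch, context, kv_layers, kv_heads, head_dim, bytes_per_elem):
    """KV cache = batch x (context x KV layers) x 2 (K and V) x KV heads x head_dim x bytes."""
    return batch * context * kv_layers * 2 * kv_heads * head_dim * bytes_per_elem

bf16 = kv_cache_bytes(1, 4096, 6, 2, 256, 2)  # 50,331,648 bytes ~ 0.05 GB
fp8  = kv_cache_bytes(1, 4096, 6, 2, 256, 1)  # exactly half the BF16 figure
print(bf16 / 1e9, fp8 / 1e9)
```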

Linear state

batch × linear layers × (recurrent state + short-conv buffers) × state bytes

1 × 18 × 286,720 × 4.00 = 0.0 GB

Hybrid Qwen3.5 layers keep a static recurrent state plus q/k/v short-convolution buffers. The published configs keep that state in float32, so it is modeled separately from the BF16 weight dtype.
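The same substitution as a sketch; the 286,720 elements-per-layer figure is taken from the formula above, and float32 storage is assumed per the published configs:

```python
batch = 1
linear_layers = 18
state_elems = 286_720  # recurrent state + q/k/v short-conv buffers per layer
state_bytes = 4.0      # float32, independent of the BF16 weight dtype
linear_state_gb = batch * linear_layers * state_elems * state_bytes / 1e9
print(round(linear_state_gb, 3))  # ~0.021 GB, displayed as 0.0 GB
```

Note the term has no context-length factor, which is why it stays flat as context grows.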

Overhead

max(1.5 GB, 10% of weights + KV cache + linear state)

max(1.5 GB, 10% of 4.1 GB) = max(1.5 GB, 0.41 GB) = 1.5 GB

This leaves room for runtime buffers instead of claiming an unrealistically exact fit.
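As a sketch, the overhead rule is a floor-or-fraction maximum; for a model this small the 1.5 GB floor dominates:

```python
# Rounded tensor terms from the breakdown above, in decimal GB.
weights_gb, kv_gb, linear_gb = 4.0, 0.05, 0.02
overhead_gb = max(1.5, 0.10 * (weights_gb + kv_gb + linear_gb))
print(overhead_gb)  # 1.5 -- the floor wins over the 10% fraction (~0.41 GB)
```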

Tips

What to change next

Keep some spare VRAM on the RTX 4090 24GB for runtime overhead instead of targeting a zero-margin fit.

Model

Selected model: Qwen 3.5 2B
2B dense • 262,144 context • 2 KV heads

About model

Architecture: Hybrid multimodal transformer
Total params: 2B
Active params: Dense model (all parameters active)
Layers: 24
Hidden size: 2,048
Attention heads: 8
KV heads: 2
KV-bearing layers: 6
Context length: 262,144
Modality: Multimodal, text-only estimate
License: Apache 2.0

Assumptions

The calculator works in raw bytes, displays decimal GB, and keeps both the core tensor footprint and the runtime-adjusted card requirement explicit instead of pretending every engine uses the full card the same way.
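The decimal-GB convention matters when comparing against a card's nominal VRAM, since many tools report binary gibibytes instead. A minimal illustration of the gap on a nominal 24 GB card:

```python
bytes_total = 24 * 10**9   # "24 GB" nominal, decimal
gb  = bytes_total / 1e9    # decimal gigabytes, as displayed here
gib = bytes_total / 2**30  # binary gibibytes, as some tools report
print(gb, round(gib, 2))   # 24.0 vs ~22.35
```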