FitMyGPU

Will It Fit?

Estimate text inference VRAM across Transformers and vLLM with a compact, explainable breakdown.

Model pages and release notes are included here too.

35B total • 3B active • 262,144 context • 2 KV heads

Text-only estimate

In v1, this multimodal checkpoint is estimated for text requests only. The resident vision and projector weights are still counted, but image and video token memory is excluded.

vLLM estimates use the selected GPU memory utilization; Transformers estimates use a fixed 4K-context, single-request baseline.

Does not fit on selected GPU · Text-only estimate

Qwen 3.5 35B A3B · vLLM · Official BF16 checkpoint

Required GPU VRAM (0.9 budget)

88.1 GB

Core estimate: 79.3 GB. Against an RTX 4090 24GB, this leaves a 62.3 GB deficit.

At 4,096 tokens of context, the estimated maximum concurrency is 0 requests.
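The concurrency ceiling can be approximated from the per-request state size. The page does not publish its exact rule, so the budget split below is an assumption: whatever remains of the utilization budget after resident weights and overhead is divided by the per-request KV cache plus linear state.

```python
import math

def max_concurrency(vram_gb, util, weights_gb, overhead_gb,
                    kv_per_req_gb, linear_per_req_gb):
    """Assumed rule: leftover budget after weights and overhead,
    divided by the per-request KV + linear-attention state."""
    budget = vram_gb * util
    free = budget - weights_gb - overhead_gb
    per_request = kv_per_req_gb + linear_per_req_gb
    return max(0, math.floor(free / per_request))

# RTX 4090 24GB at 0.9 utilization: the weights alone exceed the
# budget, so no request fits and concurrency is 0.
print(max_concurrency(24, 0.9, 71.9, 7.2, 0.08, 0.07))  # 0
```

This is a sketch of the shape of the calculation, not the tool's exact rounding; the per-GPU concurrency figures in the compare table may use a slightly different accounting.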

Quick read

Selected GPU

RTX 4090 24GB

Runtime

vLLM

Load dtype

FP16

GPU utilization

0.9

GPU count

1

KV cache dtype

BF16

Context length

4,096

Current concurrency

1

Max concurrency @ context

0

Class

Consumer

Bandwidth

1,008 GB/s

Nominal VRAM

24 GB

Core estimate

79.3 GB

Required GPU VRAM (0.9 budget)

88.1 GB

Deficit

62.3 GB

GPU compare

What fits this setup

GPU | Status | Headroom | Max conc.
GB200 NVL72 GPU 186GB | Fits | 111.6 GB | 607
B200 180GB | Fits | 105.2 GB | 572
H200 141GB | Fits | 63.3 GB | 344
RTX PRO 6000 96GB | Fits | 15.0 GB | 82
A100 80GB | OOM | -2.2 GB | 0
H100 80GB | OOM | -2.2 GB | 0
L40 48GB | OOM | -36.5 GB | 0
A100 40GB | OOM | -45.1 GB | 0
RTX 5090 32GB | OOM | -53.7 GB | 0
RTX 3090 24GB | OOM | -62.3 GB | 0
RTX 4090 24GB | OOM | -62.3 GB | 0

Breakdown

Where the memory goes

Weights

35B resident parameters at 2.05 bytes each. Calibrated from the official checkpoint profile.

71.9 GB

KV cache

Concurrency 1, context 4,096, 10 KV-bearing layers, 2 KV heads, BF16 cache storage.

0.1 GB

Linear attention state

Concurrency 1, 30 linear-attention layers, static recurrent state, and short-convolution buffers. This term stays flat as context grows.

0.1 GB

Runtime / safety overhead

Conservative buffer for allocator fragmentation, kernels, and runtime scratch space.

7.2 GB

Weights = parameter count × bytes per parameter.

KV cache grows with context length, KV-bearing layers, concurrent requests, and the selected KV cache dtype.

Show the substituted formulas

Weights

parameter count × bytes per weight

35B × 2.05 = 71.9 GB

The official Qwen3.5-35B-A3B checkpoint totals about 71.90 GB on Hugging Face, and Qwen documents Transformers, vLLM, and related serving stacks for the release.
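The 2.05 bytes-per-parameter figure follows from dividing the published checkpoint size by the total parameter count. A quick check of that calibration:

```python
# Calibrate effective bytes per parameter from the published
# checkpoint size (~71.90 GB on Hugging Face) and 35B total params.
checkpoint_gb = 71.90
total_params = 35e9

bytes_per_param = checkpoint_gb * 1e9 / total_params
print(round(bytes_per_param, 2))  # 2.05

# Weights term: parameter count × bytes per weight.
weights_gb = total_params * bytes_per_param / 1e9
print(round(weights_gb, 1))  # 71.9
```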

KV cache

batch × effective KV tokens across attention layers × 2 × KV heads × head dim × bytes per KV element

1 × 40,960 (4,096 tokens × 10 KV-bearing layers) × 2 × 2 × 256 × 2 = 0.1 GB

Only the attention-bearing layers contribute KV cache in this hybrid stack, and BF16 controls the bytes per stored KV element.
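The substituted KV-cache formula above translates directly into code; the 40,960 effective KV tokens are the 4,096-token context multiplied by the 10 KV-bearing layers:

```python
def kv_cache_gb(batch, context, kv_layers, kv_heads, head_dim, dtype_bytes):
    """KV cache: 2 tensors (K and V) per token per KV-bearing layer."""
    effective_kv_tokens = context * kv_layers  # 4,096 × 10 = 40,960
    total_bytes = batch * effective_kv_tokens * 2 * kv_heads * head_dim * dtype_bytes
    return total_bytes / 1e9

# 1 request, 4,096-token context, 10 KV layers, 2 KV heads,
# head dim 256, BF16 cache (2 bytes per element)
print(round(kv_cache_gb(1, 4096, 10, 2, 256, 2), 1))  # 0.1
```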

Linear state

batch × linear layers × (recurrent state + short-conv buffers) × state bytes

1 × 30 × 557,056 × 4.00 = 0.1 GB

Hybrid Qwen3.5 layers keep a static recurrent state plus q/k/v short-convolution buffers. The published configs keep that state in float32, so it is modeled separately from the BF16 weight dtype.
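The linear-state term is the same multiplication spelled out: per-layer state elements times float32 bytes, summed over the 30 linear-attention layers. The 557,056 elements per layer are taken as given from the substituted formula above.

```python
def linear_state_gb(batch, linear_layers, state_elems, state_bytes):
    """Static recurrent state + short-conv buffers; flat in context length."""
    return batch * linear_layers * state_elems * state_bytes / 1e9

# 30 linear-attention layers, 557,056 float32 (4-byte) state elements each
print(round(linear_state_gb(1, 30, 557_056, 4), 1))  # 0.1
```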

Overhead

max(1.5 GB, 10% of weights + KV cache + linear state)

max(1.5 GB, 10% of 72.1 GB) = 7.2 GB

This leaves room for runtime buffers instead of claiming an unrealistically exact fit.

Model

Selected model

Qwen 3.5 35B A3B

35B total • 3B active • 262,144 context • 2 KV heads

About model

Total params

35B

Active params

3B

Layers

40

Hidden size

2,048

Attention heads

16

KV heads

2

KV-bearing layers

10