FitMyGPU

Qwen3 14B

Dense Qwen3 release for higher-capacity reasoning, agent, and multilingual assistant workloads with switchable thinking modes.

Overview and architecture

What it is

Company

Qwen

Family

Qwen3

Release date

Apr 27, 2025

Architecture

Dense decoder-only transformer

License

Apache 2.0

Modality

Text

Context window

131,072 tokens

Total params

14.8B

Active params

14.8B (dense; all parameters active)

Layers

40

Hidden size

5,120

Attention heads

40

KV heads

8

KV-bearing layers

40
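
To make these numbers concrete, here is a minimal Python sketch that restates the spec above as plain constants and derives the implied per-head dimension and rough weight footprints at common precisions. The QWEN3_14B_SPEC name and dictionary layout are illustrative, not an official config schema.

```python
# Minimal sketch of the spec above as plain Python constants.
# The name QWEN3_14B_SPEC and this layout are illustrative, not an official schema.

QWEN3_14B_SPEC = {
    "total_params": 14.8e9,       # 14.8B total parameters (dense, all active)
    "layers": 40,
    "hidden_size": 5120,
    "attention_heads": 40,        # query heads
    "kv_heads": 8,                # grouped-query attention
    "kv_bearing_layers": 40,      # every layer carries a KV cache
    "native_context": 32_768,
    "extended_context": 131_072,  # with YaRN rope scaling
}

# Per-head dimension implied by the table: 5120 / 40 = 128.
head_dim = QWEN3_14B_SPEC["hidden_size"] // QWEN3_14B_SPEC["attention_heads"]
print(f"head_dim: {head_dim}")

# Rough resident-weight footprint at common precisions (weights only,
# no KV cache or runtime reserve).
for bits in (16, 8, 4):
    gb = QWEN3_14B_SPEC["total_params"] * bits / 8 / 1e9
    print(f"{bits:>2}-bit weights: ~{gb:.1f} GB")
```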

Research highlight

What improved

Thinking-mode switch

The checkpoint supports one-model switching between a deeper reasoning mode and faster general-purpose dialogue.
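
A hedged sketch of that switch in practice, following the usage pattern Qwen documents for Qwen3 with Hugging Face transformers: the chat template accepts an enable_thinking flag. The model ID and prompt below are illustrative, and the flag's behavior depends on the tokenizer's bundled chat template.

```python
# Sketch of the one-model mode switch via the chat template's enable_thinking flag,
# following Qwen's published Qwen3 usage example. Prompt text is illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B")
messages = [{"role": "user", "content": "Plan a three-step debugging strategy."}]

# enable_thinking=True -> deeper reasoning traces; False -> faster direct replies.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
print(prompt)
```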

Reasoning and alignment uplift

Qwen emphasizes stronger reasoning, instruction following, role-play, creative writing, and human preference alignment compared with earlier generations.

Extended context with YaRN

The 14B release follows the same context framing as the larger dense Qwen3 models: 32K native, extended to 131K with YaRN.
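
A sketch of turning on the extended window, assuming a locally downloaded checkpoint: Qwen documents enabling 131K by adding a YaRN rope_scaling block to config.json. The path below is illustrative, and the exact keys should be checked against the model card for your serving stack.

```python
# Sketch of enabling the extended 131K window via YaRN by patching config.json,
# following the rope_scaling shape Qwen documents for Qwen3.
# The checkpoint path is an assumption for illustration.
import json
from pathlib import Path

config_path = Path("Qwen3-14B/config.json")  # local checkpoint directory (assumed)
config = json.loads(config_path.read_text())

config["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": 4.0,                               # 32K native * 4 = 131K target window
    "original_max_position_embeddings": 32_768,
}
config["max_position_embeddings"] = 131_072

config_path.write_text(json.dumps(config, indent=2))
```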

Training and release context

How it was released

Family release

Qwen3 is released as a dense and MoE model family centered on switching between thinking and non-thinking modes within the same model.

Training stage

Qwen describes the release as a model with its own pretraining and post-training stages rather than a small instruction-only adaptation.

Context packaging

The 14B model is published with a 32K native context and, like the larger dense variants, extends to 131K with YaRN.

Where it is strong

Thinking and non-thinking use

The 14B release is built to switch between deeper reasoning mode and faster general dialogue mode without changing models.

Agent workflows

Qwen positions the family for tool use and agent-style tasks in both thinking and non-thinking modes.

Multilingual assistant work

The family is published with support for 100+ languages and dialects, making it a broad multilingual assistant line rather than a narrow specialist release.

Memory behavior

What dominates VRAM

At 14B, resident weights dominate the VRAM floor more clearly, while long-context serving still depends on the runtime reserve and on whether the extended YaRN window is used.
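
A back-of-envelope sketch of that split, using the architecture numbers above: resident weights in 16-bit versus the per-token KV cache at the native and extended windows. The figures are illustrative floors, before runtime reserve, activation buffers, or batching are added on top.

```python
# Rough VRAM floor for Qwen3-14B from the spec above: weights plus KV cache.
# Illustrative only; real servers add runtime reserve and framework overhead.

BYTES_FP16 = 2

total_params = 14.8e9
layers       = 40      # all 40 layers carry KV cache
kv_heads     = 8
head_dim     = 128     # 5120 hidden / 40 query heads

weights_gb = total_params * BYTES_FP16 / 1e9   # ~29.6 GB in BF16/FP16

def kv_cache_gb(context_tokens: int, batch: int = 1) -> float:
    # K and V per layer, per KV head, per token, in FP16.
    per_token = 2 * layers * kv_heads * head_dim * BYTES_FP16
    return per_token * context_tokens * batch / 1e9

print(f"weights:          ~{weights_gb:.1f} GB")
print(f"KV cache @ 32K:   ~{kv_cache_gb(32_768):.1f} GB")   # native window
print(f"KV cache @ 131K:  ~{kv_cache_gb(131_072):.1f} GB")  # extended YaRN window
```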

Sources

Where this page is grounded