Qwen 3 1.7B

Small dense Qwen3 release for lightweight reasoning, agent, and multilingual assistant use with switchable thinking modes.

Overview and architecture

What it is

Company

Qwen

Family

Qwen

Release date

Apr 27, 2025

Architecture

Dense decoder-only transformer

License

Apache 2.0

Modality

Text

Context window

32,768 tokens

Total params

1.7B

Active params

1.7B (dense model; all parameters active)

Layers

28

Hidden size

2,048

Attention heads

16

KV heads

8

KV-bearing layers

28
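
To make the attention shape concrete, here is a minimal Python sketch (a back-of-envelope estimate, not FitMyGPU's actual formula) of how the GQA spec above translates into KV-cache memory, assuming an FP16/BF16 cache:

layers = 28         # KV-bearing layers
kv_heads = 8        # GQA key/value heads
head_dim = 128      # hidden size 2,048 / 16 attention heads
bytes_per_elem = 2  # FP16/BF16 cache

# K and V each store kv_heads * head_dim values per layer per token.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(f"{kv_bytes_per_token / 1024:.0f} KiB per token")               # 112 KiB

full_context = 32_768
print(f"{kv_bytes_per_token * full_context / 2**30:.1f} GiB at 32K")  # 3.5 GiB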

Research highlight

What improved

Thinking-mode switch

The same checkpoint can explicitly shift between reasoning-heavy and non-thinking behavior instead of forcing one fixed inference style (see the sketch after this list).

Reasoning and instruction uplift

Qwen presents the family as stronger than prior QwQ and Qwen2.5 instruct baselines across reasoning, coding, and instruction following.

Agent-ready small model

Even the smaller dense releases are positioned for tool use and multilingual assistant workflows rather than only basic chat.
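
To make the mode switch concrete, here is a minimal Python sketch following the pattern Qwen documents for Hugging Face transformers; the enable_thinking flag is the documented switch, while the message content is just a placeholder:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
messages = [{"role": "user", "content": "Explain KV caching in one paragraph."}]

# Same checkpoint, two inference styles: the chat template inserts or
# suppresses the <think> scaffolding depending on this flag.
thinking_prompt = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
fast_prompt = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)

Qwen also documents /think and /no_think tags inside user turns as a softer, per-message version of the same switch.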

Training and release context

How it was released

Family release

Qwen3 is released as a dense and MoE model family centered on switching between thinking and non-thinking modes within the same model.

Training stage

Qwen describes the release as a fully pretrained and post-trained model rather than a lightweight instruction-only adaptation.

Context packaging

The 1.7B model is published with 32K native context, and the larger dense variants explicitly extend to 131K with YaRN.
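
For the larger dense variants, Qwen documents the extension as a YaRN rope_scaling override. A minimal sketch in transformers, using Qwen3-8B as the larger-variant example (the 1.7B checkpoint itself stays at its 32K native window, so this is illustrative only):

from transformers import AutoConfig, AutoModelForCausalLM

# YaRN scaling block as documented for the larger dense Qwen3 variants.
cfg = AutoConfig.from_pretrained("Qwen/Qwen3-8B")
cfg.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,  # 32,768 * 4 ≈ 131K positions
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", config=cfg)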

Where it is strong

Thinking and non-thinking use

The 1.7B release is built to switch between deeper reasoning mode and faster general dialogue mode without changing models.

Agent workflows

Qwen positions the family for tool use and agent-style tasks in both thinking and non-thinking modes.

Multilingual assistant work

The family is published with support for 100+ languages and dialects, making it a broad multilingual assistant line rather than a narrow specialist release.

Memory behavior

What dominates VRAM

Resident weights are modest at 1.7B (roughly 3.2 GiB in BF16), so the model itself stays easy to fit; KV-cache growth with context length and the runtime reserve claim a proportionally larger share of VRAM than they do on the larger dense Qwen3 checkpoints, as the sketch below illustrates.
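
A hedged fit sketch in Python, combining BF16 weights, the per-token cache figure derived in the architecture section, and a flat 1 GiB runtime reserve (all round-number assumptions, not the calculator's real model):

def fit_estimate_gib(params_b=1.7, ctx=32_768, bytes_per_param=2,
                     kv_bytes_per_token=114_688, reserve_gib=1.0):
    # Rough single-request estimate: weights + KV cache + runtime reserve.
    weights = params_b * 1e9 * bytes_per_param / 2**30  # ~3.2 GiB in BF16
    kv = ctx * kv_bytes_per_token / 2**30               # ~3.5 GiB at 32K
    return weights + kv + reserve_gib

print(f"{fit_estimate_gib():.1f} GiB")  # ~7.7 GiB: at full context the cache rivals the weights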
