| Qwen | |
|---|---|
| Screenshot | Example of a Qwen 3 answer describing Wikipedia, with the "Thinking" feature enabled |
| Developer | Alibaba Cloud |
| Initial release | April 2023 |
| Stable release | Qwen3-Max-Thinking / January 26, 2026; Qwen3-235B-A22B / July 25, 2025; Qwen3-Next-80B-A3B / September 11, 2025; Qwen3-Coder-Next (based on Qwen3-Next-80B-A3B) / February 2, 2026 |
| Written in | Python |
| Operating system | |
| Type | Large language model, chatbot |
| License | Apache-2.0; Qwen Research License; Qwen License |
| Website | chat.qwen.ai |
| Repository | github |
| Qwen (Tongyi Qianwen) | |
|---|---|
| Traditional Chinese | 通義千問 |
| Simplified Chinese | 通义千问 |
| Literal meaning | "to comprehend the meaning, [and to answer] a thousand kinds of questions" |
Qwen (also known as Tongyi Qianwen, Chinese: 通义千问; pinyin: Tōngyì Qiānwèn) is a family of large language models developed by Alibaba Cloud. Many Qwen variants are distributed as open-weight models under the Apache-2.0 license, while others are served through Alibaba Cloud.[1]
In July 2024, the South China Morning Post reported that the benchmarking platform SuperCLUE ranked Qwen2-72B-Instruct behind OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet, and ahead of other Chinese models.[2]

Alibaba launched a beta of Qwen in April 2023 under the name Tongyi Qianwen, then opened it for public use in September 2023 after regulatory clearance.[3][4]
The model's architecture was based on the Llama architecture developed by Meta AI.[5][6] Qwen 7B weights were released for download in August 2023, followed by the 72B and 1.8B models in December.[7][8] Alibaba's models are sometimes described as open source, but the training code has not been released and the training data has not been documented, so they do not meet the terms of either the Open Source AI Definition or the Model Openness Framework from the Linux Foundation.
In June 2024, Alibaba launched Qwen2, and in September it released some of its models with open weights, while keeping its most advanced models proprietary.[9][10] Qwen2 contains both dense and sparse models.[11]
In November 2024, QwQ-32B-Preview, a model focusing on reasoning similar to OpenAI's o1, was released under the Apache 2.0 License, although only the weights were released, not the dataset or training method.[12][13] QwQ has a 32K-token context length and performs better than o1 on some benchmarks.[14]
The Qwen-VL series is a line of visual language models that combines a vision transformer with an LLM.[5][15] Alibaba released Qwen2-VL with variants of 2 billion and 7 billion parameters.[16][17][18]
In January 2025, Qwen2.5-VL was released with variants of 3, 7, 32, and 72 billion parameters.[19] All models except the 72B variant are licensed under the Apache 2.0 license.[20] Qwen-VL-Max is Alibaba's flagship vision model as of 2024, and is sold by Alibaba Cloud at a cost of US$0.41 per million input tokens.[21]
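Per-token pricing like the rate quoted above scales linearly with input size. A small arithmetic sketch (the helper name and token counts are illustrative, not part of Alibaba Cloud's billing API):

```python
# Illustrative cost arithmetic at the quoted Qwen-VL-Max rate of
# US$0.41 per million input tokens (helper name is hypothetical).
RATE_PER_MILLION_TOKENS = 0.41  # USD

def input_cost_usd(num_tokens: int) -> float:
    """Cost of sending `num_tokens` input tokens at the quoted rate."""
    return num_tokens / 1_000_000 * RATE_PER_MILLION_TOKENS

print(input_cost_usd(2_500_000))  # 2.5M input tokens ≈ $1.025
```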
Alibaba has released several other model types such as Qwen-Audio and Qwen2-Math.[22] In total, it has released more than 100 open weight models, with its models having been downloaded more than 40 million times.[10] Fine-tuned versions of Qwen have been developed by enthusiasts, such as "Liberated Qwen", developed by San Francisco-based Abacus AI, which is a version that responds to any user request without content restrictions.[23]
On January 29, 2025, Alibaba launched Qwen2.5-Max.[24][25]
On March 24, 2025, Alibaba launched Qwen2.5-VL-32B-Instruct as a successor to the Qwen2.5-VL model. It was released under the Apache 2.0 license.[26][27]
On March 26, 2025, Qwen2.5-Omni-7B was released under the Apache 2.0 license and made available through chat.qwen.ai, as well as platforms like Hugging Face, GitHub, and ModelScope.[28] The Qwen2.5-Omni model accepts text, images, videos, and audio as input and can generate both text and audio as output, allowing it to be used for real-time voice chatting, similar to OpenAI's GPT-4o.[28]
On April 28, 2025, the Qwen3 model family was released,[29] with all models licensed under the Apache 2.0 license. The Qwen3 model family includes both dense (0.6B, 1.7B, 4B, 8B, 14B, and 32B parameters) and sparse models (30B with 3B activated parameters, 235B with 22B activated parameters). They were trained on 36 trillion tokens in 119 languages and dialects.[30]
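The "activated parameters" figures reflect the sparse design: only part of the model's weights participates in processing each token. A back-of-the-envelope sketch of that ratio, using the figures quoted above (the helper function is illustrative, not part of any Qwen API):

```python
# Back-of-the-envelope sketch (illustrative helper, not a Qwen API):
# a sparse model activates only a fraction of its weights per token,
# so "235B with 22B activated" implies roughly a 9.4% active share.
def active_share(activated_params: float, total_params: float) -> float:
    """Fraction of total parameters activated for a single token."""
    return activated_params / total_params

print(f"{active_share(22e9, 235e9):.1%}")  # larger sparse Qwen3 variant, ~9.4%
print(f"{active_share(3e9, 30e9):.1%}")    # smaller sparse Qwen3 variant, 10.0%
```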
On September 5, 2025, Alibaba launched Qwen3-Max.[citation needed]
On September 10, 2025, Qwen3-Next was released under the Apache 2.0 license and made available through chat.qwen.ai, as well as platforms like Hugging Face and ModelScope.[31][non-primary source needed]
On September 22, 2025, Qwen3-Omni was released under the Apache 2.0 license and made available through chat.qwen.ai, as well as platforms like Hugging Face and ModelScope. Qwen3-Omni is a multimodal model that can generate text, images, audio, and video.[32][non-primary source needed]
On January 27, 2026, Qwen3-Max-Thinking was released. The model can generate text, pictures, or video.[33]
| Version | Release date | Ref. |
|---|---|---|
| Tongyi Qianwen | September 2023 | [34] |
| Qwen-VL | August 2023 | [35] |
| Qwen2 | June 2024 | [10] |
| Qwen2-Audio | August 2024 | [36] |
| Qwen2-VL | December 2024 | [16] |
| Qwen2.5 | September 2024 | [37] |
| Qwen2.5-Coder | November 2024 | [38] |
| QvQ | December 2024 | [39] |
| Qwen2.5-VL | January 2025 | [40] |
| QwQ-32B | March 2025 | [41] |
| Qwen2.5-Omni | March 2025 | [28] |
| Qwen3 | April 2025 | [29] |
| Qwen3-Coder (Qwen3-Coder-480B-A35B); Qwen3-Coder-Flash (Qwen3-Coder-30B-A3B) | July 2025 | [42] |
| Qwen3-Max | September 2025 | [citation needed] |
| Qwen3-Next | September 2025 | [43] |
| Qwen3-Omni | September 2025 | [32] |
| Qwen3-VL | September 2025 | [44] |
| Qwen3-Coder-Next | February 2026 | [45] |