
Qwen2-72B-Instruct

Model parameters: 72B
Affiliated organization: Alibaba
License type: Open source
Release date: June 6, 2024

Model Introduction
Qwen2 is the new series of Qwen large language models.
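The page itself gives no usage snippet. Below is a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub as Qwen/Qwen2-72B-Instruct, that the transformers library is installed, and that enough GPU memory is available for a 72B model; none of this is stated on the page.

```python
# Minimal inference sketch for Qwen2-72B-Instruct via Hugging Face transformers.
# The Hub ID "Qwen/Qwen2-72B-Instruct" is an assumption, not stated on this page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-72B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
# Render the chat into the single prompt string the model expects.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
# Drop the prompt tokens so only the newly generated reply is decoded.
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```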
Capability scores
Language comprehension ability: 4.5. Often makes semantic misjudgments, leading to obvious logical disconnects in responses.
Knowledge coverage scope: 8.4. Possesses core knowledge of mainstream disciplines, but has limited coverage of cutting-edge interdisciplinary fields.
Reasoning ability: 4.4. Unable to maintain coherent reasoning chains, often inverting cause and effect or making calculation errors.
Related models
Qwen3-235B-A22B-Instruct-2507 Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
Qwen3-235B-A22B-Thinking-2507 Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
Qwen2.5-7B-Instruct Like Qwen2, the Qwen2.5 language models support up to 128K tokens and can generate up to 8K tokens. They also maintain multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
Qwen3-32B (Thinking) Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
Qwen1.5-72B-Chat Qwen1.5 is the beta version of Qwen2, keeping the decoder-only transformer architecture with SwiGLU activation, RoPE, and multi-head attention. It comes in nine model sizes, improves multilingual and chat capabilities, and supports a context length of 32,768 tokens. All chat models accept system prompts for roleplay, and the models run natively in the Hugging Face transformers library (a minimal sketch follows below).
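The Qwen1.5 entry above mentions system prompts for roleplay and native transformers support. The sketch below shows how a system prompt is passed through the chat template; the Hub ID Qwen/Qwen1.5-72B-Chat is an assumption, since the page does not give one.

```python
# Sketch: rendering a roleplay system prompt with the Qwen1.5 chat template.
# The Hub ID "Qwen/Qwen1.5-72B-Chat" is assumed, not stated on this page.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-72B-Chat")

messages = [
    {"role": "system", "content": "You are a terse pirate captain. Stay in character."},
    {"role": "user", "content": "How do I read a compass?"},
]
# apply_chat_template renders the message list into the prompt format the
# model was trained on; add_generation_prompt appends the assistant header
# so generation continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```

Note that the 32,768-token context mentioned above bounds the prompt and the generated output together, so long roleplay histories leave correspondingly less room for new tokens.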
Relevant documents
AWS Launches Bedrock AgentCore: Open-Source Platform for Enterprise AI Agent Development Amazon Web Services (AWS) is betting big on AI agents transforming business operations, introducing Amazon Bedrock AgentCore, an enterprise-grade platform enab…
Akaluli AI Voice Recorder Enhances Productivity & Focus Efficiently In our hyper-connected work environments, maintaining focus during crucial conversations has become increasingly challenging. The Akaluli AI Voice Recorder presents an innovative solution to this modern dilemma by seamlessly capturing, transcribing…
Spotify increases Premium subscription costs in markets outside the US Spotify is implementing subscription price hikes across multiple international markets just days after reporting underwhelming financial performance. The streaming giant confirmed Monday that Premium users throughout Europe, South Asia, the Middle Ea…
Cairn RPG: Easy-to-Learn Tabletop System for New Players Want an exciting gateway into tabletop RPGs that won't overwhelm newcomers? Picture organizing an entire adventure with ten complete beginners in just fifteen minutes, starting from character creation to diving into gameplay with a fresh system…
Meta Unveils Llama 4: Pioneering Next-Gen Multimodal AI Capabilities Meta's Llama 4 represents a quantum leap in multimodal AI technology, introducing unprecedented capabilities that reshape what's possible in artificial intelligence. With its triad of specialized models, expanded context processing, and benchmark-def…