Junyang Lin

Researcher, Alibaba Cloud
Year of Birth: 1985
Nationality: Chinese

Important Milestones

2018: Joined Alibaba Cloud

Began AI research at Alibaba Cloud

2023: Qwen 1 Release

Contributed to the development of Qwen 1

2025: Qwen 3 Release

Led the development of Qwen 3, a leading large language model

AI Products

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.

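As a rough illustration, Qwen3 checkpoints can typically be run with Hugging Face Transformers; the checkpoint name and generation settings below are assumptions for the sketch, not details from this page:

```python
# A minimal sketch of running a Qwen3 chat model with Hugging Face
# Transformers. The checkpoint name "Qwen/Qwen3-8B" and the generation
# settings are illustrative assumptions, not details from this page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"  # hypothetical choice of checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Build a chat prompt using the model's own chat template.
messages = [{"role": "user", "content": "Explain mixture-of-experts models in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```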

Like Qwen2, the Qwen2.5 language models support up to 128K tokens and can generate up to 8K tokens. They also maintain multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

Qwen1.5 is the beta version of Qwen2. It keeps the decoder-only Transformer architecture with SwiGLU activation, RoPE, and multi-head attention, comes in nine model sizes, and improves multilingual and chat capabilities, supporting a context length of 32,768 tokens. All models enable system prompts for roleplay, and they are natively supported in the Hugging Face transformers library, as in the sketch below.

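A sketch of the system-prompt usage mentioned above, with a Qwen1.5 chat model in Hugging Face Transformers; the checkpoint name and both prompts are illustrative assumptions:

```python
# Sketch: a roleplay-style system prompt with a Qwen1.5 chat model.
# The checkpoint "Qwen/Qwen1.5-7B-Chat" and the prompt text are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen1.5-7B-Chat"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

messages = [
    # The system prompt sets a persona, the roleplay use case noted above.
    {"role": "system", "content": "You are a weathered sea captain telling stories of your voyages."},
    {"role": "user", "content": "Captain, what was your roughest crossing?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```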

Qwen-Max is an API-only model produced by Alibaba; the listing here refers to version 0428.
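
Since Qwen-Max is served over an API, a call might look like the following sketch; the base URL, model name, and environment variable are assumptions based on Alibaba Cloud's DashScope OpenAI-compatible mode and should be checked against current documentation:

```python
# Sketch of calling Qwen-Max through an OpenAI-compatible client.
# The base URL, model name, and DASHSCOPE_API_KEY variable are assumptions
# based on DashScope's compatible mode; verify against current docs.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed location of the key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen-max",  # a dated snapshot such as "qwen-max-0428" may also exist
    messages=[{"role": "user", "content": "Summarize the Qwen model family in one paragraph."}],
)
print(response.choices[0].message.content)
```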

Qwen2 is the new series of Qwen large language models.

Qwen2.5-Max is a large-scale mixture-of-experts (MoE) model pretrained on over 20 trillion tokens and refined with a meticulously designed post-training scheme.

QwQ-32B-Preview is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities.

Personal Profile

Leads research on the Qwen model family, advancing multimodal and open-source AI.
