Jack Ma - AI Leaders & Innovators | Profiles, Milestones & Projects - xix.ai
Jack Ma

Founder, Alibaba Group
Year of birth: 1964
Nationality: Chinese

Milestones

1999: Founded Alibaba Group

Established Alibaba, laying the groundwork for its later AI division

2018: Launched DAMO Academy

Initiated Alibaba's DAMO Academy for AI research

2023: Public release of Qwen

Backed the public release of the Qwen models under Alibaba Cloud

AI Products

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.

Like Qwen2, the Qwen2.5 language models support up to 128K tokens and can generate up to 8K tokens. They also maintain multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

Qwen1.5 is the beta version of Qwen2, retaining its decoder-only transformer architecture with SwiGLU activation, RoPE, and multi-head attention. It comes in nine model sizes and has improved multilingual and chat capabilities, supporting a context length of 32,768 tokens. All models accept system prompts (enabling role-play), and the models are natively supported in the Hugging Face transformers library.
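Since the entry above mentions system prompts for role-play, here is a minimal sketch of the ChatML-style message layout commonly associated with Qwen chat models. The hand-rolled formatter is purely illustrative; in practice, Hugging Face transformers' `tokenizer.apply_chat_template` builds the prompt for you.

```python
def build_chatml_prompt(messages):
    # Illustrative only: render chat messages in a ChatML-style layout
    # (<|im_start|>role ... <|im_end|>). Real code should prefer
    # tokenizer.apply_chat_template from the transformers library.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # cue the model to reply
    return "\n".join(parts)

prompt = build_chatml_prompt([
    # System prompt sets the role-play persona (hypothetical example)
    {"role": "system", "content": "You are a helpful pirate."},
    {"role": "user", "content": "Say hello."},
])
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the assistant turn open so the model continues from there.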

Qwen-Max is an API model produced by Alibaba. This entry covers version 0428.

Qwen2 is the new series of Qwen large language models.

Qwen2.5-Max is a large-scale MoE (Mixture-of-Experts) model trained on over 20 trillion tokens of pre-training data with a meticulously designed post-training scheme.

QwQ-32B-Preview is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities.

Personal profile

Founded the Alibaba Group, providing strategic vision for AI initiatives such as Qwen via Alibaba Cloud.
