Ana Rojo Echeburúa - AI Leaders & Innovators | Profiles, Milestones & Projects - xix.ai

Ana Rojo Echeburúa

AI and Data Specialist, Alibaba Cloud
Year of birth: Unknown
Nationality: Spanish

Milestones

2015: PhD in Applied Mathematics

Earned a PhD focused on data-driven AI solutions.

2020: Joined Alibaba Cloud

Began working on the development and deployment of AI models.

2024: Qwen-Agent framework

Contributed to the Qwen-Agent framework for LLM applications.
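
For context, Qwen-Agent is Alibaba's open-source framework for building tool-using LLM applications on top of Qwen models. Below is a minimal usage sketch based on the Assistant class from the Qwen-Agent README; the configuration keys, tool name, and model identifier are assumptions and may differ between versions.

    from qwen_agent.agents import Assistant

    # Backend configuration: 'dashscope' assumes an Alibaba Cloud API key is
    # available (e.g. via the DASHSCOPE_API_KEY environment variable).
    llm_cfg = {
        'model': 'qwen-max',          # assumed model identifier
        'model_server': 'dashscope',
    }

    # An agent with the built-in code-interpreter tool enabled.
    bot = Assistant(
        llm=llm_cfg,
        system_message='You are a helpful data-analysis assistant.',
        function_list=['code_interpreter'],
    )

    # bot.run() is a generator that yields the growing response message list.
    messages = [{'role': 'user', 'content': 'Plot y = x^2 for x from 0 to 10.'}]
    for response in bot.run(messages=messages):
        pass  # stream until the final response
    print(response)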

AI Products

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.

Like Qwen2, the Qwen2.5 language models support up to 128K tokens and can generate up to 8K tokens. They also maintain multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
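
In practice those two limits map onto separate request parameters. A minimal sketch of calling a Qwen2.5 model through Alibaba Cloud's OpenAI-compatible DashScope endpoint, capping output at the 8K-token generation limit; the model identifier is an assumption.

    from openai import OpenAI

    # DashScope exposes an OpenAI-compatible endpoint for Qwen models.
    client = OpenAI(
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
        api_key="YOUR_DASHSCOPE_API_KEY",  # placeholder
    )

    # max_tokens caps the completion at the 8K generation limit; the 128K
    # figure is the separate input-context budget.
    response = client.chat.completions.create(
        model="qwen2.5-72b-instruct",  # assumed model identifier
        messages=[{"role": "user", "content": "Summarize the following report: ..."}],
        max_tokens=8192,
    )
    print(response.choices[0].message.content)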

Qwen1.5 is the beta version of Qwen2, keeping the same decoder-only transformer architecture with SwiGLU activation, RoPE, and multi-head attention. It is available in nine model sizes, improves multilingual and chat capabilities, and supports a context length of 32,768 tokens. All models accept system prompts for roleplay, and they are natively supported in the transformers library.
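
Because the models are supported natively in Hugging Face transformers, the system prompts mentioned above go through the standard chat template. A minimal sketch, assuming the Qwen/Qwen1.5-7B-Chat checkpoint (one of the nine published sizes):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen1.5-7B-Chat"  # assumed checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    # The system prompt drives the roleplay behavior described above.
    messages = [
        {"role": "system", "content": "You are a pirate chatbot."},
        {"role": "user", "content": "Who are you?"},
    ]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)

    # Prompt plus completion must stay within the 32,768-token context window.
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:],
                           skip_special_tokens=True))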

Qwen-Max is an API model produced by Alibaba; this is version 0428.

Qwen2 is the new series of Qwen large language models.

Qwen 2.5 Max is a large-scale MoE (Mixture-of-Experts) model trained with over 20 trillion tokens of pre-training data and a meticulously designed post-training scheme.

QwQ-32B-Preview is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities.

Personal Profile

Contributes to the development of Qwen with expertise in applied mathematics and generative AI.
