Marie-Anne Lachaux - Top AI Leaders & Innovators | Profiles, Milestones & Projects - xix.ai

Marie-Anne Lachaux

Research Scientist, Meta AI
Year of Birth  1990
Nationality  French

Important Milestones

2018 Joined Meta AI

Started NLP research at Meta

2023 LLaMA Paper

Co-authored LLaMA research paper

2024 LLaMA 3.1 Multilingual

Enhanced LLaMA 3.1’s support for eight languages

AI Products

The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality.
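The mixture-of-experts idea mentioned above can be illustrated with a minimal sketch: a gate scores each expert for an input token, the top-k experts are selected, and their outputs are combined with softmax weights. This is a toy illustration of the general MoE routing pattern, not Llama 4's actual implementation; all names and shapes here are made up for the example.

```python
import numpy as np

def topk_moe(x, gate_w, experts, k=2):
    """Route token vector x to the top-k experts by gate score,
    then combine their outputs weighted by a softmax over those scores."""
    scores = x @ gate_w                       # one gate score per expert
    top = np.argsort(scores)[-k:]             # indices of the k best experts
    w = np.exp(scores[top] - scores[top].max())
    w = w / w.sum()                           # softmax over the selected experts only
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

# Toy setup: 4 "experts", each a simple linear map (hypothetical dimensions)
rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))

x = rng.normal(size=d)
y = topk_moe(x, gate_w, experts)
print(y.shape)  # same dimensionality as the input token vector
```

Only k of the experts run per token, which is why MoE models can hold many more parameters than they activate on any single forward pass.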

The Llama 3.1 models are multilingual and have a significantly longer context length of 128K tokens, state-of-the-art tool use, and overall stronger reasoning capabilities.

Llama 3.1 405B is the first openly available model that rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation.

The Llama 3.2 3B models support context length of 128K tokens and are state-of-the-art in their class for on-device use cases like summarization, instruction following, and rewriting tasks running locally at the edge.

Llama 3 is Meta's open-source large language model, trained on a 15T-token corpus; it supports an 8K context length and has been optimized for effectiveness and safety.

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model; Mixtral-8x7B outperforms Llama 2 70B on most reported benchmarks.

Personal Profile

Focuses on LLaMA’s multilingual capabilities and dataset curation.
