Noam Brown - Top AI Leaders & Innovators | Profiles, Milestones & Projects - xix.ai

Noam Brown

Research Scientist, Meta AI
Year of Birth  1987
Nationality  American

Important milestones

2017 Joined Meta AI

Began AI research in Meta's AI division

2021 LLaMA Development

Contributed to LLaMA model development

2023 LLaMA 2 Release

Worked on LLaMA 2, enhancing open-source AI

AI products

The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality.
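The mixture-of-experts idea mentioned above can be illustrated with a minimal, self-contained sketch: a gating network scores a set of expert layers per token, and only the top-k experts are evaluated and mixed. All sizes and names here are toy values for illustration, not Meta's actual implementation.

```python
# Toy top-k mixture-of-experts (MoE) routing sketch.
# Hypothetical sizes; illustrates the general technique only.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # number of expert feed-forward layers
TOP_K = 2         # experts activated per token
D_MODEL = 16      # hidden size (toy value)

# Each "expert" is reduced to a single weight matrix for brevity.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.1  # gating network

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                             # (tokens, experts)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]  # indices of top-k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = topk[t]
        weights = np.exp(logits[t, sel])
        weights /= weights.sum()                    # softmax over selected experts
        for w, e in zip(weights, sel):
            out[t] += w * (x[t] @ experts[e])       # weighted mix of expert outputs
    return out

tokens = rng.standard_normal((4, D_MODEL))          # 4 toy token embeddings
y = moe_layer(tokens)
print(y.shape)                                      # (4, 16)
```

The point of the sparse routing is that each token only pays the compute cost of TOP_K experts rather than all NUM_EXPERTS, which is how MoE models keep inference cost far below their total parameter count.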

The Llama 3.1 models are multilingual, with a significantly longer context length of 128K tokens, state-of-the-art tool use, and overall stronger reasoning capabilities.

Llama 3.1 405B is the first openly available model that rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation.

The Llama 3.2 3B models support a context length of 128K tokens and are state-of-the-art in their class for on-device use cases such as summarization, instruction following, and rewriting tasks running locally at the edge.

Llama 3 is Meta's open-source large language model, trained on a 15T-token corpus; it supports an 8K context length and has been optimized for effectiveness and safety.

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative sparse mixture-of-experts model. It outperforms Llama 2 70B on most tested benchmarks.

Personal Profile

Contributed to LLaMA models at Meta AI, focusing on efficient language models.
