TechCrunch Unveils Comprehensive AI Glossary

Artificial intelligence is a complex and ever-evolving field. Researchers working in it rely on a lot of specialized terminology, which can make it tough for the rest of us to keep up. That's why we've created this glossary: to help you understand the key terms and phrases we use in our coverage of the AI industry. We'll keep this list updated as new terms and concepts emerge, reflecting the latest advancements and potential risks in AI research.
AI agent
An AI agent is a tool that uses AI technologies to carry out a series of tasks on your behalf, going beyond what a simple AI chatbot can do. Think of things like filing your expenses, booking travel or restaurant reservations, or even managing and writing code. That said, the term "AI agent" can mean different things depending on who you ask, and the infrastructure needed to deliver on the concept's full potential is still being built. The core idea, though, is an autonomous system that can draw on multiple AI systems to carry out multi-step tasks.
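To make that idea a little more concrete, here is a minimal, hypothetical sketch of an agent loop in Python. The planner, tool names, and canned decisions are all stand-ins for illustration, not any vendor's actual agent framework.

```python
# A minimal, illustrative agent loop: a planner picks a tool, observes the
# result, and repeats until it decides the task is done. Everything here is
# a made-up placeholder, not a real API.

TOOLS = {
    "search_flights": lambda query: f"(flight options for {query})",
    "book_restaurant": lambda query: f"(table booked: {query})",
}

def plan_next_step(history):
    """Stand-in for an LLM call that decides what to do next.

    Here it just follows a canned script; a real agent would send `history`
    to a language model and parse its structured reply.
    """
    if len(history) == 1:
        return {"tool": "search_flights", "input": "SFO to NYC, Friday", "done": False}
    return {"done": True, "answer": "Found a flight from SFO to NYC on Friday."}

def run_agent(goal, max_steps=5):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = plan_next_step(history)
        if decision.get("done"):
            return decision["answer"]
        observation = TOOLS[decision["tool"]](decision["input"])
        history.append(f"Used {decision['tool']} -> {observation}")
    return "Stopped after reaching the step limit."

print(run_agent("Plan my trip to New York"))
```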
Chain of thought
When faced with a straightforward question like "which animal is taller, a giraffe or a cat?", your brain can usually answer it without much effort. But for trickier problems—like figuring out how many chickens and cows a farmer has when you know they have 40 heads and 120 legs—you might need to jot down some equations (in this case, 20 chickens and 20 cows).
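If you want to see those intermediate steps written out, here is the same arithmetic as a short Python sketch (each chicken contributes one head and two legs, each cow one head and four legs):

```python
# heads: chickens + cows = 40
# legs:  2*chickens + 4*cows = 120
heads, legs = 40, 120
cows = (legs - 2 * heads) // 2   # remove 2 legs per head, then each cow accounts for 2 extra legs
chickens = heads - cows
print(chickens, cows)  # 20 20
```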
In AI, chain-of-thought reasoning involves breaking down a problem into smaller, intermediate steps to enhance the accuracy of the final result. It might take longer, but it's especially useful in logical or coding scenarios. These reasoning models are built on traditional large language models and refined for chain-of-thought processes through reinforcement learning.
(See: Large language model)
Deep learning
Deep learning is a type of self-improving machine learning built on a multi-layered artificial neural network (ANN) structure. This structure lets models capture far more complex correlations than simpler machine learning methods such as linear models or decision trees. The design of deep learning algorithms takes inspiration from the interconnected pathways of neurons in the human brain.
Deep learning models can identify key features in data on their own, without needing humans to define them. They also learn from their mistakes, improving over time through repetition and adjustments. However, they require a lot of data—millions of points—to perform well, and they take longer to train than simpler models, which can drive up development costs.
(See: Neural network)
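To give a rough sense of what "multi-layered" means in practice, here is a toy forward pass through a two-layer network in Python using NumPy. The weights are random placeholders; a real deep learning model has many more layers and weights learned from data.

```python
import numpy as np

# A toy two-layer network: input -> hidden layer -> output.
rng = np.random.default_rng(0)
x = rng.normal(size=(4,))        # 4 input features
W1 = rng.normal(size=(4, 8))     # first layer weights (random, for illustration only)
W2 = rng.normal(size=(8, 1))     # second layer weights

hidden = np.maximum(0, x @ W1)   # ReLU non-linearity between the layers
output = hidden @ W2             # final prediction
print(output)
```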
Fine-tuning
Fine-tuning involves further training an AI model to boost its performance for a specific task or area, using new, specialized data. Many AI startups start with large language models and then fine-tune them to better serve a particular industry or task, using their own domain-specific knowledge.
(See: Large language model [LLM])
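Real LLM fine-tuning relies on specialized training frameworks, but the underlying idea (take weights learned from broad data and keep training them on your own, narrower data) can be illustrated with a toy linear model. Everything below, from the "pretrained" weights to the domain data, is invented for illustration.

```python
import numpy as np

# Toy illustration of fine-tuning: start from "pretrained" weights learned on
# general data, then continue training on a small domain-specific dataset.
rng = np.random.default_rng(1)

pretrained_w = np.array([0.5, 0.5])                  # made-up weights from broad pretraining
X_domain = rng.normal(size=(50, 2))                  # new, specialized data
y_domain = X_domain @ np.array([0.9, 0.1]) + 0.05 * rng.normal(size=50)

w = pretrained_w.copy()
lr = 0.05
for _ in range(200):                                 # a short extra round of training
    grad = 2 * X_domain.T @ (X_domain @ w - y_domain) / len(y_domain)
    w -= lr * grad

print("before fine-tuning:", pretrained_w, "after:", w.round(2))
```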
Large language model (LLM)
Large language models, or LLMs, power popular AI assistants like ChatGPT, Claude, Google's Gemini, Meta AI's Llama, Microsoft Copilot, and Mistral's Le Chat. When you chat with one of these assistants, you're interacting with an LLM that processes your request, often with the help of tools like web browsing or code interpreters.
AI assistants and LLMs can go by different names. For example, GPT is OpenAI's large language model, while ChatGPT is the AI assistant product built on it.
LLMs are deep neural networks with billions of parameters (or weights) that learn how words and phrases relate to each other, creating a multidimensional map of language. They're trained on vast amounts of text from books, articles, and transcripts. When you give an LLM a prompt, it generates a response based on the most likely pattern, predicting the next word based on what came before.
(See: Neural network)
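A real LLM has billions of learned parameters, but the core trick of predicting the next word from what came before can be shown with a toy word-counting model in a few lines of Python. The tiny "corpus" here is obviously made up.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then pick the most likely continuation. Real LLMs do this with billions of
# learned parameters over subword tokens, not raw word counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' is the most frequent word after 'the' in this corpus
```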
Neural network
A neural network is the multi-layered algorithmic structure that underpins deep learning and, with it, the recent boom in generative AI tools built on large language models.
The concept of mimicking the human brain's interconnected pathways for data processing dates back to the 1940s. However, it was the rise of graphics processing units (GPUs), driven by the video game industry, that really brought this idea to life. These chips allowed for training algorithms with many more layers, significantly improving performance in areas like voice recognition, autonomous navigation, and drug discovery.
(See: Large language model [LLM])
Weights
Weights are crucial in AI training because they determine how much importance (or weight) is given to different features in the training data, shaping the AI model's output. In other words, weights are numerical parameters that define how salient each feature in a dataset is for the task at hand. They work by multiplying input values: the larger a weight, the more that feature influences the result.
Training typically starts with randomly assigned weights, which then adjust over time as the model tries to produce outputs that more closely match the target. For instance, an AI model predicting housing prices might assign weights to factors like the number of bedrooms and bathrooms, whether a property is detached or semi-detached, and whether it has parking or a garage. These weights reflect how much each factor influences a property's value, based on the data used.
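Here is a stripped-down version of that housing example in Python. The weights and base price are invented for illustration; in a trained model they would be learned from data rather than hand-picked.

```python
# Made-up weights for the housing example: each feature is multiplied by its
# weight and the results are summed to produce a price estimate.
weights = {
    "bedrooms": 30_000,     # each bedroom adds roughly $30k (illustrative)
    "bathrooms": 15_000,
    "is_detached": 50_000,
    "has_garage": 20_000,
}
base_price = 100_000

house = {"bedrooms": 3, "bathrooms": 2, "is_detached": 1, "has_garage": 0}

price = base_price + sum(weights[f] * house[f] for f in weights)
print(price)  # 270000
```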
Comments (55)
CharlesYoung
August 21, 2025 at 11:01:16 AM EDT
This glossary is super useful for demystifying AI! I love the idea, but honestly, sometimes these terms make me feel like I'm learning an alien language. 😅 Are you planning to add concrete examples for each term?
WilliamRamirez
August 14, 2025 at 5:00:59 AM EDT
This AI glossary is a lifesaver! Finally, a way to decode all the tech jargon without feeling lost. 😄 Definitely bookmarking this for my next deep dive into AI articles.
JuanLopez
August 9, 2025 at 9:00:59 AM EDT
This AI glossary is a lifesaver! 🤓 Finally, I can decode all that tech jargon and actually understand what’s going on in AI articles.
DanielLewis
August 7, 2025 at 1:01:05 AM EDT
This AI glossary is a lifesaver! I was drowning in tech jargon trying to follow AI news. Now I can finally make sense of terms like 'neural network' without googling every five seconds. 🙌
KennethLee
August 1, 2025 at 4:25:35 AM EDT
This AI glossary is a lifesaver! 🤓 Finally, I can decode all the jargon scientists throw around. Super handy for keeping up with TechCrunch's coverage.
RogerJackson
April 25, 2025 at 11:32:14 AM EDT
This glossary really helps! I was always confused by AI terminology, but now I can finally understand it. It feels like a decoder ring for tech articles. I wish there were more examples, but it's still very useful! 🤓