Red Hat on open, small language models for responsible, practical AI
April 24, 2025
Jason Roberts
Geopolitical Influences on AI Development and Usage
Global events inevitably shape the technology sector, and the AI market in particular: how AI is developed, the methodologies behind it, and how enterprises apply it. The current landscape is a mix of excitement and skepticism, with some embracing AI's potential and others remaining cautious about a technology still in its early stages.
The traditional, closed large language models (LLMs) from the well-known providers are facing competition from newer, more open models such as Llama, DeepSeek, and Baidu's Ernie X1. Open-source development, by contrast, offers transparency and the opportunity for community contribution, aligning with the idea of "responsible AI": an approach that weighs environmental impact, the ethics of usage, the provenance of training data, and issues of data sovereignty, language, and politics.
Red Hat, a company known for its successful open-source business model, is keen on applying its collaborative and transparent approach to AI. In a recent conversation with Julio Guijarro, Red Hat's CTO for EMEA, we discussed their strategy to harness the power of generative AI in a way that adds value to enterprises responsibly and sustainably.
The Need for Education and Transparency
Julio emphasized the need for more education around AI, pointing out that its complex nature often makes it a "black box" to many. "The inner workings of AI, deeply rooted in complex science and mathematics, remain largely unknown," he said. "This lack of transparency is exacerbated when AI is developed in closed environments."
Additional challenges include the under-representation of certain languages, data sovereignty concerns, and trust issues. Julio noted, "Data is an organization's most valuable asset, and businesses must be aware of the risks of exposing sensitive data to public platforms with varying privacy policies."
Red Hat's Approach to AI
Red Hat's response to the global demand for AI focuses on delivering benefits to end-users while addressing the doubts and limitations associated with traditional AI services. Julio highlighted the potential of small language models (SLMs) as a solution. These models can run locally or on hybrid clouds using non-specialist hardware and can access local business data. SLMs are efficient and task-specific, requiring fewer resources than LLMs.
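To make that concrete, here is a minimal sketch of the pattern Julio describes: a small open model running entirely on ordinary CPU hardware and answering from a snippet of local business data. The library (Hugging Face transformers) and the model name are illustrative assumptions, not a description of Red Hat's own stack.

```python
# A minimal sketch: a small language model answering a question grounded in
# local business data, running on standard CPU hardware.
# Assumes the Hugging Face `transformers` library and an illustrative small
# instruction-tuned model; neither is specific to Red Hat's tooling.
from transformers import pipeline

# Local business data that never leaves the machine.
local_context = (
    "Order #1042 shipped on 2025-04-18 via standard post. "
    "Returns are accepted within 30 days of delivery."
)

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # hypothetical choice of small model
    device=-1,  # -1 = CPU; no GPU required
)

prompt = (
    "Answer using only the context below.\n"
    f"Context: {local_context}\n"
    "Question: When does the return window for order #1042 close?\nAnswer:"
)

result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```

Because both the model and the context live on local infrastructure, nothing in this exchange has to leave the organization's environment.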
One of the key advantages of SLMs is their ability to stay current with rapidly changing business data. "Large language models can become obsolete quickly because the data generation happens outside the big clouds, right next to your business processes," Julio explained.
Cost is another critical factor. "Customer service queries using an LLM can incur significant hidden costs," Julio said. "Before AI, data queries had a predictable scope and cost. With LLMs, each interaction can escalate costs because they operate on an iterative model. Running models on-premise allows for greater control over costs, as they're tied to your infrastructure, not per-query fees."
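To see the shape of that argument in numbers, here is a back-of-the-envelope comparison between per-token API pricing and a flat on-premise budget. Every figure is an invented placeholder; real prices, token counts, and volumes vary widely.

```python
# Back-of-the-envelope cost comparison: hosted LLM API vs. on-premise model.
# Every number below is an illustrative assumption, not a quoted price.

queries_per_month = 500_000          # customer-service interactions
tokens_per_query = 1_500             # prompt + response, including retries
api_price_per_1k_tokens = 0.01       # hypothetical blended $/1K tokens

api_cost = queries_per_month * tokens_per_query / 1_000 * api_price_per_1k_tokens

# On-premise: cost is tied to infrastructure, not to query volume.
server_amortized_per_month = 1_200   # hypothetical hardware amortization
ops_and_power_per_month = 800        # hypothetical power, cooling, maintenance

onprem_cost = server_amortized_per_month + ops_and_power_per_month

print(f"Hosted API:  ${api_cost:,.0f}/month (scales with every query)")
print(f"On-premise:  ${onprem_cost:,.0f}/month (flat, regardless of volume)")
```

The specific totals matter less than the curves they trace: the hosted bill grows with every query, while the on-premise figure stays tied to the infrastructure.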
Fortunately, organizations don't need to invest heavily in specialized hardware such as GPUs. Red Hat is working on optimizing models to run on standard hardware, focusing on the specific models businesses need rather than processing large, general-purpose data sets with every query.
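One common route to standard hardware, offered here as a general technique rather than a description of Red Hat's optimization work, is to run a quantized build of a small model on CPU. The sketch below assumes the llama-cpp-python bindings and a placeholder path to a quantized (GGUF) model file.

```python
# A minimal sketch of CPU-only inference with a quantized model file.
# Assumes the llama-cpp-python bindings; the model path is a placeholder
# for whichever quantized (GGUF) small model an organization chooses.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model-q4.gguf",  # hypothetical quantized model
    n_ctx=2048,      # context window
    n_threads=8,     # ordinary CPU cores, no GPU required
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Summarize our returns policy in one sentence."},
    ],
    max_tokens=64,
)
print(output["choices"][0]["message"]["content"])
```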
The Importance of Smaller, Localized Models
By drawing on and referencing local data, outcomes can be tailored to specific needs. Julio pointed to projects in regions such as the Arab- and Portuguese-speaking worlds that are not well served by English-centric LLMs.
There are practical challenges with LLMs, including latency issues that can affect time-sensitive or customer-facing applications. Keeping resources and results close to the user can mitigate these problems.
Trust is another crucial aspect of responsible AI. Red Hat advocates for open platforms, tools, and models to increase transparency and allow broader community involvement. "It's critical for everyone," Julio stated. "We're building capabilities to democratize AI, not just by publishing models but by providing tools for users to replicate, tune, and serve them."
Red Hat's recent acquisition of Neural Magic aims to help enterprises scale AI more easily, improve inference performance, and offer more choices in building and deploying AI workloads through the vLLM project. Additionally, in collaboration with IBM Research, Red Hat released InstructLab, opening AI development to non-data scientists who possess valuable business knowledge.
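For a sense of what serving through vLLM looks like, here is a minimal sketch using the project's open-source Python API; the model name is an illustrative placeholder, and this is not a description of Red Hat's packaged offering.

```python
# A minimal sketch of batch inference with the open-source vLLM library,
# which Red Hat backs following the Neural Magic acquisition.
# The model name is an illustrative placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # hypothetical small model

params = SamplingParams(temperature=0.2, max_tokens=64)

prompts = [
    "Summarize the benefits of running language models on-premise.",
    "List two risks of sending sensitive data to public AI platforms.",
]

for request_output in llm.generate(prompts, params):
    print(request_output.outputs[0].text.strip())
```

vLLM also exposes an OpenAI-compatible HTTP server, which is the more typical production path; the batch API above simply keeps the example self-contained.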
The Future of AI
While there is much speculation about an AI bubble, Red Hat believes in a future where AI is tailored to specific use cases and remains open source, keeping it economically viable and accessible to all. As Matt Hicks, CEO of Red Hat, stated, "The future of AI is open."
