DeepSeek Boosts AI Spending, Contrary to Beliefs
May 10, 2025
WyattHill
The stock market's tumble in January, spurred by the buzz around Chinese AI firm DeepSeek and its claims of dramatically cheaper model training and inference, might lead one to believe that companies are pulling back on their investments in AI chips and systems. However, my experience at the generative AI conference in New York, organized by Bloomberg Intelligence, painted a different picture. The enthusiasm for expanding the use of generative AI was palpable, suggesting that spending in this area is far from slowing down.
Also: What is DeepSeek AI? Is it safe? Here's everything you need to know
The conference, titled "Generative AI: Scaling Laws Post DeepSeek," was filled with discussions that emphasized the ongoing demand driving increased investment in AI.
"We had ten panels today, and not a single person on those panels said we have more capacity than we need," remarked Mandeep Singh, a senior technology analyst with Bloomberg Intelligence and one of the event's organizers.
"And no one was talking about a bubble" in infrastructure, Singh added, highlighting the industry's confidence in AI's future.
The AI Infrastructure Build: Where Are We?
Anurag Rana, Singh's colleague at Bloomberg Intelligence and a senior IT services and software analyst, posed a critical question: "The most important question right now in front of everybody is the AI infrastructure build. Yeah. Where are we in that cycle?"
"Nobody knows" for certain, Rana admitted. Yet, the hope sparked by DeepSeek AI is that significant advancements can be achieved with less expense.
"DeepSeek shook a lot of people," he said. "If you are not needing that many GPUs to run models, then why do we need $500 billion for the Stargate project?" he asked, referencing the planned US AI infrastructure project involving Japan's SoftBank Group, OpenAI, and Oracle.
Rana noted that the industry is hopeful that AI costs will plummet, mirroring the rapid decline in cloud computing costs.
Also: Is DeepSeek's new image model another win for cheaper AI?
"That drop in the cost curve, which probably took six, seven, eight years to store one terabyte of data in Amazon AWS, when it started versus today, the economics were good," he said. "And that's what everybody's hoping, that on the inference side" of AI, "if the curve falls to that level, oh my god, the adoption rate on AI on the end-user side of it, or, the enterprise side of it, is going to be spectacular."
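Rana's storage analogy can be put in rough numbers. As an illustration only (the prices below are approximate public list prices for Amazon S3 standard storage, not figures cited at the conference), the implied rate of decline works out to roughly 10% per year, compounding:

```python
# Approximate public list prices for Amazon S3 standard storage:
# about $0.15 per GB-month at launch in 2006 versus about
# $0.023 per GB-month in recent years. Figures are illustrative.
start_price = 0.15     # USD per GB-month, 2006
recent_price = 0.023   # USD per GB-month, ~2024
years = 18

# compound annual rate of price decline implied by the two endpoints
annual_decline = 1 - (recent_price / start_price) ** (1 / years)
```

The hope Rana describes is that inference pricing follows a similar compounding curve.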

Singh concurred, noting that DeepSeek AI's emergence has "changed everyone's mindset about achieving efficiency."
Throughout the day, numerous panels delved into enterprise AI projects, from their inception to deployment. Yet, there was a recurring theme: the need to drastically reduce the costs of AI to broaden its accessibility.
"I don't think DeepSeek was a surprise," said Shawn Edwards, Bloomberg's chief technologist, in an interview with David Dwyer, the head of Bloomberg Intelligence. "What it made me think is that it would be great if you could wave a wand and have these models run incredibly efficiently," he said, envisioning a future where all AI models could operate with such efficiency.
The Proliferation of AI Models
One reason many panelists anticipate increased, rather than decreased, investment in AI infrastructure is the growing number of AI models. A key takeaway from the day was that there won't be a single AI model to rule them all.
"We use a family of models," Edwards explained. "There is no such thing as a best model."
Panelists agreed that while "foundation" or "frontier" large language models will continue to evolve, individual enterprises might employ hundreds or even thousands of AI models.
Also: The rise of AI PCs: How businesses are reshaping their tech to keep up
These models could be fine-tuned on a company's proprietary data, a process of re-training a neural network after its initial "pre-training" on generic data.
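As a deliberately tiny sketch of that two-stage process (pre-train on plentiful generic data, then continue training on a small proprietary set), here is plain gradient descent on a two-weight linear model; all data and weights are invented for illustration, and nothing here reflects a real LLM training pipeline:

```python
import random

random.seed(0)

def make_data(n, w_true, noise=0.1):
    # synthetic (x, y) pairs from a known linear rule plus noise
    data = []
    for _ in range(n):
        x = (random.gauss(0, 1), random.gauss(0, 1))
        y = w_true[0] * x[0] + w_true[1] * x[1] + random.gauss(0, noise)
        data.append((x, y))
    return data

def mse(w, data):
    return sum((w[0] * x[0] + w[1] * x[1] - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr, steps):
    # plain batch gradient descent on mean-squared error
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in data:
            err = w[0] * x[0] + w[1] * x[1] - y
            g0 += 2 * err * x[0] / len(data)
            g1 += 2 * err * x[1] / len(data)
        w = [w[0] - lr * g0, w[1] - lr * g1]
    return w

# "pre-training" on plentiful generic data
generic = make_data(300, (1.0, -2.0))
w = train([0.0, 0.0], generic, lr=0.1, steps=200)

# "fine-tuning": continue from the pre-trained weights on a small
# proprietary set whose underlying rule differs slightly
proprietary = make_data(30, (1.3, -1.7))
loss_before = mse(w, proprietary)
w_tuned = train(w, proprietary, lr=0.05, steps=100)
loss_after = mse(w_tuned, proprietary)
```

In practice the same pattern is applied to neural networks: the fine-tuning stage starts from the pre-trained parameters and uses a lower learning rate and far less data than pre-training.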
"Agents in enterprise require optionality among the models," said Jed Dougherty, the head of platform strategy for venture-backed data science firm Dataiku. "They need the ability to control and create, and to have auditability" of AI models.
"We want to put the tools to build these things in the hands of people," he said. "We don't want ten PhDs building all the agents."
In a similar vein, Adobe, a leader in design tools, is betting on custom models as a key use case for creatives. "We can train custom model extensions for your brand that can be a help for a new ad campaign," said Adobe's head of new business ventures, Hannah Elsakr, in a discussion with Bloomberg TV anchor Romaine Bostick.
Increasing Processing Demand
As with AI models, the proliferation of AI agents within companies is driving up processing demands, many speakers suggested.
"You won't cram a whole process into one agent, you'll break it up into parts," said Ray Smith, Microsoft's head of Copilot Studio agents and automation.
Smith predicted that through a single interface, such as Copilot, "we will interact with hundreds of agents -- they are just apps in the new world" of programming.
"We will give the agent the business process, tell it what we want to accomplish," and the agent will carry out tasks. "Agentic apps are just a new way of workflow," he said.
Also: Nvidia dominates in gen AI benchmarks, clobbering 2 rival AI chips
Such everyday scenarios are "all technologically possible," Smith noted, "it's just the pace at which we build it out."
The push to bring AI "agents" to more people within organizations further necessitates cost reductions, said James McNiven, head of product management at chip designer Arm Holdings, in a chat with Bloomberg TV anchor Caroline Hyde.
"How do we provide access on more and more devices?" he asked. "We are seeing models at a PhD level" of task capability, he said.
McNiven suggested that such agents should serve as assistants to humans, drawing a parallel to when payment systems were introduced to developing countries via mobile phones a decade ago: "How do we get that to people who can use that ability?"
The Proliferation of Foundation Models
Even the generic foundation models are proliferating at an astonishing rate.
Amazon AWS has 1,800 different AI models available, Dave Brown, the head of AWS computing and networking, told Bloomberg TV anchor Caroline Hyde. The company is "doing a lot to bring down the cost" of running the models, he said, including developing custom AI chips, such as Trainium.
AWS is "using more of our own processors than other companies' processors," said Brown, alluding to Nvidia, AMD, Intel, and other general-purpose chip suppliers.
Also: ChatGPT's new image generator shattered my expectations - and now it's free to try
"Customers would do more if the cost was lower," said Brown.
AWS works daily with Anthropic, maker of the Claude family of language models, Brown noted. Anthropic's head of application programming interfaces, Michael Gerstenhaber, speaking in the same conversation with Hyde, noted that "thinking models cause a lot of capacity to be used," referring to the tendency of so-called reasoning models, such as DeepSeek R1 and OpenAI's o1, to output verbose statements of the arguments behind their final answers.
Anthropic is working closely with AWS on ways to trim the compute budget, such as "prompt caching," which stores and re-uses the computation already done for earlier portions of a prompt rather than repeating it for every request.
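The mechanics behind prompt caching can be sketched in a few lines of Python. This is a toy illustration of the idea only (the class and method names are invented here and do not reflect Anthropic's or AWS's actual APIs): requests that share a long prefix, such as a fixed system prompt, re-use the stored work for that prefix and only pay to process the new suffix:

```python
import hashlib

class PromptCache:
    """Toy model of prompt caching: store the 'computation' for a
    previously seen prompt prefix and only process the remainder."""

    def __init__(self):
        self._cache = {}      # prefix text -> precomputed state
        self.compute_calls = 0

    def _expensive_encode(self, text):
        # stand-in for the real forward pass over the prompt tokens
        self.compute_calls += 1
        return hashlib.sha256(text.encode()).hexdigest()

    def encode(self, prompt, prefix_len):
        prefix, suffix = prompt[:prefix_len], prompt[prefix_len:]
        if prefix not in self._cache:
            self._cache[prefix] = self._expensive_encode(prefix)
        # only the uncached suffix still costs compute
        return self._cache[prefix], self._expensive_encode(suffix)

cache = PromptCache()
system_prompt = "You are a helpful assistant. " * 10

cache.encode(system_prompt + "Question one?", len(system_prompt))
calls_first = cache.compute_calls                  # prefix + suffix

cache.encode(system_prompt + "Question two?", len(system_prompt))
calls_second = cache.compute_calls - calls_first   # suffix only
```

Real implementations cache intermediate transformer state for the prefix tokens rather than a hash, but the economics are the same: the shared prefix is paid for once.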
Despite that trend, he said, "Anthropic needs hundreds of thousands of accelerators," meaning, AI-focused silicon chips, "across many data centers" to run its models.
In addition, the escalating energy cost of powering AI shows no sign of slowing, said Brown. Current data centers are consuming hundreds of megawatts, he noted, and will eventually require gigawatts. "The power it consumes," meaning AI, "is large, and the footprint is large in many data centers."
Also: Global AI computing will use 'multiple NYCs' worth of power by 2026, says founder
Economic Uncertainty and AI Investment
Despite the ambitious scenarios, one factor could disrupt all the use cases and investment plans: the economy.
As the conference was winding down on Wednesday evening, panelists and guests were monitoring the after-hours plunge in the stock market. US President Donald Trump had just announced a global package of tariffs that were larger and more sweeping than most on Wall Street had anticipated.
Traditional areas of tech investment such as servers and storage, rather than AI, could be the initial victims of any economic contraction, said Bloomberg's Rana.
"The other big thing we are focused on is the non-AI tech spending," he said regarding the tariffs. "When we look at the likes of IBM, Accenture, Microsoft, and all the others, when we just put aside AI for a second, that is something that is going to be a struggle going into this earnings season."
CFOs of major companies might prioritize AI and shift funds, even if they have to trim their budgets amidst economic uncertainty and potential recession, Rana suggested.
However, that optimistic outlook is not guaranteed.
"The thing I'm most interested in finding out is if all these large companies are going to keep their cap-ex [capital spending] targets intact," said Rana, referring to plans that include AI data centers, "or are they going to say, You know what? It's too uncertain."