DeepSeek rocked the U.S.-led AI ecosystem with its latest model, shaving hundreds of billions of dollars off chip leader Nvidia's market cap. While the sector's giants grapple with the fallout, smaller AI companies see an opportunity to expand with the Chinese startup.
Several AI companies have told CNBC that DeepSeek's emergence is a "massive" opportunity for them, rather than a threat.
"Developers are very keen to replace OpenAI's expensive and closed models with open source models like DeepSeek R1 …," said Andrew Feldman, CEO of AI chip startup Cerebras Systems.
The company competes with Nvidia's graphics processing units and offers cloud-based services through its own computing clusters. Feldman said the release of the R1 model generated one of the largest-ever spikes in demand for Cerebras' services.
"R1 shows that [AI market] growth will not be dominated by a single company; hardware and software moats do not exist for open source models," Feldman added.
Open source refers to software whose source code is made freely available on the web for possible modification and redistribution. DeepSeek's models are open source, unlike those of competitors such as OpenAI.
DeepSeek also claims its R1 reasoning model rivals the best American technology, despite running at lower costs and being trained without cutting-edge graphics processing units, though industry observers and competitors have questioned these assertions.
“As in the PC and Internet markets, the drop in prices helps fuel global adoption. The AI market is on a similar secular growth path,” said Feldman.
Inference chips
DeepSeek could boost the adoption of new chip technologies by accelerating the AI cycle from the training phase to the "inference" phase, startups and industry experts said.
Inference refers to the act of using and applying AI to make predictions or decisions based on new information, rather than building or training the model.
"To put it simply, AI training is about building a tool or an algorithm, while inference is about actually deploying that tool for use in real applications," said Phelix Lee, an equity analyst at Morningstar focused on semiconductors.
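For readers who want to see that distinction concretely, here is a minimal sketch in PyTorch; the toy model, random data and hyperparameters are all invented for illustration and have nothing to do with DeepSeek's actual systems.

```python
import torch
import torch.nn as nn

# A toy "model": a single linear layer, invented purely for illustration.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# --- Training: compute-heavy, done once up front, adjusts the weights ---
for _ in range(100):
    x = torch.randn(32, 4)       # a batch of synthetic training inputs
    y = torch.randn(32, 2)       # matching synthetic targets
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass plus loss
    loss.backward()              # backward pass computes gradients
    optimizer.step()             # weight update

# --- Inference: runs on every user request, weights stay fixed ---
model.eval()
with torch.no_grad():            # no gradients, so far less compute per call
    prediction = model(torch.randn(1, 4))
print(prediction)
```

The training loop requires gradients and repeated passes over data, which is why it demands the most powerful chips; the inference call is a single forward pass, which is why less powerful hardware can handle it.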
While Nvidia holds a dominant position in the GPUs used for AI training, many competitors see room for expansion in the "inference" segment, where they promise higher efficiency at lower cost.
AI training is extremely compute-intensive, but inference can run on less powerful chips that are programmed to perform a narrower range of tasks, Lee added.
A number of AI chip startups told CNBC that they were seeing more demand for inference chips and computing as customers adopt and build on DeepSeek's open source model.
"[DeepSeek] has demonstrated that smaller open models can be trained to be as capable or more capable than larger proprietary models, and this can be done at a fraction of the cost," said Sid Sheth, CEO of AI chip startup d-Matrix.
"With the broad availability of small, capable models, they have catalyzed the age of inference," he told CNBC, adding that the company has recently seen a surge of interest from global customers looking to accelerate their inference plans.
Robert Wachen, co-founder and chief operating officer of AI chipmaker Etched, said dozens of companies have reached out to the startup since DeepSeek released its reasoning models.
"Companies are shifting their spending from training clusters to inference clusters," he said.
"DeepSeek-R1 proved that inference-time compute is now the [state-of-the-art] approach for every major model provider, and thinking is not cheap. We will only need more and more compute capacity to scale these models to millions of users."
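A rough back-of-the-envelope sketch helps make Wachen's point. Every figure below is invented, since the article gives no numbers, but the structure is what matters: serving cost scales with generated tokens, and a reasoning model emits a long chain of thought before its visible answer.

```python
# All numbers here are hypothetical; only the structure matters.
answer_tokens = 300          # tokens in the final, visible answer
thinking_tokens = 5_000      # hidden reasoning tokens generated first
cost_per_1k_tokens = 0.002   # assumed serving cost in dollars per 1,000 tokens

standard_cost = (answer_tokens / 1_000) * cost_per_1k_tokens
reasoning_cost = ((answer_tokens + thinking_tokens) / 1_000) * cost_per_1k_tokens

# With these made-up numbers, each "thinking" request needs ~18x the compute,
# and that multiplier applies to every one of millions of user requests.
print(f"{reasoning_cost / standard_cost:.0f}x the compute per request")
```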
Jevons paradox
Analysts and industry experts agree that DeepSeek's achievements are a boost for AI inference and the wider AI chip industry.
"DeepSeek's performance appears to be based on a series of engineering innovations that significantly reduce inference costs while also improving training costs," according to a report from Bain & Company.
"In a bullish scenario, ongoing efficiency improvements would lead to cheaper inference, spurring greater AI adoption," the report added.
This pattern illustrates Jevons paradox, a theory that cost reductions in a new technology drive increased demand.
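The mechanism is easy to see with simple arithmetic. In the sketch below, every figure is invented (neither the article nor the Bain report gives any): if a price cut unlocks a more-than-proportional rise in usage, total spending on the technology goes up even though each unit gets cheaper.

```python
# Hypothetical numbers chosen only to illustrate Jevons paradox.
old_cost_per_query = 1.00       # assumed cost per AI query, in dollars
new_cost_per_query = 0.10       # after an assumed 10x efficiency gain
old_queries = 1_000_000         # assumed demand at the old price
new_queries = old_queries * 30  # assume cheaper prices unlock 30x the usage

old_spend = old_cost_per_query * old_queries   # $1,000,000
new_spend = new_cost_per_query * new_queries   # $3,000,000
print(f"Total spend: ${old_spend:,.0f} -> ${new_spend:,.0f}")
```

Under these assumptions, a 10x drop in unit cost triples total spending, which is why cheaper inference can be bullish for the chip industry as a whole.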
Financial services and investment firm Wedbush said in a research note last week that it continues to expect AI use across enterprises and retail consumers worldwide to drive demand.
Speaking to CNBC's "Fast Money" last week, Sunny Madra, COO at Groq, which develops chips for AI inference, suggested that as overall demand for AI grows, smaller players will have more room to grow.
"As the world is going to need more tokens [a unit of data that an AI model processes], Nvidia cannot supply enough to everyone, so it gives us opportunities to sell into the market even more aggressively," Madra said.