Did China’s Baidu discover scaling laws before OpenAI? A debate rekindles in AI circles


Recent community discussions have reignited debate over whether the Chinese tech giant Baidu may have developed the key theoretical foundations of large-scale artificial intelligence (AI) models before the US firm OpenAI.

Large models, or “foundation models,” are at the forefront of AI development, with their rapid iterations driving cutting-edge applications. While the United States is generally considered the leader in advanced AI model innovation, some argue that China may have begun exploring these concepts earlier.

Scaling laws are at the heart of large model development: the larger the training data and the greater the number of model parameters, the stronger the model's capabilities. Widely attributed to OpenAI's 2020 paper, “Scaling Laws for Neural Language Models,” this idea has since become a cornerstone of AI research.

The OpenAI paper showed that increasing model parameters, training data, and computational resources improves performance following a power-law relationship. This idea guided the development of subsequent large-scale AI models.
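For readers who want a concrete sense of what a power-law relationship means here, the sketch below fits a curve of the form L(N) = a · N^(−α), where loss falls predictably as model size N grows. The data points and the fitted exponent are illustrative assumptions, not figures from the OpenAI paper.

```python
import numpy as np

# Hypothetical (model size, validation loss) pairs -- illustrative only,
# not data from the OpenAI paper.
n_params = np.array([1e6, 1e7, 1e8, 1e9])   # model parameters N
loss     = np.array([5.0, 3.6, 2.6, 1.9])   # observed loss L(N)

# A power law L(N) = a * N**(-alpha) is linear in log-log space:
# log L = log a - alpha * log N, so a degree-1 polynomial fit recovers it.
slope, intercept = np.polyfit(np.log(n_params), np.log(loss), 1)
alpha, a = -slope, np.exp(intercept)

print(f"fitted exponent alpha = {alpha:.3f}")
print(f"extrapolated loss at N = 1e10: {a * 1e10 ** -alpha:.2f}")
```

The practical appeal of such a fit is that a straight line in log-log space lets researchers extrapolate how much a larger model should improve before spending the compute to train it.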

However, Dario Amodei, a co-author of the OpenAI paper and the company's former vice president of research, said in a November podcast that he had observed similar phenomena as early as 2014, during his time at Baidu.

“When I worked at Baidu with [former Baidu chief scientist] Andrew Ng in late 2014, the first thing we worked on was speech recognition systems,” Amodei said. “I noticed that the models got better as you fed them more data, made them bigger, and trained them longer.”
