AI arms race ‘risks amplifying the existential dangers of superintelligence’


An arms race for supremacy in artificial intelligence (AI), triggered by the recent panic over the Chinese chatbot DeepSeek, risks amplifying the existential dangers of superintelligence, according to one of the “godfathers” of AI.

Canadian machine learning pioneer Yoshua Bengio, author of the first International AI Safety Report, which will be presented at an international AI summit in Paris next week, warns that unchecked investment in computing power for unregulated AI is dangerous.

“The effort is going into who is going to win the race, rather than how do we make sure we are not going to build something that blows up in our faces,” said Bengio.

Military and economic races, he warns, “result in cutting corners in terms of ethics, cutting corners in terms of responsibility and safety. It’s inevitable”.

Bengio's work was on neural networks and machine learning, the software architecture that underpins modern AI models.

He is in London with other AI pioneers to receive the Queen Elizabeth Prize, Britain's most prestigious engineering award, in recognition of AI and its potential.

He is enthusiastic about its benefits for society, but the pivot away from AI regulation by Donald Trump's White House, and the frantic competition between big tech companies for ever more powerful AI models, is a worrying shift.


“We are building increasingly powerful systems that are becoming superhuman in certain dimensions,” he said.

“As these systems become more powerful, they also become extraordinarily more valuable, economically speaking.

“So the magnitude of ‘wow, this is going to make me a lot of money’ is motivating a lot of people. And of course, when you want to sell products, you don’t want to talk about the risks.”

But not all the “godfathers” of AI are so worried.

Take Yann LeCun, Meta's chief AI scientist, also in London to share the QE Prize.

Picture: Yann LeCun, Meta's chief AI scientist

“We have been deluded into thinking that large language models are intelligent, but really, they are not,” he said.

“We don’t have machines that are anywhere near as intelligent as a house cat, in terms of understanding the physical world.”

Within three to five years, LeCun predicts, AI will have some aspects of human-level intelligence: robots, for example, that can perform tasks they have not been programmed or trained to do.


But, he says, rather than making the world less safe, the DeepSeek drama – in which a Chinese company developed an AI to rival the best of American technology with a tenth of the computing power – demonstrates that no one will dominate for long.

“If the United States decides to clamp down on AI for geopolitical reasons, or commercial reasons, then you will have innovation elsewhere in the world. DeepSeek has shown that,” he said.

The Royal Academy of Engineering prize is awarded each year to engineers whose discoveries have, or promise to have, the greatest impact on the world.

Previous recipients include the pioneers of the photovoltaic cells in solar panels, wind turbines, and the neodymium magnets found in hard drives and electric motors.

Science minister Lord Vallance, who chairs the QE Prize foundation, says he is alert to the potential risks of AI. Organisations such as the UK's new AI Safety Institute are designed to foresee and prevent the potential harms that “human-like” intelligence could cause.

Picture: Science minister Lord Vallance

But he is less concerned about any one nation or company gaining a monopoly on AI.

“I think what we have seen in the last few weeks is that it is much more likely we are going to have many companies in this space, and the idea of single-point dominance is quite unlikely,” he said.
