“Godfather of AI” Geoffrey Hinton warns AI could take control from humans: “People haven’t understood what’s coming”
“AI godfather” Geoffrey Hinton was woken in the middle of the night last year with the news that he had won the Nobel Prize in Physics. He said he had not expected such recognition.

“I dreamed of winning one for figuring out how the brain works. I didn’t figure out how the brain works, but I won one anyway,” said Hinton.

The 77-year-old researcher won the prize for his pioneering work on neural networks – proposing in 1986 a method to predict the next word in a sequence – the concept that now underlies today’s large language models.

While Hinton believes artificial intelligence will transform education and medicine and could potentially solve climate change, he is increasingly concerned about its rapid development.

“The best way to understand it emotionally is that we are like somebody who has this really cute tiger cub,” said Hinton. “Unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.”

The AI pioneer puts the odds at 10% to 20% that artificial intelligence will eventually take control from humans.

“People haven’t got it yet, people haven’t understood what’s coming,” he warned.

His concerns echo those of industry leaders such as Google CEO Sundar Pichai, xAI’s Elon Musk, and OpenAI CEO Sam Altman, who have all voiced similar worries. Yet Hinton criticizes these same companies for prioritizing profits over safety.

“If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation. There’s hardly any regulation as it is, but they want less,” said Hinton.

Hinton seems particularly disappointed in Google, where he previously worked, for reversing its position on military applications of AI.

According to Hinton, AI companies should devote far more resources to safety research – “like a third” of their computing power, compared with the much smaller fraction currently allocated.

CBS News asked all of the AI labs mentioned what fraction of their compute goes to safety research. None of them gave a number. All said that safety is important and that they support regulation in general, but they have mostly opposed the regulations lawmakers have put forward so far.
