I was a technophile in my early adolescence, sometimes wishing I had been born in 2090 rather than 1990 so that I could see all the incredible technology of the future. Lately, however, I have become much more sceptical about whether the technology we interact with most really serves us – or whether we serve it.
So when I was invited to attend a conference on safe and ethical AI, held in the run-up to the Paris AI summit, I was fully prepared to hear Maria Ressa, the Filipino journalist and winner of the 2021 Nobel peace prize, talk about how big tech has, with impunity, allowed its networks to be flooded with disinformation, hate and manipulation in ways that have had a very real and negative impact on elections.
But I was not prepared to hear some of the "godfathers of AI", such as Yoshua Bengio, Geoffrey Hinton, Stuart Russell and Max Tegmark, talk about how things could go much further off the rails. At the centre of their concerns was the race towards AGI (artificial general intelligence, though Tegmark thinks the "A" should stand for "autonomous"), which would mean that, for the first time in the history of life on Earth, there would be an entity other than human beings simultaneously possessing high autonomy, high generality and high intelligence – one that could develop objectives "misaligned" with human wellbeing. This might come about as the result of a nation state's security strategy, or a corporation's pursuit of profit at all costs, or it might emerge on its own.
"It's not today we have to worry about, it's next year," Tegmark said. "It's as if you were interviewing me in 1942 and asked: 'Why aren't people worried about a nuclear arms race?' Except they think they're in an arms race, when it's actually a suicide race."
This was the premise of Ronald D Moore's 2003 reimagining of Battlestar Galactica, in which a public relations officer shows journalists around a ship full of "things that seem odd, even antiquated, to modern eyes – phones with cords, awkward manual valves, computers that barely deserve the name". Everything was designed to operate against an enemy that could infiltrate and disrupt all but the most basic computer systems. "We were so frightened by our enemies that we literally looked backwards for protection."
Maybe we need a new acronym, I thought. Instead of mutually assured destruction, we should speak of "self-assured destruction" – with added emphasis: SAD! An acronym that might even get through to Donald Trump.
The idea that we, here on Earth, could lose control of an AGI that then turns against us sounds like science fiction – but is it really so far-fetched given the exponential growth of AI development? As Bengio pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and in order to avoid being deleted or replaced by an update.
When breakthroughs in human cloning came within scientists' reach, biologists came together and agreed not to pursue it, said Stuart Russell, who literally wrote the textbook on AI. Likewise, Tegmark and Russell advocate a moratorium on the pursuit of AGI, along with a tiered risk approach – stricter than the EU's AI Act – in which, much as with the drug-approval process, AI systems in higher risk tiers would have to demonstrate to a regulator that they do not cross certain red lines, such as being able to copy themselves on to other computers.
But even if the conference seemed weighted towards these future-focused fears, there was a fairly obvious divide among the leading safety and ethics experts from industry, academia and government. Where the "godfathers" worried about AGI, a younger and more diverse demographic pushed for equal emphasis on the dangers that AIs already pose to the climate and to democracy.
We don't have to wait for an AGI to decide, on its own, to flood the world with data centres so it can scale faster – Microsoft, Meta, Alphabet, OpenAI and their Chinese counterparts are already doing that. Nor for an AGI to decide, on its own, to manipulate voters en masse in order to install politicians with a deregulation agenda – that, once again, Donald Trump and Elon Musk are already pursuing. And even at AI's current early stage, its energy consumption is catastrophic: according to Kate Crawford, chair of AI and justice at the École Normale Supérieure, data centres already account for more than 6% of all electricity consumption in the US and China, and demand will only continue to grow.
"Rather than treating these topics as mutually exclusive, we need decision-makers and governments to take both into account," said Sacha Alanoca, a PhD researcher in AI governance at Stanford. "And we should give priority to empirically grounded problems such as environmental damage, which already have tangible solutions."
To this end, Sasha Luccioni, AI and climate lead at Hugging Face – a collaborative platform for open-source models – announced this week that the startup had rolled out an AI energy score, ranking 166 models by their energy consumption when performing different tasks. The startup will also offer a one-to-five-star rating system, comparable to the EU energy label for household appliances, to guide users towards sustainable choices.
"There's the scientific budget of the world, and then there's the money we spend on AI," said Russell. "We could have done something useful, and instead we're pouring resources into this race to run off a cliff." He did not specify the alternatives, but barely two months into the year, roughly $1tn of AI investment has already been announced, even as the world remains far short of the funding needed to stay within even 2C of heating, let alone 1.5C.
It seems we have a narrowing window of opportunity to create incentives for companies to build the kind of AI that actually benefits our individual and collective lives: sustainable, inclusive, democracy-compatible, controlled. And beyond regulation, "to ensure there is a culture of participation embedded in AI development in general", as Eloïse Gabadou, a consultant to the OECD on technology and democracy, put it.
At the end of the conference, I told Russell that we seemed to be using an incredible amount of energy and other natural resources to run headlong towards something we probably shouldn't create in the first place, and that even the relatively benign versions we already have are, in many ways, poorly aligned with the kinds of societies we actually want to live in.
“Yeah,” he replied.