Elon Musk wants to use AI to run US gov’t, but experts say ‘very bad’ idea


Does Elon Musk plan to use artificial intelligence to run the US government? That appears to be his plan, but experts say it is a “very bad idea”.

Musk has fired tens of thousands of federal government employees through his Department of Government Efficiency (DOGE), and he has reportedly required the remaining workers to send the department a weekly email with five bullet points describing what they accomplished that week.

Since that will undoubtedly flood DOGE with hundreds of thousands of such emails, Musk is relying on artificial intelligence to process the responses and help determine who should keep their job. Part of the plan is reportedly also to replace many government workers with AI systems.

It is not yet clear what any of these AI systems look like or how they work – something Democrats in the US Congress are demanding answers about – but experts warn that using AI in the federal government without robust testing and verification of these tools could have disastrous consequences.

“To use AI tools responsibly, they need to be designed with a particular purpose in mind. They need to be tested and validated. It’s not clear whether any of that is being done here,” says Cary Coglianese, a professor of law and political science at the University of Pennsylvania.

Coglianese says he is “very skeptical” of the idea of using AI to decide who should be fired from their job. He says there is very real potential for errors to be made, for the AI to be biased, and for other problems to arise.

“It’s a very bad idea. We don’t know anything about how an AI would make such decisions [including how it was trained and the underlying algorithms], the data on which such decisions would be based, or why we should trust it,” says Shobita Parthasarathy, a professor of public policy at the University of Michigan.

Those concerns do not seem to be holding back the current government, especially with Musk – a billionaire businessman and close adviser to US President Donald Trump – leading the charge on these efforts.

The US State Department, for instance, is planning to use AI to scan the social media accounts of foreign nationals to identify anyone who may be a Hamas supporter in order to revoke their visas. The US government has so far not been transparent about how these kinds of systems work.

Undetected harms

“The Trump administration is really interested in pursuing AI at all costs, and I would like to see fair, just and equitable use of AI,” said Hilke Schellmann, a professor of journalism at New York University and an expert on artificial intelligence. “There could be a lot of harms that go undetected.”

AI experts say there are many ways in which government use of AI can go wrong, which is why it must be adopted carefully and conscientiously. Coglianese says governments around the world, including the Netherlands and the United Kingdom, have had problems with poorly executed AI that has made mistakes or shown bias and, as a result, has wrongly denied residents welfare benefits they need, for example.

In the United States, the state of Michigan had a problem with AI used to find fraud in its unemployment system when it incorrectly flagged thousands of cases of alleged fraud. Many of those denied benefits were dealt with harshly, including being hit with multiple penalties and accused of fraud. Some people were arrested and even filed for bankruptcy. After five years, the state admitted the system was faulty, and a year after that it ended up refunding $21m to residents who had been wrongly accused of fraud.

“Most of the time, the officials who buy and deploy these technologies know little about how they work, their biases and limitations, or their errors,” Parthasarathy says. “Because low-income and otherwise marginalized communities tend to have the most contact with governments through social services [such as unemployment benefits, foster care, law enforcement], they tend to be most affected by problematic AI.”

AI has also caused problems in government when it has been used in courts to determine things like parole eligibility, or in police departments when it has been used to try to predict where crime is likely to occur.

Schellmann says AI used by police departments is typically trained on those departments’ historical data, which can lead the AI to recommend heavy policing of areas that have long been overpoliced, especially communities of color.

AI doesn’t ‘understand’ anything

One of the problems with potentially using AI to replace workers in the federal government is that there are many different kinds of government jobs that require specific skills and knowledge. An IT person at the Department of Justice might have a very different job from one at the Department of Agriculture, for example, even though they have the same job title. An AI program would therefore need to be complex and highly trained to do even a mediocre job of replacing a human worker.

“I don’t think you can randomly cut people’s jobs and then [replace them with any AI],” Coglianese said. “The tasks those people were performing are often highly specialized and specific.”

Schellmann says AI could be used to do parts of someone’s job that are predictable or repetitive, but it can’t simply replace someone outright. That would theoretically be possible only if you spent years developing the right AI tools to handle many different kinds of jobs – a very difficult task, and not what the government appears to be doing now.

“These workers have real expertise and a nuanced understanding of the issues, which AI does not. AI doesn’t, in fact, ‘understand’ anything,” says Parthasarathy. “It’s the use of computational methods to find patterns based on historical data. And so it is likely to have limited utility, and even to reinforce historical biases.”

The administration of former US President Joe Biden issued an executive order in 2023 focused on the responsible use of AI in government, including how it would be tested and verified, but that order was rescinded by the Trump administration in January. Schellmann says this has made it less likely that AI will be used responsibly within the government, or that researchers will be able to understand how AI is being used.

All of this said, if AI is developed and used responsibly, it can be very helpful. AI can automate repetitive tasks so workers can focus on more important things, or it can help workers solve problems they are struggling with. But it needs to be given time to be deployed in the right way.

“That’s not to say that we couldn’t use AI tools wisely,” Coglianese says. “But governments go astray when they try to rush and do things quickly without adequate public input and thorough validation and verification of how the algorithm actually works.”
