Data Centers, AI Rules, Chip Limits and OpenAI Talks Policy


In the final week of his term, President Biden signed an executive order that reserves federal land for the construction of artificial intelligence (AI) data centers, with the entire cost borne by the developers of frontier AI models such as OpenAI’s GPT-4o. AI model developers must also secure clean energy sources for these data centers, as intense AI workloads are notoriously power-hungry.

The latest order follows Biden’s October 2023 executive order establishing guardrails for powerful frontier, or foundation, AI models. That order includes ensuring the government can assess AI systems before they are deployed in areas that pose cybersecurity and other national security risks.

Biden also pledged to develop content labeling and provenance mechanisms so consumers can tell which content is generated by AI. Trump issued the first executive order on AI in 2020, calling for its use across the federal government. Individual states (California, Texas and others) also have their own AI rules.

UK and EU regulations

AI regulation differs across the US, UK and Europe. The EU’s AI Act is a far more sweeping piece of legislation that rates AI applications by three levels of risk: unacceptable risk (for example, government social scoring of individuals), high risk (such as resume-screening tools that rank job candidates), and everything that is neither prohibited nor considered high risk.

British Prime Minister Keir Starmer announced on Monday, January 13, an action plan to make Britain a leader in AI, including expanding its data center capacity for AI workloads. Starmer said formal AI regulations would be forthcoming. His predecessor, Rishi Sunak, had unveiled an AI regulatory framework for existing regulators to follow.

AI chip export controls

Biden also expanded his 2022 and 2023 AI chip export controls, which are intended to prevent China and other adversarial countries from getting their hands on AI hardware. This week’s new rules divide the world into haves and have-nots: 18 allies and partners face no restrictions at all, while buyers of small orders of up to 1,700 advanced GPUs’ worth of computing power also get the green light. These are generally universities and research organizations.

However, more than 120 other countries would face new rules when setting up AI computing installations. Trusted entities include those based in countries that are close U.S. allies and are not headquartered in a “country of concern.” Those not based in allied countries can still purchase up to 50,000 advanced GPUs per country. Biden also set rules that would protect the secret weights of an AI model from untrusted entities, among other security controls.

Nvidia denounces the limits of chips

The rules are expected to hit Nvidia, whose GPUs are the silicon of choice for training AI models and running inference. The company’s market share is estimated at more than 80%.

Nvidia positioned itself for an AI revolution as early as 2006; its GPUs were originally developed for games and other graphics-intensive applications. Co-founder and CEO Jensen Huang bet the company’s future on the pivot to AI, even though AI progress had stalled during past so-called “AI winters.”

Ned Finkle, Nvidia’s vice president of government affairs, denounced Biden’s new rules. He wrote in a blog post that global progress in AI is “now under threat.” He said Biden’s “misguided” policy “threatens to derail innovation and economic growth around the world.”

Finkle described Biden’s expanded export control rules as a “regulatory quagmire of more than 200 pages, drafted in secret and without proper legislative review.” Such a measure “threatens to squander America’s hard-won technological advantage,” he added.

Finkle praised the first Trump administration for fostering an environment conducive to AI innovation and said he “looks forward” to a return to those policies as the president-elect prepares to be sworn in.

The Semiconductor Industry Association issued its own statement: “We are deeply disappointed that a policy change of this magnitude and impact would be rushed out just days before a presidential transition, and without any significant industry input.”

OpenAI’s plan for the United States

As OpenAI CEO Sam Altman announced plans to attend President-elect Trump’s inauguration, his AI startup preemptively rolled out a plan to keep America at the forefront of AI development.

“We believe America must act now to maximize AI’s possibilities while minimizing its harms. AI is too powerful a technology to be led and shaped by autocrats, but that is the growing risk we face, while the economic opportunity that AI offers is too compelling to abandon,” according to the “AI in America” economic plan.

OpenAI’s vision is based on the following beliefs:

  • Chips, data, energy and talent are the keys to winning the AI race. The United States should be an attractive market for AI investment, with $175 billion in global funds awaiting deployment. Otherwise, that money will flow to projects backed by China.
  • A free market that promotes free and fair competition will drive innovation. This includes the freedom for developers and users to work with AI tools while adhering to clear, common-sense standards that keep AI safe for everyone. The government should not use these tools to amass power and control citizens.
  • To ensure frontier model safety, the federal government should develop best practices to prevent abuse; limit the export of frontier models to adversary nations; and develop alternatives to the patchwork of state and international regulations, such as a U.S.-led international coalition.
  • The federal government should share its expertise in protecting AI developers’ intellectual property from threats, help companies gain access to secure infrastructure such as classified computing clusters to assess model risks and safeguards, and create a voluntary pathway for companies developing large language models to work with the government on defining model evaluations, testing models and exchanging information to support company safeguards.
  • While the federal government should take the lead on AI issues related to national security, states can act to maximize the benefits of AI for their own AI ecosystems. They can promote AI experimentation by supporting AI startups and small businesses in finding ways to solve everyday problems.
