In response to the White House's Request for Information on an AI Action Plan, Anthropic has submitted recommendations to the Office of Science and Technology Policy (OSTP). Our recommendations are designed to better prepare America to capture the economic benefits of powerful AI systems and to address their national security implications.
As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems to emerge in late 2026 or early 2027. Powerful AI systems will have the following properties:
- Intellectual capabilities matching or exceeding those of Nobel Prize winners across most disciplines, including biology, computer science, mathematics, and engineering.
- The ability to navigate all the interfaces available to humans doing digital work today, including the ability to process and generate text, audio, and video, the ability to autonomously control tools such as mice and keyboards, and the ability to access and browse the internet.
- The ability to autonomously reason through complex tasks over extended periods (hours, days, or even weeks), seeking clarification and feedback when needed, much like a highly capable employee would.
- The ability to interface with the physical world, controlling laboratory equipment, robotic systems, and manufacturing tools through digital connections.
Our own recent work adds further evidence that powerful AI will arrive soon: our recently released Claude 3.7 Sonnet and Claude Code demonstrate significant capability improvements and increased autonomy, as do systems released by other frontier labs.
We believe the United States must take decisive action to maintain its technological leadership. Our submission focuses on six key areas to address the economic and security implications of powerful AI while maximizing the benefits for all Americans:
- National security testing: Government agencies should develop robust capabilities to evaluate domestic and foreign AI models for potential national security implications. This includes building standard evaluation frameworks, secure testing infrastructure, and expert teams to analyze vulnerabilities in deployed systems.
- Strengthening export controls: We advocate tightening semiconductor export restrictions to ensure that America and its allies can capitalize on the opportunities created by powerful AI systems, and to prevent our adversaries from accessing the compute infrastructure that enables powerful AI. This includes controlling H20 chips, requiring government-to-government agreements for countries hosting large chip deployments, and reducing the thresholds for exports permitted without a license.
- Enhancing lab security: As AI systems become critical strategic assets, we recommend establishing classified communication channels between AI labs and intelligence agencies, expediting security clearances for industry professionals, and developing next-generation security standards for AI infrastructure.
- Scaling energy infrastructure: To keep AI development at the forefront, we recommend setting an ambitious target of building 50 additional gigawatts of dedicated power by 2027, while streamlining permitting and approval processes.
- Accelerating government AI adoption: We propose a government-wide inventory of workflows that could benefit from AI augmentation, and tasking agency leaders with delivering programs where AI can offer significant public benefit.
- Preparing for economic impacts: To ensure that the benefits of AI are widely shared across society, we recommend modernizing economic data collection mechanisms, such as Census Bureau surveys, and preparing for potentially large-scale changes to the economy.
These recommendations build on Anthropic's previous policy work, including our advocacy for responsible scaling policies and for rigorous testing and evaluation. Our goal is to strike a balance: enabling innovation while mitigating the serious risks posed by increasingly capable AI systems.
Our full submission, found here, offers more detail on these recommendations and provides practical implementation strategies to help the US government navigate this critical technological transition.