Texas & Virginia Steer States Away from European-Style AI Regulation


This analysis is in response to breaking news and will be updated. Please contact pr@rstreet.org to speak with the author.

New developments in Virginia and Texas indicate how the artificial intelligence (AI) policy debate in the United States could shift in a more positive, pro-innovation direction. Less than three months into the year, an astonishing 900-plus AI-related legislative proposals—roughly 12 per day—have already been introduced. The vast majority are state measures, and most of them seek to impose new regulations on algorithmic systems. This represents an unprecedented level of regulatory interest in an emerging technology.

On March 24, Virginia Gov. Glenn Youngkin (R) vetoed a major artificial intelligence regulatory measure that would have undermined the Commonwealth's ability to remain a state-level leader in digital innovation. In vetoing HB 2094, the "High-Risk Artificial Intelligence Developer and Deployer Act," Youngkin rightly noted that the bill would harm the creation of new jobs, the attraction of new business investment, and the availability of innovative technology in the Commonwealth of Virginia. The Chamber of Progress also estimated that the bill would have imposed nearly $30 million in compliance costs on AI developers, which would have been devastating for small technology startups in the state, as R Street's testimony on HB 2094 explained.

Importantly, Youngkin's veto came just ten days after Texas Rep. Giovanni Capriglione (R) introduced a revised version of his "Texas Responsible AI Governance Act" (TRAIGA), a bill that originally would have heavily regulated AI innovation in that state and attracted widespread opposition. While the original version of TRAIGA closely resembled the Virginia bill, the new version of the Texas bill (HB 149) drops the most burdensome elements of the earlier measure.

These developments in Virginia and Texas represent a potentially important turning point in AI policy because these and other states had been considering regulatory measures that adopt a European Union (EU)-style approach to AI regulation. The moves in Virginia and Texas also better align state AI policy with the new national focus on AI opportunity and investment, which is particularly important in the wake of major recent Chinese advances in this field.

Rejecting Fear-Based AI Regulation

The Virginia AI bill that Gov. Youngkin vetoed was one of many similar bills being pushed by the Multistate AI Policymakers Working Group (MAP-WG), a coalition of state lawmakers from more than 45 states that is attempting to craft consensus "AI discrimination" legislation that can be replicated across state legislatures. These copycat bills are currently pending in a dozen states, including California, Connecticut, Massachusetts, Nebraska, New Mexico, and New York, among others. Last May, Colorado became the first state to enact one of these AI discrimination bills. Even before implementation, its problems have become obvious. The earlier version of the Texas TRAIGA bill initially followed this same model but now largely abandons it.

These MAP-WG AI bills blend elements of the new European Union AI Act and the Biden administration's approach to AI, especially as articulated in its "Blueprint for an AI Bill of Rights." The EU AI Act and the Biden administration's approach to AI policy were fundamentally rooted in fear insofar as they treated algorithmic systems as "unsafe, ineffective, or biased" and "deeply harmful."

On January 23, President Donald Trump repealed the Biden administration's historically lengthy 2023 AI executive order (EO) and replaced it with a new EO on "Removing Barriers to American Leadership in Artificial Intelligence," which stressed the need to "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security." Following this, on February 11, Vice President JD Vance delivered a major opening address at the Paris AI Action Summit that developed this AI opportunity agenda more fully and explained that "we believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off, and we'll make every effort to encourage pro-growth AI policies."

Despite the new administration's change in attitude and direction on AI policy, many states continue to advance AI regulatory proposals that mimic the Biden administration's policy statements, treating AI less as an opportunity for America to embrace and more as a danger to avoid. The influence of the European regulatory model is obvious throughout these MAP-WG bills. Like the new EU AI Act, these state AI bills seek to regulate hypothetical future harms that could arise from AI systems. The bills are particularly concerned with the potential for "algorithmic bias" or other harms developing from "high-risk" AI applications.

Importantly, if such biases did develop, many existing state and federal policies—including overlapping civil rights laws and unfair and deceptive practices regulations—would already address these problems. But, like European technology regulations, these new state AI anti-discrimination bills seek to regulate preemptively, before any such harms are proven. This kind of technocratic, ex ante regulation can be costly and confusing because it means state bureaucrats will preemptively determine which AI innovations are allowed to come to market based on speculative fears. As Gov. Youngkin explained in his veto statement, "HB 2094's rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments."

Avoiding Colorado's Broken AI Model

Youngkin's veto and the introduction of the substantially revised Texas bill suggest that some state legislators now understand the costs and complexities of such regulation. That has also been the lesson of Colorado's new AI law.

When Colorado was considering its AI regulation (SB24-205), several small and mid-sized technology entrepreneurs in the state sent a letter to legislators explaining how its vague and overbroad provisions "would severely stifle innovation and impose untenable burdens on Colorado's businesses, especially startups." They specifically noted that efforts to predict the "foreseeable" risks of general-purpose AI are "essentially impossible and invite litigation against fundamental and socially valuable innovations." They also argued that the bill raised First Amendment concerns.

Colorado Gov. Jared Polis (D) signed the bill anyway, but he agreed with them that the new law "would create a complex compliance regime for all developers and deployers of AI" through "significant, affirmative reporting requirements," and said he was "concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike." Following this remarkable signing statement, Polis ordered the formation of a task force on the impact of Colorado's AI law to address concerns "that an overly broad definition of AI, coupled with proactive disclosure requirements, could inadvertently impose prohibitively high costs" on innovators, "resulting in barriers to growth and product development, job losses, and a diminished capacity to raise capital."

Unfortunately, this task force has already issued vague recommendations that fail to adequately address these problems. No solutions were offered, mostly because every attempt to soften the new regulations was met with opposition from pro-regulatory groups that wanted even more costly mandates on innovators. The task force concluded unsatisfyingly by simply noting that many issues remained where there is "firm disagreement on the approach and where creativity will be needed" to resolve them, but then offered no solutions. In other words, nothing has been done to address the concerns that the state's AI developers and Gov. Polis raised about the law's destructive potential.

It is clear that Colorado's fundamentally flawed law is not a good model for other states to follow. Indeed, Polis was so concerned about the potential negative effects of state AI regulations like his own that he called for a "cohesive federal approach" that is "applied by the federal government to limit and preempt varied compliance burdens on innovators and ensure a level playing field across state lines along with ensuring access to life-saving and money-saving AI technologies."

Conclusion

Even though federal preemption is difficult, Congress will need to pursue a national AI framework to overcome the problems associated with the rapidly proliferating patchwork of state and local AI regulations.

In the meantime, however, other states should heed the lessons of what has happened in Virginia and Texas. Gov. Youngkin's veto and the heavily revised Texas bill send a clear message to other state legislators and governors considering similar measures: It would be a mistake to import the European Union's regulatory model into America and impose costly, confusing mandates on America's AI entrepreneurs.

Again, there are better ways for states to address concerns about AI systems that do not involve a heavy-handed, top-down regulatory regime for the most important technology of modern times. As Gov. Youngkin concluded in his veto statement, "The role of government in safeguarding AI practices should be one that enables and empowers innovators to create and grow, not one that stifles progress and places onerous burdens on our Commonwealth's many business owners." The same goes for every other state and the nation as a whole.

Follow our artificial intelligence policy work.
