Trump plans bonfire of US state-level AI regulation
US President Donald Trump has directed his administration to begin work on establishing a nationwide artificial intelligence (AI) regulation framework, in an attempt to override what he described as cumbersome state-level legal frameworks.
The latest executive order (EO) to emerge from the Oval Office, Ensuring a national policy framework for artificial intelligence, builds on a January 2025 order titled Removing barriers to American leadership in artificial intelligence, in which Trump lambasted his predecessor, Joe Biden, for allegedly attempting to paralyse the industry with regulation.
Trump claimed that since then, his administration has delivered “tremendous benefits” that have led to trillions of dollars of investment in AI projects across the US.
In his follow-on EO, Trump said that in order to win, US artificial intelligence companies must be allowed to innovate without excessive regulation, but were being thwarted by “excessive” state-level regulation. He said this creates a patchwork of 50 different regulatory regimes that makes compliance far more difficult, especially for startups.
Trump also accused some states of enacting laws that require entities to embed “ideological bias” in AI models, citing a Colorado law that bans algorithmic discrimination. The president claimed this may force AI models to produce false results to avoid “a differential treatment or impact on protected groups”.
“My administration must act with … Congress to ensure that there is a minimally burdensome national standard – not 50 discordant state ones,” he wrote.
“The resulting framework should forbid state laws that conflict with the policy set forth in this order. That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded. A carefully crafted national framework can ensure that the United States wins the AI race, as we must.”
Task force
On the basis that it is US policy to “maintain and enhance” its global AI dominance through a “minimally burdensome national policy framework”, the order directs US attorney general Pam Bondi to establish an AI Litigation Task Force within the next month to challenge state AI laws the administration deems inconsistent with the EO on various grounds – for example, those that “unconstitutionally regulate interstate commerce”, or those that Bondi simply judges unlawful herself.
The EO further mandates that within 90 days, secretary of commerce Howard Lutnick will, in consultation with various other parties, publish an evaluation of existing state AI laws that identifies any conflicting with the broader policy and those that may be referred to the Task Force.
At a minimum, this evaluation is designed to identify any laws that require AI models to alter truthful outputs, or that compel developers or deployers to handle information in an unconstitutional fashion – notably with regard to the First Amendment covering freedom of speech.
The EO makes various other provisions limiting certain federal funding for states with restrictive AI laws – notably related to broadband roll-out. It directs agencies such as the Federal Communications Commission (FCC) and Federal Trade Commission (FTC) to consider national reporting and disclosure standards that could preempt conflicting laws in areas such as truthful outputs, and proposes legislation to create a unified federal AI policy to preempt conflicting state laws, albeit with some exemptions around areas such as child safety, AI compute and datacentre infrastructure, and state procurement and use of AI.
Kevin Kirkwood, chief information security officer at cyber security company Exabeam, said that regardless of Trump’s chosen delivery mechanism, the core idea behind establishing a federal framework to preempt state laws was not necessarily without merit.
“You can’t strong-arm a distributed ecosystem into aligning with a single vision just because you wrote it into an executive order, but let’s not confuse tactics with principle,” he said. “The underlying point is sound: AI regulation should be national in scope, not stitched together from state capitols that don’t even agree on what constitutes an algorithm.
“Artificial intelligence … is a national, and global, infrastructure layer. Allowing 50 states to create inconsistent, siloed laws around how AI can be developed, deployed or audited creates friction, uncertainty and massive compliance overhead. Whether it comes from Congress or an executive order, a unified federal framework is essential for ensuring the US remains competitive, cohesive and capable of setting global norms.”
Acknowledging the argument that federal pre-emption undermines local control, Kirkwood said that when it came to AI, local control would lead to fragmented standards benefiting nobody “except maybe lawyers”.
“California may want aggressive AI safety legislation, but if New York and Florida disagree, developers are left navigating a maze of contradictory rules,” he said. “That kind of regulatory patchwork doesn’t protect people; it paralyses innovation. It’s not hard to imagine a future where startups build for the least regulated state and geo-fence everyone else. That’s a race to the bottom disguised as consumer protection.”
Missing the point?
But Ryan McCurdy, marketing vice-president at database change governance platform Liquibase, said the EO missed the point, though he conceded federal alignment on AI was a good idea.
“A single rulebook means nothing unless it addresses the baseline problem behind every AI failure: a lack of governance over the data structures that feed these models,” he said. “Model-level rules won’t protect the public if the underlying data is inconsistent, drifting or untraceable.
“So, the real question is whether the national standard will demand proof,” said McCurdy. “Proof of how models are trained, proof of how data evolves, proof of how organisations prevent unapproved or risky changes. That’s the difference between actual oversight and a press release.
“If the US wants to lead in AI, it needs more than a unified rulebook,” he said. “It needs a standard that forces AI systems to be explainable, governable and accountable from the ground up.”