President Trump Signs Executive Order to Limit State AI Regulation

President Trump’s Dec. 11 Executive Order, “Ensuring a National Policy Framework for Artificial Intelligence” (the “order”), targets what the administration views as burdensome and fragmented state AI regulation in favor of a single national framework. Although the order does not itself overturn any existing state AI law or block proposed legislation, it directs federal agencies to challenge certain state AI laws, to tie certain federal funding to states’ AI regulatory choices, and to propose federal preemption legislation.

What the Executive Order Does
The order states that U.S. policy is to “sustain and enhance” American AI dominance through a national framework that reduces regulatory burden and avoids a patchwork of 50 different state regimes. The order explicitly targets state laws like the Colorado AI Act’s algorithmic discrimination provisions, which the administration views as requiring ideological bias in models or forcing changes to otherwise “truthful outputs.”

The order directs the Attorney General to create an AI Litigation Task Force within 30 days to challenge state AI laws that conflict with this policy. It also instructs the Secretary of Commerce to review existing state AI laws within 90 days and identify those that are “onerous” or potentially unconstitutional.

Funding, FCC/FTC Actions, and Future Legislation
The order ties AI policy to federal funding. The Department of Commerce must condition remaining Broadband Equity, Access, and Deployment (BEAD) Program funds so that states with “onerous” AI laws (as identified in the Commerce review) are ineligible for certain non‑deployment funds. The order also directs federal agencies to review their discretionary grant programs and consider whether they can require states to avoid conflicting AI laws, or agree not to enforce them, during the grant period.

The Federal Communications Commission must consider a federal AI reporting and disclosure standard that would preempt conflicting state requirements. Likewise, the Federal Trade Commission must issue a policy statement within 90 days explaining how the FTC Act’s prohibition on unfair or deceptive acts or practices applies to AI models. The statement must also address when state laws requiring “alterations to truthful AI outputs” could be treated as mandating deceptive practices.

Finally, the order calls for a legislative proposal for a uniform federal AI framework that would preempt conflicting state AI laws. Per the order, any such proposal should still carve out topics such as child safety rules, state government procurement and use of AI, and most regulation of AI compute and data center infrastructure. The order concludes by clarifying that it creates no private right of action and must be implemented consistent with existing law and appropriations.

How This Interacts With Existing and Future State AI Laws
The administration’s view is that aggressive state AI regimes, especially those focused on algorithmic discrimination, content outputs, or detailed disclosure requirements, risk chilling innovation and raising First Amendment issues. Early federal scrutiny is likely to focus on laws that condition deployment on highly prescriptive model behavior, impose detailed reporting obligations on model design or training data, or reach across borders to cover out‑of‑state providers that serve in‑state users.

The order does not, by itself, invalidate any state AI law. Instead, challenges will proceed case by case, and courts will decide where federal authority ends and state police powers in areas like consumer protection, anti‑discrimination, and labor begin. In the meantime, companies should expect overlapping and sometimes conflicting expectations from states, federal agencies, and private litigants.

Practical Guidance for Companies Using or Developing AI Tools
For businesses already trying to track rapid AI developments, the order adds a federal layer but does not remove existing state obligations. In this environment, several practical steps may help:

  • Adopt a dual‑track compliance approach: Organizations should continue mapping AI use cases to state AI and privacy laws, such as rules on algorithmic discrimination, automated decision‑making, and transparency. Concurrently, organizations should also track evolving federal expectations from agencies like the FTC and Commerce. The order should be treated as an enforcement roadmap, not a shield from state enforcement.
  • Anchor AI governance in risk and transparency: Regardless of how preemption develops, regulators and courts are focusing on explainability, documentation, and controls around bias, safety, and misuse. A sound AI governance program should describe model purpose, data sources, testing and validation, human oversight, and change management in a way that can satisfy both state and federal audiences.
  • Prepare for inevitable federal‑state conflicts: In areas such as content outputs and anti‑discrimination, there may be tension between state rules aimed at avoiding disparate impact and federal views that some mandated outputs amount to compelled speech or deception. Companies should identify high‑risk models, including those used in employment, credit, housing, or sensitive content, and be ready to adjust deployment strategies through measures such as geofencing, configuration changes, or separate documentation.
  • Watch funding conditions and contract terms: Entities that rely on federal funds, particularly in broadband, infrastructure, education, and healthcare, should monitor for new funding conditions tied to state AI laws and enforcement pauses. Vendors serving those entities should expect more AI‑specific contract provisions on compliance, transparency, and cooperation with agency reviews.
  • Reinforce FTC and state UDAP compliance: The planned FTC policy statement will likely stress that overstating AI capabilities, hiding material limits, or changing model behavior in ways that mislead users may be treated as deceptive. Companies should review marketing materials, public claims, and user interfaces for AI tools to confirm that they accurately describe capabilities, risks, and the role of human oversight.
  • Plan for litigation‑ready documentation: With an AI Litigation Task Force and more state and private enforcement activity, contemporaneous documentation will be important. Companies should expect detailed questions later about why a model was deployed, how risks were evaluated, what mitigation steps were taken, and what alternatives were considered; a minimal illustrative record format appears after this list.
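
One lightweight way to operationalize the documentation points above is a structured, machine‑readable deployment record generated when a model is deployed or materially changed. The sketch below is illustrative only, assuming Python 3.9+; every field name (model_name, risk_assessment, and so on) is a hypothetical choice, not a requirement drawn from the order or any statute.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Illustrative only: these fields are hypothetical, not mandated by the order
# or any law. The goal is a contemporaneous, machine-readable record of why a
# model was deployed, how risks were evaluated, and what alternatives existed.
@dataclass
class ModelDeploymentRecord:
    model_name: str
    purpose: str                       # business purpose for the deployment
    data_sources: list[str]            # provenance of training/input data
    risk_assessment: str               # summary of bias/safety evaluation
    mitigations: list[str]             # controls adopted (e.g., human review)
    alternatives_considered: list[str]
    jurisdictions: list[str]           # states where the model is deployed
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize for retention in an audit or legal-hold system."""
        return json.dumps(asdict(self), indent=2)

# Example: documenting a hypothetical resume-screening model.
record = ModelDeploymentRecord(
    model_name="resume-screener-v2",
    purpose="Rank inbound applications for recruiter review",
    data_sources=["internal hiring outcomes 2019-2024"],
    risk_assessment="Quarterly disparate-impact testing; results archived",
    mitigations=["human recruiter makes final decision", "quarterly bias audit"],
    alternatives_considered=["manual screening", "keyword filter"],
    jurisdictions=["CO", "CA", "NY"],
)
print(record.to_json())

A record like this, retained under the organization’s normal document‑retention policy, means later answers about purpose, risk evaluation, and alternatives rest on contemporaneous records rather than recollection.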

Navigating Uncertainty While the Order Is Challenged
The order will likely face constitutional challenges, including questions about how far the executive branch may go in conditioning funding and directing independent agencies toward rules that preempt state law. Until courts resolve those issues, companies should assume that state AI laws remain in effect unless a court blocks them or they are superseded by valid federal law or regulation.

Federal agencies are likely to use the order as a guide for enforcement priorities and rulemaking, particularly in areas such as disclosure, truthful outputs, and unfair or deceptive practices. The broader trend points toward more detailed, sector‑specific rules for higher‑risk AI use cases, even within an announced goal of a “minimally burdensome” national framework.

During this period, organizations that invest in clear, well‑documented AI governance, with a focus on transparency, fairness, security, and accountability, will be in the best position to adapt. That remains true whether the federal framework expands, contracts, or is reshaped in court.
