December 11, 2025
By: David B. Allen
AI adoption is outpacing regulation, and regulators are responding fast. In recent months, both federal and state governments have accelerated efforts to assert control over how artificial intelligence (AI) is developed, deployed, and governed. What’s emerging is not a unified framework, but rather a fragmented regulatory landscape that creates compliance risk for businesses operating across jurisdictions. For legal and compliance teams, the question is no longer whether AI will be regulated, but how to prepare for overlapping and sometimes conflicting rules.
At the federal level, AI policy continues to be a flashpoint in 2025 and will likely continue into 2026. A notable development is a leaked draft Executive Order titled “Eliminating State Law Obstruction of National AI Policy.” This draft would direct multiple federal agencies—including the U.S. Departments of Justice and Commerce, Federal Communications Commission, and Federal Trade Commission—to evaluate state AI laws, potentially sue states over AI regulations deemed burdensome, and condition certain federal funding on states’ regulatory approaches. It also contemplates developing a uniform federal AI regulatory framework that could preempt conflicting state laws. Implementation challenges are anticipated, including legal hurdles under constitutional principles and pushback from states defending their regulatory authority.
At the state level, meanwhile, Florida Governor Ron DeSantis recently unveiled a proposed “AI Bill of Rights” aimed at curbing perceived risks from unregulated AI deployment and protecting Floridians’ interests. Key elements announced in the proposal include:
- Expanded anti-deepfake protections and restrictions on the use of AI tools in sensitive contexts.
- Data privacy safeguards, including prohibitions on selling consumer data or using individuals’ name, image, or likeness without consent.
- Required notice to consumers when they are interacting with AI chatbots.
- A prohibition on companies providing licensed therapy or mental health counseling through AI.
- Parental access to children’s interactions with AI.
- Limitations on AI deployment in government and restrictions on AI data center subsidies or placements.
Florida is not alone. Broader 2025 legislative activity saw dozens of states introduce or enact laws addressing AI deployment, disclosures, governance, and safety—illustrating how dynamic and fragmented the regulatory landscape remains in the absence of a unified federal framework.
How can your organization get ahead of these changes?
- Monitor Regulatory Trajectories and Compliance Obligations – Organizations should track both federal moves toward uniform frameworks and state-specific initiatives like Florida’s AI Bill of Rights. With divergent approaches already emerging, legal and compliance teams should evaluate how different regimes may apply to operations, products, or data practices across jurisdictions.
- Conduct an AI Risk and Usage Inventory – Legal counsel should lead a cross-functional assessment of where AI is used within the business—from customer interactions and HR tools to backend analytics and decision systems—and identify where regulatory obligations or prohibitions may apply.
- Update Contractual Terms and Vendor Controls – Vendor agreements involving AI technologies should be reviewed to ensure they reflect state-level requirements (e.g., data protections or consumer notice obligations) and potential federal standards. Representation and warranty language, risk allocation, and indemnities for regulatory compliance failures will be key negotiable terms.
- Reevaluate Governance and Incident-Response Protocols – Policies governing model deployment, audit trails, human oversight standards, and response plans should be updated to reflect emerging legislative expectations and to prepare for possible enforcement. Counsel should work with IT and cybersecurity teams to align legal and technical controls.
- Engage Counsel Early in Policy Interpretation – Given the evolving interplay between federal preemption efforts and state autonomy, in-house and outside counsel should be engaged proactively to interpret regulatory impacts, defend against enforcement actions, and advise leadership on compliance strategy. Legal guidance will be necessary to navigate ambiguity and reduce exposure.
The federal government is signaling a move toward centralized AI oversight, while states like Florida are advancing their own comprehensive frameworks. That divergence creates operational uncertainty for companies deploying AI tools across multiple jurisdictions. In this environment, organizations cannot rely on a single compliance playbook; they should actively track regulatory developments, reassess their AI footprints, and strengthen governance before enforcement arrives. The companies that act now will be best positioned to mitigate risk, maintain operational agility, and demonstrate responsible stewardship of emerging technologies.
For businesses evaluating how these shifting AI policies affect their operations, contracting strategy, or compliance posture, engaging experienced legal counsel early is no longer optional—it is a strategic necessity.
Questions?
Contact GrayRobinson Attorney David Allen or another member of the firm’s Data Privacy and Security Practice.