As artificial intelligence (AI) becomes increasingly integrated into daily life, its regulation has emerged as a pressing issue. Yet while AI development advances at a rapid pace, regulation in the United States remains fragmented. Unlike the European Union (EU), which has introduced its landmark AI Act, the U.S. lacks a unified federal framework, leaving states to fill the regulatory void. The result is a complex patchwork of AI laws and standards that businesses must navigate.
A Growing Mosaic of State Regulations
Since 2019, at least 40 U.S. states have considered AI-related legislation, and 17 states have enacted 29 bills addressing concerns such as data privacy, accountability, and algorithmic fairness. In the absence of federal action, states like Colorado, Utah, and California have begun crafting their own AI regulatory frameworks, drawing inspiration from the EU’s comprehensive approach or adapting their existing laws to govern AI.
Legal experts warn that this fragmented landscape mirrors the early days of data privacy regulation, when businesses struggled to comply with a variety of conflicting state laws. The same risks now loom for AI, complicating compliance efforts for organizations and placing additional burdens on risk and legal professionals.
Federal Efforts Fall Short
Congress has shown interest in AI regulation, holding hearings and proposing several bills in 2023. Yet none has passed, leaving regulatory initiatives largely to individual states. The White House has stepped in to offer guiding principles for AI governance, emphasizing the protection of civil rights, transparency, and the prevention of algorithmic discrimination. While these principles provide a valuable foundation, they lack the binding authority of legislation.
White House Guiding Principles for AI
The White House recommends that states adopt AI governance principles to:
- Foster inclusive dialogue among stakeholders during the design and development of AI systems.
- Safeguard individuals from unsafe or ineffective AI.
- Ensure transparency in data collection and AI use.
- Provide options for human intervention when AI is employed.
- Prevent discrimination and promote equitable outcomes.
- Hold developers and users accountable for adhering to ethical and legal standards.
Spotlight on State Regulations
Colorado AI Act
The Colorado AI Act, set to take effect on February 1, 2026, is the first comprehensive state-level framework for AI governance in the U.S. It draws parallels with the EU AI Act but includes unique provisions tailored to Colorado’s legal environment.
Key features of the Colorado AI Act include:
- Defining high-risk AI systems as those influencing decisions in critical areas like healthcare, employment, and education.
- Mandating safeguards against algorithmic discrimination based on age, race, disability, and other protected characteristics.
- Requiring transparency regarding training data and algorithm logic.
- Excluding general-purpose AI tools, such as content generators, unless they influence consequential decisions.
Violations of the Colorado AI Act also constitute breaches of the state’s Unfair and Deceptive Trade Practices Act, imposing severe penalties on non-compliant organizations.
Utah AI Policy Act
Utah’s AI Policy Act takes a narrower approach, focusing on generative AI. The law integrates AI governance into existing consumer protection frameworks, requiring businesses and licensed professionals to disclose when consumers are interacting with AI systems. Non-compliance can result in fines of up to $2,500 per violation.
California’s Multifaceted Approach
California is at the forefront of AI regulation, with several legislative efforts targeting transparency, accountability, and fairness. Notable initiatives include:
- AB 2013: Requiring disclosure of training data for generative AI systems (effective January 1, 2026).
- SB 942: Mandating free tools to identify AI-generated content.
- AB 3030: Introducing disclaimers for AI-generated healthcare communications.
- AB 1008: Expanding the California Consumer Privacy Act to include AI-generated outputs.
Although some measures, such as SB 1047, were vetoed due to concerns about stifling innovation, California continues to balance regulatory oversight with its role as a leader in AI development.
Harmonizing AI Governance: Challenges and Opportunities
The current patchwork of state laws poses significant challenges for businesses operating across multiple jurisdictions. Companies must adapt to varied requirements, from transparency and data-reporting obligations to use-case-specific restrictions. Without harmonization, these fragmented regulations risk inhibiting innovation and driving up compliance costs.
The Role of Connected Risk
Navigating this complex regulatory landscape requires a proactive and adaptable approach. Empowered Systems’ Connected Risk platform offers a comprehensive solution for managing compliance, identifying risks, and adapting to evolving AI regulations. By centralizing regulatory data and enabling dynamic risk assessments, Connected Risk simplifies compliance for organizations facing a web of state-level AI laws.
Stay ahead of the curve and navigate the ever-changing landscape of AI regulation with confidence. Discover how Connected Risk can empower your business to meet compliance requirements, mitigate risks, and foster ethical AI practices. Schedule a demo today and transform regulatory challenges into opportunities for innovation.
This summary of relevant regulations, and any additional regulatory information displayed on Empowered Systems’ website, is provided for informational purposes only and does not constitute legal advice. We recommend that you consult a lawyer or other appropriate legal counsel.