AI Regulation on the Move: EU Leads with AI Act as U.S. States Forge Their Own Paths
Quite understandably, this week's passage of the AI Act by the European Parliament received a great deal of attention, as it represents a significant step forward in developing a legal framework for artificial intelligence (AI).
However, while a few provisions will become effective in the coming months, many will not be in full force for several years. In the U.S., not unexpectedly given the current environment in Washington, there has been no movement in Congress on the myriad AI bills that have been introduced, including several intended to provide oversight of AI systems.
However, that does not mean that U.S. businesses are free of legal obligations when developing, deploying or using AI systems, products or services. For example, in February the Utah Legislature passed the Artificial Intelligence Policy Act (the “AI Policy Act”), which, once signed by the state’s governor, would go into effect in May of this year. The AI Policy Act will establish liability for the use of AI in ways that violate Utah’s consumer protection laws if proper disclosure is not made. In addition, it permits companies that are considering developing an AI product, while the state’s AI regulations are still being developed, to enter into a risk mitigation agreement with the Office of Artificial Intelligence Policy (a newly created agency under the AI Policy Act with AI rulemaking authority). The law also makes clear that synthetic data – defined as data that has been generated by computer algorithms or statistical models and does not contain personal data – is considered “de-identified data.”
Another area where states have taken action is in the procurement by state agencies of products and services that use AI. For example, in January Governor Youngkin of Virginia issued Executive Order Number 30 (2024), which included provisions directing the Virginia IT Agency (VITA) to issue “guiding principles for the ethical use of AI, general parameters to determine the business case for AI, a mandatory approval process for all AI capabilities, a set of mandatory disclaimers to accompany any products or outcomes generated by AI, methods to mitigate third-party risks, and measures to ensure that the data of private citizens are protected.”
The Policy Standards for the Utilization of Artificial Intelligence by the Commonwealth of Virginia published by VITA directs state agencies to only use AI “if there is a positive outcome for the citizens of the Commonwealth.” In addition, AI should be the “optimal solution” for the outcome and agencies should conduct a regulatory impact analysis to assess costs and benefits.
Agencies are also directed to enter any intended use of AI into a state registry controlled by VITA. AI capabilities included in the registry must undergo an extensive approval process. For example, agency IT and information security officials and VITA (or the Chief Information Officer of the Commonwealth) must approve the AI system, while the agency’s Secretariat must approve use of the AI and the Secretary of Administration, in consultation with the Governor’s Chief Counsel, must review for any ethical issues.
The review process includes verification that an AI system is fair and will not result in any unlawful discrimination against, or have any unlawful disparate impact on, any individual or group of individuals. It also includes an examination as to whether the developer of the AI system is providing or should provide any warranties regarding the safety and security of the AI system or its output.
Even companies that are not subject to AI laws, regulations or policies are not immune from having to address legal matters associated with AI. For example, every company should review its Software as a Service (SaaS) agreements to determine if its data will be (or can be, for existing contracts) used as training data to build AI models. In addition, legal due diligence should accompany technical due diligence of vendors, as many are introducing (or claiming to introduce) AI into a wide range of new products and services. As the law evolves, companies will need to understand what AI is included in their technology stacks and how it is being used.