Virginia’s High-Risk AI Developer and Deployer Act Awaits Governor’s Signature
This is a summary of HB 2094 and, as a summary, necessarily omits some details. The full text of the bill is available on the website of Virginia's Legislative Information System.
On February 20, 2025, the Virginia General Assembly passed HB 2094, the Virginia High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”). The bill now awaits signature by Governor Youngkin. If it becomes law, Virginia will become the second state, after Colorado, to enact legislation addressing algorithmic discrimination, although the Act is narrower in scope than Colorado's law.
The Act applies only to high-risk artificial intelligence systems, defined as “any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.” For purposes of the Act, an artificial intelligence system is “any machine learning-based system that, for any explicit or implicit objective, infers from the inputs such system receives how to generate outputs, including content, decisions, predictions, and recommendations, that can influence physical or virtual environments.”
Only developers and deployers of high-risk artificial intelligence systems involved in consequential decision-making would be subject to the Act. A consequential decision is any “decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer” of:
- Parole, probation, pardon, or any other release from incarceration or court supervision;
- Education enrollment or an education opportunity;
- Access to employment;
- A financial or lending service;
- Access to health care services;
- Housing;
- Insurance;
- Marital status; or
- A legal service.
The Act requires developers and deployers of high-risk AI systems to conduct risk assessments and to document each system's intended uses and limitations. Companies must also provide clear disclosures regarding the performance, limitations, and potential biases of their AI systems, and developers must provide deployers with detailed documentation explaining the evaluation methods used to test a system's performance and any steps taken to mitigate known risks.
An AI system or service is not a “high-risk artificial intelligence system” if it is intended to:
- Perform a narrow procedural task,
- Improve the result of previously completed human activity,
- Detect any decision-making patterns or any deviations from pre-existing decision-making patterns, or
- Perform a preparatory task to an assessment relevant to a consequential decision.
Moreover, a number of technologies are specifically excluded from the Act, including:
- Anti-fraud technology that does not use facial recognition technology;
- Artificial intelligence-enabled video games;
- Autonomous vehicle technology;
- Cybersecurity technology;
- Databases;
- Data storage;
- Firewall technology;
- Networking;
- Spam and robocall filtering; and
- Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions (subject to an acceptable use policy that prohibits generating content that is discriminatory or unlawful).
During the debate on this bill, numerous entities expressed grave concern that restricting the use of AI could destroy their businesses. Section 59.1-610 lists exempt activities. It states:
“A. Nothing in this chapter shall be construed to restrict a developer's or deployer's ability to (i) comply with federal, state, or municipal ordinances or regulations; (ii) comply with a civil, criminal, or regulatory inquiry … ; (iii) cooperate with law-enforcement … ; (iv) investigate … defend legal claims … ; (v) provide a product or service specifically requested by a consumer; … (viii) … protect an interest that is essential for the life or physical safety of the consumer or another individual; (ix) prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, or malicious or deceptive activities; …”
Similarly, the Act declares that certain industries “shall be deemed” to have met the requirements of this new law because their use of AI is already regulated by other federal and state authorities. Specifically, Subsection G states:
“The obligations imposed on developers or deployers by this chapter shall be deemed satisfied for any bank, out-of-state bank, credit union, federal credit union, mortgage lender, out-of-state credit union, savings institution, or any affiliate, subsidiary, or service provider thereof if such bank, out-of-state bank, credit union, federal credit union, mortgage lender, out-of-state credit union, savings institution, or affiliate, subsidiary, or service provider is subject to the jurisdiction of any state or federal regulator under any published guidance or regulations that apply to the use of high-risk artificial intelligence systems and such guidance or regulations …”
If signed by the governor, the Act would take effect on July 1, 2026. Virginia's Office of the Attorney General would have exclusive authority to enforce the Act's provisions.