In The Spotlight
Multiple state legislatures have passed laws in 2024 that define at least some ground rules about what is and isn't tolerable for artificial intelligence (AI) use in their states.
By John Ghrist
Although the U.S. Congress has made little progress on federal legislation addressing unethical uses of AI (unlike the European Union), numerous U.S. state legislatures have adopted new laws this year governing the use of AI in political campaigns, education and training, criminal justice, healthcare, and other contexts. In all, 31 states, plus the territories of Puerto Rico and the Virgin Islands, passed AI-related legislation of some kind in 2024. Many of these acts are worth a summary review because they contain useful definitions and ideas.
Several disclaimers first. A number of executive orders and initiatives regarding AI put forth by the Biden administration do have national influence, but it may not be useful to enumerate them here because so many are likely to be canceled or significantly altered by the impending change of administrations in Washington. We will also pass over the group of 2024 state laws that deal exclusively with defining and penalizing AI-generated pornography, because of their limited business relevance, except to note that in several states this is the only AI-related legislation to pass so far.
Numerous bills regarding AI introduced in state legislatures failed to pass or were never signed into law for other reasons; those also won't be covered here. Nor will the many bills that merely set up or fund studies, departments, commissions, task forces, and the like to examine the impacts of AI apps on existing laws, government and educational entities, or other aspects of public activity or commerce, and to suggest future legislative actions. Finally, the names and numbers of the individual acts cited below (to which links are attached) are not the bills' official names and numbers; they follow a numbering system developed for LexisNexis State Net, an online legislative and regulatory tracking and intelligence service covering the 50 states and Congress, which was a major source for the information presented here.
Political Misuse of AI and Deepfakes
With 2024 being a Presidential election year in the U.S., the potential political misuse of AI garnered particular attention from some state legislatures. Space prohibits quoting the legal definition of AI set out in each law, but each statute explicitly defines the AI-generated media it regulates before referring to it.
Alabama's Act No. 2024-349 makes it a crime to distribute "materially deceptive" media "produced by artificial intelligence" that "falsely depicts an individual engaging in speech or conduct in which the individual did not in fact engage" and that causes a reasonable viewer or listener to "incorrectly believe that the depicted individual engaged in the speech or conduct depicted." This action is not a crime if the disclaimer "this media has been manipulated by technical means and depicts speech or conduct that did not occur" appears within any video making such a depiction "in letters in a size that is easily readable by the average viewer" and "appears throughout the entirety of the video." If the depiction is caused by alteration of existing media, the altered version must contain a citation of the original source. The first offense is a Class A misdemeanor, but subsequent ones are a Class D felony (although we won't delve into the differences in consequences between those here). The law exempts broadcasting altered video as part of a news report about the altered media itself.
Colorado's 2024 CO H 1147 defines a deepfake as "analogous to a person being forced to say something in a video recorded under duress where the victim appears to say something they would not normally say" and outlines procedures for filing a civil lawsuit against the perpetrator(s).
Florida's 2024 FL H 919 requires a disclaimer in "a political advertisement, an electioneering communication, or other miscellaneous advertisement of a political nature" that contains images or other digital content "created in whole or in part with the use of generative artificial intelligence," and it specifies how that disclaimer must appear. Violators are guilty of a "misdemeanor of the first degree" and face potential civil penalties.
New Hampshire's 2023 NH H 1432 makes using a deepfake for any purpose a Class B felony and authorizes civil penalties for violations. It offers no safe harbor for disclaimers placed within the altered media, but it does exempt depictions of the altered media for satirical or news reporting purposes.
New Jersey's 2024 NJ AR 141 is less strict, simply calling on "platforms which are used to generate deepfake and cheapfake media to voluntarily commit to prevent and remove harmful content" while prescribing no penalties for failure to do so. (The measure officially defines "cheapfake" as "any software-generated audiovisual representation.") Use in news reports is exempted.
New Mexico's 2024 NM H 182 is a more general law on political media regulation, but it makes distributing AI-generated media in a "materially deceptive" political advertisement without an audio or visual disclaimer that the advertisement was "manipulated or generated by artificial intelligence" a violation punishable by a civil fine, with each violation counting as a separate offense.
Oregon's 2024 OR S 1571 defines AI-generated content as "synthetic media" and calls for civil fines if a political advertisement doesn't include "a disclosure stating that the image, audio recording or visual recording has been manipulated." Again, use in news reports is exempted.
Utah's 2024 UT S 131 also terms AI-generated media as "synthetic," prescribes the forms disclaimers must take, and requires a $1,000 civil fine for each politically related (e.g., for a candidate or a ballot measure) advertisement that violates this law.
Wisconsin's 2023 WI A 664 also defines AI-generated media as "synthetic," requires specifically worded disclaimers throughout "video and audio synthetic media" that the content is AI-generated, and provides a $1,000 fine for each violation.
Education and Training
California's 2023 CA A 2013 amends existing law on AI training-data transparency to require the developer of any generative AI "system or service" released to the public to post, prior to the software's official release, "the developer's internet website documentation…regarding the data used to train" the system. This release must include, "among other requirements, a high-level summary of the datasets used in the development of the system or service." The law also defines "synthetic data generation" as "a process in which seed data is used to create artificial data that has some of the statistical characteristics of the seed data." It specifies no penalties for noncompliance, but it is a civil rather than a criminal statute.
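To make that last definition concrete, here is a minimal, hypothetical Python sketch of synthetic data generation in the statute's sense: it fits simple statistics (mean and standard deviation) to a "seed" dataset and samples artificial records that share those characteristics. The data and distribution are invented for illustration.

    import random
    import statistics

    # Hypothetical "seed" data: ages observed in some real dataset.
    seed_ages = [34, 29, 41, 52, 38, 45, 31, 47, 36, 40]

    # Capture statistical characteristics of the seed data.
    mean_age = statistics.mean(seed_ages)
    stdev_age = statistics.stdev(seed_ages)

    # Create artificial records that mimic those characteristics
    # without copying any individual seed value.
    synthetic_ages = [round(random.gauss(mean_age, stdev_age)) for _ in range(10)]

    print("seed mean/stdev:     ", round(mean_age, 1), round(stdev_age, 1))
    print("synthetic sample:    ", synthetic_ages)
    print("synthetic mean/stdev:", round(statistics.mean(synthetic_ages), 1),
          round(statistics.stdev(synthetic_ages), 1))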
California's legislature also passed 2023 CA S 1047 (despite its index name, this bill and others dated before 2024 weren't acted on until 2024, although introduced earlier), a closely watched measure that would have required AI app developers to meet requirements for designing the data models used to build an app, to include an ability for humans to effect a full shutdown of the app on demand, and to maintain a written safety and security protocol for the app's service lifetime. The bill also would have prohibited developers from using a foundational data model for purposes "not exclusively related to the training or reasonable evaluation of the covered model" (or other models derived from it) "if there is an unreasonable risk" that the covered model or derivative "will cause or materially enable a critical harm, as defined." Beginning in 2026, developers would have been required to retain a third-party auditor to audit compliance with the bill's provisions, make the auditor's report publicly available, and file an annual report on their models with the state Attorney General. The bill also would have set up a "Board of Frontier Models" to issue future regulations regarding covered models and established a "cloud computing cluster" called "CalCompute" to advance the "development and deployment" of AI apps that are "safe, ethical, and sustainable." Governor Newsom vetoed the bill in September 2024, however, so its provisions have not taken effect.
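The bill's "full shutdown" provision is essentially a human-controlled kill switch. The text prescribed no particular mechanism, so the following Python sketch is purely illustrative: every inference call is gated on a shutdown flag that an authorized human operator can trip.

    import threading

    # Global kill switch an authorized operator can set at any time.
    # This flag-based gate is an invented pattern, not anything the
    # bill itself specified.
    shutdown_event = threading.Event()

    def full_shutdown(operator_id: str) -> None:
        """Record the human-initiated shutdown and stop serving requests."""
        print(f"Full shutdown initiated by operator {operator_id}")
        shutdown_event.set()

    def run_inference(prompt: str) -> str:
        """Serve a model request only while the system is not shut down."""
        if shutdown_event.is_set():
            raise RuntimeError("Model is shut down; refusing request")
        return f"(model output for: {prompt})"  # stand-in for a real model call

    print(run_inference("hello"))
    full_shutdown("ops-42")
    try:
        run_inference("hello again")
    except RuntimeError as err:
        print(err)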
Maryland's 2024 MD H 1128 sets up a Talent Innovation Program and Fund to improve availability of job training for state residents, to include AI and cybersecurity training.
Tennessee's 2023 TN S 1711 requires governing boards of higher education institutions, as well as all local boards of education for public schools and the governing bodies of public charter schools, to enact formal policies for use of AI by students and all educational personnel for any instructional and assignment activities.
Criminal Justice and Other Legal Concerns
California's 2023 CA A 2811 amends the state's Business and Professions Code to require attorneys to "execute and maintain for a period of 7 years an affidavit certifying whether generative artificial intelligence, as defined, was used in the drafting of each document that an attorney files…in a state or federal court within this state."
That state's 2023 CA A 2602 defines a "digital replica" to mean "a computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual…in which the actual individual either did not actually perform or appear, but the fundamental character of the performance or appearance has been materially altered, except as prescribed." The law also holds that agreements between any two individuals for personal or professional services are unenforceable if those services are fulfilled by such a digital replica instead of in person.
Illinois' 2023 IL H 3773 amends that state's Human Rights Act to prohibit employers that use predictive data analytics to screen employment applications from considering an applicant's race, or from using an applicant's ZIP code as a proxy for race, in hiring, selection, privileges, discipline, and a number of other employment-related contexts.
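In practice, compliance starts with keeping prohibited fields out of the features a screening model ever sees. Here is a minimal Python sketch, with invented field names, of scrubbing race and ZIP code from applicant records before scoring:

    # Fields an Illinois employer's screening model must not consider:
    # race directly, or ZIP code as a proxy for race. All field names
    # here are invented for illustration.
    PROHIBITED_FIELDS = {"race", "zip_code"}

    def scrub_applicant(record: dict) -> dict:
        """Return a copy of the applicant record without prohibited fields."""
        return {k: v for k, v in record.items() if k not in PROHIBITED_FIELDS}

    applicant = {
        "name": "A. Example",
        "years_experience": 7,
        "certifications": 2,
        "zip_code": "60601",   # must not reach the model
        "race": "redacted",    # must not reach the model
    }

    print(scrub_applicant(applicant))
    # {'name': 'A. Example', 'years_experience': 7, 'certifications': 2}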
New York's 2023 NY S 7676 sets new requirements for any contracts that involve use of digital replicas.
South Carolina's 2023 SC H 4754 amends state training requirements for real estate workers (e.g., brokers, associates, and property managers) to specify that the hiring real estate brokerage firm will be held responsible if trainees use AI or similar programs to produce the work products required during legally mandated training.
Utah's 2024 UT H 366 amends several criminal justice statutes but includes the restriction that courts can't solely use an algorithm or score derived from a computerized risk assessment tool to make decisions regarding probation for any defendant.
That state's 2024 UT H 249 prohibits the granting of "legal personhood" to any entity classified as "artificial intelligence" (along with a list of other entities that are not commonly considered persons).
Healthcare
California's 2023 CA A 3030 requires healthcare facilities that use generative AI to generate any "patient communications pertaining to patient clinical information, as defined," to "ensure those communications include disclaimers" that they were generated by AI, as well as instructions on how to contact a "human health care provider, employee, or other appropriate person." Violations are to be handled by "the Medical Board of California or the Osteopathic Medical Board of California, as appropriate."
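A facility might satisfy both elements, the AI disclaimer and the human-contact instructions, by appending required text to every AI-drafted message before it goes out. A minimal Python sketch follows; the disclaimer wording and contact details are invented, since the statute itself governs what the notice must say.

    # Illustrative only: the wording and phone number are invented.
    AI_DISCLAIMER = (
        "This message was generated by artificial intelligence. "
        "To reach a human health care provider, call 555-0100."
    )

    def finalize_patient_message(draft: str, generated_by_ai: bool) -> str:
        """Attach the required disclaimer to any AI-generated communication."""
        if generated_by_ai:
            return f"{draft}\n\n---\n{AI_DISCLAIMER}"
        return draft

    print(finalize_patient_message("Your lab results are ready.", generated_by_ai=True))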
That state's 2023 CA S 1120 requires that healthcare or disability insurers that use AI, an algorithm, or other software tool (or contract with another entity that uses such tools) for utilization management or review functions must comply with specified requirements, such as making sure coverage determinations are made based on specific information that is equitably and fairly applied.
New York's 2023 NY S 7543 specifies that state agencies may not solely use an automated decision-making system to make determinations for delivery of any public assistance unless such a system is specifically authorized by another law tailored to a particular public service.
Other Contexts of Potential Interest
California's 2023 CA A 2930 requires developers or deployers of any "automated decision system" to "provide any impact assessment to the Civil Rights Department" and "notify any natural person that is subject to the consequential decision that an automated decision system is used and provide that person with specified information," as well as give affected persons "an opportunity to correct any incorrect personal data." If a "consequential decision is made solely based on the output" of the system, the deployer is required to, "if technically feasible, accommodate a natural person's request to not be subject to the automated decision system and to instead be subject to an alternative selection process or accommodation, as prescribed." There is a civil penalty of $25,000 per violation, and the law outlines procedures for the state to follow to remedy alleged violations.
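As a rough Python sketch of that opt-out path, assuming (hypothetically) that a deployer offers a manual review queue as the "alternative selection process":

    # Hypothetical routing: honor a person's request not to be subject
    # to the automated decision system by diverting to a manual process.
    def decide(application: dict, opted_out: bool) -> str:
        if opted_out:
            # Alternative selection process: queue for human evaluation.
            return f"queued for manual review: {application['id']}"
        # Otherwise the automated decision system scores the application.
        score = len(application.get("qualifications", []))  # stand-in scoring
        return "approved" if score >= 2 else "denied"

    app = {"id": "A-1001", "qualifications": ["degree", "license"]}
    print(decide(app, opted_out=True))   # queued for manual review: A-1001
    print(decide(app, opted_out=False))  # approved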
That state's 2023 CA S 942 modifies the Business and Professions Code (relating to consumer protections) to require a "covered provider" to "make available an AI detection tool at no cost to the user" that can "assess any content that was created or altered by the provider's generative AI system," highlight and compile "any system provenance data that is detected in the output," and give the user an opportunity to submit feedback regarding that provenance data to the developer. Each violation would result in a $5,000 civil fine.
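Statutory details aside, the core idea is a tool that inspects content for embedded provenance markers. Here is a toy Python sketch using an entirely invented metadata format (real provenance standards, and the law's actual requirements, differ in their details):

    import json

    def detect_provenance(content: dict) -> list:
        """Compile any provenance records embedded in a piece of content."""
        return [
            record for record in content.get("metadata", [])
            if record.get("type") == "provenance"
        ]

    sample = {
        "body": "An image caption...",
        "metadata": [
            {"type": "provenance", "generator": "ExampleGenAI v2", "altered": True},
            {"type": "format", "value": "jpeg"},
        ],
    }

    print(json.dumps(detect_provenance(sample), indent=2))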
Colorado's 2024 CO S 205 sets consumer protections against "algorithmic discrimination" in a variety of contexts (including but not limited to education enrollment, employment, lending services, healthcare services, and housing) by "any high-risk artificial intelligence system that when deployed makes or is a substantial factor in making a consequential decision." Significantly, the law's definition of "high-risk" expressly excludes a long list of app types not generally associated with such decision-making, among them antivirus and cybersecurity tools, AI-enabled video games, calculators, spellcheckers, spreadsheets, firewalls, and web-hosting technologies, unless they make, or are a substantial factor in making, a consequential decision.
New Hampshire's 2023 NH H 1688 prohibits state agencies from using AI to manipulate, discriminate against, or surveil members of the public. In addition, "if an AI system produces a recommendation or a decision, and this recommendation or decision once implemented or executed cannot be reversed, then the recommendation or decision must be reviewed by a human who is in an appropriate responsible position and is aware of the limitations of the AI system before the recommendation or decision takes effect."
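That review requirement amounts to a human-in-the-loop gate on any irreversible action. A minimal, hypothetical Python sketch of how an agency system might enforce it:

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        """An AI system's proposed action. All fields are illustrative."""
        action: str
        reversible: bool
        human_approved: bool = False

    def approve(rec: Recommendation, reviewer: str) -> None:
        """A responsible human, aware of the AI system's limits, signs off."""
        print(f"{reviewer} reviewed: {rec.action}")
        rec.human_approved = True

    def execute(rec: Recommendation) -> None:
        """Block irreversible AI decisions that lack prior human review."""
        if not rec.reversible and not rec.human_approved:
            raise PermissionError("Irreversible AI decision requires human review")
        print(f"Executing: {rec.action}")

    rec = Recommendation(action="deny benefits renewal", reversible=False)
    approve(rec, reviewer="supervisor J. Smith")
    execute(rec)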
Utah's 2024 UT S 149 creates an "Artificial Intelligence Policy Act" that establishes an Office of Artificial Intelligence Policy, gives that office rulemaking authority over AI programs and regulation exemptions, sets up a regulatory AI analysis program, and "requires disclosure when an individual interacts with AI in a regulated occupation."
Conclusions
While regulation of AI apps at the U.S. federal level seems stalled for now, this summary of state laws passed this year shows that legislators at the state level, at least, are aware of some of the potential pitfalls of unregulated AI apps. With luck, some of the ideas embodied in these legislative acts will provide food for thought for legislators at other levels of government, in the U.S. and worldwide, as well as for the developers and deployers of current and future AI software.