The sensational media stories following the release of OpenAI’s GPT-4 continue, but the theme that it's about to—or already has—plunged the world into the "Singularity" is being quietly undercut by efforts in the EU and the US to regulate AI in general.
Many popular and business media outlets have recently gone overboard in predicting the end of civilization at the hands of this useful but humble chatbot. While it’s fun to scare ourselves with spooky stories from time to time, this deluge is starting to give AI in general an undeserved bad rep. A useful antidote is to look at the slew of laws and regulations—some already on the books, some merely pending—that can assure the sober-minded that responsible people are aware of the dangers and are acting to do something about it.
The European Union’s Take
The EU can claim to be the world leader in anticipating the societal problems that may occur when technology outstrips legal systems. Perhaps because Europe experienced the dangers of totalitarianism leading to war within living memory of some of its citizens, its people are more sensitive to certain kinds of "what-ifs," as illustrated by the Charter of Fundamental Rights of the European Union. The 2016 General Data Protection Regulation (GDPR), for example, provides privacy protections from exploitation of personal data by Big Anything. Its equivalent will likely be a long time coming in the US, where the GDPR is sometimes viewed as a restraint on advertisers’ First Amendment rights to make unlimited sales offers. However, the direction the EU is taking is a path others will likely emulate eventually.
The EU’s policy statement on liability rules for AI outlines an effort to legislate safeguards against unbridled AI apps. It starts out by postulating four categories of AI application risk.
Unacceptable risk is the highest category, covering apps that the EU will prohibit completely. Examples of this risk type are children’s toys with voice assistance (which might suggest dangerous behavior) and social scoring by governments (based on credit ratings, comments by acquaintances, or other factors). (Although not explicitly mentioned now, presumably this would include other actions considered dangerous to society at large.)
High risk is the next most severe classification. Such apps must be strictly controlled via a combination of human oversight, systems of risk assessment and mitigation, high standards for app training data sets, traceability of results via activity logging, detailed documentation for users and oversight authorities, and overall requirements for accuracy, security, and robustness. App types in this category would include infrastructures that could affect the life and health of users (e.g., self-driving cars or medical lifting devices), product-safety components (e.g., robotic surgical systems), apps affecting worker employment or management (e.g., resume-sorting or exam-scoring apps), significant private or public services (e.g., credit scoring), border-control management (e.g., evaluation of travel docs), and administration of justice or democratic processes (e.g., evaluating evidence, voting, applying the law to a set of facts, or using remote biometric identification methods for law enforcement). Biometrics would still be permitted (with court approval) in cases such as finding lost children, identifying a specific perpetrator as part of a trial, or responding to a terrorist threat.
Limited risk would include generative apps like GPT-4 that would simply have transparency requirements, such as indicating to users that they are working with a machine, specifying that resulting content would be AI-generated, making available public summaries of the app’s training data, and having built-in restrictions against producing illegal content, such as previously copyrighted material.
Minimal risk apps would be unrestricted and would include such examples as spam filters and video games.
The EU moved on to an Artificial Intelligence Act proposal in 2021 that, to start, would impose a regulatory framework for high-risk AI systems only, while prescribing a voluntary "code of conduct" for producers of AI apps in the other risk categories.
"The requirements will concern data, documentation and traceability, provision of information and transparency, human oversight, and robustness and accuracy" for high-risk apps, the proposal states. The proposal goes on to say this approach was considered the best balance between violations of fundamental rights, people’s safety, assuring that deployment and enforcement costs would remain reasonable, and giving member states "no reason to take unilateral actions that would fragment the single market" (in Europe for AI apps). The proposal also points out that "an emerging patchwork of potentially divergent national rules will hamper the seamless circulation of products and services related to AI systems" and that "national approaches in addressing the problems will only create additional legal uncertainty and barriers and will slow market uptake of AI." The same logic implies—probably correctly—that the most effective parallel effort in the US will have to be federal laws or regulations, rather than leaving regulation to individual states.
AI systems that are safety components of products "will be integrated into the existing sectoral safety legislation." High-risk AI systems integrated into other products (e.g., machinery, medical devices, toys) "will be checked as part of the existing conformity assessment procedures" under separate legislation relevant to the individual product types.
This general position was agreed to by member states in 2021, but the legislation stalled until June of this year, when requirements were added that apps like ChatGPT would have to disclose when content is AI-generated, image-related apps would have to include a method of distinguishing “deep-fake” images from real ones, and all apps would have to provide safeguards against illegal materials (i.e., content previously copyrighted by others). Perhaps most significantly, social media platforms with more than 45 million users that use AI systems to influence election outcomes would be classified as "high-risk" systems—which would explicitly include Meta and Twitter/X.
Far from heeding the voices of those who think a moratorium on AI development should be observed until new standards are worked out, Thierry Breton, the EU’s current Commissioner for the Internal Market, said in June that, "AI raises a lot of questions—socially, ethically, economically. But now is not the time to hit any 'pause button.' On the contrary, it is about acting fast and taking responsibility."
The proposed law is currently being debated by politicians and industry alike, but the EU is hopeful of passing the legislation by year’s end. Afterward, there will be a grace period of roughly two years before the rules take full effect.
Some Laws Already on the Books
Legally speaking, the US AI market is not quite the Wild West some might have you believe. A 2020 blog post on the US Federal Trade Commission (FTC) website points to two laws in force since the 1970s that have a bearing on AI. The Fair Credit Reporting Act (1970) governs the use of algorithms to determine credit scoring for loan applications. The Equal Credit Opportunity Act (1974) "prohibits discrimination on the basis of race, color, religion, national origin, sex, marital status, age, receipt of public assistance, or good faith exercise of any rights," a challenge for an AI that might have been trained with data that isn't properly curated to filter out potential bias in any of these areas. The blog goes on to point out that any deceptive use of "doppelgangers" (e.g., fake dating profiles, phony followers, deepfakes, AI chatbots), secretly collected or sensitive data for AI training, and failure to consider outcomes as well as inputs could result in a company facing an FTC enforcement action. In addition, Section 5 of the FTC Act (1914) itself bans unfair or deceptive practices such as, the blog notes, "sale or use of racially biased algorithms."
The National Artificial Intelligence Initiative Act (NAIIA) of 2020, passed as part of the 2021 National Defense Authorization Act, directs the President, the interagency Select Committee on AI, and agency heads to support interdisciplinary AI research programs, AI R&D, and AI education and workforce training programs; establishes seven governmental AI research institutes; and directs the Office of Management and Budget (OMB) to issue guidance on regulation of AI in the private sector.
Although the NAIIA doesn’t define AI app risk tiers as the EU legislation does, the mandate for OMB guidance does include the admonishment that, "for higher risk AI applications, agencies should consider, for example, the effect on individuals, the environments in which the applications will be deployed, the necessity or availability of redundant or back-up systems, the system architecture or capability control methods available when an AI application makes an error or fails, and how those errors and failures can be detected and remediated." Although vague, such governmental strictures mean any AI app producer in the US already needs to be aware of the potential impacts of its systems.
As directed by the NAIIA, the National Institute of Standards and Technology (NIST) has issued the Artificial Intelligence Risk Management Framework, intended to be a "living document" that provides information on assessing and managing risk in AI apps. The publication’s goals are "to equip organizations and individuals with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time." It further states that the framework is intended to be practical and will be updated "as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms."
In May, the US and EU announced joint work on a voluntary code of conduct for AI applications to apply until the EU law takes full effect. Then, on July 21, the Biden administration announced an agreement with seven major companies to ensure their AI products are safe before they’re released, along with some unspecified third-party oversight. The agreement included a pledge by the companies to begin placing watermarks on all AI-generated images.
Progress on More Local Fronts
US states and municipalities have also passed some laws pertaining to AI.
Alabama passed a law in April 2021 that sets up a council on AI to advise the governor and legislature. Another Alabama law passed in 2022 prohibits the use of facial recognition technology as the sole evidence for an arrest.
Colorado passed a law in 2022 setting up a task force to investigate and restrict use of facial recognition technologies by state and local bodies, and to prohibit public schools from entering into contracts for facial recognition services. The city of Baltimore and the city of Bellingham, WA, have passed ordinances similarly restricting facial recognition use by their police departments.
Illinois has passed an act requiring employers that make hiring decisions based solely on AI evaluations of video interviews to report those instances to the state Department of Commerce. New York City has passed a similar local ordinance.
In May 2022, Vermont created a division of its Agency of Digital Services to investigate all uses of AI by the state government.
Keeping Eyeballs on AI
Is all this activity absolute insurance against an eventual takeover of humanity by AI? No, but it illustrates that many people are already watching and listening for trouble. It shows that there’s no reason for outright alarm. While it’s true that AI is a young enough technology that humans haven’t even scratched the surface of coming up with ways to misuse it, the moratoriums and outright bans that some are calling for are misguided. A sharpened pencil thrust into a vulnerable part of the human body can kill, but saving the world from that possibility with laws against all writing utensils is not a reasonable response. Like most of life’s dangers, AI apps can be a safe and useful tool set if rational people continue to give careful ongoing thought to their ramifications.