Although there have already been plenty of concerns about how AI will cause seismic changes in numerous fields, OpenAI’s recent release of GPT-4 has pushed many observers to a premature tipping point of existential panic.
Since the late 1960s, science fiction thrillers have gleefully anticipated the day when inventing a tool as clever as AI would come back to bite us. HAL 9000, Skynet, and similar silicon-based villains have all along seemed destined to usher in the fall of our species as soon as humans got stupid enough to make computers that are smart enough. Suddenly this spring, GPT-4’s release seems to have become the “red sky at morning” of which sailors of digital seas must take warning. With apologies to Hunter S. Thompson, a review of the bidding may be in order.
To begin with, GPT-4 is the fourth generation of an AI model that analyzes human language; it’s produced by OpenAI, a company backed by Microsoft, and it powers the ChatGPT app. There are competitors out there, such as Anthropic’s Claude, Baidu’s Ernie, Microsoft’s Bing Chat, Google’s Bard, Jasper, and others, but it’s GPT-4 that has really escalated the level of concern, so we’ll focus on that one here. ChatGPT, initially running on version 3.5 of the model, was released in November 2022, and GPT-4 followed on March 14, 2023. GPT-4 is classified as a Large Language Model (LLM), a type of AI built to interpret natural language and predict which words are most likely to appear next in a sentence, based on context. (It’s also multimodal, which means it can work with images as well, but there’s insufficient space to get into that aspect here.)
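To make that “predict the next word” idea concrete, here’s a toy sketch in Python. This is emphatically not how GPT-4 works internally (real LLMs use neural networks trained on vast text corpora, not word counts), but it illustrates the basic principle of choosing a likely next word from observed context:

```python
# Toy illustration of next-word prediction: a bigram model that picks
# the word most often seen following the previous word. Real LLMs do
# something far more sophisticated, but the core task is the same.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' (the word seen most often after 'the')
print(predict_next("sat"))  # -> 'on'
```

Scale that intuition up from a few dozen words of context to essentially the whole public Internet, and you have a rough feel for why an LLM’s output can sound so fluent.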
GPT-4 has been trained to mimic human writing by digesting an enormous amount of human-written text, and it reportedly employs anywhere from one trillion to 100 trillion parameters, depending on the source (none of these figures confirmed by OpenAI). With training on that scale, GPT-4 can produce lengthy responses to questions, generating output that ranges from a recipe based on a picture of a refrigerator’s contents to application code to a fake conversation between simulations of a filmmaker and a philosopher, to name just a few possibilities. It’s a significant achievement not only in itself but also as a sign that the language-processing side of AI continues to show great promise. What follows in coming years will be even more impressive.
Aieee! It’s AI!
On the heartburn side of the ledger is the angst about the potential loss of jobs GPT-4 and similar apps might cause. It’s hard not to notice that this potential change seems to be attracting more attention than earlier steps on the historical road to automating work did, simply because the color of the threatened workers’ collars this time is white rather than blue. (Having been a programmer, journalist, and technical writer at points in my career, I’ll admit that was my own first reaction as well.)
Beginning with a report that OpenAI co-wrote with University of Pennsylvania collaborators, which predicts that 80 percent of the U.S. workforce could eventually have at least 10 percent of their work tasks affected (and that up to 19 percent of workers could see as much as half of their tasks affected), mourning bells have already begun tolling for potentially displaced workers. Another report, issued by Goldman Sachs, predicts eventual automation of the equivalent of 300 million jobs worldwide by GPT-4 and similar apps. As these examples illustrate, articles predicting whose jobs will be the first to go abound.
Citing concern that specific queries will become fodder for further GPT-4 training, several companies have banned employees from using the app for work, lest those queries make company secrets available to other GPT-4 users later. Such industry luminaries as Steve Wozniak and Elon Musk (and more than 27,000 other people so far) have signed a letter asking for a moratorium on large AI projects, to make time for development of “robust AI governance systems.” Even Bill Gates, on page 6 of an otherwise largely positive blog post about AI, felt motivated to ask, “Could a machine decide that humans are a threat?”
Largely over concern that the way ChatGPT was trained violated the E.U.’s General Data Protection Regulation (GDPR), Italy’s data-protection authority banned the app pending further investigation, and parallel offices in the U.K. and Ireland hinted they could follow suit. (OpenAI maintains it didn’t violate the GDPR.)
Clearly, GPT-4 is not your daddy’s algorithm. However, such concerns disguise the real drawbacks of GPT-4. First, its training data was confined to text available on the Internet before late 2021. Second, it creates nothing new; it simply recombines and rearranges pieces of that training data in response to any query (although it mimics the underlying patterns of expression well). What that will do to copyright and intellectual property law in the long run is anyone’s guess. And relying on the Internet as a whole for accurate information is a cautionary tale in itself.
Taking a Breath
As Gates also notes in the abovementioned post, “artificial general intelligence [AGI] refers to software that’s capable of learning any task or subject.” That’s still a step short of “strong AI,” an AI with a human-level intellect. GPT-4 is capable of answering natural-language questions at length (even though it’s already gaining notoriety for some glaring mistakes). It’s all too easy for some to jump to the dreaded conclusion that GPT-4 is capable of humanlike independent thought (and therefore malevolence because, hey, we know what those humans are like).
The Turing Test (which evaluates how well a machine mimics human behavior) asks, in part, only whether a machine’s behavior is “indistinguishable” from a human’s. It looks like many people who aren’t mindful of Turing’s nuances believe GPT-4 is actually meeting that much of the test’s criteria.
It isn’t. Looking like a duck and sounding like a duck doesn’t make GPT-4 a human-level intelligence. We have neither AGI nor strong AI yet; they don’t exist except in our imaginations. GPT-4 is simply an AI tool that laypeople can use.
The Sun Also Rises
As such, it has much to recommend it. A more realistic approach to GPT-4 is to think about all the ways this app (not to mention its successors in kind) will free many of the workers whose demise is being predicted from much of the drudgery they endure now. With a little custom editing, chores as banal as business letters, interoffice memos, real estate descriptions, even resumes and their cover letters can all be adequately handled by GPT-4 and others of its ilk. AI-assisted research can help in all kinds of fields.
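For the technically curious, here’s a minimal sketch of what delegating such a chore can look like, using OpenAI’s Python library as it stood at GPT-4’s release (the pre-1.0 ChatCompletion interface); the prompt text is purely illustrative, and you’d need to supply your own API key:

```python
# A minimal sketch of handing a routine writing chore to GPT-4 via
# OpenAI's Python library (the pre-1.0 API current at GPT-4's release).
# The memo topic is a made-up example; never hard-code a real API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise business writer."},
        {"role": "user", "content": "Draft a short interoffice memo announcing "
                                    "that the quarterly report deadline has "
                                    "moved to Friday."},
    ],
    temperature=0.7,  # a little variety, but nothing too creative
)

# The draft still deserves the "little custom editing" mentioned above.
print(response.choices[0].message.content)
```

The point isn’t the dozen lines of code; it’s that the boilerplate memo becomes a starting draft to polish rather than an hour’s chore to write.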
An additional grab bag of GPT-4 possibilities includes such varied tasks as analyzing business plans and financial summaries, synopsizing medical diagnoses and research documents, enhancing education by building better language models of learning topics, customizing corporate chatbots for specific products, and providing mental-health support via chatbots offering limited interactive psychological therapy. Other ideas include improving text-to-speech apps for users who are blind or have low vision, helping AI prompt engineers write chatbot-testing queries, and helping user-experience designers analyze survey and focus-group results.
Contrary to the readily imaginable scenario of high-school and college students using something like GPT-4 to write essays for them, UCLA Law School Professor John Villasenor suggests it’s best to teach students to use it ethically and productively in writing papers, to boost learning. (This idea embraces the notion that as GPT-4 and similar apps mature, knowing how to write a good essay may go the way of knowing how to use a slide rule once the pocket calculator came along.) And after all, Villasenor argues, how effective would a ban on college students using something like GPT-4 be anyway? A recent article published by Intelligent.com suggests that as many as eight out of ten teachers agree with Villasenor and are even using ChatGPT themselves to prepare teaching materials. (For those who fear the other side of the coin, Originality.ai offers one possible AI-content and plagiarism detector.)
However, thinking that the GPT-4s of the world will eliminate creative writing ignores a common human attribute: the desire to use poetry and prose to express thoughts, feelings, and concepts in original ways. It seems unlikely any AI can ever completely eradicate those ambitions.
Can GPT-4 and similar tools have an economic impact, such as on worker productivity? Maybe. A recent (but not yet peer-reviewed) study by researchers at MIT divided 444 marketing and HR professionals into two groups, with one group using ChatGPT to accomplish daily writing tasks while a control group didn’t. One of the unexpected results was that while those using the app improved productivity slightly overall, the least-skilled writers got a greater boost than those judged to be better writers. This implies that GPT-4 and its siblings could have the greatest impact on improving the skills of workers who had the fewest to begin with.
Not Going Gently into That Good Night
The abovementioned letter signed by Wozniak and Musk advocated a six-month moratorium to enable the U.S. and other governments to come up with at least a start on regulations or laws that would set guidelines for AI apps in general. That’s one way to go, although it smacks of a hammer seeking a nail, and these days six months doesn’t seem like long enough for Congress to act on much of anything with one voice. Besides, an (unenforceable) moratorium of any length will mainly give those who don’t observe it a leg up on anyone who does. The actual situation looks more like an AI race mirroring the 1960s Space Race, albeit one run between corporate rather than strictly national entities.
Anyone who seriously thinks any AI will get the best of us isn’t giving our species enough credit. We haven’t lasted 300,000 (or whatever) years so far just to let a tool of our own making take us out. Our survival instincts (so far, at least) are just too strong, as nearly 80 years of flirting with nukes should prove. Besides, any first-year coding student can tell you that computers don’t do what you think they should do, nor what you want them to do; they do only what you actually tell them to do.
The GPT-4 furor shows that enough people are paying attention to the threat Gates and others have raised. Famed science-fiction author Isaac Asimov postulated three “laws of robotics” in I, Robot, the first of which admonishes that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” Surely even by 2058, the date Asimov postulated, similar strictures will be programmed into any strong AI system we can come up with, regardless of whatever governmental rules may or may not exist by then.
In the final analysis, humans need air to live, and computers need electricity. As the Wall Street Journal recently noted, we don’t even need to wait for a thunderstorm, earthquake, or coronal mass ejection to take down our power grids and save us from GPT-4 today or strong AI tomorrow; all we really need is a reliable “off” switch.