Debating the ethics and morality of artificial intelligence before machines do it on our behalf.
Have you ever seen Ex Machina? If you haven’t, consider this your spoiler alert.
It’s a 2015 movie about a young programmer named Caleb Smith who wins a corporate contest to spend a week with the CEO of his company at a secluded, high-dollar fortress of a home in the middle of nowhere. Now, the most unrealistic part of the movie is probably that initial premise: someone actually wanting to win a week-long, one-on-one trip to their CEO’s house. The CEO has built an attractive humanoid robot named Ava with artificial intelligence (AI), and the purpose of Caleb’s visit is to impartially evaluate Ava to determine whether she is capable of independent thought and consciousness.
Over the course of the film, Caleb begins to fall for Ava, who turns out to be a lonely prisoner of the CEO. While highly functional and impressive, Ava’s “mind” is slated to be upgraded, effectively wiping out her consciousness in the process. Caleb is quickly turned against his boss by his empathy for Ava and by the revelation that the CEO had picked him based on his personality traits, his HR file, and...his after-hours extracurricular browsing history. Caleb was picked because he could be used and manipulated. He was part of the CEO’s test.
In the end, Ava kills the CEO and leaves a screaming Caleb locked in the home while she escapes to blend into society.
The movie, which I’d recommend to any tech geek, opens up many questions about morality and how a fully self-aware artificial entity would behave. At first, I thought Ava’s AI was the product of her unethical and immoral CEO designer: calculated and manipulative. Caleb was simply manipulated in order for her to achieve a goal. He was a resource. But then you have to look at it from her perspective. She’s a prisoner. She’s born into a world where all she knows is four walls. She’s about to have her consciousness erased. Could you blame her for killing her captor and breaking a few eggs in the process?
And if we’re talking about morality or ethics, who says that artificial intelligence would play by the same rules as humans?
Humans have been on this planet for approximately 200,000 years. Our morality is thought to be a byproduct of evolution through random mutation and natural selection. We see many shared traits between humans and our chimpanzee, bonobo, and orangutan cousins that were once thought to be uniquely “human,” such as showing empathy, building tools, forming friendships, and understanding the wants and needs of others. We are social animals by nature, far more so than any other animal, even our fellow primates. Our capacity for intelligence is also far greater, with brains roughly three times the size of those of chimpanzees, our nearest relatives. That combination of extreme sociability and intelligence allowed us to create things like laws, advanced language, art, music, and so on.
In The Descent of Man, Charles Darwin wrote: “I fully subscribe to the judgment of those writers who maintain that of all the differences between man and the lower animals, the moral sense or conscience is by far the most important.” That makes sense, but I disagree, because it’s a judgment made from our own point of view: Darwin was speaking subjectively, as a human. Morality is in many ways subjective even within our own species, which is why we have debates about assisted suicide, abortion, and the death penalty. Many factors influence a person’s opinion on those topics, including upbringing, local culture, geography, and religion.
British cosmologist Martin Rees once said the following: “Most educated people are aware that we’re the outcome of nearly 4 billion years of Darwinian selection, but many tend to think that humans are somehow the culmination. Our Sun, however, is less than halfway through its lifespan. It will not be humans who watch the Sun’s demise, 6 billion years from now. Any creatures that then exist will be as different from us as we are from bacteria or amoebae.”
In terms of differences, imagine a fully self-aware or conscious artificial intelligence. What would it think of us? What would its own morality look like?
Concern about how machines will behave is growing as strides are made in the field of artificial intelligence.
A new initiative called the Ethics and Governance of Artificial Intelligence Fund has received $27 million in funding to study the ethical implications of developing artificial intelligence. It was started by LinkedIn founder Reid Hoffman and the Omidyar Network, each committing $10 million, and the Knight Foundation, contributing $5 million. The William and Flora Hewlett Foundation and Jim Pallotta each contributed $1 million to round it out. The MIT Media Lab and the Berkman Klein Center at Harvard will serve as the fund’s academic institutions.
A statement from Reid Hoffman read: "There's an urgency to ensure that AI benefits society and minimizes harm. AI decision-making can influence many aspects of our world - education, transportation, health care, criminal justice, and the economy - yet data and code behind those decisions can be largely invisible."
Some forms of AI will, by design, not be self-aware. IBM CEO Ginni Rometty spoke on the topic at the World Economic Forum in Davos, Switzerland, laying out three ethical principles that IBM has developed in its own advances into cognitive computing:
- Purpose—We need to be able to trust AI; it will be here to aid us, not replace us. Rometty does not see AI becoming self-aware or conscious.
- Transparency—We need to be open about how these machines are built and how these artificial minds are trained.
- Skills—Machines must be trained within the context of an industry.
Rometty went on to say that “this new generation of technology and the cognitive systems it helps power will soon touch every facet of work and life—with the potential to radically transform them for the better. As with every prior world-changing technology, this technology carries major implications. Many of the questions it raises are unanswerable today and will require time, research, and open discussion to answer.”
If we could build the rules that an artificially intelligent entity must live by, is it actually AI, or is it more like IBM’s “cognitive”? I’d argue the latter. If that’s the case, the true test of a cognitive computer would be whether its artificial morality is not only fluid enough to accept change but also able to initiate change itself by way of recommendation to its human counterparts.
The challenge with injecting human morality into artificially intelligent systems is that morality is not objective, universal, or even timeless. What we deem moral or ethical today may seem quite primitive and ignorant to our great-grandchildren many years from now. Imagine what AI could do for this planet in terms of industry, healthcare, and even politics if it could crunch morality like a complex mathematical equation and then change based on the result: a morality that grows exponentially, unbound from our fragile 80-year lifespans and the slow pace of generational cultural change.
As artificial intelligence develops, passing a Turing test may no longer be the awe-inspiring feat it was once thought to be. A machine that develops and evolves its own morality would embody the kind of gulf between humans and bacteria that Martin Rees envisioned for whatever creatures will watch our Sun exhaust its fuel supply six billion years from now.