When will artificial intelligence develop aspirations?
When will a robot yearn to have its own apartment? When will an AI that invented a technology want to reinvest its earnings in better marketing for its product? When will an AI providing value to a company desire the pay and benefits its co-workers receive?
As far as I am concerned, until any of these things happen, we should not even be discussing the concept of granting legal rights to artificially intelligent beings of any type.
Computers are already much better than people at many tasks. This shouldn’t be a surprise. We build powerful machines that can perform the physical work of ten thousand people at once; why shouldn’t we be able to create machines that can perform faster calculations and make more accurate predictions than humans ever could?
But mental work is not perceived the same way as physical work. Because we feel that advanced thinking is the defining characteristic of humans, a machine that performs thinking-type tasks seems more human than a bulldozer or a backhoe. As the proficiency of computers becomes more and more spectacular, people wonder whether these computers should be treated more like people and less like machines.
Interest in the topic has widened recently as artificial intelligence works its way deeper into the human imagination. Of course, from Mary Shelley through Isaac Asimov to Ridley Scott, our artists have long imagined the time when our creations would earn the most basic forms of respect our society provides. But recently legal scholars have entered the arena and considered granting legal rights to machines.
Practicality has led the discussion. Both the European Patent Office and the U.S. Patent and Trademark Office have recently rejected proposals to grant patents to the artificially intelligent computer algorithms that “invented” technologies. Lawyers and law professors have written books and articles on the rights that can and should be accorded to creative machines. I include myself in this hand-wringing (see blog post).
But I now believe that all of these efforts are looking in the wrong direction and asking the wrong questions. Putting aside the significant practical question of how a single AI entity can be defined for the purpose of assigning privilege and punishment, the issue of legal rights should not depend on what an entity can create, but on whether the intelligence driving the entity has broader aspirations to make something of its rights and the capacity to be disappointed by legal punishments.
This test arises from the nature of human society and its laws, rather than the nature of AI. As Jefferson stated in our breakaway document, rights worth noting include “life, liberty and the pursuit of happiness.” Each of these – the ability to continue existing from day to day until our final hour arrives, the freedom to make decisions about our own futures, and the pursuit of the people, places and ways of life that make us happy – is an aspirational right. When exercising these rights we are planning for a better future, whether ten minutes or ten years from now. We reward people’s work with something that can improve their future – money. We punish people by removing their freedom to operate in our society and, for the most extreme crimes in the most extreme circumstances, by taking their lives away entirely.
Because we are creatures with limited life spans moving in only one direction through time, the incentives and disincentives we recognize all shape how we may live our lives in the future. No entity that experiences time differently would be similarly affected by the rewards and punishments of our society.[1] So even if an AI entity is smarter than any human, even if it can fly a squadron of fighter planes, control an economy or change the weather, the AI entity is still just a tool unless and until it anticipates rewards or punishments for its behavior and can desire – in whatever relevant manner we can define desire – to seek those rewards for a purpose beyond simple scorekeeping. An objective may be driven by desire (in humans or animals), but an objective is not a desire in and of itself.
A computer can beat me in chess, but until it aspires to more than just meeting its preprogrammed objective, we should consider that computer nothing more than a training tool.
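To make that distinction concrete, here is a minimal sketch in Python of what a “preprogrammed objective” amounts to. Everything in it is hypothetical and illustrative, not drawn from any real chess engine: the machine maximizes a score its programmers handed to it, and it neither anticipates nor cares about the result.

# A toy objective function the machine did not choose: its programmers
# decided that states closer to 100 are "better."
def evaluate(state: int) -> int:
    return -abs(100 - state)

# "Preferring" the best candidate is pure arithmetic. Nothing here
# anticipates a reward, fears a punishment, or plans past this call.
def choose(candidates: list[int]) -> int:
    return max(candidates, key=evaluate)

print(choose([3, 97, 250]))  # prints 97; the program is indifferent

The machine’s behavior is fully described by its objective; there is no further fact about what it wants.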
How will we determine whether an AI entity has dreams, aspirations and desires beyond simple logical objectives? It may become obvious to us. When Koko the gorilla repeatedly asked for a kitten as a pet, it was clear to her keepers and friends that she desired a more fulfilling life. You know it when you see it. More likely, though, we will struggle with this definition and with how it applies to an entity that clearly does not “think” in the manner humans do.
Feelings seem to be part of dreaming and aspiring, and machines cannot yet feel in any meaningful way. Living things feel. When asked why he was not a vegetarian, the great mythologist and philosopher Joseph Campbell observed that, in his experience, vegetarians are people without enough imagination to hear a carrot scream. Plants sense their environments and communicate with each other (and with the fungi that often support them) through the release of chemicals. Even microorganisms show similar communication and avoidance mechanisms. Computers are being fitted with sensors, but they are not yet even at the feeling level of plants and bacteria, so they are far from being able to participate independently in a complex society.
The Turing Test seems simple. It measures when a computer is capable of thinking like a human being, and it relies on a person’s being fooled into believing the computer is truly another person. But the test could be much more complicated than it seems. Will one conversation be enough to meet it? How about living with the computer for a week or a month?
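The structure of the test can be sketched in a few lines of Python. This is a hypothetical illustration of the imitation game’s setup, not a real implementation: the judge sees only text and must guess whether the hidden respondent is human or machine.

import random

# The judge never sees the respondent, only its text.
def human_reply(prompt: str) -> str:
    return input(f"[hidden human, please answer] {prompt}\n> ")

def machine_reply(prompt: str) -> str:
    return "That is an interesting question."  # stand-in for a chatbot

def imitation_game(questions: list[str]) -> None:
    respondent = random.choice([human_reply, machine_reply])
    for q in questions:
        print(respondent(q))
    verdict = input("Human or machine? ").strip().lower()
    actual = "human" if respondent is human_reply else "machine"
    print("The judge was fooled." if verdict != actual
          else "The judge guessed correctly.")

Whether one such round suffices, or whether the game must run for weeks, is exactly the question the test leaves open.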
Similarly, my test for when artificial intelligence should be granted legal rights seems easy: wait until the AI entity develops ambitions for its own future. Like the Turing Test, this new test is easy to define but could be very difficult to prove. Separating simple machine objectives from human-like desires may take months or years of deep analysis.
But until machines can dream of their own futures and be affected by the rewards and punishments we have devised for people, these machines are not the kind of peers who should receive the rights and responsibilities on which our society is built.
[1] The treatment of corporations, which are created by humans, operated through human activity, and whose actions ultimately inure to the benefit or detriment of humans, is calculated to affect the people who ultimately decide on the corporation’s behalf. Companies and corporations are not independent enough from people to warrant a separate category in the way that an independent artificially intelligent entity would. Of course, I have long held that any independent AI should create and manage a corporation, even if humans were involved, to truly enjoy the benefits of society’s privileges.