No Rights for Robots? Never?

Agata Sagan, Peter Singer

Last year, we published a syndicated column on the development of robots and raised the question of whether robots could be conscious and, if so, whether they would have rights. The topic is evidently a sensitive one with some Christians because it seems to threaten the unique status of human beings. We will here review the current development of robots and some predictions for their future, and then consider one response.

Robots already perform many functions, from making cars to defusing bombs—and more menacingly, firing missiles. Children and adults play with toy robots, while vacuum-cleaning robots are sucking up dirt in a growing number of homes and—as evidenced by YouTube videos—entertaining cats. There is even a Robot World Cup, though judging by the event held in Graz, Austria, last summer, footballers have no need to feel threatened just yet. (Chess, of course, is a different matter.) Last year, Gecko Systems announced that it was developing a “fully autonomous personal companion home care robot,” also known as a “carebot,” designed to help elderly or disabled people live independently. In one trial, the company reported, a woman with short-term memory loss broke into a big smile when the robot—which looks rather like the Star Wars robot R2-D2—asked her, “Would you like a bowl of ice cream?” The woman answered “Yes,” and presumably the robot did the rest.

Most of the robots being developed for home use are functional in design, but Honda and Sony are making robots that look more like the “android” C-3PO in Star Wars. There are some robots, though, with soft, flexible bodies, humanlike faces and expressions, and a large repertoire of movements. Hanson Robotics has a demonstration model called Albert, whose face bears a striking resemblance to that of Albert Einstein.

Will we soon get used to having humanoid robots around the home? Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, has predicted that busy parents will start employing robots as babysitters. What will the effect be on a child, he asks, to spend a lot of time with a machine that cannot express genuine empathy, understanding, or compassion? One might also ask why we should develop energy-intensive robots to work in one of the few areas—care for children or elderly people—in which people with little education can find employment.

In his book Love and Sex with Robots, David Levy goes further and suggests that we will fall in love with warm, cuddly robots and even have sex with them. (If the robot has multiple sexual partners, just remove the relevant parts, drop them in disinfectant, and, voilà, no risk of sexually transmitted diseases!) But what will the presence of a “sexbot” do to the marital home? How will we feel if our spouse starts spending too much time with an inexhaustible robotic lover?

A more ominous question is familiar from novels and movies: Will we have to defend our civilization against intelligent machines of our own creation? Some consider the development of superhuman artificial intelligence inevitable and expect it to happen no later than 2070. They refer to this moment as “the singularity” and see it as a world-changing event.

Eliezer Yudkowsky, one of the founders of the Singularity Institute for Artificial Intelligence, believes that the singularity will lead to an “intelligence explosion” as super-intelligent machines design even more intelligent machines, with each generation repeating this process. The Association for the Advancement of Artificial Intelligence has set up a special panel to study what it calls “the potential for loss of human control of computer-based intelligences.”

If that happens, the crucial question for the future of civilization is: Will the super-intelligent computers be friendly? Is it time to start thinking about what steps to take to prevent our own creations from becoming hostile to us?

For the moment, a more realistic concern is not that robots will harm us but that we will harm them. At present, robots are mere items of property. But what if they become sufficiently complex to have feelings? After all, isn’t the human brain just a very complex machine?

If machines can and do become conscious, will we take their feelings into account? The history of our relations with the only nonhuman sentient beings we have encountered so far—animals—gives no ground for confidence that we would recognize sentient robots not just as items of property but as beings with moral standing and interests that deserve consideration.

The cognitive scientist Steve Torrance has pointed out that powerful new technologies like cars, computers, and phones tend to spread rapidly in an uncontrolled way. The development of a conscious robot that (who?) was not widely perceived as a member of our moral community could therefore lead to mistreatment on a large scale.

The hard question, of course, is how we could tell whether a robot really is conscious, rather than merely designed to mimic consciousness. Understanding how the robot is programmed would provide a clue—did the designers write the code to provide only the appearance of consciousness? If so, we would have no reason to believe that the robot was conscious.

But if the robot is designed to have humanlike capacities that might incidentally give rise to consciousness, we would have a good reason to think that it really is conscious. At that point, the movement for robot rights would begin. It would also, apparently, meet resistance. Responding to our original article in his blog on the Web site of the Christian magazine First Things, Wesley J. Smith asserts that it is “fanciful” to think that robots will ever be conscious. They will, he says, never be people and should never have rights.

Never? How could Smith know that? Because, it seems, “We are much more than mere complex machines. We are alive, for example. Robots would not be.” Unfortunately, Smith doesn’t tell us what it is that distinguishes living things from nonliving ones, which makes it difficult to discuss whether robots could be alive in his sense of the term. There are plenty of science-fiction accounts, going back to Mary Shelley’s Frankenstein, of scientists creating a living being. Now scientists have already created the basic components of living matter, and creating a whole living organism may also be possible.

Smith’s confidence that this will never happen comes from his dogmatic belief in human exceptionalism. Obviously, the fact that we are alive is not enough to make us exceptional. Smith appears to think that it is, instead, the fact that we have free will, and robots could not, or not in the same sense. He also seems to think that a robot could not be conscious: “If a robot could program itself into greater and greater data processing capacities, that doesn’t make it truly sentient, just sophisticated.” Nor could a robot learn in the way that we do: “Human behavior arises from a complex interaction of rationality, emotions, abstract thought, experience, education, etc. That would never be true of robots.”

Behind all this appears to lie a denial of the possibility that the human brain could be merely a very complex machine. This is, he says, “reductionist thinking.” It is clear that some kind of religious view is motivating Smith’s hostility to the idea of conscious robots. If human beings are divine creations, then we could be more than very complex machines. We might, for instance, have that mysterious thing that religious people call a “soul,” given to us by God. But if life on Earth began from the chance interaction of molecules and millions of years of evolution did the rest, then it is hard to see why consciousness, or even the capacity for decision making that we call “free will,” should be in principle beyond any machine, no matter how sophisticated. Why would it be impossible to re-create, in different materials, those interactions in our brain that are responsible for our rationality and give rise to our consciousness and our emotions? And why would such a machine not be able to be educated and to learn from its experiences?

Even for Christians, there is no need to believe in human exceptionalism. Brother Guy Consolmagno, an astronomer and a Jesuit priest, caused a stir in 2006 when he agreed that there could be intelligent life elsewhere in the universe. In a BBC interview, when asked whether aliens would be made in the image of God, he pointed out that the traditional belief is that we are made in the image of God in the sense that we have an intellect, free will, and the capacity to love. Then he added: “Anything, whether it is an intelligent computer or an alien with five arms—if they have those aspects, seems to me they’d be in the image and likeness of God.”

Right on, Brother! We find it hard to understand why anyone would feel the need to deny the very possibility of an intelligent computer, with as much intellect and free will as we have. Who knows where science will take us in fifty years?

Agata Sagan

Agata Sagan is an independent researcher and information technology worker. She lives in Warsaw, Poland.

Peter Singer

Peter Singer is DeCamp Professor of Bioethics at the University Center for Human Values at Princeton University. His books include Animal Liberation, How Are We to Live?, Writings on an Ethical Life, One World, and, most recently, Pushing Time Away.
