What happens when robots become social?

Artificial intelligence isn’t just changing technology

The recent prominence of artificial intelligence has led to some of the first serious political discussions of whether robots can be persons, moving the debate out of science fiction and philosophy.

For instance, the European Parliament has begun to outline the legal rights and responsibilities of artificial intelligence systems:

MEPs vote on robots’ legal status — and if a kill switch is required (Jane Wakefield, BBC News)

The first person to argue for the equivalence of people and machines was Alan Turing, in his 1950 article “Computing Machinery and Intelligence”. This elegant case, and its basis in what later became known as the “Turing test”, has shaped the field of artificial intelligence ever since.

Turing’s argument was simple: if you can’t tell the difference between an artificial intelligence and a person, even after unlimited interaction, the difference isn’t important.

The point that Turing was making — and which has often been missed — is that what counts as artificial intelligence isn’t a scientific issue, it’s a social issue. You don’t get to say that what you’ve built is an artificial intelligence just because you’re using machine learning, neural networks, or deep learning.

It is entirely appropriate that what counts as an artificial intelligence is a decision for society, and therefore belongs in parliaments and courts of law. I have argued as much myself (Watt, 2009).

But there are those who disagree. Lorna Brazell of Osborne Clarke is quoted in the news article:

“Blue whales and gorillas don’t have personhood but I would suggest that they have as many aspects of humanity as robots, so I don’t see why we should jump into giving robots this status.”

First, the debate about whether great apes should have personhood is far from over. The Great Ape Project has made exactly this case.

Second, nobody is “jumping in” — nothing meets the criteria, yet. But if robots demonstrably meet these aspects of “humanity”, there is no good reason for barring them from personhood on the basis of their physical construction.

The parallel that I’ve made (Watt, 2009) is that Turing’s argument has much in common with gender identity. Underlying physical structure doesn’t get to override our choices about identity. And this is not something science can resolve, because science isn’t about identity. It’s an issue for society as a whole.

So I, for one, will welcome our robot friends to our society, just as soon as they are ready to join it.

Written by Dr. Stuart Watt, long-time artificial intelligence researcher and Chief Technology Officer of Turalt. The full version of this argument is in: Can People Think? Or Machines? (2009). In Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (eds. R. Epstein, G. Roberts, & G. Beber), Springer.