(Photo of a gnu or wildebeest in the Ngorongoro Crater, Tanzania. Courtesy of Muhammad Mahdi Karim and Wikipedia.)
Do sapient beings deserve respect, simply because they are sapient? An affirmative answer to this question seems reasonable, but it also imperils the Gnu Atheist project of basing morality on our shared capacity for empathy. My short parable about two machines illustrates why. Let’s call them Machine 1 and Machine 2. Since this post is a parable written for atheists, I shall assume for argument’s sake that machines are in principle capable of thinking and feeling.
Machine 1 is like HAL 9000, the computer in the movie 2001: A Space Odyssey. It has a fully human psyche, which is capable of the entire gamut of human emotions. It can even appreciate art. It also thinks: it is capable of speech, speech recognition, facial recognition, natural language processing and reasoning. Machine 1 is also capable of genuine empathy.
Machine 2 is different. It’s more like an advanced version of Watson, an artificial intelligence computer system developed by IBM which is capable of answering questions posed in natural language. IBM has described Watson as “an application of advanced Natural Language Processing, Information Retrieval, Knowledge Representation and Reasoning, and Machine Learning technologies to the field of open domain question answering,” which is “built on IBM’s DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring.” Building on Watson’s successes in retrieving and interpreting useful information, Machine 2 uses its massively parallel probabilistic evidence-based architecture to advise human experts on fields as diverse as healthcare, technical support, enterprise and government. Since its advanced problem-solving capacities easily surpass those of any human being in breadth and depth, AI experts are unanimous in agreeing that Machine 2 can think. However, nobody has ever suggested that Machine 2 can feel. It was never designed to have feelings, or to interpret other people’s emotions for that matter. Also, it has no autobiographical sense of self.
Here’s my question for the Gnu Atheists. I take it you’re all agreed that it would be wrong to destroy Machine 1. But what about Machine 2? Would it be wrong to destroy Machine 2?
Machine 2 is extraordinarily intelligent – no human being comes close to matching its problem-solving abilities in scope or depth. Machine 2 is therefore sapient. So it seems perversely anthropocentric to say that it would be perfectly all right for a human being, who is much less intelligent than Machine 2, to dismantle it and then use it for spare parts.
But once we allow that it would be wrong to destroy Machine 2, we are acknowledging that an entity can matter ethically, simply because it is sapient and not because it is sentient. Remember: Machine 2 has no feelings, and is unable to interpret feelings in others.
Why is this a problem for the Gnu Atheists? Because empathy constitutes the very foundation of their secular system of morality. For instance, an online article entitled “Where do Atheists Get Their Morality From?” tells readers that “[m]orality is a built-in condition of humanity” and that empathy is “the foundational principle of morality.” But where does that leave intelligent beings that lack empathy, such as Machine 2? If it is correct to say that sapient beings are ethically significant in their own right, then morality cannot be based on empathy alone. It has to be based on empathy plus something else, in order to ensure that sapient beings matter too, and not just sentient beings.
But if we want to define morality in terms of respecting both sentient beings and sapient beings, then we have to ask: why these two kinds of beings, and only these two? What do they have in common? Why not define morality in terms of respecting sentient beings and sapient beings and silicon-based beings – or for that matter, square beings or sharp beings?
One might be tempted to appeal to the cover-all term “interests”, in order to bring both sentience and sapience under a common ethical umbrella. But Machine 2 doesn’t have any conscious interests. It’s just very, very good at solving all kinds of problems, which makes it intelligent. And if we are going to allow non-conscious interests to count as ethically significant, then why don’t plants matter in their own right, according to the Gnu Atheists? Or do they? And why shouldn’t rocks or crystals matter? In his book, A New Kind of Science (2002), Stephen Wolfram argues that a vast range of systems, even “ones with very simple underlying rules … can generate at least as much complexity as we see in the components of typical living systems” (2002, pp. 824-825). This claim is elaborated in Wolfram’s Principle of Computational Equivalence, which says that “there is essentially just one highest level of computational sophistication, and this is achieved by almost all processes that do not seem obviously simple” (2002, p. 717). More precisely: (i) almost all systems, except those whose behaviour is obviously simple, can be used to perform computations of equivalent sophistication to those of a universal Turing machine, and (ii) it is impossible to construct a system that can carry out more sophisticated computations than a universal Turing machine (2002, pp. 720-721; the latter part of the Principle is also known as Church’s Thesis).
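Wolfram’s claim that very simple rules can generate enormous complexity can be made concrete. Here is a short sketch (my own illustration, not taken from Wolfram’s book) of Rule 110, an elementary cellular automaton whose entire update table fits in a single byte, yet which Matthew Cook proved capable of universal computation:

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbours. The rule number 110 (binary 01101110) encodes the output
# for each of the 8 possible three-cell neighbourhoods.

RULE = 110

def step(cells, rule=RULE):
    """Apply one synchronous update; boundaries are treated as 0."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        # Pack the three-cell neighbourhood into a number from 0 to 7...
        neighbourhood = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        # ...and look up that bit of the rule number.
        out.append((rule >> neighbourhood) & 1)
    return out

def evolve(width=31, generations=15):
    """Start from a single live cell and return every generation."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(generations):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in evolve():
        print("".join("#" if c else "." for c in row))
```

Run from a single live cell, this handful of lines produces an irregular, non-repeating triangular pattern; and Rule 110’s proven universality is exactly the sort of result that licenses Wolfram’s comparison of “obviously simple” physical systems with universal Turing machines.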
If Wolfram is right, then it seems that a consistent Gnu Atheist would have to acknowledge that since nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform, nearly every natural system has the same kind of intelligence that humans do; and if we allow that intelligence (or sapience) is morally significant in its own right, it follows that there is no fundamental ethical difference between human beings and crystals.
Before I throw the discussion open to readers, I’d like to clarify two points. First, I deliberately chose machines to illustrate my point instead of people, in order to present the issues as clearly as possible. I am well aware that there are certain human beings who lack the qualities deemed ethically significant by the Gnu Atheists, but I realized that if I attempted to point that out in an argument, all I’d get in response would be a load of obfuscation, as virtually no-one wants to appear cold and uncaring in their attitudes towards their fellow human beings.
Second, I anticipate that some Gnu Atheists will retort: “If theists can’t provide a sensible answer to these vexing ethical questions, then why should we have to?” But I’m afraid that won’t do. After all, Gnu Atheists are convinced that theism is fundamentally irrational, and even insane. Comparing your belief system with an insane system and saying that your system answers the big moral questions just as well as the insane one doesn’t give honest inquirers any reason to trust your system. In any case, the ethical dilemma I have presented here, relating to Machine 1 and Machine 2, presupposes the truth of materialism, as well as a computational theory of mind – both of which most theists would totally reject.
I’d like to hear what readers think about the issues I’ve raised. Thoughts, anyone?