Uncommon Descent: Serving The Intelligent Design Community

Machine 1 and Machine 2: A Challenge to the Ethics of the New Atheists

(Photo of a gnu or wildebeest in the Ngorongoro Crater, Tanzania. Courtesy of Muhammad Mahdi Karim and Wikipedia.)

Do sapient beings deserve respect, simply because they are sapient? An affirmative answer to this question seems reasonable, but it also imperils the Gnu Atheist project of basing morality on our shared capacity for empathy. My short parable about two machines illustrates why. Let’s call them Machine 1 and Machine 2. Since this post is a parable written for atheists, I shall assume for argument’s sake that machines are in principle capable of thinking and feeling.

Machine 1 is like HAL 9000 in the film 2001: A Space Odyssey. It has a fully human psyche, capable of the entire gamut of human emotions. It can even appreciate art. It also thinks: it is capable of speech, speech recognition, facial recognition, natural language processing and reasoning. Machine 1 is also capable of genuine empathy.

Machine 2 is different. It’s more like an advanced version of Watson, an artificial intelligence computer system developed by IBM which is capable of answering questions posed in natural language. IBM has described Watson as “an application of advanced Natural Language Processing, Information Retrieval, Knowledge Representation and Reasoning, and Machine Learning technologies to the field of open domain question answering,” which is “built on IBM’s DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring.” Building on Watson’s successes in retrieving and interpreting useful information, Machine 2 uses its massively parallel, probabilistic, evidence-based architecture to advise human experts in fields as diverse as healthcare, technical support, enterprise and government. Since its advanced problem-solving capacities easily surpass those of any human being in breadth and depth, AI experts unanimously agree that Machine 2 can think. However, nobody has ever suggested that Machine 2 can feel. It was never designed to have feelings, or to interpret other people’s emotions for that matter. It also has no autobiographical sense of self.
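For readers who would like a concrete feel for what “hypothesis generation, massive evidence gathering, analysis, and scoring” amounts to, here is a toy sketch in Python. To be clear, this is not IBM’s code, nor anything like DeepQA’s real pipeline; the question, candidate answers, evidence scores and source weights below are all invented purely for illustration.

```python
# A toy sketch of the "hypothesis generation + evidence scoring" pattern that
# IBM describes for DeepQA. This is NOT IBM's code; the candidates, evidence
# scores and source weights are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Evidence:
    supports: str   # the candidate answer this piece of evidence bears on
    score: float    # strength of support, 0.0 to 1.0
    weight: float   # how much we trust the source it came from

def rank_hypotheses(candidates, evidence):
    """Combine weighted evidence into a confidence value per candidate."""
    confidence = {c: 0.0 for c in candidates}
    for e in evidence:
        if e.supports in confidence:
            confidence[e.supports] += e.score * e.weight
    return sorted(confidence.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical question: "Which city hosted the 1900 Summer Olympics?"
candidates = ["Paris", "London", "Athens"]
evidence = [
    Evidence("Paris", 0.9, 1.0),   # strong passage match, trusted source
    Evidence("Athens", 0.7, 0.3),  # decent match, less trusted source
    Evidence("London", 0.2, 0.5),  # weak match
]
print(rank_hypotheses(candidates, evidence))  # Paris ranks first
```

The real system, at least as IBM describes it, scores hundreds of candidate hypotheses against millions of evidence passages, but the shape of the computation is the same: generate, gather, weight, rank. Nothing in that loop requires feelings.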

Here’s my question for the Gnu Atheists. I take it you’re all agreed that it would be wrong to destroy Machine 1. But what about Machine 2? Would it be wrong to destroy Machine 2?

Machine 2 is extraordinarily intelligent – no human being comes close to matching its problem-solving abilities in scope or depth. Machine 2 is therefore sapient. So it seems perversely anthropocentric to say that it would be perfectly all right for a human being, who is much less intelligent than Machine 2, to dismantle it and then use it for spare parts.

But once we allow that it would be wrong to destroy Machine 2, we are acknowledging that an entity can matter ethically simply because it is sapient, and not because it is sentient. Remember: Machine 2 has no feelings, and is unable to interpret feelings in others.

Why is this a problem for the Gnu Atheists? Because empathy constitutes the very foundation of their secular system of morality. For instance, an online article entitled “Where do Atheists Get Their Morality From?” tells readers that “[m]orality is a built-in condition of humanity” and that empathy is “the foundational principle of morality.” But where does that leave intelligent beings that lack empathy, such as Machine 2? If it is correct to say that sapient beings are ethically significant in their own right, then morality cannot be based on empathy alone. It has to be based on empathy plus something else, in order to ensure that sapient beings matter too, and not just sentient beings.

But if we want to define morality in terms of respecting both sentient beings and sapient beings, then we have to ask: why these two kinds of beings, and only these two? What do they have in common? Why not define morality in terms of respecting sentient beings and sapient beings and silicon-based beings – or for that matter, square beings or sharp beings?

One might be tempted to appeal to the cover-all term “interests”, in order to bring both sentience and sapience under a common ethical umbrella. But Machine 2 doesn’t have any conscious interests. It’s just very, very good at solving all kinds of problems, which makes it intelligent. And if we are going to allow non-conscious interests to count as ethically significant, then why don’t plants matter in their own right, according to the Gnu Atheists? Or do they? And why shouldn’t rocks or crystals matter? In his book, A New Kind of Science (2002), Stephen Wolfram argues that a vast range of systems, even “ones with very simple underlying rules … can generate at least as much complexity as we see in the components of typical living systems” (2002, pp. 824-825). This claim is elaborated in Wolfram’s Principle of Computational Equivalence, which says that “there is essentially just one highest level of computational sophistication, and this is achieved by almost all processes that do not seem obviously simple” (2002, p. 717). More precisely: (i) almost all systems, except those whose behaviour is obviously simple, can be used to perform computations of equivalent sophistication to those of a universal Turing machine, and (ii) it is impossible to construct a system that can carry out more sophisticated computations than a universal Turing machine (2002, pp. 720-721; the latter part of the Principle is also known as Church’s Thesis).
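To see how little machinery Wolfram’s claim requires, consider Rule 110, the elementary cellular automaton he singles out: its entire update rule fits in a single line of code, yet it has been proved capable of universal computation (Cook, 2004). Below is a minimal Python sketch; the grid width, step count and single-cell seed are arbitrary choices of mine for illustration, not anything taken from Wolfram’s book.

```python
# Rule 110: a one-dimensional cellular automaton whose trivially simple update
# rule nonetheless supports universal computation. The width, number of steps
# and single-cell seed are arbitrary illustrative choices.

WIDTH, STEPS = 64, 32
RULE = 110  # the 8 possible neighbourhoods map to the 8 bits of this number

def step(cells):
    """Apply one Rule 110 update to a row of 0/1 cells (wrapping at edges)."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * WIDTH
row[-1] = 1  # one live cell is enough to seed intricate, aperiodic structure
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Run it and a single live cell unfolds into an expanding, non-repeating triangle of structure, which is the kind of behaviour Wolfram has in mind when he says that very simple rules can generate as much complexity as we see in living systems.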

If Wolfram is right, then it seems that a consistent Gnu Atheist would have to acknowledge that since nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform, nearly every natural system has the same kind of intelligence that humans do. And if we allow that intelligence (or sapience) is morally significant in its own right, it follows that there is no fundamental ethical difference between human beings and crystals.

Before I throw the discussion open to readers, I’d like to clarify two points. First, I deliberately chose machines rather than people to illustrate my point, in order to present the issues as clearly as possible. I am well aware that there are certain human beings who lack the qualities deemed ethically significant by the Gnu Atheists, but I realized that if I attempted to point that out in an argument, all I’d get in response would be a load of obfuscation, as virtually no one wants to appear cold and uncaring in their attitudes towards their fellow human beings.

Second, I anticipate that some Gnu Atheists will retort: “If theists can’t provide a sensible answer to these vexing ethical questions, then why should we have to?” But I’m afraid that won’t do. After all, Gnu Atheists are convinced that theism is fundamentally irrational, and even insane. Comparing your belief system with an insane system and saying that yours answers the big moral questions just as well doesn’t give honest inquirers any reason to trust your system. In any case, the ethical dilemma I have presented here, relating to Machine 1 and Machine 2, presupposes the truth of materialism, as well as a computational theory of mind – both of which most theists would totally reject.

I’d like to hear what readers think about the issues I’ve raised. Thoughts, anyone?

Comments

I see. To answer, I don't think much of it.

mike1962, September 23, 2011 at 04:51 AM PDT

Well, here's something to think about. We appear to use the word "ethical" to mean much the same thing. I certainly don't mean "according to what God says is right". So whatever we both mean by that word, the definition does not include any reference to a deity.

markf, September 22, 2011 at 11:14 AM PDT

Hi markf, I would certainly agree with your last sentence.

vjtorley, September 22, 2011 at 09:38 AM PDT

Hi mike1962, I was referring to Descartes' solution to his skeptical doubts, which included doubts about the existence of other minds. In a nutshell, his solution is that God cannot deceive. See here:

http://home.wlu.edu/~mahonj/Descartes.M3.God.htm
http://www.iep.utm.edu/descarte/#SH6a

vjtorley, September 22, 2011 at 09:37 AM PDT

"Are you saying, then, that the environmental problems (e.g. food shortages) that conscious organisms learned to solve as they evolved are not computational problems?"

Yes, that is part of what I am saying. Computation is only a tool. Humans, working with the environment, might find that tool useful. But the computation by itself won't achieve much. We can imagine pumping all of the neural signals from all of the stone age people into a super-super computer. And then we can imagine running the most powerful search program possible, to search for patterns in that input. That super computer is not going to come up with agriculture, for there is no such pattern to be found in those inputs. However, people eventually did come up with agriculture.

Neil Rickert, September 22, 2011 at 09:20 AM PDT

Solution to what?

mike1962, September 22, 2011 at 08:36 AM PDT

vj, I think the question of whether fish and other creatures can suffer is still very open. To me this is absolutely key as to whether angling, for example, is moral. However, I did say that while empathy is the most important cause of ethical behaviour (and therefore the ability to suffer or be happy is the most important criterion for whether something should be the object of ethical behaviour), it is not the only cause. The exact definition of "ethical" is not precise: some people think that extraordinary courage in battle on behalf of your country is ethical, others not. Like many everyday concepts, it has woolly edges and a firm core. The belief that we should be kind to Machine 2 on account of its complexity falls into these woolly edges - witness the disagreement on this very forum. However, compassion towards those that are capable of suffering and pleasure is absolutely core to what counts as ethical in the usual use of the word. Anyone who did not think it ethical to relieve suffering in other creatures is using the word in a manner I don't recognise.

markf, September 22, 2011 at 08:30 AM PDT

mike1962, I see. So you'd distinguish the "problem of other minds" from the "problem of other consciousnesses", then? Interesting. By the way, what do you think of Descartes' solution?

vjtorley, September 22, 2011 at 08:23 AM PDT

Neil Rickert: Interesting. Are you saying, then, that the environmental problems (e.g. food shortages) that conscious organisms learned to solve as they evolved are not computational problems? What are they then? I mention this because I've always believed that we could (at least in theory) design a computer capable of solving our current environmental problems.

vjtorley, September 22, 2011 at 08:19 AM PDT

Hi Dr.REC, Thank you for your post. I'm certainly not arguing that a salt crystal is on the road to sentience. What Wolfram claims is that salt crystals and many other inorganic systems can be used to perform the same kinds of computations that the brains of sentient beings perform. Computationally, there's no bright line between sentient and non-sentient beings; and theoretically, there could be non-sentient beings whose computational powers exceed those of sentient beings.

You might want to say we should ignore a being's computational abilities and focus exclusively on its capacity to have subjective experiences when deciding whether it matters and how we should behave towards it, but I think that's an odd position for a materialist to take. If you look at my reply to markf in 16.1, you'll see that fish and bees have some pretty remarkable cognitive abilities, despite lacking sentience. I find it puzzling that a materialist who believes in behaving ethically would want to maintain that we may do as we please with these creatures, without wronging them. Wolfram's animism (see http://www.wolframscience.com/nksonline/page-845) is much more consistent with materialism than utilitarianism is, in my opinion.

vjtorley, September 22, 2011 at 08:14 AM PDT

Hi William J. Murray, Actually, I think you're probably right about the immorality of destroying a thing of beauty (alive or not) for no good reason. However, the fact that destroying such a thing is wrong does not necessarily imply that in doing so, we are wronging that thing. One could simply say that we are wronging God, who gave it to us and/or allowed us to enjoy it.

vjtorley, September 22, 2011 at 08:00 AM PDT

Hi markf, All right. Let me put it another way. According to Stephen Wolfram, our universe is chock-a-block full of systems that could be described as natural computers - and what's more, they're universal Turing machines at that, which is as good as it gets. The only differences between them and us, according to Wolfram, are ones of degree. Among all these natural computers, you seem to be saying that the only ones we should worry about are the tiny, tiny fraction that happen to be (a) alive and (b) sentient, possessing what Arnold Lunn, in a completely different context, once referred to as "funny internal feelings" or "fif". The rest don't matter in their own right; we have no duties towards them. I find that odd.

Take fish. Neurologists are unanimous that they aren't conscious: they lack the neural wherewithal for conscious experiences. However, they have remarkable cognitive abilities. For instance, features such as individual recognition, acquisition of new behaviour patterns by observational learning, transmission of group traditions, co-operative hunting, tactical deception (cheating), tit-for-tat punishment strategies, reconciliation, altruism and social prestige, formerly thought to be unique to primates or at least mammals, can all be found in fish societies. (Bshary, R., Wickler, W. and Fricke, H. 2002. "Fish cognition: a primate's eye view." Animal Cognition, vol. 5, March 2002, pp. 1-13.) You're saying we have no duties towards them whatsoever?

Or take bees, which are apparently capable of insight learning and of solving delayed matching-to-sample tasks - and even more remarkably, delayed non-matching-to-sample. I discussed bees' remarkable feats in my thesis. It's cases like these that make me doubt the wisdom of an ethic that grounds our moral behaviour to others exclusively in our feelings of empathy towards them.

vjtorley, September 22, 2011 at 07:53 AM PDT

Are you sure you agree with *all* the above? :)

Eugene S, September 22, 2011 at 02:51 AM PDT

I agree with all the above :)

Elizabeth Liddle, September 22, 2011 at 12:32 AM PDT

"But what if one defined ethical behaviour more broadly, as any behaviour which is intended to promote the good of the recipient, or benefit the recipient in some way? On this broad definition, it is by no means clear why I should confine my ethical behaviour to entities that are capable of having feelings. Machine 2 might therefore be a worthy recipient – especially as it would benefit in many ways by not being destroyed: there is much that it could accomplish."

Of course you can define ethical behaviour in any way you like. This definition would include servicing my car, which is intended to benefit the car.

markf, September 21, 2011 at 10:47 PM PDT

markf, Thank you for your post. You wrote:

"...[I]t would be weird and against normal human nature to behave ethically towards something we cannot show empathy towards such as machine 2. I would probably disagree strongly with someone who showed such behaviour... We might value a thinking but non-sentient machine as useful or for aesthetic reasons. I could not understand someone who felt a moral need to preserve such a machine. It is just a fact of human nature that we only have moral feelings towards sentient beings."

Why does moral behaviour have to be based on feelings? Why can it not be based on an understanding of one's duty to others? You seem to be defining ethical behaviour as behaviour which is based on, or grounded in, a feeling of empathy with the individual who is the target of one's actions. You see someone suffering, and your impulse is to help them. I can understand that. But what if one defined ethical behaviour more broadly, as any behaviour which is intended to promote the good of the recipient, or benefit the recipient in some way? On this broad definition, it is by no means clear why I should confine my ethical behaviour to entities that are capable of having feelings. Machine 2 might therefore be a worthy recipient - especially as it would benefit in many ways by not being destroyed: there is much that it could accomplish. (Of course, as an anti-materialist, I find the very notion of speaking of a non-living entity as "benefiting" to be absurd, as I draw a distinction between the intrinsic finality of an organism and the extrinsic finality of a machine. But that's another topic.) That's all for now; I'll be back later.

vjtorley, September 21, 2011 at 03:02 PM PDT

Eugene S: Thank you for that wonderful quote and reference.

ScottAndrews: Yes, a heart full of gratitude and appreciation should keep us from wantonly destroying or harming anything that we either can avoid or have no good reason for. I may not have much in the way of empathy, but I have an acute sense of gratitude and appreciation for both the living and non-living wonders and mechanisms this world abounds with.

William J Murray, September 21, 2011 at 12:59 PM PDT

I guess that's what we have a conscience for, so that every single thing doesn't have to be in writing. I couldn't think of anything in the Bible addressing waste except perhaps the account of Onan, but that was more about what he didn't do than what he did do. Then again, perhaps people weren't inclined to waste then so there was no need to address it. Like how there's nothing in the Bible about crack cocaine or a million other specific things. Here's a thought - maybe it falls under being grateful. Destroying something doesn't show gratitude.

ScottAndrews, September 21, 2011 at 12:38 PM PDT
"but I struggle to think of what moral law it violates." Let us listen to the wise. St Abba Dorotheus of Gaza, AD 505-565. "Conscience should be guarded towards God, towards one's neighbour and towards things. In relation to God, he guards his conscience who does not neglect God's commandments and who, even in things not seen by men and that no one demands of us, guards his conscience towards God in secret. Guarding conscience towards our neighbour demands that we should never do anything which, to our knowledge, would offend or tempt him, whether by word or deed, look or expression. Guarding conscience towards things means not to misuse a thing, nor let it be spoiled nor throw it away needlessly. In all these respects conscience should be kept pure and unblemished, lest one should fall into the calamity against which the Lord warns us (Matthew 5:26)." From "Directions on the Spiritual Life", more here.Eugene S
September 21, 2011
September
09
Sep
21
21
2011
12:38 PM
12
12
38
PM
PDT

"It's my cell phone."

That's where you and I disagree. In my theism, everything is God's. I look at everything which I have as God's, and that it has been given to me to help me serve the purpose I was created for. Destroying or harming anything God has provided me without good reason is probably wrong, something I should avoid. As a general rule, I consider any destructive tendencies or impulses immoral, and indulgence in them moves me farther from God, not closer.

William J Murray, September 21, 2011 at 11:46 AM PDT

I mostly agree. We should not behave morally only when we feel empathy. But I don't follow on not destroying objects. If it violates your conscience then it is wrong for you. And breaking stuff for no reason (or doing anything for no reason) doesn't make sense. It even sounds wrong, but I struggle to think of what moral law it violates. Perhaps it wrongs the potential beneficiary of the object? If it had no value to one person then why not give it to someone else? Nonetheless, as I look at the cell phone on my desk, I think that to start smashing it on the desk until it broke would be idiotic, but not immoral. It's my cell phone.

ScottAndrews, September 21, 2011 at 11:14 AM PDT

I didn't mean to follow this down a slope, but the answers lead me to more questions. What if I'm bored so I drive around with no purpose, placing unnecessary wear on the car and risking a flat tire or other damage that would not have occurred if I stayed home?

ScottAndrews, September 21, 2011 at 11:02 AM PDT

I guess my point here is that I'm a theist and my morality is not based on empathy. I have virtually no empathy. It's not wrong to torture infants because it pulls my heartstrings and makes me feel sad; it's just obviously wrong. It's also obviously wrong to destroy stuff for no good reason, whether that stuff is sentient and can feel pain or not.

William J Murray, September 21, 2011 at 11:02 AM PDT
"For my entertainment" is not a good moral reason for anything, and neglect does count as damaging it.William J Murray
September 21, 2011
September
09
Sep
21
21
2011
10:57 AM
10
10
57
AM
PDT

This may be splitting hairs, but I think everyone assumes their reasons are good. Like if I have a bazillion dollars and I buy a car so I can destroy it with a sledgehammer, is my entertainment a good reason? What if the bazillionaire allows a perfectly good car to rust away unused and neglected because he has other cars and doesn't care about that one? Does that count as damaging it?

ScottAndrews, September 21, 2011 at 10:51 AM PDT

Am I the only person here who considers it wrong to damage any functional mechanism (alive or not, conscious or not) for no good reason?

William J Murray, September 21, 2011 at 10:45 AM PDT
ME: "I don’t think salt crystals or my TI-85 calculator will ever perform the same kind of calculations the human mind can. Ever." You: "Then you disagree with Dr. Stephen Wolfram’s Principle of Computational Equivalence, which which says that “there is essentially just one highest level of computational sophistication, and this is achieved by almost all processes that do not seem obviously simple”" I think the key phrase you're missing is "obviously simple." The notion that a salt packet from McDonalds or my TI calculator in a desk drawer on on a pathway to sentience is absurd. Selective skepticism to accept that as a possibility, but not evolution. Your conclusion that "it follows that that there is no fundamental ethical difference betwen human beings and crystals" (I think you meen between, spelling police that you are) requires the absurdity of treating any thing with any probability of ever developing sentience as equal to human. Considering most of us don't even treat animals (much closer to sentience than salt) equally, this criteria doesn't seem to be one that is largely accepted. There is also a huge bright line that is more important to not just me, but many of the posters here: sentient self-awareness. "The whole point of my post was to argue that it is ethically blinkered to claim that only sentient beings matter; surely intelligent beings do too." ok, but even if we accept that ""it follows that that there is no fundamental ethical difference betwen human beings and crystals" does not logically follow.DrREC
September 21, 2011
September
09
Sep
21
21
2011
09:41 AM
9
09
41
AM
PDT

me: "Personally, I do not believe that Machine 2 is sapient or intelligent." vjtorley: "OK. Why not?"

Based on my own study of cognition, the problems that have to be solved are not computational problems, and thus a computer-style solution cannot work.

me: "The harder question is about Machine 1. However, I doubt that Machine 1 will ever exist." vjtorley: "Why? I'm just curious, that's all."

As I see it, conscious systems cannot be designed. They need to be highly adapted to their environment, and you can only get that with something akin to biological development. Roughly speaking, you need a system that designs itself in situ, rather than something from an external designer.

Neil Rickert, September 21, 2011 at 05:47 AM PDT

vj, 1) You write:

"Rather, my point was that if a capacity for empathy is the sole basis for moral behavior on our part, then it follows that any being (such as Machine 2) which has no feelings (as it has not been hard-wired for emotions) and no capacity for empathy of its own will be a being with whom we cannot empathize, and we will therefore dismiss it as morally insignificant."

I believe that machine 2 is morally insignificant and that empathy is not the sole cause of moral behaviour (although the most important). However, it is worth noting that your argument is not valid. Empathy might be the sole basis for causing moral behaviour, but that doesn't mean that the objects of moral behaviour have to be capable of empathy - see my example of cats. 2) You ask:

"My question for you is: is empathy the sole legitimate cause, on your account? Do you think it could ever be appropriate for us to try to behave ethically towards beings with whom we cannot empathize in principle, because they have no feelings of any sort? If not, why not?"

I see empathy as the main (but not sole) cause of moral behaviour. This is just an observed fact about human nature - so there is no question of whether it is legitimate or not. So it would be weird and against normal human nature to behave ethically towards something we cannot show empathy towards, such as machine 2. I would probably disagree strongly with someone who showed such behaviour. 3) You ask:

"why should we only value beings that can suffer? Why shouldn't we value beings that can think, even if they can't suffer? Isn't the ability to think equally precious? Isn't it the height of absurdity to claim that it's wrong to kill a sparrow for the fun of it, but that it's perfectly OK to destroy a being with an intelligence that would dwarf Einstein's, just for the fun of it?"

We might value a thinking but non-sentient machine as useful or for aesthetic reasons. I could not understand someone who felt a moral need to preserve such a machine. It is just a fact of human nature that we only have moral feelings towards sentient beings. I am conscious there is some repetition in my answers - sorry, not enough time to edit my response.

markf, September 21, 2011 at 05:05 AM PDT
"Two quick points. First, neuro-scientists have a fairly reliable set of neural indicators for consciousness, which work in the vast majority of human cases (PVS is a bit of a gray zone, however).
I disagree. They make assumptions that consciousness exists outside themselves in the first place. A leap of faith is required. I submit this leap of faith is hardwired into most people so that the alternative seems "ridiculous." We're hardwired against the notion of solipsism with regards to consciousness.
Second, Wittgenstein’s private language argument suffices to refute any notion that you and only you might be conscious."
I disagree. Nothing in Wittgenstein's argument requires consciousness, only a "mind", that can "map words to ideas, concepts or representations." It has not been conclusively demonstrated that consciousness is required for such a mind. (Nor that consciousness exists out of one's own mind.)mike1962
September 21, 2011
September
09
Sep
21
21
2011
03:41 AM
3
03
41
AM
PDT