
Minds, brains, computers and skunk butts


In a recent interview with The Guardian, Professor Stephen Hawking shared with us his thoughts on death:

I have lived with the prospect of an early death for the last 49 years. I’m not afraid of death, but I’m in no hurry to die. I have so much I want to do first. I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.

Now, Stephen Hawking is a physicist, not a biologist, so I can understand why he would compare the brain to a computer. Nevertheless, I was rather surprised that Professor Jerry Coyne, in a recent post on Hawking’s remarks, let the comparison slide without comment. Coyne should know that there are no fewer than ten major differences between brains and computers, a fact which vitiates Hawking’s analogy. (I’ll say more about these differences below.)

But Professor Coyne goes further: not only does he equate the human mind with the human brain (as Hawking does), but he also regards the evolution of human intelligence as no more remarkable than the evolution of skunk butts, according to a recent report by Faye Flam in The Philadelphia Inquirer:

Many biologists are not religious, and few see any evidence that the human mind is any less a product of evolution than anything else, said Chicago’s Coyne. Other animals have traits that set them apart, he said. A skunk has a special ability to squirt a caustic-smelling chemical from its anal glands.

Our special thing, in contrast, is intelligence, he said, and it came about through the same mechanism as the skunk’s odoriferous defense.

In a recent post, Coyne defiantly reiterated his point, declaring: “I absolutely stand by my words.”

So today, I thought I’d write about three things: why the brain is not like a computer, why the evolution of the brain is not like the evolution of the skunk’s butt, and why the human mind cannot be equated with the human brain. Of course, proving that the mind and the brain are not the same doesn’t establish that there is an afterlife; still, it leaves the door open to that possibility, particularly if you happen to believe in God.

Why the brain is not like a computer

For readers wishing to understand why the human brain is not like a computer, I would highly recommend a 2007 blog article entitled 10 Important Differences Between Brains and Computers, by Chris Chatham, a second-year graduate student pursuing a Ph.D. in cognitive neuroscience at the University of Colorado, Boulder, on his science blog, Developing Intelligence. Let me say at the outset that Chatham is a materialist who believes that the human mind supervenes upon the human brain. Nevertheless, he regards the brain-computer metaphor as being of very limited value, insofar as it obscures the many ways in which the human brain exceeds a computer in flexibility, parallel processing and raw computational power, not to mention the fact that the human brain is part of a living human body.

Anyway, here is a short, non-technical summary of the ten differences between brains and computers which are discussed by Chatham:

1. Brains are analogue; computers are digital.
Digital 0’s and 1’s are binary (“on-off”). However, the brain’s neuronal processing is directly influenced by processes that are continuous and non-linear. Because early computer models of the human brain overlooked this simple point, they severely underestimated the information processing power of the brain’s neural networks.
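
To see the contrast in miniature, here is a toy Python sketch (the gate is genuine Boolean logic; the weights, bias and sigmoid curve are merely illustrative choices, not a model of any actual neuron):

    import math

    # Digital: a two-input AND gate has only discrete, all-or-nothing states.
    def and_gate(a: int, b: int) -> int:
        return 1 if (a == 1 and b == 1) else 0

    # Analogue-ish: a neuron-like unit responds continuously and
    # non-linearly to graded inputs.
    def unit(x1: float, x2: float, w1=0.7, w2=0.4, bias=-0.5) -> float:
        return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

    print(and_gate(1, 0))    # 0: all-or-nothing
    print(unit(1.0, 0.3))    # ~0.58: a graded, non-linear response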

2. The brain uses content-addressable memory.
Computers have byte-addressable memory, which relies on information having a precise address. With the brain’s content-addressable memory, on the other hand, information can be accessed by “spreading activation” from closely-related concepts. As Chatham explains, your brain has a built-in Google, allowing an entire memory to be retrieved from just a few cues (key words). Computers can only replicate this feat by using massive indices.
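
A toy Python sketch conveys the idea of retrieval by cue rather than by address (the two “memories” and the word-level index are purely illustrative):

    # A tiny content-addressable store: entries are retrieved by cues,
    # not by a numeric address.
    memories = {
        "m1": "the smell of rain on hot asphalt in july",
        "m2": "grandmother's kitchen with bread baking in the oven",
    }

    # Index every word (the "content") back to the memories containing it.
    index = {}
    for key, text in memories.items():
        for word in text.split():
            index.setdefault(word, set()).add(key)

    def recall(*cues):
        # Crude "spreading activation": keep the memories every cue activates.
        hits = [index.get(cue, set()) for cue in cues]
        return set.intersection(*hits) if hits else set()

    print(recall("bread", "kitchen"))   # {'m2'}: a whole memory from two cues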

3. The brain is a massively parallel machine; computers are modular and serial.
Instead of having different modules for different capacities or functions, as a computer does, the brain often uses one and the same area for a multitude of functions. Chatham provides an example: the hippocampus is used not only for short-term memory, but also for imagination, for the creation of novel goals and for spatial navigation.

4. Processing speed is not fixed in the brain; there is no system clock.
Unlike a computer, the human brain has no central clock. Time-keeping in the brain is more like ripples on a pond than a standard digital clock. (To be fair, I should add that some CPUs, known as asynchronous processors, don’t use system clocks.)

5. Short-term memory is not like RAM.
As Chatham writes: “Short-term memory seems to hold only ’pointers’ to long term memory whereas RAM holds data that is isomorphic to that being held on the hard disk.” One advantage of this flexibility of the brain’s short-term memory is that its capacity limit is not fixed: it fluctuates over time, depending on the speed of neural processing, and an individual’s expertise and familiarity with the subject.
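
The pointer-versus-copy contrast is easy to picture in code. A minimal Python sketch (the stored items are invented for illustration):

    # RAM-style storage: an isomorphic copy of the long-term data.
    long_term = {"phone_numbers": ["555-0100", "555-0199"]}
    ram_style = list(long_term["phone_numbers"])   # independent duplicate

    # STM-style storage: a mere pointer (reference) to the long-term record.
    stm_style = long_term["phone_numbers"]

    long_term["phone_numbers"].append("555-0123")  # long-term memory changes
    print(ram_style)   # stale: ['555-0100', '555-0199']
    print(stm_style)   # tracks it: ['555-0100', '555-0199', '555-0123']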

6. No hardware/software distinction can be made with respect to the brain or mind.
The tired old metaphor of the mind as the software for the brain’s hardware overlooks the important point that the brain’s cognition is not a purely symbolic process: it requires a physical implementation. Some scientists believe that the inadequacy of the software metaphor for the mind was responsible for the embarrassing failure of symbolic AI.

7. Synapses are far more complex than electrical logic gates.
Because the signals which are propagated along axons are actually electrochemical in nature, they can be modulated in countless different ways, enhancing the complexity of the brain’s processing at each synapse. No computer even comes close to matching this feat.

8. Unlike computers, processing and memory are performed by the same components in the brain.
In Chatham’s words: “Computers process information from memory using CPUs, and then write the results of that processing back to memory. No such distinction exists in the brain.” We can make our memories stronger by the simple act of retrieving them.

9. The brain is a self-organizing system.
Chatham explains:

…[E]xperience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit – something known as “trauma-induced plasticity” kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism), and others that can result in profound cognitive dysfunction…

Chatham argues that failure to take into account the brain’s “trauma-induced plasticity” is having an adverse impact on the emerging field of neuropsychology. A whole science is being stunted by a bad metaphor.

10. Brains have bodies.
Embodiment is a marvelous advantage for a brain. For instance, as Chatham points out, it allows the brain to “off-load” many of its memory requirements onto the body.

I would also add that since computers are physical but not embodied, they lack the built-in teleology of an organism.

As a bonus, Chatham adds an eleventh difference between brains and computers:

11. The brain is much, much bigger than any [current] computer.

Chatham writes:

Accurate biological models of the brain would have to include some 225,000,000,000,000,000 (225 million billion) interactions between cell types, neurotransmitters, neuromodulators, axonal branches and dendritic spines, and that doesn’t include the influences of dendritic geometry, or the approximately 1 trillion glial cells which may or may not be important for neural information processing. Because the brain is nonlinear, and because it is so much larger than all current computers, it seems likely that it functions in a completely different fashion.

Readers may ask why I am taking the trouble to point out the many differences between brains and computers, when both are, after all, physical systems with a finite lifespan. But the point I wish to make is that human beings are debased by Professor Stephen Hawking’s comparison of the human brain to a computer. The brain-computer metaphor is, as we have seen, a very poor one; using it as a rhetorical device to take pot shots at people who believe in immortality is a cheap trick. If Professor Hawking thinks that belief in immortality is scientifically or philosophically indefensible, then he should argue his case on its own merits, instead of resorting to vulgar characterizations.

Why the evolution of the brain is not like the evolution of the skunk’s butt

As we saw above, Professor Jerry Coyne maintains that human intelligence came about through the same mechanism as the skunk’s odoriferous defense. I presume he is talking about the human brain. However, there are solid biological grounds for believing that the brain is the outcome of a radically different kind of process from the one that led to the skunk’s defense system. I would argue that the brain is not the product of an undirected natural process, and that some Intelligence must have directed the evolution of the brain.

Skeptical? I’d like to refer readers to an online article by Steve Dorus et al., entitled Accelerated Evolution of Nervous System Genes in the Origin of Homo sapiens (Cell, Vol. 119, 1027–1040, December 29, 2004). Here’s an excerpt:

[T]he evolution of the brain in primates and particularly humans is likely contributed to by a large number of mutations in the coding regions of many underlying genes, especially genes with developmentally biased functions.

In summary, our study revealed the following broad themes that characterize the molecular evolution of the nervous system in primates and particularly in humans. First, genes underlying nervous system biology exhibit higher average rate of protein evolution as scaled to neutral divergence in primates than in rodents. Second, such a trend is contributed to by a large number of genes. Third, this trend is most prominent for genes implicated in the development of the nervous system. Fourth, within primates, the evolution of these genes is especially accelerated in the lineage leading to humans. Based on these themes, we argue that accelerated protein evolution in a large cohort of nervous system genes, which is particularly pronounced for genes involved in nervous system development, represents a salient genetic correlate to the profound changes in brain size and complexity during primate evolution, especially along the lineage leading to Homo sapiens. (Emphases mine – VJT.)

Here’s the link to a press release relating to the same article:

Human cognitive abilities resulted from intense evolutionary selection, says Lahn, by Catherine Gianaro, in The University of Chicago Chronicle, January 6, 2005, Vol. 24, No. 7.

University researchers have reported new findings that show genes that regulate brain development and function evolved much more rapidly in humans than in nonhuman primates and other mammals because of natural selection processes unique to the human lineage.

The researchers, led by Bruce Lahn, Assistant Professor in Human Genetics and an investigator in the Howard Hughes Medical Institute, reported the findings in the cover article of the Dec. 29, 2004 issue of the journal Cell.

“Humans evolved their cognitive abilities not due to a few accidental mutations, but rather from an enormous number of mutations acquired through exceptionally intense selection favoring more complex cognitive abilities,” said Lahn. “We tend to think of our own species as categorically different – being on the top of the food chain,” Lahn said. “There is some justification for that.”

From a genetic point of view, some scientists thought human evolution might be a recapitulation of the typical molecular evolutionary process, he said. For example, the evolution of the larger brain might be due to the same processes that led to the evolution of a larger antler or a longer tusk.

“We’ve proven that there is a big distinction. Human evolution is, in fact, a privileged process because it involves a large number of mutations in a large number of genes,” Lahn said. “To accomplish so much in so little evolutionary time – a few tens of millions of years – requires a selective process that is perhaps categorically different from the typical processes of acquiring new biological traits.” (Emphases mine – VJT.)

Professor Lahn’s remarks on antlers and tusks apply equally to the evolution of skunk butts. Professor Jerry Coyne’s comparison of the evolution of human intelligence to the evolution of the skunk’s defense system therefore misses the mark. The two cases do not parallel one another.

Finally, here’s an excerpt from another recent science article: Gene Expression Differs in Human and Chimp Brains by Dennis Normile, in “Science” (6 April 2001, pp. 44-45):

“I’m not interested in what I share with the mouse; I’m interested in how I differ from our closest relatives, chimpanzees,” says Svante Paabo, a geneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Such comparisons, he argues, are the only way to understand “the genetic underpinnings of what makes humans human.” With the human genome virtually in hand, many researchers are now beginning to make those comparisons. At a meeting here last month, Paabo presented work by his team based on samples of three kinds of tissue – brain cortex, liver, and blood – from humans, chimps, and rhesus macaques. Paabo and his colleagues pooled messenger RNA from individuals within each species to get rid of intraspecies variation and ran the samples through a microarray filter carrying 20,000 human cDNAs to determine the level of gene expression. The researchers identified 165 genes that showed significant differences between at least two of the three species, and in at least one type of tissue. The brain contained the greatest percentage of such genes, about 1.3%. It also produced the clearest evidence of what may separate humans from other primates. Gene expression in liver and blood tissue is very similar in chimps and humans, and markedly different from that in rhesus macaques. But the picture is quite different for the cerebral cortex. “In the brain, the expression profiles of the chimps and macaques are actually more similar to each other than to humans,” Paabo said at the workshop. The analysis shows that the human brain has undergone three to four times the amount of change in genes and expression levels than the chimpanzee brain… “Among these three tissues, it seems that the brain is really special in that humans have accelerated patterns of gene activity,” Paabo says. (Emphasis mine – VJT.)

I would argue that the changes that have occurred in the human brain are unlikely to be the product of undirected natural processes, given the deleterious effects of most mutations and the extensive complexity and integration of the biological systems that make up the human brain. If anything, such hyper-fast evolution should have been catastrophic.

We should remember that the human brain is easily the most complex machine known to exist in the universe. If the brain’s evolution did not require intelligent guidance, then nothing did.

As to when the intelligently directed manipulation of the brain’s evolution took place, my guess would be that it started around 30 million years ago when monkeys first appeared, but became much more pronounced after humans split off from apes around 6 million years ago.

Why the human mind cannot be equated with the human brain

The most serious defect of a materialist account of mind is that it fails to explain the most fundamental feature of mind itself: intentionality. Professor Edward Feser, who has written several books on the philosophy of mind, defines intentionality as “the mind’s capacity to represent, refer, or point beyond itself” (Aquinas, 2009, Oneworld, Oxford, p. 50). For example, when we entertain a concept of something, our mind points at a certain class of things; when we reason, it points at the conclusion of an argument; when we desire something, at some state of affairs; and when we love someone, at that person (or animal).

Feser points out that our mental acts – especially our thoughts – typically possess an inherent meaning, which lies beyond themselves. However, brain processes cannot possess this kind of meaning, because physical states of affairs have no inherent meaning as such. Hence our thoughts cannot be the same as our brain processes. As Professor Edward Feser puts it in a recent blog post (September 2008):

Now the puzzle intentionality poses for materialism can be summarized this way: Brain processes, like ink marks, sound waves, the motion of water molecules, electrical current, and any other physical phenomenon you can think of, seem clearly devoid of any inherent meaning. By themselves they are simply meaningless patterns of electrochemical activity. Yet our thoughts do have inherent meaning – that’s how they are able to impart it to otherwise meaningless ink marks, sound waves, etc. In that case, though, it seems that our thoughts cannot possibly be identified with any physical processes in the brain. In short: Thoughts and the like possess inherent meaning or intentionality; brain processes, like ink marks, sound waves, and the like, are utterly devoid of any inherent meaning or intentionality; so thoughts and the like cannot possibly be identified with brain processes.

Four points need to be made here, about the foregoing argument. First, Professor Feser’s argument does not apply to all mental states as such, but to mental acts – specifically, those mental acts (such as thoughts) which possess inherent meaning. My seeing a red patch here now would qualify as a mental state, but since it is not inherently meaningful, it is not covered by Feser’s argument. However, if I think to myself, “That red thing is a tomato” while looking at a red patch, then I am thinking something meaningful. (The reader will probably be wondering, “What about an animal which recognizes a tomato but lacks the linguistic wherewithal to say to itself, ‘This is a tomato’?” Is recognition inherently meaningful? The answer, as I shall argue in part (b) below, depends on whether the animal has a concept of a tomato which is governed by a rule or rules, which it considers normative and tries to follow – e.g. “This red thing is juicy but has no seeds on the inside, so it can’t be a tomato but might be a strawberry; however, that green thing with seeds on the inside could be a tomato.”)

Second, Professor Feser’s formulation of the argument from the intentionality of mental acts is very carefully worded. Some philosophers have suggested that the characteristic feature of mental acts is their “aboutness”: thoughts, arguments, desires and passions in general are about something. But this is surely too vague, as DNA is “about” something too: the proteins it codes for. We can even say that DNA possesses functionality, which is certainly a form of “aboutness.” What it does not possess, however, is inherent meaning, which is a distinctive feature of mental acts. DNA is a molecule that does a job, but it does not and cannot “mean” anything, in and of itself. If (as I maintain) DNA was originally designed, then it was meant by its Designer to do something, but this meaning would be something extrinsic to it. Its functionality, on the other hand, would be something intrinsic to it.

Third, it is extremely difficult to disagree with Feser’s premise that thoughts possess inherent meaning. To do that, one would have to either deny that there are any such things as thoughts, or locate inherent meaning somewhere else, outside the domain of the mental.

There are a few materialists, known as eliminative materialists, who deny the very existence of mental processes such as thoughts, beliefs and desires. The reason why I cannot take eliminative materialism seriously is that any successful attempt to argue for the truth of eliminative materialism – or, indeed, for the truth of any theory – would defeat eliminative materialism, since argument is, by definition, an attempt to change the beliefs of one’s audience, and eliminative materialism says we have none. If eliminative materialism is true, then argumentation of any kind, about any subject, is always a pointless pursuit, as argumentation is defined as an attempt to change people’s beliefs, and neither attempts nor beliefs refer to anything, on an eliminative materialist account.

The other way of arguing against the premise that thoughts possess inherent meaning would be to claim that inherent meaning attaches primarily to something outside the domain of the mental, rather than to our innermost thoughts as we have supposed. But what might this “something” be? The best candidate would be public acts, such as wedding vows, the signing of contracts, initiation ceremonies and funerals. Because these acts are public, one might argue that they are meaningful in their own right. But this will not do. We can still ask: what is it about these acts and ceremonies that makes them meaningful? (A visiting alien might find them utterly meaningless.) And in the end, the only satisfactory answer we can give is: the cultural fact that within our community, we all agree that these acts are meaningful (which presupposes a mental act of assent on the part of each and every one of us), coupled with the psychological fact that the participants are capable of the requisite mental acts needed to perform these acts properly (for instance, someone who is getting married must be capable of understanding the nature of the marriage contract, and of publicly affirming that he/she is acting freely). Thus even an account of meaning which ascribes meaning primarily to public acts still presupposes the occurrence of mental acts which possess meaning in their own right.

Fourth, it should be noted that Professor Feser’s argument works against any materialist account of the mind which identifies mental acts with physical processes (no matter what sort of processes they may be) – regardless of whether this identification is made at the generic (“type-type”) level or the individual (“token-token”) level. The reason is that there is a fundamental difference between mental acts and physical processes: the former possess an inherent meaning, while the latter are incapable of doing so.

Of course, the mere fact that mental acts and physical processes possess mutually incompatible properties does not prove that they are fundamentally different. To use a well-worn example, the morning star has the property of appearing only in the east, while the evening star has the property of appearing only in the west, yet they are one and the same object (the planet Venus). Or again: Superman has the property of being loved by Lois Lane, but Clark Kent does not; yet in the comic book story, they are one and the same person.

However, neither of these examples is pertinent to the case we are considering here, since the meaning which attaches to mental acts is inherent. Hence it must be an intrinsic feature of mental acts, rather than an extrinsic one, like the difference between the morning star and the evening star. As for Superman’s property of being loved by Lois Lane: this is not a real property, but a mere Cambridge property, to use a term coined by the philosopher Peter Geach: in this case, the love inheres in Lois Lane, not Superman. (By contrast, if Superman loves Lois, then the same is also true of Clark Kent. This love is an example of a real property, since it inheres in Superman.)

The difference between mental acts and physical processes does not merely depend on one’s perspective or viewpoint; it is an intrinsic difference, not an extrinsic one. Moreover, it is a real difference, since the property of having an inherent meaning is a real property, and not a Cambridge property. Since mental acts possess a real, intrinsic property which physical processes lack, we may legitimately conclude that mental acts are distinct from physical processes. (Of course, “distinct from” does not mean “independent of”.)

A general refutation of materialism

Feser’s argument can be extended to refute all materialistic accounts of mental acts. Any genuinely materialistic account of mental acts must be capable of explaining them in terms of physical processes. There are only three plausible ways to do this: (a) identifying mental acts with physical processes, (b) showing how mental acts are caused by physical processes, and (c) showing how mental acts are logically entailed by physical processes. No other way of explaining mental acts in terms of physical processes seems conceivable.

The first option, as we have seen, is ruled out: as we saw earlier, mental acts cannot be equated with physical processes, because the former possess inherent meaning as a real, intrinsic property, while the latter do not.

The second option is also impossible, for two reasons. Firstly, if the causal law is to count as a genuine explanation of mental acts, then it must account for their intentionality, or inherent meaningfulness. In other words, we would need a causal law that links physical processes not only to mental acts, but to meanings. However, meaningfulness is a semantic property, whereas the properties picked out by laws of nature are physical properties. To suppose that there are laws linking physical processes and mental acts, one would have to suppose the existence of a new class of laws of nature: physico-semantic laws.

Secondly, we know for a fact that there are some physical processes (e.g. precipitation) which are incapable of generating meaning: they are inadequate for the task at hand. If we are to suppose that certain other physical processes are capable of generating meaning, then we must believe that these processes are causally adequate for the task of generating meaning, while physical processes such as precipitation are not. But this only invites the further question: why? We might be told that causally inadequate processes lack some physical property (call it F) which causally adequate processes possess – but once again, we can ask: why is physical property F relevant to the task of generating meaning, while other physical properties are not?

So much for the first and second options, then. Mental acts which possess inherent meaning are neither identifiable with physical processes, nor caused by them. The third option is to postulate that mental acts are logically entailed by physical processes. This option is even less promising than the first two: for in order to show that physical processes logically entail mental acts, we would have to show that physical properties logically entail semantic properties. But if we cannot even show that they are causally related, then it will surely be impossible for us to show that they are logically connected. Certainly, the fact that an animal (e.g. a human being) has the property of having a large brain with complex inter-connections that can store a lot of information does not logically entail that this animal – or its brain, or its neural processes, or its bodily movements – has the property of having an inherent meaning.

Hence not only are mental acts distinct from brain processes, but they are incapable of being caused by or logically entailed by brain processes. Since these are the only modes of explanation open to us, it follows that mental acts are incapable of being explained in terms of physical processes.

Let us recapitulate. We have argued that eliminative materialism is false, as well as any version of materialism which identifies mental acts with physical processes, and also any version of materialism in which mental acts supervene upon brain processes (either by being caused by these processes or logically entailed by them). Are there any versions of materialism left for us to consider?

It may be objected that some version of monism, in which one and the same entity has both physical and mental properties, remains viable. Quite so; but monism is not materialism.

We may therefore state the case against materialism as follows:

1. Mental acts are real.
(Denial of this premise entails denying that there can be successful arguments, for an argument is an attempt to change the thoughts and beliefs of the listener, and if there are no mental acts then there are no thoughts and beliefs.)

2. At least some mental acts – e.g. thoughts – have the real, intrinsic property of being inherently meaningful.
(Justification: it is impossible to account for the meaningfulness of any act, event or process without presupposing the existence of inherently meaningful thoughts.)

3. Physical processes are not inherently meaningful.

4. If process X has a real, intrinsic property F which process Y lacks, then X cannot be identified with Y.

5. By 2, 3 and 4, physical processes cannot be identified with inherently meaningful mental acts.

6. Physical processes are only capable of causing other processes if there is some law of nature linking the former to the latter.

7. Laws of nature are only explanatory of the respective properties they invoke, for the processes they link.
(More precisely: if a law of nature links property F of process X with property G of process Y, then the law explains properties F and G, but not property H which also attaches to process Y. To explain that, one would need another law.)

8. The property of having an inherent meaning is a semantic property.

9. There are not, and there cannot be, laws of nature linking physical properties to semantic properties.
(Justification: No such “physico-semantic” laws have ever been observed; and in any case, semantic properties are not reducible to physical ones.)

10. By 6, 7, 8 and 9, physical processes are incapable of causing inherently meaningful mental acts.

11. Physical processes do not logically entail the occurrence of inherently meaningful mental acts.

12. If inherently meaningful mental acts exist, and if physical processes cannot be identified with them and are incapable of causing them or logically entailing them, then materialism is false.
(Justification: materialism is an attempt to account for mental states in physical terms. This means that physical processes must be explanatorily prior to, or identical with, the mental events they purport to explain. Unless physical processes are capable of logically or causally generating mental states, then it is hard to see how they can be said to be capable of explaining them.)

13. Hence by 1, 5, 10, 11 and 12, materialism is false.
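
For readers who like the skeleton of the argument in symbols, steps 2 through 5 can be compressed into standard predicate logic (the formalization is mine, not Feser’s; M(x) abbreviates “x has inherent meaning as a real, intrinsic property”):

    (2) ∀m [Thought(m) → M(m)]
    (3) ∀p [Physical(p) → ¬M(p)]
    (4) ∀x ∀y [(M(x) ∧ ¬M(y)) → x ≠ y]
    (5) ∴ ∀m ∀p [(Thought(m) ∧ Physical(p)) → m ≠ p]

Step 5 is then simply an application of premise 4 (an instance of Leibniz’s Law, restricted to real, intrinsic properties) to premises 2 and 3.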

Why doesn’t the mind remain sober when the body is drunk?

The celebrated author Mark Twain (1835-1910) was an avowed materialist, as is shown by the following witty exchange he penned:

Old man (sarcastically): Being spiritual, the mind cannot be affected by physical influences?
Young man: No.
Old man: Does the mind remain sober when the body is drunk?

Drunkenness does indeed pose a genuine problem for substance dualism, or the view that mind and body are two distinct things. For even if the mind (which thinks) required sensory input from the body, this would only explain why a physical malady or ailment would shut off the flow of thought. What it would not explain is the peculiar, erratic thinking of the drunkard.

However, the view I am defending here is not Cartesian substance dualism, but a kind of “dual-operation monism”: each of us is one being (a human being), who is capable of a whole host of bodily operations (nutrition, growth, reproduction and movement, as well as sensing and feeling), as well as a few special operations (e.g. following rules and making rational choices) which we perform, but not with our bodies. That doesn’t mean that we perform these acts with some spooky non-material thing hovering 10 centimeters above our heads (a Cartesian soul, which is totally distinct from the body). It just means that not every act performed by a human animal is a bodily act. For rule-following acts, the question, “Which agent did that?” is meaningful; but the question, “Which body part performed the act of following the rule?” is not. Body parts don’t follow rules; people do.

Now, it might be objected that the act of following a rule must be a material act, because we are unable to follow rules when our neuronal firing is disrupted: as Twain pointed out, drunks can’t think straight. But this objection merely shows that certain physical processes in the brain are necessary, in order for rational thought to occur. What it does not show is that these neural processes are sufficient to generate rational thought. As the research of the late Wilder Penfield showed, neurologists’ attempts to produce thoughts or decisions by stimulating people’s brains were a total failure: while stimulation could induce flashbacks and vividly evoke old memories, it never generated thoughts or choices. On other occasions, Penfield was able to make a patient’s arm go up by stimulating a region of his/her brain, but the patient always denied responsibility for this movement, saying: “I didn’t do that. You did.” In other words, Penfield was able to induce bodily movements, but not the choices that accompany them when we act freely.

Nevertheless, the reader might reasonably ask: if the rational act of following a rule is not a bodily act, then why are certain bodily processes required in order for it to occur? For instance, why can’t drunks think straight? The reason, I would suggest, is that whenever we follow an abstract rule, a host of subsidiary physical processes need to take place in the brain, which enable us to recall the objects covered by that rule, and also to track our progress in following the rule, if it is a complicated one, involving a sequence of steps. Disruption of neuronal firing interferes with these subsidiary processes. However, while these neural processes are presupposed by the mental act of following a rule, they do not constitute the rule itself. In other words, all that the foregoing objection shows is that for humans, the act of rule-following is extrinsically dependent on physical events such as neuronal firing. What the objection does not show is that the human act of following or attending to a rule is intrinsically or essentially dependent on physical processes occurring in the brain. Indeed, if the arguments against materialism which I put forward above are correct, the mental act of following a rule cannot be intrinsically dependent on brain processes: for the mental act of following a rule is governed by its inherent meaning, which is something that physical processes necessarily lack.

I conclude, then, that attempts to explain rational choices made by human beings in terms of purely material processes taking place in their brains are doomed to failure, and that whenever we follow a rule (e.g. when we engage in rational thought) our mental act of doing so is an immaterial, non-bodily act.

Implications for immortality

The fact that rational choices cannot be identified with, caused by or otherwise explained by material processes does not imply that we will continue to be capable of making these choices after our bodies die. But what it does show is that the death of the body, per se, does not entail the death of the human person it belongs to. We should also remember that it is in God that we live and move and have our being (Acts 17:28). If the same God who made us wishes us to survive bodily death, and wishes to keep our minds functioning after our bodies have ceased to do so, then assuredly He can. And if this same God wishes us to partake of the fullness of bodily life once again by resurrecting our old bodies, in some manner which is at present incomprehensible to us, then He can do that too. This is God’s universe, not ours. He wrote the rules; our job as human beings is to discover them and to follow them, insofar as they apply to our own lives.


139 Responses to Minds, brains, computers and skunk butts

  1. Coyne thinks human intelligence is no more special than a skunk’s odoriferous butt!

    If we’re talking about the kind of intelligence that he is displaying, he may have a point!

  2. The soul/mind is mysterious and unknowable only to those who think that you have to be able to take something apart and find out what it is made of before you can be sure that it exists. Science produces this type of blindness in some people.

    The other way to know that something exists is through the effects that it causes. My soul/mind enables me to control my body and all of the bodily passions, emotions, and feelings that go with it. I can eat, sleep, work, engage in sexual activity, and fight in accord with reason. I am not a biological machine. Free will is not an illusion. Intelligence is not just data processing. And yes, even atheists have souls.

  3. The last paragraph from this article is fitting:

    Why I believe again – A N Wilson
    Excerpt: Gilbert Ryle, with donnish absurdity, called God “a category mistake”. Yet the real category mistake made by atheists is not about God, but about human beings. Turn to the Table Talk of Samuel Taylor Coleridge – “Read the first chapter of Genesis without prejudice and you will be convinced at once . . . ‘The Lord God formed man of the dust of the ground, and breathed into his nostrils the breath of life’.” And then Coleridge adds: “‘And man became a living soul.’ Materialism will never explain those last words.”
    http://www.newstatesman.com/re.....ce-atheism

  4. vj

    You are hard on Hawking when you talk of vulgar characterisations – surely his only point is that he believes that just as a computer will fail when its components fail, so will a brain?

    Obviously there is a massive difference between a brain and any known computer – but half-seriously I note that for most of the differences you list there is either a computer or some aspect of computing that is on the “brain” side of the comparison:

     

    1. Brains are analogue; computers are digital.

    There have been analogue computers for decades and they are still in use.

    2. The brain uses content-addressable memory.

    Computers have used content-addressable memory in some contexts since the 1970s (maybe before) and they are still widely used.

    3. The brain is a massively parallel machine; computers are modular and serial.

    Parallel computing is almost standard these days – from simple multiprocessors to loose networks working on the same problem.

    4. Processing speed is not fixed in the brain; there is no system clock.

    As you point out, some CPUs don’t use system clocks – but if you think of networks of computers working on the same problem this is even more true.

    5. Short-term memory is not like RAM.

    Surely RAM is full of pointers to storage on disk!

    6. No hardware/software distinction can be made with respect to the brain or mind.

    I have been explaining how microcode and similar have muddied the distinction between software and hardware for decades. Is BIOS hardware or software?

    7. Synapses are far more complex than electrical logic gates.

    True.

    8. Unlike computers, processing and memory are performed by the same components in the brain.

    Depends on what you mean by the same components – processing and RAM both comprise transistors and capacitors – just organised differently.

    9. The brain is a self-organizing system.

    Self-repair and organisation are in a very primitive state in computing – but not unknown.

    10. Brains have bodies.

    http://www.bbc.co.uk/news/technology-13366929 

    As a bonus, Chatham adds an eleventh difference between brains and computers:

    11. The brain is much, much bigger than any [current] computer.

    If you think of the internet as a whole it is catching up rather fast!

    Thank you for expressing in careful argument something that is obvious. Obvious things are always the hardest to argue. The statement “I have chosen to be a materialist” is self-contradictory and cannot possibly be true.

    One comment I have. I like to look at the fact that physical processes cannot produce meaning from another angle. I know the below argument may not be philosophically formal, but I think it helps to see why the above is true.

    Purely physical processes can be separated into two classes.
    1. (Non-chaotic processes) Where specification of the initial conditions to a certain tolerance leads to a known result within certain tolerances. (Example: the calculation of what angle of elevation to lift a gun with a certain muzzle velocity to hit a distant ship.)
    2. (Chaotic processes) Where, due to eddies (spatial or temporal) in the advancement of events, no amount of specification of the initial conditions can guarantee knowledge of the result within a prescribed tolerance. (Example: what time a leaf dropped into a river is going to appear at a certain other point along a river that contains many eddy currents.)

    Neither of these can convey meaning. Meaning can only be extracted when a result shows intention. Intention cannot be shown by a result that is either dictated by physical law, or not constrained within any tolerances. In the first case, intention cannot be shown because only one result is possible (within the tolerances specified). The second case cannot convey intention because any result can be accounted for by purely physical processes.

    Meaning only comes about when the result can possibly be attributed to the choice of the causing agent. As shown above, no purely physical process (either non-chaotic or chaotic) can indicate intention. Thus purely physical processes cannot have inherent meaning.
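
    To make the first class concrete, here is a small Python sketch of the gun example (flat, drag-free ballistics; the velocity and range figures are mine, purely for illustration):

        import math

        g = 9.81       # gravitational acceleration, m/s^2
        v = 800.0      # muzzle velocity, m/s (assumed figure)
        r = 20000.0    # range to the target ship, m (assumed figure)

        # Drag-free ballistics: r = v^2 * sin(2*theta) / g, so the
        # required elevation is theta = 0.5 * asin(g * r / v^2).
        theta = 0.5 * math.asin(g * r / v**2)
        print(round(math.degrees(theta), 2))        # ~8.93 degrees

        # Non-chaotic: a 0.1% error in the range shifts the answer by
        # only ~0.009 degrees; outputs stay within a known tolerance.
        theta_err = 0.5 * math.asin(g * (1.001 * r) / v**2) - theta
        print(round(math.degrees(theta_err), 4))    # 0.0092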

    I’ve always wondered what would cause a brain to begin to get larger, and then what would cause it to stop gaining in size, all the while being coordinated with many other necessary changes.

    Did the skull get bigger, and the brain grow to fill the available space? Why didn’t our eyeballs fall out as our skull grew?

    Surely Hawking does not think that just because a computer stops working that it ceases to exist.

    Yet he seems to think that when his brain stops working he will cease to exist.

    I wonder what it is about the brain that allows people to be so different. I mean, if you look at two people, they certainly have a great deal in common. Yet they are vastly different.

    It’s not too difficult to see how variation feeds the evolutionary process, but isn’t there ever a time when there is just too much variation for evolution to work its magic? And if so, why wouldn’t the brain be the one organ of the body where that happens?

  7. Mung,

    Surely Hawking does not think that just because a computer stops working that it ceases to exist.

    Yet he seems to think that when his brain stops working he will cease to exist.

    Well, the alternative would be that he still exists even after he’s dead. And that, like a broken computer being repaired, he may live yet again someday. Though that may or may not require some kind of crazy law like matter being unable to be created or destroyed, only changed. But what’re the odds of that?

  8. Mung & Null,
    Does anything actually “cease to exist”? I mean matter, energy etc…
    I learned that in grade school. Didn’t we all?

  9. Looks like an Oregon play call.

  10. TM @9,
    could you elaborate… I don’t think I get the reference :(

  11. Does anything actually “cease to exist”? I mean matter, energy etc…

    That things “begin to exist” is the basis of one of the great arguments for the existence of God.

    For a thing cannot be the cause of its own beginning to exist.

  12.

    But this objection merely shows that certain physical processes in the brain are necessary, in order for rational thought to occur. What it does not show is that these neural processes are sufficient to generate rational thought.

    VJ, this would imply the mind exists at least partly in the brain, would it not?

  13. Also, in your “dual-operation monism,” does the mind exist immaterially in the same way information does?

    If it’s not dualism and it holds the mind to be immaterial, I assume this is what you mean.

  14. Tragic: I think I see VJ’s point: a necessary process means that the outcome cannot happen without that contribution. A sufficient process is one that can create the outcome on its own.

    I think that’s what VJ meant. I’m used to making these arguments in mathematics not in real life and I can’t think of a good, non-disputable example.

  15. If necessary processes for the mind exist in the brain then it would naturally follow that the mind exists at least partly in the brain.

  16. Mung,
    Very true… I was just wondering if the “matter and energy do not cease to exist, only change” type of argument can be viably used when talking to a Materialist… in regards to consciousness or the mind.

  17. vjtorley:
    for the mental act of following a rule is governed by its inherent meaning, which is something that physical processes necessarily lack.

    The human mind also has the ability to lie, to follow false rules and generate false meaning.

    To comprehend an account that is real and correct and then to fabricate a conflicting account that is false and not based upon physical reality, and while knowing which account is true and which false, to tell the false account with the intent to portray it as real and true, all the while deliberately, consciously intending to deceive, to cause another mind to perceive a falsehood as if it were truth.

    How can ostensibly rule-following physical brain processes, rooted in the rules of physical reality, fabricate a falsehood not rooted in physical reality, one that violates rules? Further, how can brain processes rooted in physical reality, simultaneously while telling a lie, hold two conflicting and irreconcilable versions of “reality” and distinguish between the two sufficiently well to “keep the story straight” while telling it?

    I’m also curious if there is any indication of animals being able to lie. Concealment, like “hiding” a bone, is not necessarily lying, whereas a hunting dog that misdirects its master to keep the quarry for itself would seem like lying.

  18. MedsRex @16

    I don’t think that argument can be used meaningfully. If I understand the materialist argument correctly, consciousness/mind is an emergent property of matter, once that matter has reached a certain level of organization and complexity. As a result, I believe the thinking is that once the brain ceases to function, whatever consciousness/mind that had emerged from it is no longer there.

    Granted, I don’t think there is any evidence for such a view, but I’m not sure the other option — that mind exists prior to and independent of the brain — is much more satisfying. I mean, are we saying that consciousness/mind simply exists as a self-existent property?

  19. 1. Brains are analogue; computers are digital.

    This is often argued and disputed.

    2. The brain uses content-addressable memory.

    Questionable. If it were true, we should not have the “it’s on the tip of my tongue, but I just can’t recall the word” problem. By suggesting that it uses content-addressable memory, you go farther toward considering the brain to be a computer than I would.

    3. The brain is a massively parallel machine; computers are modular and serial.

    True, but less important than you seem to think.

    4. Processing speed is not fixed in the brain; there is no system clock.

    Again, probably true, but probably not important.

    5. Short-term memory is not like RAM.
    As Chatham writes: “Short-term memory seems to hold only ’pointers’ to long term memory whereas RAM holds data that is isomorphic to that being held on the hard disk.”

    True (that it’s not like RAM). But that “pointers” comment suggests that you think it far more like a computer than do I.

    6. No hardware/software distinction can be made with respect to the brain or mind.

    Of little or no importance.

    7. Synapses are far more complex than electrical logic gates.

    True, and widely recognized. But it is hard to draw conclusions from this.

    8. Unlike computers, processing and memory are performed by the same components in the brain.

    True, but of doubtful importance.

    9. The brain is a self-organizing system.

    True, and probably important.

    10. Brains have bodies.

    True, and important.

    I agree with you, contra Hawking, that the brain is not a computer. I agree with Coyne that the evolution of the brain is no more remarkable than other evolved creature features. Your argument to the contrary is not at all persuasive.

    I do not agree with Coyne’s view that the mind is identical to the brain. However, although I agree with your conclusion on that matter, I do not agree with your argument on that issue. I do not see anything magical or mystical about intentionality.

    I’ll note that the above should be considered opinion. I won’t be taking time to try to argue those points. Cognitive science is too young to be able to settle such issues, and there is a great diversity of views among those studying the issues.

  20. Neil Rickert@19

    “I do not see anything magical or mystical about intentionality.”

    Then try to follow this argument.

    The problem is the expansion of the phase space of allowed results once intentionality is assumed.

    If I am any kind of rule follower, I only have one thing I can do. Granted, the decision tree may be very complex, but as I explained above, in the end the process is either non-chaotic (in which case there is one inevitable result depending on the initial conditions) or chaotic (in which case any result is possible but dependence on the initial conditions is lost).

    Intentionality is a different animal altogether. It allows the selection of an arbitrary result despite the initial conditions. And the space of arbitrary results is so vast that it quickly dwarfs the probability power contained in the phase space of those initial conditions.

    Case in point. Right now I can think of a sequence of 100 ASCII characters. It’s really not that hard. This response has many more than 100 ASCII characters.

    But the number of possible sequences of 100 ASCII characters (upper and lower case + digits + some special characters) is 95^100. This is a number so huge that it dwarfs the estimates of the number of physical particles in the universe.

    So the fact that something as simple as the selection of a sequence of 100 characters (let alone something really complicated like designing a computer chip) has a resultant phase space larger than the total number of particles in the universe means that either I have intentionality, or I am not really able to select an arbitrary sequence.

    The point is that intentionality is different because it cannot logically follow from law-following processes.
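
    A quick Python check of the arithmetic (10^80 is the usual rough estimate for the particle count in the observable universe):

        import math

        phase_space = 95 ** 100   # all 100-character printable-ASCII strings
        print(math.floor(math.log10(phase_space)) + 1)   # 198 digits: ~10^197.8
        print(phase_space > 10 ** 80)                    # True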

  21. Neil Rickert: “I do not see anything magical or mystical about intentionality.”

    Well, I’m not sure about “magical” or “mystical,” but intentionality is real. It sounds like you are suggesting that intentionality arises from the brain, meaning it is some kind of emergent property of the brain’s structure? I’m not necessarily arguing the point (although I would lean away from that view); just trying to ascertain where you are coming from.

  22. JDH @20,

    Great point, and well stated.

  23. JDH #20

    This is an interesting argument and I like the way you describe it. I think it fails because of the different senses of “possible”. As I understand it, you are arguing from the sheer number of possible outcomes that a mind might produce that the outcome cannot be based on initial state plus rules plus random fluctuation.

    You write:

    But the number of possible sequences of 100 ASCII characters (upper and lower case + digits + some special characters) is 95^100.

     

    In what sense is it possible that you might select any of them? It is, in one sense, possible for a computer to print any of these sequences. If I am simply looking at the screen and know nothing about how it is programmed then any of the sequences are possible. Maybe it is programmed to churn out a number based on seeding a pseudo-random number generator with the current time, so even the programmer would have no idea which string it would actually produce. It is still following rules. Of course it is only possible for it to display one number given the program and the time, and only a subset of the 95^100 numbers given the program but not the time (because there aren’t enough different times). “Possible” is a term that is relative to a given set of constraints. The more constraints that are known or specified, the less is possible.

    In a similar way, when you choose a particular sequence, it is in one sense possible that you could have chosen any other. I have no idea what rules and initial conditions, both within your brain and externally, caused you to choose a particular sequence (I guess you have no idea either). It may be that given the rules and the conditions there was only one number you could have come up with. Or there may be a truly random element in your brain that means you could have come up with a range even given the rules and initial conditions.

    You would probably describe this as denial of free will, but that is precisely the issue being debated. I would say free will is just a particular type of rule following. You cannot use the phase space argument without assuming your premise – that choice is in some indefinable way different from causality or random fluctuation – thus opening up a different type of possibility.
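
    To make the computer example concrete, here is a minimal Python sketch of such a clock-seeded generator (the 95-character alphabet mirrors JDH’s; the details are illustrative):

        import random
        import string
        import time

        # Rule-following all the way down: given the seed (the clock) and
        # the generator's rules, exactly one of the 95^100 strings can
        # appear, however arbitrary the output looks.
        random.seed(int(time.time()))

        # 52 letters + 10 digits + 32 punctuation marks + space = 95 chars
        alphabet = string.ascii_letters + string.digits + string.punctuation + " "
        print("".join(random.choice(alphabet) for _ in range(100)))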

  24. Hi everyone,

    Well, I see that my latest post has attracted quite a few comments. I’ll be back in a few hours. A few things you might like to think about in the meantime:

    (1) For those who objected to my list of differences between the brain and a computer: how would you define a computer?

    (2) If some of the necessary conditions for plant growth exist outside of the plant, does that mean the plant is larger than itself?

    (3) Can a material process have a meaning? If so, how?

    (4) Does lying presuppose a capacity for story-telling?

  25. Vj:

    (2) Umm . . . not exactly sure what you mean. I’d say that some of the necessary ingredients for plant growth exist outside the plant. Conditions . . . you mean that there is a form of life that can replicate and form multi-cellular bonds?

    (3) Depends on what you mean by meaning. (I can hear Bill Clinton now . . . or was it Dick Cheney?) Clearly a material process can have meaning imposed on it. Since I believe that life arose through strictly material processes, I’d say yes, since that process gave rise to us and we have meaning to ourselves at least.

    So, I suppose what you’re really asking is: does meaning actually exist outside of our imposed values and perceptions? I’d say that might be impossible to answer, since we cannot break free of our sensory input and the mental models and constructions we use to make our living. In my opinion, obviously. For what it’s worth.

    Is meaning an absolute or a relative concept . . . how can we tell, since we’re limited to what we pick up through our physical senses and what we churn out in our brain computers (according to Dr Hawking, anyway)?

    (4) I’d say no. I think they’re separate abilities that use some of the same processing. My son was pretty good at using his imagination to tell stories or make up things before he learned to lie. I remember the moment, you could see his brain tick over when I asked him a question he didn’t want to answer and I knew he was coming to the realisation: hey, I don’t have to be factual. heh heh heh. That was a mental and moral step different from making up stuff that everyone acknowledged was fiction and creative. He knew lying was wrong but he thought he might get away with it to avoid punishment. But I know other kids who are terribly good at lying but aren’t very good story tellers. I work in a primary school in England.

  26. Looking at the name of this thread I’d say skunk butts definitely have meaning if you’ve got one in your face.

    :-)

  27. vj #24

    As always your essays are too long but interesting.

    1) For those who objected to my list of differences between the brain and a computer: how would you define a computer?

    A device for processing data and doing computations.

    3) Can a material process have a meaning? If so, how?

    Of course it can – both what Grice would call natural meaning (clouds mean rain) and what he would call non-natural meaning (a cross on the door means the plague is in this house). In fact I would argue that meaning can only be given to something which can potentially be observed because meaning derives from the potential to influence an observer.

    But I suspect you are not talking about any old meaning. Above you  talked about inherent meaning. My problem is I have no idea what you mean by that.  

  28. JDH (#20)

    Then try to follow this argument.

    Your argument doesn’t do anything for me.

    I had already agreed that the brain is not a computer. Your argument turns on what you see as a limitation of computers (or of rule following), so it is a misfire.

    Perhaps my earlier post was not sufficiently clear. When I said that I don’t see anything magical or mystical about intentionality, I was suggesting that I see intentionality as something that could easily have evolved.

  29.

    VJ:

    (2) If some of the necessary conditions for plant growth exist outside of the plant, does that mean the plant is larger than itself?

    If the analogy to the relationship between the mind and the brain is accurate, then you could say that the mind is completely independent of the brain.

    However I do not believe the analogy is accurate. The mind clearly controls processes in the brain. A plant does not control environmental growth conditions. The mind uses the brain (and the body, so you say) as memory storage which it can access at any time it wants. I can think of no similar relationship between plants and outside growth conditions.

    I will be back. ;)

  30. Elizabeth Liddle

    I would agree that brains are only slightly like computers, and differ from present computers in most of the ways you outline.

    And I won’t comment on the evolutionary argument.

    But I do challenge your contention that:

    The most serious defect of a materialist account of mind is that it fails to explain the most fundamental feature of mind itself: intentionality

    which you define, citing Professor Edward Feser, as “the mind’s capacity to represent, refer, or point beyond itself”

    Intention is a well-studied concept in neuroscience and there is a large literature on the subject, so this claim needs some hefty support!

    I suggest that your argument that your “second option”, namely

    showing how mental acts are caused by physical processes

    is impossible, is flawed; it is exactly what neuroscience attempts – and succeeds – in doing.

    You argue that:

    The second option is also impossible, for two reasons. Firstly, if the causal law is to count as a genuine explanation of mental acts, then it must account for their intentionality, or inherent meaningfulness. In other words, we would need a causal law that not only links physical processes to mental acts, but a causal law that links physical processes to meanings. However, meaningfulness is a semantic property, whereas the properties picked out by laws of nature are physical properties. To suppose that there are laws linking physical processes and mental acts, one would have to suppose the existence of a new class of laws of nature: physico-semantic laws.

    Not really. Or rather, it depends on what you think “means” means. To a predator, another animal “means” dinner. It’s not very difficult to imagine (or even design) a physical system where some signal (low voltage, for instance) triggers some power-saving protocol. My laptop does it. For my laptop, low voltage means: “close down active processes”, and it mostly works.
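
    To make the point concrete, here is a toy sketch (Python, all names hypothetical) of a system for which a low-voltage signal “means” shut-down, in the only sense I need:

    ```python
    # Toy sketch (all names hypothetical): a signal "means" something
    # to this system only in the sense that it selects a program of action.
    LOW_VOLTAGE_THRESHOLD = 3.3  # volts; an arbitrary illustrative cut-off

    def on_voltage_reading(volts, active_processes):
        """A low reading triggers the power-saving protocol:
        close down active processes."""
        if volts < LOW_VOLTAGE_THRESHOLD:
            active_processes.clear()  # "close down active processes"
            return "power-saving"
        return "normal"

    print(on_voltage_reading(3.0, ["editor", "browser"]))  # power-saving
    ```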

    Sure, brains aren’t exactly like computers, but they aren’t exactly unlike them either – incoming data is parsed into objects that have meaning in the sense that they trigger programs for action, often alternative programs for action, which are then evaluated, by mechanisms we understand quite well, for congruence with both proximal and distal goals.

    You then write:

    Secondly, we know for a fact that there are some physical processes (e.g. precipitation) which are incapable of generating meaning: they are inadequate for the task at hand. If we are to suppose that certain other physical processes are capable of generating meaning, then we must believe that these processes are causally adequate for the task of generating meaning, while physical processes such as precipitation are not. But this only invites the further question: why?

    Because the relevant processes are a whole cascade of processes, triggered by a hugely complex set of logic gates (that’s where I disagree with your rejection of the computer analogy, actually: neural populations do work as logic gates, summing inputs to determine outputs), not a single physical process. We tend to reserve the word “meaning” for scenarios in which a range of possible actions is contingent on some signal, which has “meaning” because it weighs in favour of some action. That wouldn’t be covered by your precipitation example.
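
    By way of illustration, a minimal sketch (Python; illustrative only, not a model of real neurons) of a unit that sums weighted inputs against a threshold. With different weights and thresholds, the same summing rule behaves as different logic gates:

    ```python
    def threshold_gate(inputs, weights, threshold):
        """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # The same summing rule yields different gates:
    def OR_gate(a, b):
        return threshold_gate([a, b], [1, 1], threshold=1)

    def AND_gate(a, b):
        return threshold_gate([a, b], [1, 1], threshold=2)

    print(OR_gate(0, 1), AND_gate(0, 1))  # 1 0
    ```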

    We might be told that causally inadequate processes lack some physical property (call it F) which causally adequate processes possess – but once again, we can ask: why is physical property F relevant to the task of generating meaning, while other physical properties are not?

    Because “physical property F” is a process involving countless subprocesses, it isn’t really comparable to a physical property like, say, mass.

  31. markf (#27)

    Thank you for your post. You wrote:

    Above you talked about inherent meaning. My problem is I have no idea what you mean by that.

    I’d like to return to Professor Feser’s comments:

    Notice, though, that considered merely as a set of ink marks or (if spoken) sound waves, “car” doesn’t represent or mean anything at all; it is, by itself anyway, nothing but a meaningless pattern of ink marks or sound waves, and acquires whatever meaning it has from language users like us, who, with our capacity for thought, are able to impart meaning to physical shapes, sounds, and the like.

    Now the puzzle intentionality poses for materialism can be summarized this way: Brain processes, like ink marks, sound waves, the motion of water molecules, electrical current, and any other physical phenomenon you can think of, seem clearly devoid of any inherent meaning. By themselves they are simply meaningless patterns of electrochemical activity. Yet our thoughts do have inherent meaning – that’s how they are able to impart it to otherwise meaningless ink marks, sound waves, etc.

    You mentioned Grice’s “natural meaning”. Unfortunately Grice never gave a clear definition for this term in his 1957 essay. Spots “mean” measles; storm clouds “mean” rain. In the former instance, spots are a symptom, and hence an effect; in the latter instance, storm clouds are a cause. In any case, spots as such don’t really mean anything; all we can say is that a physician can reliably infer that a patient has measles from the fact that he/she has spots.

    Now ask yourself: could any sufficient condition for the occurrence of X be said to “mean” X, in Grice’s “natural” sense? I think not. Consider two problems: (i) wayward causal chains; (ii) lack of scientific knowledge. A sufficient condition for X doesn’t naturally mean X unless we can reliably infer X from its occurrence. That presupposes a reliable causal connection, and knowledge on our part of that connection. Spots didn’t mean measles in 100,000 B.C. At that time, we didn’t even have a term for the malady.

    “Natural meaning” is, it seems, a derived rather than a primitive usage of the term “meaning”: it assumes the existence of a community of observers who possess a stock of shared scientific knowledge.

    You then go on to make the following suggestion:

    In fact I would argue that meaning can only be given to something which can potentially be observed because meaning derives from the potential to influence an observer.

    Now ask yourself: are your thoughts meaningless until they have influenced you? Surely not. Your thoughts are meaningful because they are intended by you. For instance, if you formulate a plan to build a house, the plan in your mind manifestly does not acquire its meaningfulness from how it subsequently affects you, when you mull over it. For you could not mull over it unless it already possessed a meaning in its own right.

    Attempts to deny that thoughts possess inherent meaning are self-refuting. For any utterance by a speaker A, we can sensibly ask: did B understand the meaning of A’s utterance? But in the absence of any original meaning intended by A, the question would become nonsensical. We must assume, then, that the meaning of a mental act (i.e. a thought) is underived, or inherent.

  32. Charles (#17)

    Thank you for a very interesting post. You write:

    I’m also curious if there is any indication of animals being able to lie. Concealment, like “hiding” a bone, is not necessarily lying, whereas a hunting dog that misdirects its master to keep the quarry for itself would seem like lying.

    My question would be: does the dog merely intend to divert the master from the quarry, or does it intend to make its master believe that the quarry is somewhere other than where it actually is? The former alternative would be a more parsimonious interpretation.

    Now ask yourself another question: if the dog knew that the master would never retrieve the quarry as a result of its misdirection, would that fact alone suffice to satisfy the dog? If the dog were a proper liar, it shouldn’t. Suppose, for instance, that the master (i) saw through the dog’s ruse, (ii) correctly inferred that the quarry was in the opposite direction to the direction in which his dog was leading him, and (iii) subsequently located the quarry, but (iv) took pity on the dog and decided to let it kill the quarry instead. If the dog were capable of being made aware of (i) to (iv), then it should feel crestfallen and somewhat deflated at the failure of its attempt to deceive its master. But in reality dogs never worry about such matters. A dog doesn’t have any third-order mental states (e.g. beliefs about other individuals’ beliefs about its own intentions), and it seems to me that to be a proper liar, you have to have those.

  33. Neil Rickert (#28):

    You wrote:

    … I see intentionality as something that could easily have evolved.

    Would you care to explain how our capacity to have thoughts that possess inherent meaning could have evolved from physical systems that completely lack it?

  34. vjt (#32):

    Would you care to explain how our capacity to have thoughts that possess inherent meaning could have evolved from physical systems that completely lack it?

    All biological organisms exhibit what can be described as apparently purposeful behavior. I see that as a sufficient precursor for intentionality.

  35. Neil Rickert (#19)

    Thank you for your post. You wrote:

    I do not agree with Coyne’s view that the mind is identical to the brain. However, although I agree with your conclusion on that matter, I do not agree with your argument on that issue. I do not see anything magical or mystical about intentionality.

    If you disagree with Professor Coyne’s view that the mind is identical with the brain, but disagree with my argument from intentionality, may I ask why you think the two are distinct? The most common response I’ve heard is that qualia (e.g. my sensation of the color red) show that mind and brain are distinct. But they don’t. As even David Chalmers has acknowledged, it could just be a law of nature that certain wavelengths cause sensations of a certain kind. All the qualia argument establishes is property dualism.

    Do you have any other grounds for saying that mind and brain are distinct – e.g. Thomistic arguments? (See here: http://dhspriory.org/thomas/ContraGentiles2.htm#49 .)

  36. “All biological organisms exhibit what can be described as apparently purposeful behavior. I see that as a sufficient precursor for intentionality.”

    Didn’t answer the question. A exists, therefore A exists.

  37. Neil Rickert (#33):

    You write:

    All biological organisms exhibit what can be described as apparently purposeful behavior. I see that as a sufficient precursor for intentionality.

    I’m afraid I fail to see how an inherent meaning can be derived from an apparent purpose.

  38. tragic mishap (#15, #29)

    Thank you for your posts. You wrote:

    If necessary processes for the mind exist in the brain then it would naturally follow that the mind exists at least partly in the brain.

    Not so. All that need follow is that the mind is extrinsically (but not intrinsically) dependent on processes occurring in the brain.

    You also wrote:

    If the analogy to the relationship between the mind and the brain is accurate, then you could say that the mind is completely independent of the brain.

    I would not claim that – otherwise, as I pointed out in my post, it would be difficult to account for the befuddled thinking of drunkards.

    I also would not say that “the mind uses the brain.” I would say that people use their brains when they think, and that thinking itself is not a material act. That way of putting it seems to get it right.

  39.

    vj #30

    Thanks. I am clearer now about what you and Feser mean by inherent meaning. I think the idea is that the “meaning” is something attached to a mental act independently of any other context. You will not be surprised to learn that I don’t think there is such a thing as inherent meaning, and that all meaning is dependent on context – even for a thought. In particular it is dependent on intention.

    I entirely accept that what Grice calls natural meaning is dependent on our knowledge.  Clouds only mean rain if you know something about the relationship between the two.  I am sure Grice would accept this and I don’t think it is controversial.

    What about non-natural meaning?  Consider first the case where someone gives an observable event such as a spoken word or a picture non-natural meaning.  You will remember that Grice defines non-natural meaning as meaning derived from person A’s attempt to get person B to do something by getting B to recognise A’s intention.  So if A draws a picture of B’s wife misbehaving with Mr X this will only work if B recognises why A drew the picture (and it wasn’t for example an idle fantasy).  Does the picture have inherent meaning?  You would presumably respond – no, because that meaning is derived from A’s intention.

    But this case can be extended to where A and B are the same person! I might draw up a plan of a house I would like to build for future reference. When I look at it later it will only mean something if I remember why I drew it. Otherwise it is just a drawing which might have been a doodle with no meaning.

    Here is the hard bit. I think this is equally applicable to “thoughts”. Suppose I now plan that house in my head without writing anything. What does that comprise? I might well create a mental picture of that house in my head (and remember, a computer may contain a representation of an object in its memory). But that mental picture by itself has no more meaning than the physical picture unless there is an intention associated with it. I might speak to myself in my head. Again, those internal words have no more meaning than external words without the associated intention.

    In fact you rather gave the game away when you wrote:

    “Your thoughts are meaningful because they are intended by you.”

    I accept this – but it is just as true of non-natural meaning given to ordinary physical events like text and pictures.

    Of course it raises the question “can materialists account for intention?” Clearly I believe we can – but that is a different (long) story.

  40. vjt (#34)

    The most common response I’ve heard is that qualia (e.g. my sensation of the color red) show that mind and brain are distinct.

    I’m a qualia skeptic.

    Do you have any other grounds for saying that mind and brain are distinct – e.g. Thomistic arguments?

    I am inclined to think that “the mind” is a metaphor, a cultural construct that attempts to account for various aspects of behavior. The brain, however, is actually observable, so it is not merely a metaphor.

    I can put that differently. As I see it, what the brain does is different from what we credit the mind with doing, though they are related. Sorry, I haven’t paid much attention to Thomist philosophy.

  41. Phaedros (#35)

    Didn’t answer the question. A exists, therefore A exists.

    So you see “apparently purposeful behavior” as the same thing as “inherent intentionality”. By contrast, vjtorley (#36) thinks that apparently purposeful behavior is insufficient to account for inherent intentionality.

    Maybe you and vjt should get together and compare notes.

    I’m closer to your position on that. However, I was answering the question of how it could evolve. If it is already present in the simplest biological organisms, then its evolution doesn’t even require explanation.

    In case you were looking for more, I see homeostatic biochemical processes as providing the basis for the apparently purposeful behavior that we observe.

  42. markf (#27)

    You define a computer as “a device for processing data and doing computations.” I have to say that defining a computer in terms of computations sounds a bit circular.

    Leaving that concern aside, five things worry me about attempts to argue that the brain is a computer:

    (1) Vagueness and infinite elasticity of definition. Whenever a difference between brains and computers is cited, AI proponents tell us that the difference is a non-essential one: theoretically, at least, a computer could be built which was like the brain in that respect. Which makes me want to respond: “The term ‘computer’ has no meaning unless there are certain things that are unambiguously NOT computers, and never could be. Can you name some?” For instance, I believe Steve Wolfram regards any physical system as a computational device of some sort. If one’s definition of “computer” is that vague, then of course any physical object – including a brain – could qualify.

    (2) Over-reliance on conceivability-type arguments. AI proponents are prone to argue that the brain is a computer because computers could conceivably replicate any of the brain’s feats. That, to me, is like saying that a horse could conceivably fly. Until someone builds a computer that can do what a brain does, we don’t know if it’s possible or not.

    (3) Failure to supply a proof of concept, which is the normal standard for evaluating a claim that you can make an A (i.e. a brain) out of B’s (i.e. computers). The proper response to such a claim should be: “Fine. Take some computers, and build me a brain – or at the very least, something that can do everything that the brain can do.” Or if that cannot feasibly be done using current technology, one should be able to supply some rigorous mathematical argument to the effect that computers are capable of replicating all of the feats of the brain, before one’s claim that the brain is a computer can be taken seriously.

    (4) Shifting the onus of proof. AI proponents typically argue: “It makes sense to regard the brain as a computer – at least, there’s no reason in principle why it couldn’t be one. So the burden is on you to show that the brain is more than a computer. If you think it’s something more, you should be able to explain clearly why.” Well, no. Maybe the brain is just too complex for us to understand – in which case, we cannot properly express exactly why it differs from a computer (an object which we can understand).

    Typically, when arguing that A’s are B’s, we do not rely on thought experiments. We need to show that one can substitute for the other, in relevant contexts. And to do that, we need to get our hands dirty with something called empirical evidence – a concept foreign to many AI proponents, sad to say.

    (5) Organic reductionism. For instance, you provided a BBC link to an article on robots to show that computers can have bodies. But the relation of a robot to the computer controlling it is but a pale imitation of the intrinsic finality of the human body, with its built-in ends. (This point remains valid, regardless of the process by which the body came to possess such ends.)

    For me, the mere fact that brains are parts of living bodies, while computers are not, suffices to demonstrate the inadequacy of the computer metaphor. Brains can indeed compute, but that’s not all they do.

  43. vjt (#36)

    I’m afraid I fail to see how an inherent meaning can be derived from an apparent purpose.

    In talking of “apparent purpose”, I am not suggesting that it is mere appearance. I think it real enough. However, we normally do not credit an amoeba with having a mind, and in ordinary discussion people like to associate purpose with the existence of a mind. My use of “apparent” was just a way of making clear that I am not implying that there is a mind involved.

    As for “inherent meaning”, it is not at all clear what that “inherent” part means. A newborn child isn’t able to have meaningful thoughts (or perhaps isn’t able to have any thoughts) about the world. I see intentionality as resulting from the learning process. I see acquiring knowledge as having far more to do with acquiring intentionality than with acquiring beliefs.

  44. Elizabeth Liddle

    I think mind is most usefully thought of as what the brain does. Consciousness is better thought of as a verb (“to be conscious”) than as a noun IMO.

    I don’t think it makes me a dualist though. Most properties turn out to be what things do.

  45. markf (#38)

    Thanks for a very interesting response. Your argument boils down to the assertion that thoughts are mental pictures which we create intentionally, and which (somehow) represent their objects, but which have no inherent meaning of their own. You write:

    But that mental picture by itself has no more meaning than the physical picture unless there is an intention associated with that picture. I might speak to myself in my head. Again those internal words have no more meaning than external words without the associated intention.

    Here’s my question: do intentions themselves have meaning? It seems incontrovertible that they do. When I wrote, “Your thoughts are meaningful because they are intended by you”, I didn’t mean that thoughts are internal words that are endowed with whatever meaning you intend them to have. I meant that a thought itself is nothing more than an intention, or a chain of inter-related intentions. For example, my plan to build a house includes my intention to clear a block of land, followed by my intention to dig a hole for the foundation, and then pour in cement, etc.

    It is possible for me to forget my plan for building a house, only insofar as I forget the sequence of intentional acts that need to be performed. But if I have an architect’s fully fleshed out concept of a house as a finished product, such that I can explain to myself all the whys and wherefores of the various features of a house, in relation to the final end, then forgetting is out of the question. Everything hangs together, so to speak.

    By the way, you might be interested in the following excellent articles by Professor Feser, which address many of your concerns, in a way far better than I could do:

    Putnam on causation, intentionality, and Aristotle,

    Dretske on meaning,

    Stoljar on intentionality, and

    Fodor’s trinity.

    Hope that helps.

  46. Ellazimm

    Upon reflection, I would agree that liars need not be good story-tellers and vice versa. However, whatever story a liar tells, the liar must intend that I believe it, and must believe that other people will think he is telling the truth. That is, a liar needs to have beliefs about other individuals’ beliefs about his own intentions. Materialism cannot account for such beliefs.

  47. bornagain77

    Would you like to make any comments on this recent post by Professor Jerry Coyne? I know that you’re an expert on NDEs, so I’d appreciate your thoughts.

    http://whyevolutionistrue.word.....prove-god/

  48. Dr. Torley, I’m surely no expert on NDE’s. But one thing I do know is that Pam Reynolds recalled events during her ‘extremely monitored’ NDE, which shoots a hole in Coyne’s assertion. Coyne would lose his bet, but of course he would probably just refuse to accept the testimony;

    The Near Death Experience of Pam Reynolds – Video
    http://www.metacafe.com/watch/4045560/

    ,,,and this,,,

    Blind Woman Can See During Near Death Experience (NDE) – Pim von Lommel – video
    http://www.metacafe.com/watch/3994599/

    Kenneth Ring and Sharon Cooper (1997) conducted a study of 31 blind people, many of whom reported vision during their Near Death Experiences (NDEs). 21 of these people had had an NDE while the remaining 10 had had an out-of-body experience (OBE), but no NDE. It was found that in the NDE sample, about half had been blind from birth. (Of note: this ‘anomaly’ is also found for deaf people, who can hear sound during their Near Death Experiences (NDEs).)
    http://findarticles.com/p/arti....._65076875/

    ,,,But for a more thorough treatment, this guy,,,

    Near Death Experiences – Scientific Evidence – Dr Jeff Long M.D. – video
    http://www.metacafe.com/watch/4454627/

    ,,, has recently written a book,,,

    Evidence of the Afterlife: The Science of Near-Death Experiences [Hardcover]
    http://www.amazon.com/Evidence.....0061452556

    ,, which has several instances of recall that Coyne is looking for,,,

    ,,,The one thing I have against Dr. Long’s book is that he does not include NDE studies of foreign cultures, and tries to extrapolate the findings for Judeo-Christian cultures into a world-wide phenomenon. This extrapolation is not warranted, for homogeneity simply is not the case for foreign NDE’s (i.e. non-Judeo-Christian cultures tend to have very unpleasant NDE’s!!),,,

    ,,, But of course for me the clincher is that reality itself conforms to what we would expect if NDE’s were real;

    It is also very interesting to point out that the ‘light at the end of the tunnel’, reported in many Near Death Experiences(NDEs), is also corroborated by Special Relativity when considering the optical effects for traveling at the speed of light. Please compare the similarity of the optical effect, noted at the 3:22 minute mark of the following video, when the 3-Dimensional world ‘folds and collapses’ into a tunnel shape around the direction of travel as an observer moves towards the ‘higher dimension’ of the speed of light, with the ‘light at the end of the tunnel’ reported in very many Near Death Experiences:

    Traveling At The Speed Of Light – Optical Effects – video
    http://www.metacafe.com/watch/5733303/

    ,,,further notes,,,

    Higher Dimensional component to Life and Physics
    https://docs.google.com/document/pub?id=1s4jILvAKR5WqGVfbej1k3Y62gmZ6Ds047JTUVN4ekTw

  49. #43 vj

    Here’s my question: do intentions themselves have meaning? It seems incontrovertible that they do. When I wrote, “Your thoughts are meaningful because they are intended by you”, I didn’t mean that thoughts are internal words that are endowed with whatever meaning you intend them to have. I meant that a thought itself is nothing more than an intention, or a chain of inter-related intentions. For example, my plan to build a house includes my intention to clear a block of land, followed by my intention to dig a hole for the foundation, and then pour in cement, etc.

    First, just to be absolutely sure: when I talk about “intention” I mean intentions as roughly synonymous with purposes. This is not the same as the rather technical use of “intentional” as in being about something.

    I do not believe intentions have meanings, nor are they thoughts. And I do not understand why you think it incontrovertible that they do have meanings. I believe intentions are propensities or dispositions to behave in certain ways probably caused by certain brain states.  Thoughts are not propensities or dispositions.  They are events.

    This is of course a very well-worn debate. My main point is that whatever status you give intentions, they are the distinguishing characteristic that gives any event non-natural meaning, whether it be a mental act or an external event. Therefore meaning is not a unique characteristic of mental acts.

  50. markf @23 “In a similar way when you choose a particular sequence it is in one sense possible that you could have chosen any other.”

    Uh, yes. Not just “in one sense,” but in every practical sense in the real world. We not only feel like we have choices, but we in fact treat people (in personal interactions, under our laws, etc.) as though they have choices.

    “I have no idea what rules and initial conditions, both within your brain and externally, caused you to choose a particular sequence (I guess you have no idea either). It may be that given the rules and the conditions there was only one number you could have come up with. Or there may be a truly random element in your brain that means you could have come up with a range even given the rules and initial conditions.”

    Are you seriously arguing that JDH couldn’t have chosen different words to express his thoughts, or that the above paragraph that you wrote was the only possible outcome of what you could write?

    The problem with the “no free will” point of view is that, in addition to being self-refuting, it is utterly and completely useless as a vehicle for understanding how we ourselves approach life and how to interact with others around us. The whole thing boils down to: it’s all just an illusion. Useless. To the point that even those who claim to espouse it don’t conduct their lives by it.

  51. Isn’t it sufficient to say that the brain is not like a computer because no known computer needs to be conscious to function? Given that, I have no idea where people get the idea that consciousness arises once a neural computer gets complicated enough.

  52. I know that you’re an expert on NDEs, so I’d appreciate your thoughts.

    For what it’s worth, I have had multiple near death experiences. I’m probably the resident “expert.”

  53. Mung @50,
    Are you serious? I know you have a bit of a trickster in you. Lol, but if you are serious I would love to hear your thoughts on Coyne’s post.
    I would also be greatly interested in Denyse’s thoughts . . . I mean, she did write a book that partially dealt with the subject.

  54. vj: You wrote: “That is, a liar needs to have beliefs about other individuals’ beliefs about his own intentions. Materialism cannot account for such beliefs.”

    Why can’t the liar’s beliefs about other people’s beliefs just be an educated guess based on past experience, perception of body language and knowledge of the person involved? It just sounds like an intelligent liar sussing the situation to estimate the chances of success.

    Maybe I’m missing your point. Probably.

  55. OT: hi vjtorley.

    In his book A Fine Tuned Universe, Alister McGrath has a chapter on Augustine you might find interesting.

  56. “It used to be supposed in Science that if everything was known about the Universe at any particular moment then we can predict what it will be through all the future…. More modern science however has come to the conclusion that when we are dealing with atoms and electrons we are quite unable to know the exact state of them; our instruments being made of atoms and electrons themselves.” – Alan Turing (1932)

  57.

    Eric #48

    I am sorry. I obviously didn’t explain myself clearly enough – although I am struggling to find better ways to explain it.

    Uh, yes. Not just “in one sense,” but in every practical sense in the real world. We not only feel like we have choices, but we in fact treat people (in personal interactions, under our laws, etc.) as though they have choices.

    Yes we treat people as though they could have chosen differently, and indeed they could have. But “could” is a modal word like “possible”. It means it was possible that they chose differently. And modal words are relative to a set of conditions, as I tried to explain in #23 (do you deny this?). The question is: what set of conditions are implied when we say someone could have chosen differently?

    Are you seriously arguing that JDH couldn’t have chosen different words to express his thoughts, or that the above paragraph that you wrote was the only possible outcome of what you could write?

    As I say, whenever you write “could” this is relative to a set of conditions. With respect to one set of conditions he could have written different words. With respect to another set he could not, or it may be that he could only have chosen from a limited set according to some randomising element.

    The problem with the “no free will” point of view is that, in addition to being self-refuting, it is utterly and completely useless as a vehicle for understanding how we ourselves approach life and how to interact with others around us. The whole thing boils down to: it’s all just an illusion. Useless. To the point that even those who claim to expouse it, don’t conduct their lives by it.

    My position is not “no free will”. It is compatibilism – that free will is compatible with determinism plus a random element. “Free” means possible to choose differently according to some types of conditions (not physically constrained, not asleep or unconscious, etc). It does not mean choices are without cause. My choice of words is caused by my education, my desire to explain what I believe, by the limited time I have available, etc. These causes are a set of conditions, and if we knew them all then it would not be possible for me to choose any other words given these conditions.

    There is nothing useless about this and it doesn’t mean free will is an illusion.  I do live my life by it and it makes little difference to how I live my life compared to yours.  I do find, however, that most people find it hard to understand – although it has a long and very respectable historical tradition.

  58. Are you serious?

    NEVER! Unless I am.

    YES and NO!

    I have indeed had numerous experiences of not dying.

    I know you have a bit of a trickster in you. Lol but if you are serious I would love to hear your thoughts on Coyne’s post.

    Coyne is an idiot. I have this on great authority from “the other side.”

    Has Coyne had more NDE’s than I?

    I think not.

    NEVER take me seriously, lol.

    Likewise, the idea that BA77 is an expert in NDE’s is also ludicrous.

    I have had far more experience with NDE’s than BA77.

  59.

    Mung,

    Coyne is an idiot. I have this on great authority from “the other side.”

    Has Coyne had more NDE’s than I?

    I think not.

    NEVER take me seriously, lol.

    Likewise , the idea that BA77 is an expert in NDE’s is also ludicrous.

    Ease up on calling folks idiots and claiming that other folks’ knowledge is ludicrous.

  60. Ease up on calling folks idiots and claiming that other folks’ knowledge is ludicrous.

    OK. No problem.

    How does one gain knowledge of such things?

  61. ellazimm (#54)

    Thank you for your post. My point was simply that a liar has to have the capacity to entertain the following thought:

    “I’m capable of fooling other people. For instance, if I say ‘The food is over there’, other people will believe that the food is over there.”

    Now, in order to have this thought, the liar has to believe that if he intentionally says something, other people will believe what he says. In other words, the liar has to be capable of entertaining a belief about what other individuals will believe if he performs an intentionally deceitful act – e.g. telling a lie, or otherwise misdirecting someone. If the liar is not capable of this level of cognitive sophistication, then he is not a real liar, but just an animal engaging in deceptive behavior, which requires a lot less cognitive sophistication – just a capacity to believe that doing X will help it get something it wants.

  62. Elizabeth Liddle

    But vjtorley, what makes you think that “materialism cannot account for such beliefs”? Again, there is a whole empirical literature on Theory of Mind, at neuronal, developmental, and evolutionary levels (as in primate studies). Cognitive psychology and cognitive neuroscience have excellent models of such functions, just as we have excellent models of intention.

    In fact, I’d say that the big difference between the “design” exhibited by evolution and the design exhibited by things-with-brains, is that things-with-brains can intend, and we already know a lot about just how that intention is coded.

  63. bornagain77 (#53)

    Thanks very much for the links on NDEs. They were extremely interesting. I wonder if Professor Coyne has seen them!

  64. Elizabeth Liddle (#63)

    Thank you for your post. You seem to adopt a more robust account of intentions than markf. Incidentally, one problem with markf’s behavioral characterization of intentions (as dispositions to act in certain ways) is that it fails to explain intentions relating to speech. A speech utterance has propositional content; consequently, it must have an inherent meaning.

    You write that “things-with-brains can intend, and we already know a lot about just how that intention is coded.”

    I would beg to differ here. The work of the late Wilder Penfield provides direct empirical evidence to the contrary: no matter how he stimulated his patients’ brains, he was unable to make them intend to do anything. He was able to make them raise their arms, but inevitably their response was: “I didn’t do that. You did.” Evidence of this sort caused Penfield to reject his earlier belief in materialism.

    What we do know a lot about is how intentions are realized, as motor patterns. But of course, some intentions don’t relate to bodily movements at all, while others relate to bodily movements only generally, or in the distant future. For instance, I might formulate the intention to henceforth multiply numbers in my head from left to right instead of from right to left, when performing mental arithmetic (left to right is much better, by the way). Or I might formulate the intention to pray silently while meditating, instead of trying to achieve a Zen-like state of “empty mind”. (As it happens, I don’t meditate.) Or I might formulate the general intention to get up 15 minutes earlier on weekdays, or the long-term intention to complete a course of study. How are these intentions “coded” in the brain? I don’t think they are. What’s there to code? To be sure, all of these intentions have an inherent meaning – but as I argued in my post above, that’s one thing that a neural state cannot possess, in any case.

  65. vjtorley:
    @32:

    A dog doesn’t have any third-order mental states (e.g. beliefs about other individuals’ beliefs about its own intentions), and it seems to me that to be a proper liar, you have to have those.

    @62:

    Now, in order to have this thought, the liar has to believe that if he intentionally says something, other people will believe what he says.

    That 3rd-order mental intentionality must further be capable of determining credibility in the mind of the listener. The liar must be able to discern a compelling, believable lie from an unbelievable one.

    For example, a child caught by his mother with cookie crumbs on his face saying he didn’t eat any cookies, vs that same child saying he wiped his face with his brother’s napkin.

    Another point perhaps worth noting is the ability of liars to learn to deceive lie detectors. Lie detectors depend upon the mind following the physical rules and processes that trigger the telltale responses the machine measures. That the lying mind is capable of learning to override what are otherwise autonomic telltales suggests both a cause-effect direction and a distinction of the mental intention from the physical brain processes.

  66. Mung (#55)

    I haven’t read A Fine-Tuned Universe, but I’ve come across a couple of articles by Alister McGrath which mention St. Augustine, and in my opinion somewhat mis-characterize Augustine’s actual views. I’ll be writing a few posts on St. Augustine in the near future.

  67. markf @58

    Thanks for the response and the additional detail. I’ll read up a bit on compatibilism at the link you provided.

    It seems like your description, however, is still falling back to determinism. Specifically, you state: “These causes are a set of conditions and if we knew them all then it is not possible I would choose any other words given these conditions.”

    All you’ve said here is that we don’t know all the conditions, but if we knew them, then we’d realize that only one outcome is possible. In other words, there is pure determinism, but because we don’t know all the conditions it appears like some kind of free will. I don’t see how that example differs at all from pure determinism. Let’s get down to the specifics: you are saying that what you wrote @ 58 was inevitable, given the conditions that existed prior to your writing 58. That is most certainly a statement disputing free will in the matter — although I realize you didn’t choose to say it that way, it was just an inevitable result. :)

    I’m hoping perhaps you just provided a poor example, and as I said, I’ll read up a bit on compatibilism, but if that is all it has to offer, then I don’t see how it can be any more useful than pure determinism.

  68. markf @23

    You fail to see the strength of my argument. I don’t know if it is because you truly can’t understand it or if you are not willing to understand it.

    You say I am assuming free will. Yes I am. But I am showing that due to the nature of abstract symbols ( particularly characters from the alphabet ) ANY admission of choice leads to an inevitable choice between eliminative materialism and a non-materialist view of the universe.

    1. The simplest choice is a binary choice, let’s say 1 or 0.
    2. There is nothing in the characters 1 or 0, outside of the meaning that we attach to them, that makes one preferable to the other. We know this because the binary choice might as well have been A or B, or any other pair of symbols, rather than 1 or 0.
    3. Once we admit a human subject can make a choice between 0 and 1, we have to allow the human subject to make multiple choices.
    4. Experience tells us that the number of choices is arbitrary.
    5. So if I can choose 0 for the first number, I can at this moment choose that the 50th number will be 0, or that the 100th number will be 0, or that the Nth number will be 0.
    6. I can make the choice in 5 even before I say any member of the sequence.
    7. Therefore I am not constrained in the sequences I can generate.
    8. So it is not a question of some set of sequences vs. arbitrary sequences. It is a choice of ONLY ONE vs. ALL.
    9. Therefore any admission of sequenced choices, because of the exponential growth of probability space, leads to a phase space larger than the possible initial conditions that affect my choice.
    10. So ANY materialist must insist that I could only come up with ONE sequence.
    11. But this means I don’t have binary choice. I have no choice.
    12. Thus ALL materialists are eliminative materialists. They deny the ability of humans to make a single rational choice.
    13. But as Dr. Torley points out, eliminative materialists defeat themselves the minute they make an argument to try and convince someone of their position. When they argue for their position, they are denying that they believe it.

    Therefore, it is my humble opinion that all forms of materialism must reduce to eliminative materialism.

    Either we do not have free will and all points are moot, or materialism and all forms of so-called compatibilism are false. There is no middle ground allowed because of the arbitrary nature of sequences of binary choices of abstract symbols.

  69. JDH:

    “8. So it is not a question of some set of sequences vs. arbitrary sequences. It is a choice of ONLY ONE vs. ALL.

    9. Therefore any admission of sequenced choices, because of the exponential growth of probability space, leads to a phase space larger than the possible initial conditions that affect my choice.

    10. So ANY materialist must insist that I could only come up with ONE sequence.”

    I hope you explain these steps a bit better ’cause I’m not seeing the reasoning.

    I’m assuming you’re not allowing the use of coins or tables of the decimal expansion of pi as a way of selecting a sequence but I still don’t see why there would have to be an insistence on only one sequence.

    I think I’m missing something in 9. . . . . what is phase space?

  70.

    JDH at 69.

    nail/head

  71. EZ: Phase space – a key concept in statistical thermodynamics and wider dynamics; configuration and state spaces are in effect cut-down versions of it. GEM of TKI

  72. KF: Okay, I got that. How does phase space relate to selecting a sequence of zeroes and ones? Sample space in this context I would get.

    Just trying to get the argument . . . .

  73. EZ: Degrees of freedom –> dimensions in the phase space. Here, n binary digits, to n degrees, forming an n-dimensional hyperspace, with values unique to each possible sequence of 1’s and 0’s. Exponential possibilities as 2^n (each additional bit doubles the number of possibilities). Closely related to the islands of function metaphor. G
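
    By way of illustration, a minimal sketch (Python, added purely for concreteness) of the doubling:

    ```python
    # Each additional bit doubles the number of possible sequences: 2^n.
    for n in [1, 2, 10, 100]:
        print(n, 2 ** n)

    # 100 bits already give 2^100, roughly 1.27e30 distinct sequences.
    ```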

  74. KF: I know about degrees of freedom, but in statistical analysis, when applying different distributions like Student’s t. Referring to the adjustments made for sample size on the PDF (probability density function).

    n digits of either zero or one . . . . how is that to n degrees?

    Okay, a sequence of n digits, each either zero or one, is going to be selected. So, taking all possible sequences into account, that gives us a sample space, i.e. a list of all the possible outcomes, of size 2^n. That is clear. That’s not a hyperspace issue UNLESS you’re defining each digit value as a unit vector in n-space . . . and why would you do that? And we are talking about a sequence of zeroes and ones, not a collection in which the order of selection would not matter.

    Closely related to the islands of function metaphor?? Ummm . . . . we’re assigning a flat probability distribution yes? So that each sequence is equally likely? With no preferred sequence or type of sequence?

    I apologise if I’m still being dense but I don’t see how a materialist would expect/restrict an individual to only picking one sequence based on this argument.

    I suspect the point you’re trying to make is that IF our brains are just meat computers THEN we don’t really make choices at all. It’s all predetermined based on our particular particle configuration. Correct me if I’m wrong.

  75. #68 Eric

    All you’ve said here is that we don’t know all the conditions, but if we knew them, then we’d realize that only one outcome is possible. In other words, there is pure determinism, but because we don’t know all the conditions it appears like some kind of free will. I don’t see how that example differs at all from pure determinism. Let’s get down to the specifics: you are saying that what you wrote @ 58 was inevitable, given the conditions that existed prior to your writing 58. That is most certainly a statement disputing free will in the matter

    Eric – my position is not that determinism is false (although I think there is scope for random events). My position is that determinism is compatible with free will. So what I wrote can be inevitable and arise from my free will. I know it seems strange, but actually it makes sense.

  76. #69 JDH

    I am going to approach this in a simpler fashion.

    A fairly simple computer is no more limited in the range of ASCII sequences it can generate than we are. I can easily program a computer to generate ASCII sequences by using a random number to select an ASCII character at each point. Such a machine is capable of generating any of the possible sequences for any ASCII string of any finite length you specify. Of course that doesn’t mean it can generate all of them. But nor can a human.
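
    Something like this minimal sketch (Python, purely illustrative) is all I mean:

    ```python
    import random

    def random_ascii_string(length):
        """Select a printable ASCII character (codes 32 to 126)
        independently at each position."""
        return "".join(chr(random.randint(32, 126)) for _ in range(length))

    # Every one of the 95**20 strings of length 20 has a non-zero
    # chance of being produced on any given run.
    print(random_ascii_string(20))
    ```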

  77.

    “So what I wrote can be inevitable and arise from my free will. I know it seems strange but actually it makes sense.”

    :)

  78. EZ: The set of all possible values for a population is a phase space. G

  79. ellazimm: “I apologise if I’m still being dense but I don’t see how a materialist would expect/restrict an individual to only picking one sequence based on this argument.”

    Isn’t it the case that the materialist has one of the following two options?

    1. Actions are the result of matter and energy interacting only, namely, whatever neural pathways, interactions, etc. exist cause an action to take place. Free will is therefore not real, but illusory.

    2. Somehow at some level of complexity and organization, consciousness arises as an emergent property, which then is somehow (at least partly) decoupled from the underlying matter. In this scenario, choices would be real, free will would exist, but it would have arisen from materialistic processes (and in some viewpoints would still depend on the underlying matter, such that upon the death or destruction of the matter, the consciousness would cease to exist).

  80. KF: Okay, I’m not sure how your definition of phase space differs from sample space in this example but I hear you.

    Eric: I suppose, but how EXACTLY do either of those two options . . . oh, I get it now. I was going to say: how do either of those two options allow for my clear ability to come up with many different sequences of zeroes and ones of a designated length? And you would say, I think: see, that proves your consciousness is not a strictly materialistic process, which would only give one output. Is that right?

    I’m not saying I agree with that but am I getting your argument?

  81. Elizabeth Liddle

    #65 vjtorley

    You write:

    You write that “things-with-brains can intend, and we already know a lot about just how that intention is coded.”

    I would beg to differ here. The work of the late Wilder Penfield provides direct empirical evidence to the contrary: no matter how he stimulated his patients’ brains, he was unable to make them intend to do anything. He was able to make them raise their arms, but inevitably their response was: “I didn’t do that. You did.” Evidence of this sort caused Penfield to reject his earlier belief in materialism.

    But an awful lot has happened since Penfield! Great man though he was. Not only are there many cases of illusions of alien agency, there are also many accounts of induced illusions of self-agency, not least being the results of simple priming experiments, but also including technologies like TMS.

    There’s an excellent review of the current neuroscience of volition by Patrick Haggard here:

    http://www.nature.com/nrn/jour.....n2497.html

    What we do know a lot about is how intentions are realized, as motor patterns. But of course, some intentions don’t relate to bodily movements at all, while others relate to bodily movements only generally, or in the distant future.

    Absolutely. That’s why I mentioned both distal and proximal goals.

    For instance, I might formulate the intention to henceforth multiply numbers in my head from left to right instead of from right to left, when performing mental arithmetic (left to right is much better, by the way). Or I might formulate the intention to pray silently while meditating, instead of trying to achieve a Zen-like state of “empty mind”. (As it happens, I don’t meditate.) Or I might formulate the general intention to get up 15 minutes earlier on weekdays, or the long-term intention to complete a course of study. How are these intentions “coded” in the brain?

    Well, I would say they are encoded as a repertoire of weighted models of options (although that probably suggests something far simpler than I have in mind, which is a highly nested and contingent set of options).

    I don’t think they are. What’s there to code? To be sure, all of these intentions have an inherent meaning – but as I argued in my post above, that’s one thing that a neural state cannot possess, in any case.

    I think you are confusing levels here. Certainly a “neural state” cannot possess a “meaning”. “Meaning” inheres at a higher level of analysis than the state. That doesn’t mean that a temporal sequence of states doesn’t embody what we, as agents, call “meaning”. When I hear the alarm go off in the morning I know it “means” I have to get up. That meaning is not inherent in a given neural state; it’s inherent in the programs of optional action that the sound of the alarm triggers in my brain, which include highly attenuated action programs that give rise to my sense of myself as an intender to whom the sound of the alarm has meaning.

    Like markf I’m a compatibilist, but unlike markf, I don’t think the “free will” we possess depends on a bit of quantum randomness; I regard my freedom as the freedom that the thing I call “I” possesses in virtue of being a highly evolved decision-maker. I am free to choose, not just randomly (which would be a funny kind of freedom, but freedom of a sort, and I do possess that too – I can decide not to decide, but to flip a coin instead), but after taking account of the pros and cons, short and long-term. The fact that we can account (at least I don’t see why we can’t) for that account-taking in physical terms doesn’t make my freedom any less; it just incorporates me (literally) as the decision-making thing.

  82. Elizabeth Liddle

    Oh, and thanks for having me :)

  83. Well, I would say they are encoded as a repertoire of weighted models of options (although that probably suggests something far simpler than I have in mind, which is a highly nested and contingent set of options).

    But what supplies the meaning of the ‘code’? In the case of computer programs, while the code runs in a software/hardware system, what does or does not count as code is determined by us, in virtue of our minds. Just as a line of pebbles on the ground may “mean” ‘stop here’ – it’s not that a line of pebbles, even that specific line of pebbles, innately (again, under normal views) means “stop”. It’s because that’s the meaning assigned to it by a mind.

    When I hear the alarm go off in the morning I know it “means” I have to get up. That meaning is not inherent in a given neural state; it’s inherent in the programs of optional action that the sound of the alarm triggers in my brain, which include highly attenuated action programs that give rise to my sense of myself as an intender to whom the sound of the alarm has meaning.

    And again, the same problem. You talk about programs in the brain, but in the case of computers, what does and doesn’t count as a program (and what meaning those programs ‘encode’) is determined by us to begin with. That’s like saying: of course a string of 1s and 0s has no inherent meaning, but by the time you get to a textfield in ActionScript, meaning “emerges”, and now, taken together, these hardware states and this software mean “duck”. No doubt they do – in my mind. But then how does meaning arise in my mind? By virtue of my assigning the meaning to my own actions? By virtue of someone else assigning meaning to my actions? I think the problem there is obvious.

    On the other hand, if you want to turn around and say that no, there are programs in the human brain that have intrinsic meaning – “original intentionality” – alright. But then materialism is out of the question anyway.

  85.

    I long for the day when people stop taking Aristotle so seriously.

    Somebody needs to take him out forever. The dude is seriously annoying the heck out of me.

  86. I long for the day when people stop taking Aristotle so seriously.

    I think Aristotle isn’t taken very seriously at all. It’s those damn persuasive arguments he has and that others have developed, building on him.

  87. ellazimm @ 81,

    Thanks.

    I guess I don’t see how a strictly deterministic approach is of any use whatsoever. In approach #1, which I outlined, we end up with a purely deterministic position that, while perhaps interesting for a few academic minutes as we discuss angels on the head of a pin, is really useless, both because it is self-refuting and because it doesn’t give us any useful information about ourselves or anyone else we interact with. I don’t think there is much good evidence that consciousness is illusory, and plenty of evidence that it is real. Certainly we all (even the alleged reductionists) conduct our lives as though it is real.

    That said, I think approach #2, while being materialistic in origin, arguably provides a basis for consciousness/free will. Again, I don’t necessarily hold to that view, but I’m just trying to make sure I understand the options available to someone who argues for a materialistic origin of consciousness.

    In other words, in discussing these issues, I think we have to be careful (and some have not been careful) to distinguish between: (i) the idea of an ongoing materialistic cause for all action, which negates free will, and (ii) the idea of a materialistic *origin* for consciousness, which argues for a materialistic basis or underpinning for consciousness, but also views consciousness as a real phenomenon that has somehow become partially disconnected from its underlying source and can act in its own right.

  88.

    @null:

    LOL!

    Too bad his persuasive arguments only apply to problems that exist within his ridiculous worldview.

    The real problem is a Catholic church that insists on holding onto all its ancient and medieval non-Christian nonsense.

    Just you wait null. Aristotle is going down. You heard it here first.

  89. Eric: Ahhhhhh, I think (or do I?) I’ve got you now. Thanks for taking the time.

    It all seemed so easy when Descartes said: Cogito ergo sum. Now it all seems so complicated . . . .

  90. tragic,

    Too bad his persuasive arguments only apply to problems that exist within his ridiculous worldview.

    Actually, the A-T arguments highlight problems in other worldviews – problems which don’t exist in the A-T worldview.

    The real problem is a Catholic church that insists on holding onto all its ancient and medieval non-Christian nonsense.

    Materialism is as ancient as the Aristotelian view. That you don’t understand something doesn’t make it nonsense.

    Just you wait null. Aristotle is going down. You heard it here first.

    No, I didn’t. This is a centuries-, even millennia-old line – you’re not original at all, not even in what you misunderstand. But by all means, endorse materialism if you wish.

  91. #82 Elizabeth Liddle

    Like markf I’m a compatibilist, but unlike markf, I don’t think the “free will” we possess depends on a bit of quantum randomness;

    Actually I almost totally agree with you. I don’t think free will depends on quantum randomness. I just think it might include an element of such randomness in that sometimes when we make a decision the result might be truly random and not determined by our current brain state plus environment.

    I am really enjoying your comments. So nice to hear from someone with some real knowledge.

  92. Elizabeth Liddle (#82)

    Thank you for your comments. You write:

    When I hear the alarm go off in the morning I know it “means” I have to get up. That meaning is not inherent in a given neural state, it’s inherent in the programs of optional action the sound of the alarm triggers in my brain, which includes highly attenuated action programs that give rise to my sense of myself as an intender to whom the sound of the alarm has meaning.

    It seems to me that you are claiming that meaning is inherent in causal action patterns that occur reliably, and that a brain state can embody inherent meaning to the extent that it is part of a program which reliably triggers an action pattern in its human bearer.

    The problem I have with this view is that it would make our mental states only retrospectively meaningful: they would only possess meaning insofar as we could say that they were part of a program that successfully resulted in the action pattern desired and intended by the individual. Even desires and intentions would, on your view, only acquire meaning by virtue of their resulting in actions.

    However, I think that introspection would overwhelmingly contradict you on this point. Just ask anyone who has ever planned a murder or a bank robbery whether their plans had meaning only after being communicated, written down or executed. I think any criminal would say that plans per se are meaningful, whether or not they’re revealed to anyone.

    Your proposal would also obviate the distinction between first- and second-degree murder, it seems. For in both cases, someone dies. If a death is called intentional only by virtue of its being properly executed, then it seems to me that only truly accidental deaths would get off the hook, legally speaking.

    Finally, you argue that you are free even if your choices are fully determined:

    I regard my freedom as the freedom that the thing I call “I” possesses in virtue of being a highly evolved decision-maker.

    Any sentient animal could claim the same freedom. Yet we don’t jail chimps.

    While your account explains how my actions can still be said to be mine, even if they’re fully determined, it fails to explain how they can be said to be free. There’s more to freedom than just doing what you want.

  93. vj #65 & 92

    Thank you for your post. You seem to adopt a more robust account of intentions than markf.

    I think Elizabeth’s and my views of intentions are compatible – although she is more knowledgeable and therefore more specific. She sees intentions as “a repertoire of weighted models of options”. I am saying that having such a model results in a disposition to act in certain ways.

    Incidentally, one problem with markf’s behavioral characterization of intentions (as dispositions to act in certain ways) is that it fails to explain intentions relating to speech. A speech utterance has propositional content; consequently, it must have an inherent meaning.

    Not at all – speech gets its meaning from what we intend when we speak.  If I utter the words “Brutus is an honourable man” this can mean many different things depending on what I intend. It might even be irony and mean he has behaved badly.  As a keen amateur actor I spend a lot of time trying to decide on what characters mean by their lines!  To do that I analyse what they are trying to do when they speak.  This in turn is largely determined by context, including a history of what led up to that moment.


    The problem I have with this view is that it would make our mental states only retrospectively meaningful: they would only possess meaning insofar as we could say that they were part of a program that successfully resulted in the action pattern desired and intended by the individual. Even desires and intentions would, on your view, only acquire meaning by virtue of their resulting in actions.

    However, I think that introspection would overwhelmingly contradict you on this point. Just ask anyone who has ever planned a murder or a bank robbery whether their plans had meaning only after being communicated, written down or executed. I think any criminal would say that plans per se are meaningful, whether or not they’re revealed to anyone.

    This is just the problem that is avoided by recognising that intentions are dispositions.  A person or object can have a disposition to do something without having the opportunity to actually do it.  For example, a chess playing computer will have a disposition to put its opponent in checkmate (in this case we might even say it has an intention) but if it does not get the opportunity it will never actually get an opponent in checkmate.


    Any sentient animal could claim the same freedom. Yet we don’t jail chimps.

    That’s not because they lack free will.  It is because they don’t understand the consequences of their actions and they have little or no sense of acting rightly or wrongly.

    There’s more to freedom than just doing what you want.

    Well that is the issue under discussion.  Acting according to free will includes:

    * acting according to your desires (including  such complications as a balance of long and short term desires and desires to be moral)

    * acting consciously (as opposed to in your sleep or a reflex action)

    What other ingredient is there?  How do you know that you or anyone else has it?  Why does it matter if you have it or not?

  94. markf (#93)

    I must say I had no idea you were an amateur actor. Of course I very much enjoyed reading Shakespeare’s plays at school, including Julius Caesar. Unfortunately, I never was much of an actor, although my wife did some stage acting for a while, and my brother-in-law acted in a Japanese rendition of Julius Caesar at a theater in Tokyo a few years ago.

    Getting back to intentions: it seems you are saying that Brutus intends to kill Caesar if he has a disposition to perform an action (e.g. stabbing) that would normally result in Caesar’s death. Hmm. Suppose Brutus first decided to kill Caesar and then asked himself: “How? (Dagger, spear or sword?) When? Where? With whom?” Until these questions are answered, it seems that we cannot speak of a disposition to perform a particular action pattern. Yet you would surely agree that Brutus had the intention when he first decided to kill Caesar – never mind how.

    Here’s another problem. If an intention to kill is a disposition to implement an action pattern which normally results in someone’s death (e.g. stabbing), then how does Brutus know that he has the intention of killing Caesar, in advance of the act? Is it only because he (physically or mentally) goes through the motion of stabbing (i.e. practices the act, with or without a dagger) while rehearsing the assassination of Julius Caesar? Would you say that until then, he has no intention of killing Caesar?

    Regarding freedom: suppose it turns out that we are all living inside a Matrix-style simulation. Are Brutus’ actions still free?

    Or suppose it transpires that a Calvinist God predestined Brutus to act as he did. Are Brutus’ actions still free?

    I’m curious to see how you would answer these questions.

  95.
    Elizabeth Liddle

    vjtorley (@92)
    Thank you for your thoughtful response. I think there are two separable issues here: one is the meaning of meaning; the second is the nature of moral responsibility. So let me tackle your response in two parts:

    It seems to me that you are claiming that meaning is inherent in causal action patterns that occur reliably, and that a brain state can embody inherent meaning to the extent that it is part of a program which reliably triggers an action pattern in its human bearer.

    I wouldn’t say that meaning is “inherent in the causal action patterns” because I can’t actually parse that :). Again, I think we have a level-of-analysis problem. I would say that I make meaning when I interpret a signal as having implications for some future action. In common parlance something “means” something when it acts as a token for something else. So “money means power”; “dark clouds mean rain”; “the word cat means a four legged furry mammal with a tail”; “the time means I’m late”; “the fire alarm means I have to get out in a hurry if I don’t want to burn alive”. And all those things are easy to think of in neural terms, especially if you think of each sign (i.e. meaningful stimulus) as being a trigger for potential action. So meaning is simply inherent in the concept of a sign itself, which is why “to signify” is a synonym for “to mean”. So for my cat, the rattle of the tin-opener “means” food, and I observe that it “means” food for the cat, because I observe that it makes a dash for the kitchen, and looks up expectantly. In reverse, the cat “means” that it wants food when it rubs against my legs and mews pathetically. So we have two-directional communication of meaning. But there is nothing mysterious about that. My laptop can tell me that its battery is low, and I understand its meaning, and I can tell my laptop to shut down, and it understands mine.
    So it’s not either mine, or the cat’s (or even the laptop’s) neural state that “embodies inherent meaning”; it’s the interaction between me and the cat with regard to the food, or me and the laptop with regard to the battery.

    The problem I have with this view is that it would make our mental states only retrospectively meaningful: they would only possess meaning insofar as we could say that they were part of a program that successfully resulted in the action pattern desired and intended by the individual. Even desires and intentions would, on your view, only acquire meaning by virtue of their resulting in actions.

    I don’t think so, or only in the trivial sense that to make sense of some word, or event, or sign, we have to perceive it before we can interpret it – consider what it means. The reason being that actions are programmed long before they are executed, and in many cases never are – the brain operates by means of re-entrant circuits in which simulated output, as it were (e.g. a program for action that is not executed, but rises to near-execution threshold), is re-entered as input. This is the essence, I suggest, of the mechanism of intention. As sophisticated brain possessors, human beings have the ability to test possible action options before execution, and evaluate their simulated consequences against both proximal and distal goals. For example, imagine I am trying to decide between having a coffee-break and carrying on working: my brain activates the motor programs involved in going for a coffee-break, which in turn activate the sensory programs that will result from such a break, and we call this “imagining going for a coffee-break”; it also activates the motor programs involved in staying at my desk and working, and, in turn, the consequences of doing so, including the sense of satisfaction I will have if I get my project finished by lunchtime. These options duke it out in my brain, some pathways being mutually excitatory, some being mutually inhibitory, until a winning action reaches execution threshold and I act.
    I call this “deciding, on the basis of alternative outcomes, whether to have a coffee-break”, and if I do decide to have a coffee-break, that I “intended” to have a coffee-break. It’s not retrospective – nothing gets executed until the options have been considered. But nor is it very mysterious – it’s fairly easy to model a very simple version of that kind of decision making, and indeed lots of control programs work on just that sort of basis – it’s an implementation of fuzzy logic if you like.
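    Just to show how un-mysterious the principle is, here is a toy sketch of that kind of race between options (all the numbers – the drives, the inhibition strength, the noise, the threshold – are invented purely for illustration; real neural decision models are far richer):

    ```python
    # Toy sketch only: two action "programs" accumulate support and
    # inhibit each other until one crosses an execution threshold
    # (a crude winner-take-all race, not a serious neural model).
    import random

    def decide(drive_a, drive_b, threshold=1.0, inhibition=0.5, noise=0.05):
        a = b = 0.0
        while True:
            a += drive_a - inhibition * b + random.gauss(0, noise)
            b += drive_b - inhibition * a + random.gauss(0, noise)
            a, b = max(a, 0.0), max(b, 0.0)   # activation cannot go negative
            if a >= threshold:
                return "coffee break"
            if b >= threshold:
                return "keep working"

    # A slightly stronger drive biases, but does not fix, the outcome.
    print(decide(drive_a=0.12, drive_b=0.10))
    ```

    Because each option suppresses its rival, a small early lead snowballs until one program reaches threshold – which is all I mean by the options “duking it out”.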

    However, I think that introspection would overwhelmingly contradict you on this point. Just ask anyone who has ever planned a murder or a bank robbery whether their plans had meaning only after being communicated, written down or executed. I think any criminal would say that plans per se are meaningful, whether or not they’re revealed to anyone.

    Well, no. I think the concept of the “forward model” is relevant here, and it’s well established in the motor control literature. We constantly, at quite a trivial level, make a “forward model” of the consequences of actions that have not yet taken place, and then revise that model in the light of the results of that action once it has. It’s fundamental to coordinated movement, including eye-movements, which means it’s also fundamental to our data-collection processes – we notice things that are unexpected, i.e. which violate our forward model, and we even know how this works at a very precise neural level (at the level of individual neurons, even). Brains are, par excellence, predicting machines, which is why they are so efficient – they are set up to reserve processing power for the unexpected, which is obviously advantageous to survival!
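    Stripped to a cartoon, the predict-compare-revise loop might look like this (the linear “dynamics” and all the numbers are invented; nothing here is taken from the motor-control literature beyond the bare idea):

    ```python
    # Toy forward model: predict the sensory consequence of a command,
    # compare with what actually happens, and revise the model on surprise.
    def forward_model(position, command, gain):
        return position + gain * command   # hypothetical toy dynamics

    gain = 1.0
    command = 0.5
    predicted = forward_model(0.0, command, gain)
    actual = 0.65                  # what the senses later report
    error = actual - predicted     # only the surprise needs processing

    if abs(error) > 0.05:
        gain += error / command    # revise the model in light of the result
        print(f"surprised by {error:+.2f}; gain revised to {gain:.2f}")
    else:
        print("as predicted; nothing to update")
    ```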

    Your proposal would also obviate the distinction between first- and second-degree murder, it seems. For in both cases, someone dies. If a death is called intentional only by virtue of its being properly executed, then it seems to me that only truly accidental deaths would get off the hook, legally speaking.

    Well, I apologise for being unclear, but hope it is now clearer that this is not what I am saying! Forming the intention to kill someone involves a “forward model” of the consequences of the killing action. If it misfires, because your forward model was inaccurate, there’s no reason to think that lets you off the hook.
    Which lets us segue nicely into your second issue:

    Finally, you argue that you are free even if your choices are fully determined:

    I regard my freedom as the freedom that the thing I call “I” possesses in virtue of being a highly evolved decision-maker.

    Any sentient animal could claim the same freedom. Yet we don’t jail chimps.
    While your account explains how my actions can still be said to be mine, even if they’re fully determined, it fails to explain how they can be said to be free. There’s more to freedom than just doing what you want.

    Heh. OK, this is a big one, and I’m not going to be able to do it justice in a comment on a blog! But let me have a go:
    Yes, indeed, there is “more to freedom than doing what you want”, in the narrow sense of that phrase, but part of that more is the very freedom to want different things. To take my coffee-break example, I have, in fact, two competing wants. I want my cup of coffee; I also want to finish my project. One is, if you like, a proximal want, the other a more distal want, although often the battle can be between two proximal or two distal wants, or any combination of a vast range of options with varying payback timescales, and indeed, complex combinations of rewards. For example, something that might weigh on my decision to have a coffee-break might be that I know that a particular colleague is feeling a bit lonely, and that it would be a good excuse to have a chat. Or something that might weigh on my decision to keep working is the thought that if I finish early, I would have time to visit my elderly aunt on the way home, or even slip into the pub for a quick pint. All these things are wants. I would say my freedom resides in the sheer number of options, and possible outcomes that I am capable, by virtue of my sophisticated human, symbol-using brain, of putting into the melting pot before initiating an actual course of action. What makes us so much freer than other animals, and even than other primates like chimps, is what is sometimes called our “freedom from immediacy”, conferred, I suggest, largely by our extraordinary capacity for language, and the tools it provides us with for simulating distal goals, and recalling outcomes from previous actions.
    So, is that why we don’t jail chimps? Partly, I suggest – we don’t hold a chimp responsible, just as we don’t always hold a child responsible, in part because we agree that the chimp/child was not in a position to consider the full import of his/her actions. Most importantly, we agree, I think, that neither a chimp nor a small child has much “Theory of Mind capacity” – cannot easily imagine – simulate – the consequences of their action from the point of view (literally) of another being. But I also suggest, following Dennett, that moral responsibility is coterminous with the act of defining the self; as Dennett repeats throughout Freedom Evolves: “if you make yourself really small, you can externalise virtually anything”. By the same token, he argues, it is by accepting moral responsibility for our actions that we define ourselves. And this is relevant to the chimp question – we don’t jail chimps in part because we don’t accord them a full human self. With adult human beings we mostly do, which is why we sometimes jail them when they fail to accept their human moral responsibilities. Sometimes we don’t, in which case we say they are “not fully responsible”, and, by the same token, we make them a little smaller – we say they are damaged, ill, crazy, not fully in control of their own actions. In other words, we draw the boundaries of their selves rather tightly, and regard much of what their brains do as “not them”.

    I regard myself as free, even if the universe proves after all to be deterministic, not because there could be an alternative universe in which I could have done something different, but because I identify the thing I call “I” with the decision-making machinery that is my brain (together with all the things that make it what it is, including my own past decisions). In other words, I am free because I accept moral responsibility, not morally responsible because I’m free :)

  96. Hmmm, yes, nice work, Elizabeth. A plenitude of wants = freedom. Of course you realize this is the position of Thomas Aquinas and in fact the basis of the Judeo-Christian religion. “Love the Lord your God with all your heart and soul and mind” and all that. First, “God is love.” Next, the fall of man is depicted as a false and destructive choice between vanity (the desire to be “like God”) and a God-like love. The law, which is wholly based on love of God and neighbor, is offered as an opportunity to correct this colossal death-bringing blunder (“do this and you shall live”). And the cross is described as a sacrifice of love that reconciles man to God in spite of his limitations and false choices. But here is the difference: According to Thomas, what you refer to as “morality” comes about only through grace, not through felicitous brain chemistry. Right choices—life-giving choices—which, in the Bible, are choices to love sincerely and unselfishly—are against our mortal nature, which is in “bondage to the grave,” and are only made when we have been changed, through grace, and brought over into the realm of life. Grace is the difference between what you are describing and the Christian view of the human predicament.

  97.
    Elizabeth Liddle

    nullasus @84

    Well, I would say they are encoded as a repertoire of weighted models of options (although that probably suggests something far simpler than I have in mind, which is a highly nested and contingent set of options).

    But what supplies the meaning of the ‘code’? In the case of computer programs, while code takes place in a software/hardware situation, what does or does not count as code is determined by us in virtue of our minds. Just like a line of pebbles on the ground may “mean” ‘stop here’ – it’s not that a line of pebbles, even that specific line of pebbles, innately (again, under normal views) “means stop”. It’s because that’s the meaning assigned to it by a mind.

    Right. I hope I have clarified this now, a little, in my response above to vjtorley (#95). I certainly don’t think that a line of pebbles “means stop” except in the context of an interpreting brain. What I am trying to say is that when a person infers a meaning from what s/he then calls a sign, what is happening, neurally, is that a diverging cascade of possible action programs is triggered, producing simulated output which is then fed back into the system as input until an output of some kind is executed, which may be no more than a series of eye movements, but which may also be a complex utterance, which may itself remain un-uttered but leave a trace as what we call a “memory” of a “thought” that is implemented as an increased probability of that pattern being repeated. But that doesn’t mean that the coding mechanism is the meaning – the medium is not the message! As for what supplies the “code” (scare quotes intentional) – that is probably beyond the scope of this OP, and strays on to ID territory :) I would say that what supplies the code is evolution, but you may think otherwise.

    But either way, I think, we have to be very careful not to press the “code” metaphor too far – I mean it only in the sense that the creek beds in mudflats tend to “code” for their own persistence – once a creekbed is established, water tends to run down it, and deepen the creek. In neuroscience we call that “Hebb’s rule”: “what fires together, wires together”. A new feature in the estuary – a landslide, for instance – will disrupt the existing creeks, and new creeks will form. Nothing more than physical laws are required to execute the “coding”, and nothing more than physical laws are required to explain why the water flows down the creeks, rather than over the top of the flats. What makes it what we call a “code” is the feedback between water-flow and creek topography. Same with brains, except we call it “long-term potentiation” (LTP).
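    If it helps, that creek-bed sort of “coding” can be caricatured in a few lines (the unit names and the learning rate are invented; this is just the bare Hebbian rule, nothing more):

    ```python
    # Toy Hebbian strengthening: units that fire together deepen their
    # connection, making the same pattern more likely to recur (cf. LTP).
    import itertools

    weights = {}   # connection strength between pairs of units
    RATE = 0.1

    def hebb_update(active_units):
        for pair in itertools.combinations(sorted(active_units), 2):
            weights[pair] = weights.get(pair, 0.0) + RATE

    # The same "water" flowing repeatedly deepens the same "creek":
    for _ in range(5):
        hebb_update({"alarm_sound", "get_up_program"})

    print(weights)   # the ('alarm_sound', 'get_up_program') link is now ~0.5
    ```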

    When I hear the alarm go off in the morning I know it “means” I have to get up. That meaning is not inherent in a given neural state, it’s inherent in the programs of optional action the sound of the alarm triggers in my brain, which includes highly attenuated action programs that give rise to my sense of myself as an intender to whom the sound of the alarm has meaning.

    And again, the same problem. You talk about programs in the brain, but in the case of computers what does and doesn’t count as a program (and what meaning those programs ‘encode’) is determined by us to begin with. That’s like saying, “of course a string of 1s and 0s has no inherent meaning, but by the time you get to a textfield in Actionscript, meaning “emerges” and now taken together, these hardware states and this software means “duck”. No doubt they do – in my mind. But now how does meaning arise in my mind? By virtue of myself assigning the meaning to my own actions? By virtue of someone else assigning meaning to my actions? I think the problem there is obvious.

    And what is obvious ain’t necessarily so :)
    I’d say that meaning arises in your mind by means of neural mechanisms that ensure that a stimulus (say that line of pebbles) triggers a range of options for action, and those action programs (this is the tricky part) include actions (highly attenuated actions, many well below execution threshold) implicated in your perception of yourself as a meaning-making agent. Yes it may sound as though there is an obvious problem there, but that’s because (I suggest) we have an innate tendency to regard feedback loops as some kind of intellectual cop-out (“which came first, the chicken or the egg?” “who or what was the prime mover?”). But in this case, I suggest, it’s not a cop-out at all, any more than the sand ripples below the tideline on a shallow beach are a cop-out (I seem to be into shoreline metaphors today) – once a ripple gets started, it self-perpetuates, and the frequency and amplitude of the ripples will be highly stable over many years, even though the actual topography is constantly changing, and the actual sand grains constantly being moved and replaced. Over our developmental history, I suggest, we gradually come to perceive ourselves as one of the kind of creatures we seem to share a world with, who also seem to be causal, meaning-making agents, but over whose decision-making and meaning-making we have uniquely elevated control. And bingo – problem solved.

    On the other hand, if you want to turn around and say that no, there are programs in the human brain that have intrinsic meaning – “original intentionality” – alright. But then materialism is out of the question anyway.

    No, I won’t say that, because it’s not what I’m trying to say. Our motor programs have no “intrinsic meaning”. What has meaning is their relationship to their inputs. A waving arm has no “intrinsic meaning”. The waving arm I raise in response to struggling in a rip tide, however, is my meaningful response to the sensation of water up my nose, i.e. a motor action intended to elicit help. I can only hope that my intended meaning is correctly interpreted by those on the shore, and that it triggers motor programs in their brains that involve throwing me a life-belt – that it “means” I’m drowning.

  98.
    Elizabeth Liddle

    allanius @ 96

    Thanks for your post, and yes, I was a bit of a Thomist once (well, I used to attend a Dominican priory on Sundays, which was a hotbed of Thomist scholars), so I’m happy with the first part of your post.

    Then you say:

    But here is the difference: According to Thomas, what you refer to as “morality” comes about only through grace, not through felicitous brain chemistry. Right choices—life-giving choices—which, in the Bible, are choices to love sincerely and unselfishly—are against our mortal nature, which is in “bondage to the grave,” and are only made when we have been changed, through grace, and brought over into the realm of life. Grace is the difference between what you are describing and the Christian view of the human predicament.

    Well, you may be right. But as I see it, it’s partly “a difference that makes no difference” (would it matter if “grace” was reflected in “felicitous brain chemistry”?) and partly an assertion I don’t find supported. I don’t think “loving sincerely and unselfishly” is particularly “against our …nature” and I don’t find the adjective “mortal” terribly illuminating. Yes, we are mortal, but the fact that we are mortal doesn’t seem to make us particularly selfish. Indeed I’d argue that it is our very awareness of our own mortality (something shared with few, if any, other species) that allows us to entertain “wants” that go beyond our own immediate physical needs and embrace the needs of others (for example, because we “know we can’t take it with us” we are inclined to leave our possessions to cats’ homes and such). I’d agree that we have inherited (we might even choose to call it “original sin”) selfish desires, not surprisingly (or not surprising to one who accepts evolution) but I’d strongly suggest that we have also inherited (also through evolutionary processes) unselfish desires, as well as the capacity to present those desires as distal goals that may often trump our proximal wants and needs.

    In my religious days, I called this “grace” – or at least the capacity for grace, the inherited capacity to reify unselfish, often distal goals and present them as desirable alternatives to the fulfilment of selfish, often proximal, desires, a capacity enlarged by what I then called prayer. And I can see the mythic power of presenting those unselfish desires as belonging to some other kind of “life” than the one we call “mortal”, or “earthly”, or “physical”. But I can also see terrible traps, not least being a dualistic view of the mind/soul and body which is not supported by evidence, nor necessary (IMO) for a perfectly viable account of human behaviour and experience, but which also include a denigration of the physical world and our physical selves which can be profoundly destructive. Yes, I agree with you that there is a richness to lives in which the self is sublimated into a greater whole, and the emulation of such lives is a noble goal. But I see no reason to think that such lives are exclusively Christian, and much evidence that such “grace” is equally prevalent amongst non-Christians, and, indeed, non-theists. I still like the concept of “grace” – I still find it powerful. But I don’t think it’s necessary to think of it as magic-stuff. I think it is, precisely, “felicitous brain chemistry”, and it is an aspect of our freedom that it is available to all of us, whatever belief system we happen, or not, to hold.

  99. vj #94

    I must say I had no idea you were an amateur actor.

    No reason why you should – I get up to all sorts of things which I don’t put on the internet :-).  I expect the same is true of you.

    Getting back to intentions: it seems you are saying that Brutus intends to kill Caesar if he has a disposition to perform an action (e.g. stabbing) that would normally result in Caesar’s death. Hmm. Suppose Brutus first decided to kill Caesar and then asked himself: “How? (Dagger, spear or sword?) When? Where? With whom?” Until these questions are answered, it seems that we cannot speak of a disposition to perform a particular action pattern. Yet you would surely agree that Brutus had the intention when he first decided to kill Caesar – never mind how.

    I don’t see the problem. Initially Brutus has a disposition to kill Caesar – by any means; after some planning this becomes a disposition to kill him in a specific manner.

    Here’s another problem. If an intention to kill is a disposition to implement an action pattern which normally results in someone’s death (e.g. stabbing), then how does Brutus know that he has the intention of killing Caesar, in advance of the act? Is it only because he (physically or mentally) goes through the motion of stabbing (i.e. practices the act, with or without a dagger) while rehearsing the assassination of Julius Caesar? Would you say that until then, he has no intention of killing Caesar?

    This is a more interesting point. How do you know that you intend to do something? Wittgenstein would ask whether this question meant anything – but I think it can be answered. Suppose I decide to run a marathon. I might state my intention to myself and others. I might have fantasies of crossing the finish line. But if I did not train when the opportunity was there and did not fill in the application form, or did both and simply did not run – then I would have to acknowledge that I never really intended to do it. I.e. we know our intentions in much the same way as others know them – through our behaviour (with the addition of being able to do things such as imagine and speak to ourselves). And like others we can be wrong and realise we never did have that disposition.

    Regarding freedom: suppose it turns out that we are all living inside a Matrix-style simulation. Are Brutus’ actions still free?

    I have never seen The Matrix but I guess we are talking about a world where none of our perceptions or actions are real – like living in a dream. I would say in this context Brutus has free will, and an intention, to kill Caesar, but he is not free, because he is being fooled – in the same sense that a prisoner has free will but is not free. Both are being denied the opportunity to fulfil their dispositions – albeit in different ways, because in the Matrix presumably they think they have fulfilled them.

    Or suppose it transpires that a Calvinist God predestined Brutus to act as he did. Are Brutus’ actions still free?

    Yes. And most Calvinists would agree I think. 

  100.

    Elizabeth Liddle,

    I certainly don’t think that a line of pebbles “means stop” except in the context of an interpreting brain.

    And then the problem becomes: What series of actions in a brain “means interpreting”? Or is it, again, that a brain only “means interpreting” in the context of yet another interpreting brain?

    You sketch out a series of physical actions that take place when a brain – given a certain context – is reacting to a stimulus. But as far as the question of meaning and thought itself goes, the explanation doesn’t show up. You say that a “memory” or a “thought” is “implemented as an increased probability of that pattern being repeated”. Now, if you mean that memories or thoughts are nothing but particular physical states in mundane operation, then you’re either taking an eliminative stance about these things (‘there are no thoughts or intentions, there’s nothing but blind mechanical material states’), or you’re making the “material” out to be more than it was (property dualism, or any other number of options).

    Nothing more than physical laws are required to execute the “coding”, and nothing more than physical laws are required to explain why the water flows down the creeks, rather than over the top of the flats. What makes it what we call a “code” is the feedback between water-flow and creek topography. Same with brains, except we call it “long-term potentiation” (LTP).

    And another way of putting what you’re apparently saying here is “humans have as much intention, thought and experience as we take creek beds in mudflats to have – none at all”. You say that we shouldn’t push the “code” metaphor too far, but it seems to me that what would be “too far” would also happen to be the only way the metaphor would really make sense of the view you’re advocating. Otherwise it’s like saying, “Our brains encode our thoughts. Also, there’s no such thing as encoding.”

    I’d say that meaning arises in your mind by means of neural mechanisms that ensure that a stimulus (say that line of pebbles) triggers a range of options for action, and those action programs (this is the tricky part) include actions (highly attenuated actions, many well below execution threshold) implicated in your perception of yourself as a meaning-making agent. Yes it may sound as though there is an obvious problem there, but that’s because (I suggest) we have an innate tendency to regard feedback loops as some kind of intellectual cop-out (“which came first, the chicken or the egg?” “who or what was the prime mover?”).

    But sometimes the reason something sounds like an intellectual cop-out is precisely because that’s what it is. You’re saying that meaning arises because sometimes a stimulus triggers a range of options (but given determinism there’s only a range in a poetic sense since only one response is actually possible) for action programs (but there aren’t really programs, that’s just metaphor) implicated in my perception of myself as a meaning-making agent (but my perception of myself is yet another meaning which has to be explained, and ultimately ‘means’ your explanation for meaning is yet more meaning).

    Really, sometimes a problem is obvious because that’s what it is. I submit it’s the case here.

    Now, you object that it’s not a cop-out and really is an explanation, and try to illustrate that with your sand metaphor. But that metaphor is woefully inadequate, because what’s going on in that example is nothing but physical causation producing phenomena that, under a mechanistic understanding of matter, are mundane – there’s no thought, mind, or meaning to speak of there. Unless that’s what you want to say in the case of brains and go down a full-fledged eliminative route. In which case, problems with that view aside, you’re not offering an explanation for meaning anyway – you’re saying there is no meaning to explain.

    Our motor programs have no “intrinsic meaning”. What has meaning is their relationship to their inputs.

    But then I ask, is the meaning of that relationship intrinsic? Let’s look at your example.

    A waving arm has no “intrinsic meaning”. The waving arm I raise in response to struggling in a rip tide, however, is my meaningful response to the sensation of water up my nose, i.e. a motor action intended to elicit help.

    But motor actions in and of themselves have no intentions or meaning – they have them only relative to a mind. If I create a robot that plays a recording which says “Help! There’s water here!” whenever it’s in the proximity of water, insofar as there’s nothing there but stimulus and response, there’s also no meaning or intention on the part of the robot. If I press a button on an mp3 player and it plays the sound of a scream, there’s no reason to apologize to the mp3 player.

  101. Hi Markf and Elizabeth Liddle,

    Thank you very much for your thoughtful and detailed responses. Both of you appealed to forward models, which makes sense. I’ll be back in about 15 hours, but for the time being, a quick response re the drowning example: the real reason why the message gets across is because the spectators are able to perform a mental simulation and ask themselves, “If I were out there in the sea, why would I be waving so frantically like that? Aha! That person must be drowning!”

    On the waver’s part, the intentionality, I submit, derives not from previewing the action of waving in one’s head, but from a pre-existing need to communicate a proposition which one already knows (“I am drowning”). The drowning person then performs forward models of various actions which might convey this fact (“Jump out of the water like a dolphin? No, that won’t get the message across. Wave vigorously? Yes, that’ll do it!”) Thus the intention is logically prior to the choice of motor sequence in this case.

    Still, I liked the example very much.

    Markf, I’m quite surprised to find an atheist who has no problem with the injustice of predestination. Interesting.

  102.
    Elizabeth Liddle

    Nullasus @ #100

    Thanks for your response! OK:

    I certainly don’t think that a line of pebbles “means stop” except in the context of an interpreting brain.

    And then the problem becomes: What series of actions in a brain “means interpreting”? Or is it, again, that a brain only “means interpreting” in the context of yet another interpreting brain?

    Well, I thought I’d clarified that, but maybe not! I’ll have another go, but essentially, what I mean is that it is, IMO, only sensible to talk about meaning, where there is an interpreter, so, as you rightly note, that shifts the problem on to how interpretation occurs within a brain. However, I don’t think that’s an insoluble problem, as I tried to say.

    You sketch out a series of physical actions that take place when a brain – given a certain context – is reacting to a stimulus. But as far as the question of meaning and thought itself goes, the explanation doesn’t show up. You say that a “memory” or a “thought” is “implemented as an increased probability of that pattern being repeated”. Now, if you mean that memories or thoughts are nothing but particular physical states in mundane operation, then you’re either taking an eliminative stance about these things (‘there are no thoughts or intentions, there’s nothing but blind mechanical material states’), or you’re making the “material” out to be more than it was (property dualism, or any other number of options).

    No, I’m not saying “there are no thoughts or intentions” – just because I think they can be accounted for by observable mechanisms, doesn’t mean I think they don’t exist! Yes, the “material state” is “blind” but, as I tried to make clear, that doesn’t mean that the person is blind, because the person exists at a higher level of analysis than a given “material state”. “I” am not coterminous with my state at the instant I typed the letter “I”, any more than the light from the light bulb above my desk is coterminous with the specific photon that just arrived on my keyboard. What I call “I” is a whole decision-making shebang, and my material state at any given time point is simply a snapshot of that decision-maker in action. And the name I give to the kind of decision-making in which I weigh up various options and decide, with malice (or otherwise) aforethought, on a specific course of action, is “intending”. And I experience it as “deciding”. But what happens in my brain when I do that deciding, I would contend, is that a series of “blind mechanical material states” chunter through a series of operations the final output of which is “my” decision. And I call it mine because I consider myself incorporated (again I use the word absolutely literally) in that neural machinery.

    Nothing more than physical laws are required to execute the “coding”, and nothing more than physical laws are required to explain why the water flows down the creeks, rather than over the top of the flats. What makes it what we call a “code” is the feedback between water-flow and creek topography. Same with brains, except we call it “long-term potentiation” (LTP).

    And another way of putting what you’re apparently saying here is “humans have as much intention, thought and experience as we take creek beds in mudflats to have – none at all”.

    No, I’m not saying that. I’m simply using a creekbed as an illustration of very simple natural feedback coding. I don’t think any intention is encoded in a creekbed for the very simple reason that for a creekbed, there is no simulation involved. It is what it is. Whereas a brain is able to simulate output from a potential course of action, and re-enter that output as input. Creeks can’t do this, so we don’t regard them as intentional agents (except poetically). It’s our capacity to simulate, i.e. to re-enter simulated output from potential actions as input that endows us with the capacity to intend – to foresee the consequences of various courses of action and choose between them. But the feedback process, at the level of protein expression, isn’t, I’d argue, any less mechanical than what happens in a creek. The difference lies in the architecture of feedback networks themselves.

    You say that we shouldn’t push the “code” metaphor too far, but it seems to me that what would be “too far” would also happen to be the only way the metaphor would really make sense of the view you’re advocating. Otherwise it’s like saying, “Our brains encode our thoughts. Also, there’s no such thing as encoding.”

    OK, let’s try to do this without the c word at all. If metaphors become a problem we need to drop them. To go no lower than the neuron (though lots of extremely interesting things happen within the neuron, but we can work above that level for now): neurons essentially sum inputs over time to produce outputs. So if lots of positive inputs come in over a short time period, the neuron will fire. If lots of negative inputs come in, it will be inhibited, and become less likely to fire. So we can think of a neuron as a logic gate. But because it synapses on to other neurons which in turn contribute to its own inputs, we have potential feedback loops, and ongoing oscillations. We also have billions of neurons, and the number of synapses is orders of magnitude greater than that. And when a sensory stimulus arrives (light on the retina, for instance), it triggers a whole cascade of neural firing patterns that resonate through the whole brain, potentiating that pathway so that, in future, any given pair of neurons is more likely to fire together.
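    A leaky integrate-and-fire caricature makes the summing concrete (the threshold, leak and input values are invented for illustration):

    ```python
    # Toy neuron: inputs are summed over time on a leaky membrane;
    # crossing the threshold produces a spike and a reset.
    def run_neuron(inputs, threshold=1.0, leak=0.9):
        potential, spikes = 0.0, []
        for t, i in enumerate(inputs):
            potential = potential * leak + i   # decay, then integrate
            if potential >= threshold:
                spikes.append(t)
                potential = 0.0                # reset after firing
        return spikes

    print(run_neuron([0.4, 0.5, 0.6]))    # [2]: summed excitation crosses threshold
    print(run_neuron([0.4, -0.5, 0.6]))   # []: inhibition keeps it subthreshold
    ```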

    I used to be a musician in an earlier life, and I have a lovely old viola da gamba which has been played for over 300 years, and I like to think that everyone who ever played it left their mark on it in the form of folding patterns in the sound board that make that folding pattern – that sound – more likely to be triggered by subsequent players. That’s probably a better metaphor than my muddy creek, come to think of it. I could say that those sounds have been “coded” into the sound board, but I won’t :) But back to the brain – if the light pattern on my retina turns out to “signify”, for example, a fork in the road, then that cascade of neural firing will also include the preparation of my muscles for turning right, and for turning left. The winning program (the one that reaches execution threshold by means of excitatory connections from the most other brain regions in the cascade) will be my decision, which I will refer to as my intended action.

    I’d say that meaning arises in your mind by means of neural mechanisms that ensure that a stimulus (say that line of pebbles) triggers a range of options for action, and those action programs (this is the tricky part) include actions (highly attenuated actions, many well below execution threshold) implicated in your perception of yourself as a meaning-making agent. Yes it may sound as though there is an obvious problem there, but that’s because (I suggest) we have an innate tendency to regard feedback loops as some kind of intellectual cop-out (“which came first, the chicken or the egg?” “who or what was the prime mover?”).

    But sometimes the reason something sounds like an intellectual cop-out is precisely because that’s what it is. You’re saying that meaning arises because sometimes a stimulus triggers a range of options (but given determinism there’s only a range in a poetic sense since only one response is actually possible) for action programs (but there aren’t really programs, that’s just metaphor) implicated in my perception of myself as a meaning-making agent (but my perception of myself is yet another meaning which has to be explained, and ultimately ‘means’ your explanation for meaning is yet more meaning).

    No, there’s a range in an absolutely real and measurable sense, and by “programs” I also mean something perfectly real and measurable. For example, we can physically measure the degree to which a neuron responsible for triggering a movement (for example a neuron implicated in a saccadic eye movement) responds to a stimulus in a given location in the visual field, and the degree to which a second stimulus, in a different location, stimulates a competing neuron, and how that competition between the two is resolved. So yes, there are two real options, and the outcome depends on various factors, and is the result of what I am calling a “motor program” – an electrical signal that actually results in a physical eye movement. The fact that in a deterministic universe (not that the universe seems, at present, to be deterministic, but let us assume for now that it is) only one outcome is possible for the actual scenario we are considering, is, I think, irrelevant – the whole concept of “choice” implies a decision is not a reflex, but is the result of considered options. What I am doing is trying to outline the process by which those options are considered – the manner in which the decision process is weighted by factors other than a simple imperative stimulus.

    Really, sometimes a problem is obvious because that’s what it is. I submit it’s the case here.

    Well, I disagree :)

    Now, you object that it’s not a cop-out and really is an explanation, and try to illustrate that with your sand metaphor. But that metaphor is woefully inadequate, because what’s going on in that example is nothing but physical causation producing phenomena that, under a mechanistic understanding of matter, are mundane – there’s no thought, mind, or meaning to speak of there. Unless that’s what you want to say in the case of brains and go down a full-fledged eliminative route. In which case, problems with that view aside, you’re not offering an explanation for meaning anyway – you’re saying there is no meaning to explain.

    But it seems to me that your argument is somewhat circular. You seem to be saying that because my explanation is materialistic it must eliminate the phenomenon it seeks to explain, which is non-material. I disagree, and I think the problem, as I keep saying (apologies for repetition) is one of level-of-analysis. To take yet another marine metaphor: an ocean wave is a phenomenon that consists of a pattern of movement of air and water. But an ocean wave is not made of either air or water, and it can be travelling in a quite different direction to both. The wave is actually a property not of either the air or the water, but of the interface. But, if I attempt to give a “materialistic” account of the wave in terms of movement of air or water molecules, you might turn round and say I have “eliminated” the wave. No I haven’t – it’s that the wave exists at a different level of analysis from the water and air molecules. Same with thought and intentions. I can explain them (I would contend) in terms of neurons and ions and action potentials and networks and protein expression. That doesn’t mean I’ve eliminated thoughts and intentions, it’s just that I’ve accounted for them at a level below that at which it is normally most useful to consider them (just as it is far more useful to describe a wave in terms of frequency and amplitude and direction than in terms of the trajectories of air and water molecules). In everyday language we speak of thoughts and intentions, just as we talk of breakers and tsunamis. That doesn’t mean that we can’t account for them at a molecular level, nor does it mean that by doing so we are eliminating the phenomenon.

    Our motor programs have no “intrinsic meaning”. What has meaning is their relationship to their inputs.

    But then I ask, is the meaning of that relationship intrinsic? Let’s look at your example.
    A waving arm has no “intrinsic meaning”. The waving arm I raise in response to struggling in a rip tide, however, is my meaningful response to the sensation of water up my nose, i.e. a motor action intended to elicit help.
    But motor actions in and of themselves have no intentions or meaning – they have them only relative to a mind. If I create a robot that plays a recording which says “Help! There’s water here!” whenever it’s in the proximity of water, insofar as there’s nothing there but stimulus and response, there’s also no meaning or intention on the part of the robot. If I press a button on an mp3 player and it plays the sound of a scream, there’s no reason to apologize to the mp3 player.

    Sure, because you know, and I know, that the mp3 player is not screaming because it needs help. So, as receivers, we know it doesn’t “mean” help, and as sender, we know that the mp3 player isn’t screaming “help” as its response to being doused in water. In the case of the robot, however, its scream does mean that “there is water here”, so as receivers we can regard the robot’s announcement as “meaning” precisely what the words say. However we do not regard the robot, as a sender, as having an “intention”, because we have no reason to think it weighed up the options and decided that really, the best outcome would be if we mopped the floor. Although we might one day, and it’s of interest that, apparently, robots that collectively develop their own language with which to communicate with each other have just been devised:
    http://www.bbc.co.uk/news/technology-13510988

  103.
    Elizabeth Liddle

    @ vjtorley: yes, I like your exposition of the waving/drowning scenario – it was exactly what I was trying to convey!

    You conveyed it better, however :)

  104.

    Elizabeth Liddle,

    No, I’m not saying “there are no thoughts or intentions” – just because I think they can be accounted for by observable mechanisms, doesn’t mean I think they don’t exist! Yes, the “material state” is “blind” but, as I tried to make clear, that doesn’t mean that the person is blind, because the person exists at a higher level of analysis than a given “material state”.

    The only possible way to make sense of a “higher level of analysis” in this context would be either A) as a useful fiction (and if it’s a fiction, it’s not going to be explanatory), B) in terms of weak emergence (in which case intention and meaning are ‘nothing but’ operation by that which is devoid of meaning and intention, and thus ultimately eliminative), or C) strong emergence (in which case appeals to the material constituents will not be explanatory, even if they are in some sense required – there’s something above and beyond those constituents in play that they themselves don’t explain, or our understanding of said constituents is incomplete, and there’s more to the physical than materialism and mechanism supposed.)

    If there’s another option, you’re going to have to outline it – “higher level of analysis” in and of itself isn’t very helpful. :)

    And the name I give to the kind of decision-making in which I weigh up various options and decide, with malice (or otherwise) aforethought, on a specific course of action, is “intending”. And I experience it as “deciding”. But what happens in my brain when I do that deciding, I would contend, is that a series of “blind mechanical material states” chunter through a series of operations the final output of which is “my” decision. And I call it mine because I consider myself incorporated (again I use the word absolutely literally) in that neural machinery.

    As I said above, this either results in your explanation of meaning and/or intention as a useful fiction (and ultimately non-explanatory), weakly emergent (and thus eliminative), or strongly emergent (and thus the material isn’t what we thought it was after all.)

    Let me put this another way: let’s say a person claims that they have a dog who can play chess. I ask to see the evidence, and they show me a dog who, when faced with a chess board, will pick up some of the pieces in his mouth and gnaw at them. It does no good for that person to tell me, “See, I call what the dog is doing ‘playing chess’.”

    Likewise, if you commit yourself to the view that all that exists is a material world, blindly and deterministically churning out results without thought or intention, it does little good to point at one or another particular bit of churning and say “I’m going to call this ‘decision-making’!” The matter is making decisions the way a dog plays chess. :)

    It’s our capacity to simulate, i.e. to re-enter simulated output from potential actions as input, that endows us with the capacity to intend – to foresee the consequences of various courses of action and choose between them.

    But what counts as a simulation only does so relative to a mind to begin with. If I arrange a few sticks and stones on the ground to represent or simulate the layout of (say) a camp, they ‘simulate’ or ‘represent’ only in virtue of what meaning I or another person assign to the sticks, stones and their arrangement. And explaining “the meaning in the sticks and stones” this way, what I’ve done is assert that the sticks and stones are devoid of meaning – the meaning is in my mind. If I then say that the mind is nothing but the brain… etc.

    OK, let’s try to do this without the c word at all. If metaphors become a problem we need to drop them.

    So we can think of a neuron as a logic gate.

    Maybe it’s not possible to get rid of these metaphors, eh?

    Regardless, you gave a good material rundown of what goes on in a brain. But if that description is offered up as total – as in, total for the brain, and therefore total for the mind – then please note that there is no intention or meaning anywhere in your description. Yes, yes, I know – higher level of analysis. As I mentioned above, eventually just what the ‘higher level of analysis’ means has to be cashed out, and it can only be cashed out in constituents you’re willing to let into your metaphysics.

    I mean, it’s not as if eliminative materialists are a non-existent breed of thinker. They do exist, and one can come to conclusions or make assertions that place one in that camp. I don’t think this can be evaded by simply changing metaphorical language (‘Well, so long as I call this a decision and the EM doesn’t, I’m not an EM even if we mean the same exact thing.’)

    What I am doing is trying to outline the process by which those options are considered – the manner in which the decision process is weighted by factors other than a simple imperative stimulus.

    But there is no “decision process”, nor is anything being “considered”. At least not given your view of matter, and I think that’s by your own admission. Take the example of a brain “simulating” this or that. Is the relevant portion of the brain or the process objectively, intrinsically ‘about’ what it is ‘simulating’? Well, if so, then we’re no longer dealing with a material world as traditionally conceived. Is it not? Then the brain is only ‘about’ something else, and is only ‘simulating’, in virtue of a third-person view.

    You seem to be saying that because my explanation is materialistic that it must eliminate the phenomenon it seeks to explain, which is non-material. I disagree, and I think the problem, as I keep saying (apologies for repetition) is one of level-of-analysis.

    And I’ve repeated myself here as well. More than that, it seems to me like your reply can basically be summed up as “just because X is really (this description) doesn’t mean I’m not allowed to use a metaphor or a useful fiction!” To which I’ll respond, sure you can – but I can also point out just what is ‘really’ meant, and must be ‘really’ meant, once we push away poetic language, metaphor, and fiction.

    The example of the wave doesn’t work, because there’s no need to dispute that some physical thing X is ultimately constituted by a number of smaller physical things Y. Put another way, the fact that a bowling ball really is just a conglomeration of smaller material things (though whether it’s even right to call them ‘material’ any more, given quantum physics, is an open question) poses no problem here, precisely because a “bowling ball” as a useful fiction, or as only ‘really’ existing relative to a mind, isn’t terribly controversial to most people. Just as the same knife can be ‘a piece of cutlery’, ‘an antique’, ‘a weapon’, etc relative to a mind, though most everyone would agree that the knife is just a collection of atoms, etc, in this or that arrangement, which we call various things in different contexts and as shorthand.

    Put another way: If someone tells me that they saw a ghost, and if my investigation indicates that what they saw was a white sheet attached to a string, did I just provide an explanation for ghosts (‘Ghosts are sheets attached to strings!’)? Or is it more apt to say ‘You didn’t see a ghost at all’?

    Eliminative positions do exist. And we should call them that when they are embraced. :)

    Sure, because you know, and I know, that the mp3 player is not screaming because it needs help. So, as receivers, we know it doesn’t “mean” help, and as sender, we know that the mp3 player isn’t screaming “help” as its response to being doused in water. In the case of the robot, however, its scream does mean that “there’s water here”, so as receivers we can regard the robot’s announcement as “meaning” precisely what the words say.

    Sure we can – because we assign the meaning to those words, and to the robot itself. The robot is not trying to communicate anything to us, and the “meaning” of the scream only exists relative to us and our minds. No, the robot is not really ‘asking for help’. No, the meaning is not “precisely what the words say”. Really, you could eliminate the words “There’s water here” from the robot’s cry, and “Help” would “mean” “there’s water here” – if you decided to assign it that meaning.

    Strip away the metaphors, the useful fictions and the poetic language when talking about intention and meaning (and even consciousness and experience) in a materialist world and there’s just not much left.

  104. ellazimm @ 89,

    Thanks. I’d be interested to know your thoughts about my comments in 87. Specifically, does my understanding of the options available to the materialist approach capture the essence, or is there something fundamental I’ve left out?

    Thanks,

  105. Eric: Oh man . . . . I always get lost in discussions like this, way over my head really.

    I’d say I mostly agree with your scenario 2 (“Somehow at some level of complexity and organization, consciousness arises as an emergent property, which then is somehow (at least partly) decoupled from the underlying matter. yadda, yadda, yadda”) except I’m not sure about the uncoupling.

    Hey, I don’t want to admit that my mind is merely a product of my neurons firing but I have yet to see any evidence that convinces me otherwise. I don’t want to die and disappear but . . . .

    A few years ago I had an operation and was under general anaesthetic which was one of the creepiest experiences of my life. It was a complete blank. Nothing. No passage of time, no sensations. Just a big discontinuity. And I keep thinking . . . if there is any part of me that exists outside of my body then why is that time (a few hours) just gone? It’s like the clocks jumped. Really. And the best explanation I can find is that my brain was turned off, mostly.

  106.
    Elizabeth Liddle

    @ nullasalus:

    Thanks for your detailed and thoughtful response. I’m a bit busy today, but I’ll try to get to it this evening.

    Cheers

    Lizzie

  107. Elizabeth,
    I think you have explained very well how the brain works in all the more intelligent animals. But on both the metaphysical and the phenomenological level you seem to be dismissing everything that is uniquely human about how intentionality enters into human experience. Abstract / symbolic thought simply does not exist in the same type of physical relationship with the brain as sense experience does. Husserl makes a strong case for a motivational causality grounded in intentionality that is quite unlike any form of real or physical causality. I have posted a paper, linked to my name, that I think you and several others here might be interested in.

  108. #104 nullasalus

    Let me put this another way: let’s say a person claims that they have a dog who can play chess. I ask to see the evidence, and they show me a dog who, when faced with a chess board, will pick up some of the pieces in his mouth and gnaw at them. It does no good for that person to tell me, “See, I call what the dog is doing ‘playing chess’.”

    I am sure Elizabeth will do an excellent job of responding to most of your comment – but I would like to pick up on the question of definitions.

    It is a common objection to materialism in the philosophy of mind that the materialist has changed the definition of “decide” or “intention” or “meaning” or whatever.

    All I am aware of is

    (1) The mutually observable external world

    (2) Reconstructions of that observable world in my own head (memories etc)

    (3) Actions I take that are either external bodily actions (including audible speech) or internal reconstructions of such actions (imagining myself doing them, speaking to myself)

    (4) Experts such as Elizabeth make me aware of possible brain organisations and structure that would account for (2) and (3).

    To take one example, I think a decision to do X is a brain state which results in a propensity to do X (it is subtler than that – because that brain state arises through a conscious process – but I will leave the hard problem of consciousness for another day).

    Possibly you think I am lying or deluding myself about this. It doesn’t matter. It is sufficient to imagine that a creature exists who has the above characteristics (the perfect Turing machine, if you like).

    As far as I am concerned mental constructs such as decisions are very useful ways of talking about 1, 2 and 3.  Just as a wave is a very useful, indeed almost essential, way of talking about water molecules in a certain context.

    Suppose you say that a mental activity such as a decision is an immaterial something else and that materialists have escaped the issue by redefining “decision”. So we are talking about different things. So let’s invent different words for the different things. Let us call this immaterial something else a DECISION (in upper case) to differentiate it from a material decision (in lower case), which is a propensity to act. Now there are a couple of problems about DECISIONs:

    (a) How do you know that what you are referring to when you refer to a “DECISION” is the same as what anyone else (including another dualist) is referring to? All that you and another person can mutually observe is the external world. You say there is something else going on in your head when you DECIDE, but how do you know it is the same thing as what is going on in other people’s heads?

    (b) If a DECISION is different from a decision then it is at least logically possible to have one without the other. So it would be logically possible to DECIDE to do something and not do it, although there is nothing stopping you. But someone who says they have decided to do X and then does not do X, when there is nothing to prevent or dissuade them, is either lying, has changed their mind, or does not understand what “decide” means. It is irrelevant what other activities went on in their head. That is what “decide” normally means. So DECIDE seems to be the word that does not fit in with normal usage.

  109. EZ: Why not start from here on? GEM of TKI

  110. ellazimm @ 106

    Thanks. The uncoupling is necessary in #2 to distinguish it from #1. In other words, either all actions/choices are simply and only the result of material processes (#1) or they involve some form of consciousness/free will. The only way the consciousness/free will can exist (in the materialist view) is if it somehow originally arose from material processes, but then took on a “life of its own” so to speak. I think if most materialists think about it carefully, they’d find (as you did) that they are more in the #2 camp than #1 (because #1 is self-refuting and useless as a practical life view, although some folks like to argue for it, I think many times just to be stubborn).

  111. Eric: You’re welcome but I’m only reacting out of my own head. I have no training, no expertise, no claim to know any more than the next guy.

    I will say that, even though I am much more in the materialist camp I kind of sort of assume I have free will. I can’t justify it within my non-theistic paradigm but it feels like that’s the way things work. I won’t pretend to be able to defend that view . . . I think I make a difference in the world and that I have a choice in what that difference is and that’s good enough for me.

    KF: I’m really sorry I didn’t acknowledge your link earlier, I just noticed it!! I’ll have a good look later (took a quick glance now) as I’m just starting my family time of the evening. BBQ chicken soaked in sweet and sour sauce I think . . . and a big bowl of salad. :-)

  112.

    EZ: Hey, I don’t want to admit that my mind is merely a product of my neurons firing but I have yet to see any evidence that convinces me otherwise.

    By any account, your neurons firing thrust you to the very pinnacle of Life on earth, and there they give you the unique capacity for a distinct form of symbolic representation.

    Yet you did not invent this form of symbolic representation, it existed as the basis of Life long before you arrived.

  113. Upright: Cool. When do I get to be rich and famous? :-)

    The whole free will debate has, frankly, baffled me. I suppose it’s just me being stupid and not getting the point but . . . I just can’t get worked up about it. But, as I said, on a daily, minute to minute basis I act in a way that implies free will. And I think we all do. Really.

    Whether that is true or not I leave to others.

  114.

    Ella,

    You couldn’t have missed my point more completely, even with twice the comedic relief. But at least you got your own point: you are correct, people who believe in free will are labeled “stupid” on a regular basis.

    Next time you say that you’ve never come into contact with any information that would cause you to believe in the authenticity of the mind, perhaps you could stop and think it through…

    Cheers :)

  115. Upright: I’ll do my best, in my stupidity, to be open minded.

    But DON’T hold your breath. Every day, I find new ways to make mistakes.

  116.

    “Every day, I find new ways to make mistakes.”

    Yes, it’s the thing that makes us most common.

    later

  117. :-)

  118. Hi markf and Elizabeth Liddle,

    Thank you both for your very thoughtful (!) posts. The care with which you composed them illustrates the very point that I wanted to convey, which is that meaning – and here I’m talking about inherent meaning – is, in paradigm cases, propositional. The meaning I want to convey when I wave my hands frantically while I’m in the sea is “I’m drowning” and I select the motor pattern precisely because I think it’s an excellent way to get other people to understand the proposition I wanted to communicate. They do so because they are beings like myself who are capable of putting themselves in my shoes and inferring that the only good explanation for the frantic, insistent waving they observe is an urgent need to communicate the single proposition: “I’m drowning.”

    Thus in the above example it is the proposition that does all the work of explaining the meaningfulness of the action sequence I engage in. The fact that I can preview it in my head is all well and fine, but that does not make it propositionally meaningful. In the absence of propositional language, previewing an action in one’s head might make it useful or practical at best.

    In these posts, however, we are engaging in a discussion which has no practical value whatsoever – unlike the drowning case. All of us are perfectly capable of meeting our practical wants. Our discussion pertains to the meaning of what it is to have a thought. A more theoretical discussion would be difficult to imagine. Any meaning, at this level of communication, is inherently propositional.

    Now here’s my point: bodily movements are not inherently propositional. It takes a good deal of careful selection to come up with a body movement that conveys a proposition per se, and even when it does, it’s a very simple one at that (“I’m drowning.”) In the vast majority of cases, when we communicate, the person communicating doesn’t mean what they mean simply because they’ve previewed these movements. Rather, the meaning logically precedes the movements. Bodily movements, even previewed ones, are simply incapable of accounting for the meaningfulness of the vast range of propositions we are capable of entertaining.

    And if propositional meaning does not inhere in bodily movements, then we have to look beyond them to find meaning.

  119.
    Elizabeth Liddle

    Nullasalus @#104

    No, I’m not saying “there are no thoughts or intentions” – just because I think they can be accounted for by observable mechanisms doesn’t mean I think they don’t exist! Yes, the “material state” is “blind” but, as I tried to make clear, that doesn’t mean that the person is blind, because the person exists at a higher level of analysis than a given “material state”.

    The only possible way to make sense of a “higher level of analysis” in this context would be either A) as a useful fiction (and if it’s a fiction, it’s not going to be explanatory), B) in terms of weak emergence (in which case intention and meaning is ‘nothing but’ operation by that which is devoid of meaning and intention, and thus ultimately eliminative), or C) strong emergence (in which case appeals to the material constituents will not be explanatory, even if they are in some sense required – there’s something above and beyond those constituents in play that they themselves don’t explain, or our understanding of said constituents is incomplete, and there’s more to the physical than materialism and mechanism supposed.)
    If there’s another option, you’re going to have to outline it – “higher level of analysis” in and of itself isn’t very helpful.

    And the name I give to the kind of decision-making in which I weigh up various options and decide, with malice (or otherwise) aforethought, on a specific course of action, is “intending”. And I experience it as “deciding”. But what happens in my brain when I do that deciding, I would contend, is that a series of “blind mechanical material states” chunter through a series of operations, the final output of which is “my” decision. And I call it mine because I consider myself incorporated (again I use the word absolutely literally) in that neural machinery.

    As I said above, this either results in your explanation of meaning and/or intention as a useful fiction (and ultimately non-explanatory), weakly emergent (and thus eliminative), or strongly emergent (and thus the material isn’t what we thought it was after all.)
    Let me put this another way: let’s say a person claims that they have a dog who can play chess. I ask to see the evidence, and they show me a dog who, when faced with a chess board, will pick up some of the pieces in his mouth and gnaw at them. It does no good for that person to tell me, “See, I call what the dog is doing ‘playing chess’.”
    Likewise, if you commit yourself to the view that all that exists is a material world, blindly and deterministically churning out results without thought or intention, it does little good to point at one or another particular bit of churning and say “I’m going to call this ‘decision-making’!” The matter is making decisions the way a dog plays chess.

    I think we are in danger of running in non-overlapping circles here. If “decision-making” or “intention” in your view must involve a non-mechanistic component, then clearly, any attempt I make to describe either in terms of mechanisms isn’t going to satisfy you. But, equally, if I attempt to account for decision-making and intention in mechanistic terms, then clearly I am defining decision-making and intention in terms that can be explained mechanistically!

    So let’s agree some ground rules: I’m regarding “decision-making” as a process by which something (typically an animal, but conceivably, if I’m right, an artefact – even a designed artefact – hey, if ID is true, wouldn’t you expect people to be able to design decision-making and intentional artefacts? But I digress…) has several options for action (things it is capable of doing) and is able to execute the one that is most appropriate given the context. For example, a bit of energy-saving software that is able to select the optimum setting for a thermostat given the amount of solar energy it is getting, the number of occupants of the house, and the time of day. Or something. I’m not talking about intention here, just decision-making. Well, I’m sure the smart engineers around here could design such a “decision-maker” fairly easily (I could probably even have a decent shot at programming it myself). That, IMO, isn’t comparable to a dog chewing the chess-pieces and calling it a dog-playing-chess; it’s more comparable to a computer playing me at chess and winning (not actually hard for a computer to do). In fact, a chess-playing computer program is probably a good example of a mechanical decision-maker.
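    To make that concrete, here is a minimal Python sketch of such an energy-saving “decision-maker” – the candidate settings, inputs and scoring rule are all invented for illustration:

        # A mechanical "decision-maker": several options for action, and a rule
        # for executing the one most appropriate to the context. All numbers
        # here are invented for illustration.
        def choose_setpoint(solar_gain_kw, occupants, hour):
            candidates = [16.0, 18.0, 20.0, 22.0]  # possible settings, deg C

            def score(setpoint):
                comfort = occupants * (setpoint - 16.0)       # occupants want warmth
                night_penalty = setpoint if (hour < 6 or hour > 22) else 0.0
                energy_cost = max(0.0, setpoint - 15.0 - 2.0 * solar_gain_kw)
                return comfort - night_penalty - energy_cost

            # "Decide": execute the option that scores best in this context.
            return max(candidates, key=score)

        print(choose_setpoint(solar_gain_kw=1.2, occupants=3, hour=14))  # 22.0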

    So I hope we can agree that decision-making, in terms that we would recognise as quite humanoid, can be at least replicated by a mechanistic algorithm. And we also know the kinds of algorithms our networks of neurons use to make decisions, and we can even emulate them.

    So what about intention? It occurs to me that a possible confound here is the issue of consciousness. I am describing intention simply in terms of a decision-making process that selects actions so as to maximise the chance of achieving a pre-defined goal. In that sense, my chess-playing computer is an intentional system – its goal is to check-mate the human player, and it selects its moves in such a way that at any given point in the game, its chances of achieving that goal are maximised, while of course constantly having to update its plans (I used the word advisedly) in light of the human player’s moves. So it plans, and replans, by making a constantly updated forward model (simulation) of likely outcomes of its next move, and feeding each output back in as input to the move-selection process.
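    The plan-and-replan loop, stripped to its bones, looks something like this – a toy Python sketch, where the “game” of driving a number towards a goal is invented purely to show the feedback structure:

        # Planning by forward model: simulate each candidate action, feed the
        # simulated outcome back in as input to the selector, execute the best.
        # The toy "game" (drive a number towards a goal) is invented for illustration.
        GOAL = 10

        def simulate(state, action):
            return state + action        # forward model: predicted next state

        def value(state):
            return -abs(GOAL - state)    # closer to the goal is better

        def plan(state, actions=(-1, 1, 2)):
            # Re-enter each simulated output as input to the evaluation step.
            return max(actions, key=lambda a: value(simulate(state, a)))

        state = 0
        while state != GOAL:
            state = simulate(state, plan(state))  # act, observe, replan
            print(state)                          # 2, 4, 6, 8, 10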
    But here I anticipate (hey, I just made a forward model!) that we will have a problem – you will not be happy to call this “intention”, because the computer program, you will insist (and rightly, IMO), is not “conscious”.

    And I will accuse you of moving the goal posts :)

    Well, no I won’t, because having made my forward model I can deal with it in advance. But I will say that the real problem here is not in accounting for either decision-making, or even intention as in goal-directed decision making but in accounting for consciousness.

    Amirite?

    My account of intention does not satisfy you because I have not even mentioned the issue of consciousness. And that’s a fair cop. But I think it’s worth identifying what the cop actually is :)

    The example of the wave doesn’t work, because there’s no need to dispute that some physical thing X is ultimately constituted by a number of smaller physical things Y. Put another way, the fact that a bowling ball really is just a conglomeration of smaller material things (though whether it’s even right to call them ‘material’ any more, given quantum physics, is an open question) poses no problem here, precisely because a “bowling ball” as a useful fiction, or as only ‘really’ existing relative to a mind, isn’t terribly controversial to most people. Just as the same knife can be ‘a piece of cutlery’, ‘an antique’, ‘a weapon’, etc relative to a mind, though most everyone would agree that the knife is just a collection of atoms, etc, in this or that arrangement, which we call various things in different contexts and as shorthand.

    Well, my point about the wave is that it isn’t “ultimately constituted by smaller physical things”. The wave isn’t a property of those smaller things at all – it’s a property of the interface (it’s why I chose an ocean wave, rather than a sound wave for instance). The water can be travelling West, the air North East, and the wave travelling South. It is simply not possible to capture the behaviour of the wave in terms of the subunits of material either side of the interface, because the wave’s behaviour is a property of the interface, not of either material.

    OK, we are back in metaphorland, so let’s return to neurons: neurons fire discretely, but populations of neurons oscillate, and those oscillations can only be accounted for in terms of the population dynamics. They exist only at a higher level of analysis than the individual neuron. And yet, as with the ocean wave, which would not exist without the water or the air, but is composed of neither and is a property of neither, oscillations in neural populations wouldn’t exist if the individual neurons didn’t fire. And so on upwards. Networks of neural populations also oscillate, and do so in a chaotic (in the technical sense) manner: they can “flip” from state to state, depending on the inputs from the contributing populations, and those network oscillations are again only accountable for at the network level. And it is these chaotically oscillating networks of neurons that finally determine the executed action, which in turn will bring in new data (an eye movement, for instance, results in a whole new cascade of neural firing as new patterns are cast on the retina), so not only is a brain a decision-maker (in my sense), its decisions include decisions to acquire new data relevant to the current goal (which makes it a lot cleverer than the chess program).
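    To make the population point concrete: a standard two-population (excitatory/inhibitory) rate model of the Wilson–Cowan type can be sketched in a few lines of Python. The parameters below are chosen only for illustration, and any rhythm that appears belongs to the coupled populations, not to any single unit:

        # Two coupled population rate equations (Wilson-Cowan style).
        # Whether the pair settles or oscillates depends on the coupling
        # parameters, which are illustrative only - the point is that the
        # dynamics live at the population level, not in any single unit.
        import math

        def sigmoid(x):
            return 1.0 / (1.0 + math.exp(-x))

        def step(E, I, drive, dt=0.01):
            dE = -E + sigmoid(12.0 * E - 10.0 * I + drive)
            dI = -I + sigmoid(10.0 * E - 2.0 * I)
            return E + dt * dE, I + dt * dI

        E, I = 0.1, 0.1
        trace = []
        for _ in range(5000):
            E, I = step(E, I, drive=1.0)
            trace.append(E)
        # Inspect `trace` for rhythmic rises and falls in the E population's rate.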

    Strip away the metaphors, the useful fictions and the poetic language when talking about intention and meaning (and even consciousness and experience) in a materialist world and there’s just not much left.

    Well, that’s really my point. Take apart a clock, and there’s not much left. That’s because a clock isn’t an itemised list of parts, it’s an assembly – it exists at a higher level of analysis than the parts – it’s an emergent property of its parts, if you like. Sure, if you insist that the material world is no more than the hadrons and leptons of which it is comprised (or whatever the current fundamental particles are these days) then “there’s not much left”. But no materialist I know insists on that, and I certainly don’t. That’s because we also have neutrons and atoms and molecules and cells and organs and organisms and brains and selves. And what makes them more than their parts is the pattern they make, over both time and space – how they relate to one another, just as the ocean wave is the pattern of the interface between two elements (in the Greek sense), not a property of either of the two elements separately. To say an ocean wave is made of water and air is not to eliminate the wave, nor to reduce it to “merely” water and air. It is simply to state a fact that misses most of what the wave is.

    Yes, I think that brains are extraordinary assemblages of neurons, made of molecules and atoms and neutrons and quarks and leptons. But the “extraordinary assemblage” is in that list – to leave it out would make me an eliminativist, and I’m not :)
    I’m interested in that extraordinary assemblage, and how, inter alia, it produces what we call consciousness. But that might have to wait for another OP :)

  120.

    Elizabeth Liddle,

    But, equally, if I attempt to account for decision-making and intention in mechanistic terms, then clearly I am defining decision-making and intention in terms that can be explained mechanistically!

    Another way of putting that is this: “If I assume a mechanistic materialistic point of view, and I stipulate that any definition of meaning, intention or decision must be consistent with this view (even if consistency means ‘non-existent’ in any common sense meaning of the terms), then – lo and behold – I can account for these things within my perspective.”

    Sure. And so long as you let me redefine “playing chess” to mean “something a dog is capable of doing”, it may well be easy to produce a dog playing chess.

    I’m regarding “decision-making” as a process by which something (typically an animal, but conceivably, if I’m right, an artefact – even a designed artefact – hey, if ID is true, wouldn’t you expect people to be able to design decision-making and intentional artefacts? But I digress…) has several options for action (things it is capable of doing) and is able to execute the one that is most appropriate given the context.

    So you’re assuming the truth of your metaphysics to begin with, and then stipulating that viable explanations must conform to those metaphysics? Alright. Why not go the whole nine yards and define decision-making as any mechanical output that was dependent on some initial input? Bowling balls decide to tumble down stairs when pushed.

    Not to mention, what standard is used for ‘most appropriate given the context’? The only way to determine who or what is appropriate given context, on your view, is to have a third party consider it and judge what they’re seeing. And that they are considering and judging what they are seeing would itself require more third party judgment, etc. Unless, of course, that aboutness and directionality is intrinsic, but then…

    In fact, a chess-playing computer program is probably a good example of a mechanical decision-maker.

    And once again: computers only play chess by virtue of human convention. I could keep the entire program intact while interpreting the rules and outputs differently, and it would be equally true that the computer is playing whatever game I’ve now decided on. This is pretty common in computer games, in fact.

    I have some programming experience – I can code up a program where various outcomes are possible depending on which variables are in what state at a given time. The computer/software isn’t ‘deciding’ which outcome to produce, any more than bowling balls decide to roll down stairs when kicked. Or rather, as much of a decision process is present in one as in the other.
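    Something like this minimal sketch, say – the variables and labels are invented for illustration:

        # Outcomes vary with the state of some variables, yet nothing here
        # "decides" in any common-sense way. Names are invented for illustration.
        def outcome(temperature, door_open):
            if door_open and temperature < 5.0:
                return "close the door"
            if temperature > 30.0:
                return "open a window"
            return "do nothing"

        # Same mechanism, different labels: relabel the outputs and the program
        # is equally "about" something else entirely.
        print(outcome(temperature=2.0, door_open=True))  # close the door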

    But here I anticipate (hey, I just made a forward model!) that we will have a problem – you will not be happy to call this “intention”, because the computer program, you will insist (and rightly, IMO), is not “conscious”.

    And I will accuse you of moving the goal posts

    Moving the goalposts? Now, you’re telling me “I’m going to stipulate that my metaphysics are true for this conversation. Therefore, I’m going to define decision-making in the only way it can be true given that assumption, even if it does violence to the words “decision-making” in any common sense use of the terms. Then I’m going to point at this or that, and say – with all those assumptions intact – that that is decision making. Object, and I’m going to say you’re moving the goalposts.”

    Yeah, color me unimpressed. And I’m sure you’d be unimpressed if I mirrored this move, applying my metaphysics in advance of yours, and changed definitions around as quickly, then announced that disagreeing was a move of the goalposts.

    Nice try, though. :)

    Well, no I won’t, because having made my forward model I can deal with it in advance. But I will say that the real problem here is not in accounting for either decision-making, or even intention as in goal-directed decision making but in accounting for consciousness.

    Amirite?

    Nah, you’re not. Qualia are a distinct question from intentionality or “aboutness”. And more an issue for yourself than myself, since you’re back to the computer metaphors, and are vacillating between treating decision-making and intentions as ‘nicknames for blind, mechanical causation’ and ‘nicknames for what you judge computers and programs to do, ignoring the fact that computers “play chess” the same way arrangements of sticks and rocks “act as maps” – that is, in virtue of an interpreting mind.’ ‘Aboutness’, intentionality or proto-intentionality does not require consciousness on a number of alternate metaphysics (Aristotelianism / Thomism being one). Now, conceivably someone can argue the qualia/consciousness line from a different perspective, but I’m not doing that here.

    Well, my point about the wave is that it isn’t “ultimately constituted by smaller physical things”. The wave isn’t a property of those smaller things at all – it’s a property of the interface (it’s why I chose an ocean wave, rather than a sound wave for instance). The water can be travelling West, the air North East, and the wave travelling South.

    And like I said, this still doesn’t capture the difference between the cases. Let me put it another way: aboutness, decision-making, etc, as common sense knows them, are specifically left out of a mechanistic materialist understanding of reality – and, when incorporated into that understanding, end up becoming ‘decisions’ by vast redefinitions, such that you can arguably say rocks decide to roll down stairs when kicked. It’s obfuscation, and in the end it’s either admitted that there is intentionality and aboutness in nature after all (and in a sense that is at odds with that materialist, mechanist understanding of nature), or that there isn’t (and therefore there is no actual mechanistic, materialist ‘accounting for’ these things – they are simply eliminated.)

    But no materialist I know insists on that, and I certainly don’t. That’s because we also have neutrons and atoms and molecules and cells and organs and organisms and brains and selves.

    Then you need to meet more materialists. This is a little like being told ‘There are no materialists who deny that beliefs exist’, ignoring Alex Rosenberg and the Churchlands, etc. Or ‘there are no materialists who deny moral realism’, or ‘there are no mereological nihilists’. I mean, they’re inconvenient for the position, but they’re out there.

    And what makes them more than their parts is the pattern they make, over both time and space

    Are those patterns intrinsic, or extrinsic? Are there real “patterns” in nature, independent of any mind evaluating them? Or is a pattern just an impression a mind applies to its representation of the world? This applies to that ‘extraordinary assemblage’ as well.

  121. And what makes them more than their parts is the pattern they make, over both time and space

    Ah, and that’s where the intentional magic “emerges.” When you can’t explain something, go ahead and jack the scale up to awesome heights and then bluff.

    It works in poker!

  122. vj #119

    I am not clear what you mean by “propositional”. But I agree our discussion relies on the conventional use of words and it has no immediate practical value.
    I would argue that both these attributes are dependent on a more fundamental use of language for meaning. Remember Wittgenstein and his language games.

    Words can only acquire their conventional meaning through frequent use to do something. The first users of language did not have a dictionary to turn to. The first dictionaries were descriptive, not prescriptive. And that meaning must have arisen because the speakers were trying to achieve an effect on the listener. They can’t just have aired their views on philosophy of mind! Among the effects they will have wanted to achieve would be to get the listener to believe things, e.g. that there is good hunting over the hill (maybe this is what you mean by “propositional”?). But both the belief and the intention to get someone to believe can be explained in materialist terms – specifically, a configuration of the brain that results in a disposition to behave in a certain way. However, it takes a very sophisticated use of language to develop the dispositions that correspond to beliefs in dualism etc, based on tens of thousands of years of language development.

  123. nullasalus

    You appear to mean something different by “decision” from Elizabeth. She has given a rather detailed definition. Why don’t you define what you mean?

    I think you will struggle, and I will try to explain why through an analogy. Imagine that someone held a dualist view of chess. They believed that the concepts of chess existed in some immaterial way (maybe you do?). Elizabeth defines “checkmate” in terms of the rules of chess and what particular configurations of pieces might achieve. You respond by saying that this is not real checkmate, that checkmate cannot be reduced to mere wooden pieces on a board, and that she is just assuming her metaphysics by defining it that way. The trouble is there is no way to discuss the immaterial checkmate. It is just a mysterious something extra.
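    For what it’s worth, the purely rule-based notion of checkmate can even be written down – a minimal sketch using the third-party python-chess library (assuming it is installed), with the moves being Fool’s Mate:

        # "Checkmate" cashed out purely in terms of rules and piece configurations,
        # using the third-party python-chess library (pip install python-chess).
        import chess

        board = chess.Board()
        for san in ("f3", "e5", "g4", "Qh4#"):  # Fool's Mate
            board.push_san(san)

        # The predicate is defined over the configuration and the rules alone.
        print(board.is_checkmate())  # True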

  124.

    markf,

    You appear to mean something different by “decision” from Elizabeth. She has given a rather detailed definition. Why don’t you define what you mean?

    Her definition of “decision” is, essentially, raw material input/output devoid of meaning or intention except as a useful fiction by a third party. As I said, bowling balls apparently “decide” to roll down stairs when kicked.

    I regard intentionality and meaning as real and irreducible. It’s not particularly ‘mysterious’ to anyone except a materialist. In the same way, an idealist would find matter downright mysterious.

    Imagine that someone held a dualist view of chess.

    Why? All that counts as a chess board, chess pieces, the rules of chess, etc is that which our minds project to begin with. Really, we went over this already – even with chess as an example.

    The analogy fails, because chess involves artifacts we project or token via our minds to begin with. Explaining our minds as brains the same way leads to some obvious problems.

    But if you want an analogy, here’s one that’s more apt: imagine that someone has a metaphysics that rejects the existence of, say… radiation. One day, they encounter what we’d call irradiated things. So, they start to say ‘that’s not really radiation, it’s just…’ and give explanations which tend to be vague and make heavy use of metaphor. But whenever a concrete explanation is pressed for, either they deny the obvious (in this case, radiation – it’s all a big misunderstanding, an illusion, an artifact of how unguided evolution has wired our minds) or they admit to it in some esoteric way (‘that’s not radiation, that’s gravity. A special kind of gravity. A kind of gravity that really, really looks like radiation, right down to banal description.’)

    They can go on that way if they want. But the reasonable thing to do would be to say ‘alright – I suppose radiation exists after all, and I’ll have to abandon my previous position.’

    It is just a mysterious something extra.

    On materialist metaphysics, mind is downright mysterious if it’s even taken to exist – that much I can agree on.

  125. nullasalus

    I note that for you decisions and intentions are irreducible and therefore presumably not definable or describable.  

    There is an important difference between radiation and mental constructs like these.   The various types of radiation are the best current explanation for mutually observable effects such as burning and X-rays.   That is our evidence for it.  It is an explanation because, although radiation is not directly observable,  we can describe certain hypothetical properties of radiation which are in the mutually observable world such as wavelength, particle size and energy and work out how they lead to the effects.

    Materialists have a (very incomplete) explanation for the behaviour we can all observe when people consider alternatives and act. That is the evidence for the various brain structures Elizabeth describes. You propose an alternative explanation, but it is irreducible, so it has no describable properties and is only experienced internally – it is not mutually observable and it has no properties that are mutually observable. The only reason for supposing something else in addition to a materialist explanation for a decision is your own experience of this indescribable something else, which you assume everyone else has.

    So effectively your argument comes down to ”They do exist (although I can’t tell you anything about them). If you offer another explanation for the behaviour we see  you must be talking about something else”.  And I guess you assume Elizabeth and I have similar experiences of decision making and are either lying or deluding ourselves.

    But that raises some interesting problems:

    Wouldn’t it be just as logical to deduce that Elizabeth and I are in some way deficient and don’t have the experience you have when making a decision? 

    If that were true what difference would it make to your life or ours?

    How do you know that your own irreducible experience of decision making is the same as anyone else’s? i.e. that anyone is talking about the same thing as you?

  126.

    markf,

    The only reason for supposing something else in addition to a materialist explanation for a decision is your own experience of this indescribable something else which you assume everyone else has.

    Indescribable? Funny, people seemed plenty able to understand what everyone else meant by ‘making decisions’ or other mental claims far in advance of materialism. As for assuming that everyone else has these – yes, I’m not a solipsist.

    Wouldn’t it be just as logical to deduce that Elizabeth and I are in some way deficient

    I’m trying to be polite, Mark. ;)

    How do you know that your own irreducible experience of decision making is the same as anyone else’s? i.e. that anyone is talking about the same thing as you?

    So, your gambit is solipsism and radical skepticism.

    In the words of Cave Johnson: We’re done here.

  127. #127 nullasalus

    I agree we are done here.

    I don’t think my position can be accurately described as either scepticism or solipsism. I am not doubting that the external world, other people, or other minds exist. I just disagree about the nature of all our minds. You are utterly convinced there is something else, but as you cannot describe it (if you can – go ahead and do so), it becomes very hard to discuss it.

  128. Markf,
    The description of the mind you ask for is that it has two main powers – intellect and will. The intellect understands and the will decides. If you want more details read Aquinas or Husserl. What you seem to really want however, is analysis and not description. (See my comment #2) We can analyze how the brain works and that is wonderful. But as to the mind it is simple – a unitary whole that disappears as soon as you try to take it apart. You can complain about that all you want but it will not change anything.

  129. #129 Lamont

    I was asking for a description of a decision or an intention – not mind in general. Importantly, this description needs to be in terms of properties that are mutually observable – otherwise it is a description of one unobservable in terms of another, which is just playing with words (different beetles in boxes, to stretch Wittgenstein’s metaphor).

  130.
    Elizabeth Liddle

    Hi, vjtorley
    Thanks again for this conversation, which I am very much enjoying :)

    In response to this (#101):

    Hi markf and Elizabeth Liddle,

    Thank you both for your very thoughtful (!) posts. The care with which you composed them illustrates the very point that I wanted to convey, which is that meaning – and here I’m talking about inherent meaning – is, in paradigm cases, propositional. The meaning I want to convey when I wave my hands frantically while I’m in the sea is “I’m drowning” and I select the motor pattern precisely because I think it’s an excellent way to get other people to understand the proposition I wanted to communicate. They do so because they are beings like myself who are capable of putting themselves in my shoes and inferring that the only good explanation for the frantic, insistent waving they observe is an urgent need to communicate the single proposition: “I’m drowning.”
    Thus in the above example it is the proposition that does all the work of explaining the meaningfulness of the action sequence I engage in. The fact that I can preview it in my head is all well and fine, but that does not make it propositionally meaningful. In the absence of propositional language, previewing an action in one’s head might make it useful or practical at best.
    In these posts, however, we are engaging in a discussion which has no practical value whatsoever – unlike the drowning case. All of us are perfectly capable of meeting our practical wants. Our discussion pertains to the meaning of what it is to have a thought. A more theoretical discussion would be difficult to imagine. Any meaning, at this level of communication, is inherently propositional.
    Now here’s my point: bodily movements are not inherently propositional. It takes a good deal of careful selection to come up with a body movement that conveys a proposition per se, and even when it does, it’s a very simple one at that (“I’m drowning.”) In the vast majority of cases, when we communicate, the person communicating doesn’t mean what they mean simply because they’ve previewed these movements. Rather, the meaning logically precedes the movements. Bodily movements, even previewed ones, are simply incapable of accounting for the meaningfulness of the vast range of propositions we are capable of entertaining.
    And if propositional meaning does not inhere in bodily movements, then we have to look beyond them to find meaning.

    I think I am in fairly fundamental disagreement with you here, but it may be difficult for me to explain why, although I’ll do my best. But I do actually dispute your premise – I think propositional meaning does “inhere in bodily movements”, or, at least, I think it is created by nested and re-entrant programs for bodily movements, where those bodily movements are not necessarily executed, and where they include subvocalizations and eye movements.

    However, I think we may be getting partly at cross purposes over the way we are thinking about “meaning”. I’m using the word fairly literally, or, at least, in a fairly literal common usage sense, as I said above, as in “black clouds mean rain” or “a rising temperature means the thermostat will cut out”. In other words I’m taking, as a kind of base unit (metaphor warning), meaning as contingency. So, for a very simple organism, a chemical signal emitted by a food source may “mean” “swim towards the signal source”, and is implemented by a simple stimulus-response circuit. For a more complex organism, the circuit may have many more contingencies, and only be executed if a whole series of logic gates (not a metaphor in this instance) sum to action threshold, and inputs to those gates will include signals indicating the energy reserves of the organism (is it hungry?), the learned probability that the signal in question is from prey not a predator (learning by means of weights on the neural connections affecting the summation), the probability that a bigger food source will yield more calories for less energy expenditure, etc. And as we nest these contingencies, and incorporate feedback loops (swim towards the signal, resample the signal; make a new forward model in light of the new data, etc) we start to get the beginnings, I would say, of propositional logic, all implemented via signals that ultimately result, if executed, in bodily movements.

    In fact, this, I suggest, is how we can account for “qualia”, although I am aware that that is opening a large can of worms. But I suggest that the “qualia” of a peach, for instance – the peachiness of a peach (and yes, I’ve chosen a fairly complex stimulus) – inheres in programs for a series of bodily movements and forward models, including models for reaching for a peach, grasping it, activating the neurons that would respond to its fuzziness, the model of raising it to your nose, activating the neurons that would respond to its smell, the model of taking a bite, activating the neurons that would respond to its taste, etc. And when we decide between, say, a peach and a banana, we are activating, I suggest, a propositional logic system that decides, on the basis of a kind of truth table, whether the motor program that will result in the ingestion of the banana or the motor program that will result in the ingestion of the peach will best match an intended end-state, which in turn is the result of a whole other series of forward models.

    At the level of the brain user, as it were (no, I’m not letting a homunculus in there; the user herself is a model generated by the brain, which doesn’t make her not real – the only access to reality we have, I suggest, is models), especially if that brain user is a language user, the decision will be expressed, linguistically, probably, as “hmmm… I think I fancy a peach”. But even that linguistic thought, whether uttered or not, will emerge from the activation of motor programs involved in the vocalisation of those words (whether at execution level or not).
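    At its very simplest, the summing-to-threshold step might be sketched in Python like this – the inputs and weights are invented numbers, and learning would consist in adjusting the weights:

        # Weighted inputs summing to an action threshold. All numbers are
        # invented for illustration; learning would adjust the weights.
        def act(inputs, weights, threshold=1.0):
            total = sum(weights[name] * value for name, value in inputs.items())
            return total >= threshold  # execute the motor program, or don't

        inputs = {"food_signal": 0.9, "hunger": 0.8, "predator_risk": 0.3}
        weights = {"food_signal": 1.0, "hunger": 0.7, "predator_risk": -2.0}
        print(act(inputs, weights))  # False: the risk term holds it below threshold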

    Does that make any sense?

    To nullasalus: I’ve read your posts with interest, and I’ll try to get a response to you at the weekend, if the thread is still going! Not sure how long threads live for at UD. Anyway, thanks for your responses.

  131. Elizabeth Liddle (#131)

    Thank you very much for your post. Regarding your example of “Black clouds mean rain,” I suggest you have a look at my earlier post (#31), where I discussed this very example in addressing Grice’s theory of natural meaning. I concluded:

    “Natural meaning” is, it seems, a derived rather than a primitive usage of the term “meaning”: it assumes the existence of a community of observers who possess a stock of shared scientific knowledge.

    I don’t think qualia constitute a very good argument for the immateriality of the mind, so I have no objection in principle to your example of how an animal might decide on eating a peach instead of a banana. Where I would object is when you make a leap of logic to claim that in a language-using animal, this decision would be magically “propositionalized” as: “Hmmm… I think I fancy a peach.” For what is at stake here is precisely how language, which is inherently propositional, arises in the first place. To assume that it is simply there is to beg the question. You mention “subvocalizations and eye movements.” The latter are not propositional. Incidentally, I think eye movements could be very useful as a diagnostic tool for deciding whether an animal is conscious or not, although I’m not sure whether the eye movements of language-using animals like ourselves would be different from those of chimps. I suspect that they would – although I would add that this is a product of the fact that they think propositionally, not a cause. (The notion of getting from darting eyes to fully-fledged propositions sounds like a comical endeavor to me!)

    As for subvocalizations, they are already propositional, in human beings. The question is: how did they get that way?

    There is an ocean of difference between the language-using animal who thinks “Mmm. I had a banana last time but it was very ho-hum, and this isn’t the best season for them anyway. On top of that, I want to prepare a dish that will pleasantly surprise the palates of my guests, who will be having dinner with me. Mmm. I think deep dish peach pie should do the trick. I think they’ll like that one, although I’d better ask Sally first as I know she’s allergic to some foods” and the animal whose brain automatically weighs up the pros and cons of eating a peach versus eating a banana, and selects the former. The first animal means something by his/her decision about what to eat; the second does not, although I’d be prepared to say that its movements have a (natural) meaning.

    In short, I believe that the attempt to generate linguistic meaning from an organism’s behavior (including forward models) is doomed to failure.

    Must run now. Will be back later.

  132. There are two things that can be called a decision. One is the kind of thing that Elizabeth described, where the brain processes the available data and then produces some type of behavior. The other is when one understands the gravity of the situation, considers the possibilities, and then chooses to act based on either the passion one feels or the reasons one understands. The choice is therefore either rational and moral, or irrational and immoral. The point is that in both cases it is an act of the will.

    So what is an act of the will? It is when the mind/soul decides either to follow the path that the brain has cranked out, or to follow the path that one understands is the right thing to do.

    Now if it is just a matter of the brain making a difficult decision, where does the conflict between mind and brain come from? Is it just that one part of the brain is in conflict with another part? If that were the case the evidence should be obvious and undeniable. But if such evidence is out there I have never seen it.

  133. vj and Elizabeth

    vj has a point when distinguishing meaning through mere contingency, such as “black clouds mean rain”, from meaning arising through intentions (which is often associated with language). This is Grice’s distinction between natural and non-natural meaning, and it is quite fundamental. It is also true that only minds, possibly only human minds, can give something non-natural meaning – because it is related to intentions and only minds can have intentions. But this isn’t a problem for materialists if they can show that intentions are material (which we can, by recognising them to be dispositions).

    There is more of a problem with “propositional meaning”. vj (and Feser) wants to give this some kind of fundamental status and then show that it only applies to mental things. But there is a strong (and I believe correct) philosophical tradition which regards propositional meaning as a construct derived from intentions, and not inherent as vj would have it. We are confused by our conventional use of words and forget the fundamentals of communication. In a very potted version it goes something like this:

    * The propositional meaning of “You are early.” is the set of conditions under which that sentence is true i.e. the person being addressed arriving before the set time.

    * We know those conditions because of the conventional role of the words in the sentence which we can look up in a dictionary.

    * But those conventions must have been established at some time. The words (or their predecessors) must have been uttered without the conventions (and without a dictionary) – so the role they were meant to play had to be learned from use. People must have communicated without the benefit of convention.

    * So there must be something else more fundamental that establishes the truth conditions for an utterance. Indeed, the truth conditions of a real utterance often do not coincide with the conventional use of the words (e.g. a sarcastic “You are early” addressed to a latecomer).

    * How do we establish the truth conditions?  Through the intention of the utterer.  It is the utterer’s intentions that establish the truth conditions.  In this case the intention to get the latecomer to realise they have arrived after the agreed time.

    * So propositional meaning also comes back to intentions.  In this case the intention to get someone to believe something.

    One objection is that to believe something implies propositional meaning – the truth conditions of the belief. But the belief that X can be explained in materialist terms: it is the disposition to behave as though X. Even quite a simple animal such as a pigeon can believe it is about to be fed.

  134. Hi markf (#134)

    Thank you for your post. Your attempt to ground propositional meaning in intentions only succeeds in cases where the intention can be expressed non-propositionally. This is rarely the case, when we utter propositions: typically it applies to those simple cases where what is sought is merely a change in an individual’s behavior. In these cases, language may not even be required. A boss who wants a tardy employee to change his ways may say “You are early” sarcastically, or she may just tap the dial of her watch and give the employee a black look.

But if I am trying to change your beliefs, as Sean Carroll attempted to do recently when he wrote his blog post in “Scientific American” on “Physics and the Immortality of the Soul” (see http://www.scientificamerican......2011-05-23 , and see here for a follow-up discussion: http://www.physicsforums.com/s.....p?t=501665 ), then my intention has to be cashed out propositionally. In Sean Carroll’s case, there was a proposition P that he wanted to communicate, discuss, and get his readers to deny: namely, that there is a non-material spirit that drives around the particles in our brains. Here the intention is linguistic: it presupposes a proposition P that needs to be understood and shared between sender and receiver.

    You argue that beliefs can be cashed out as dispositions to behave. A pigeon’s belief that it is about to be fed can indeed be cashed out like that, but pigeons do not engage in critical thinking and debating, as rational animals like ourselves do. Most of our beliefs are incapable of being cashed out in behavioral terms, and any given behavior may be compatible with a multitude of different beliefs and intentions. (Think of standing bare-headed in the rain, and Samuel Johnson’s penance.)

    I might add that your claim that the belief that X can be explained as the disposition to behave as though X fails to achieve the desired reduction, in any case: it still leaves the variable X unexplained. And in the vast majority of cases, as I argued above, “behaving as though X” has no clear content.

    Something like your account of meaning would be fine if we all lived in a Fred Flintstone-like community of people who talked to each other using one-word utterances, like “Slab!” (to borrow an example from Wittgenstein’s Philosophical Investigations). In simple cases, the meaning of an utterance does equate to its use, which in turn equates to its function. But if we lived our lives like that, we wouldn’t need language at all – and we wouldn’t need propositions, either. (Vervet monkeys seem to do well without them.)

    Lastly, one need not look far for examples which contradict the “meaning = use = function” theory. Any science textbook will do.

  135. vj

The key to this is whether beliefs can be “cashed out” as dispositions to behave in certain ways (and by behaviour I include internal rehearsals and mimicry of external behaviour, such as seeing an imagined series of events or running words in one’s head). You clearly do not think that something like believing in the theory of relativity can be accounted for this way. Before going any further I would like to identify what types of beliefs (if any) you think can be accounted for this way.

You seem to accept that the pigeon’s belief that it is about to be fed can be seen as a behavioural disposition. So presumably a person’s belief that he is about to be fed can also be cashed out behaviourally. How about these beliefs:

    That tigers are dangerous
    That it will rain soon
    That it will rain tomorrow

    I am trying to pin down why some beliefs require a “proposition” whereas others don’t.

136. Elizabeth Liddle

    nullasalus @ 121

Sorry this has taken me a while to get to. OK, here is a crux in our discussion:

I have some programming experience – I can code up a program where various outcomes are possible depending on which variables are in what state at a given time. The computer/software isn’t ‘deciding’ which outcome to produce, any more than bowling balls decide to roll down stairs when kicked. Or rather, as much of a decision process is present in one as in the other.

I would say that your computer program is deciding, because that’s what I normally consider deciding to be – weighing up contextual factors and opting for the action that best matches a goal. So we need to figure out what it is that you think Real Deciding is :) Because as far as I am concerned, that’s exactly how brains produce a decision, and although the logic is fuzzier, we can easily implement fuzzy logic on a computer (I’m sure you can, and so can I). So the extra factor that you think my Deciding (let’s call it LizzieDeciding, because I haven’t figured out how to do subscripts in html) lacks, and Real Deciding has, isn’t fuzziness, right?
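
    Just to make that concrete, here is a minimal sketch in Python of what I mean by LizzieDeciding. The scenario, the contextual factors and the scoring weights are all invented for illustration; the only point is the shape of the process – score each candidate action against a goal, given the context, and opt for the best match.

        # A minimal sketch of "LizzieDeciding": weigh up contextual factors
        # and opt for the action that best matches a goal. All names and
        # numbers here are made up for illustration.

        def lizzie_decide(actions, context, goal_match):
            # Return the action whose predicted outcome best serves the goal,
            # where goal_match scores an action in a context (fuzzily, 0..1).
            return max(actions, key=lambda action: goal_match(action, context))

        # Hypothetical usage: deciding whether to take an umbrella.
        context = {"chance_of_rain": 0.8, "bag_already_full": 0.3}

        def stay_dry_score(action, ctx):
            # Fuzzy scoring: each contextual factor nudges the score a little.
            if action == "take umbrella":
                return 0.9 * ctx["chance_of_rain"] - 0.2 * ctx["bag_already_full"]
            return 1.0 - ctx["chance_of_rain"]

        print(lizzie_decide(["take umbrella", "leave umbrella"],
                            context, stay_dry_score))  # -> take umbrella

    Fuzzier logic just means fuzzier scores; the weighing-and-opting step is the same.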

So what am I failing to account for? I rather assumed consciousness, but when I suggested that, saying:

… that the real problem here is not in accounting for either decision-making, or even intention (as in goal-directed decision-making), but in accounting for consciousness.
    Amirite?

    You said:

    Nah, you’re not.

    So I’m not yet clear what the difference is between LizzieDeciding (i.e. an action selected as the one most likely to achieve a goal given contextual inputs) and Real Deciding. So let’s take your next point:

Qualia is a distinct question from intentionality or “aboutness”. And more an issue for yourself than myself, since you’re back to the computer metaphors, and are vacillating between treating decision-making and intentions as ‘nicknames for blind, mechanical causation’ and ‘nicknames for what you judge computers and programs to do’, ignoring the fact that computers “play chess” the same way arrangements of sticks and rocks “act as maps” – that is, in virtue of an interpreting mind.

No, I’m not vacillating, or, at least, I’m only vacillating if you are making the very distinction that I am not making – and, indeed, am claiming is not a distinction!
I am indeed “treating decision-making and intentions” as events that are mechanistically accountable, but not – by definition not – blind. I am, as I said above, defining deciding, at least LizzieDeciding, as a process by which an action is selected as the one most likely to achieve a given goal in a given context. Which can’t be “blind” because of that goal – it is that matching-to-goal part of the process that I am defining as “intention” (let’s call it LizzieIntention, for now). So they aren’t “nicknames” at all – that’s what I think they are. So let’s figure out what, according to you, they aren’t. You say that I am vacillating between that and “what [I] judge computers do”. Well, I think chess-playing computers do exactly that LizzieDeciding thing, as you rightly observe. But you think I am wrong because I am “ignoring the fact that computers ‘play chess’ the same way arrangements of sticks and rocks ‘act as maps’ – that is, in virtue of an interpreting mind”.

    Hmmm. Tbh, I’m not quite sure what that means, or at least, if it means what I think it means, I think it’s wrong, but interestingly wrong. I’m trying to put my finger on just what is wrong!

An arrangement of pebbles left by a tramp to point at a house where a nice lady will give you a drink of water is clearly only a map if a) it was actually left by the tramp and not inadvertently kicked into position by a passing horse, and b) someone understands the intentions behind the arrangement, right? Is that the kind of pebble-map you were thinking of? And in that scenario, therefore, I would entirely agree that the arrangement of pebbles only has meaning “in virtue of an interpreting mind”. Now, if we take a chess game (and I’m no chess player, I’m afraid), again the arrangement of pieces on the chess board only has meaning “in virtue of an interpreting” something, where that something is normally the minds of two human players.

Now, are you claiming that if one or both of the “players” is a computer program, no game of chess is being played? Clearly at some level, the computer or computers “interpret” the positions of the chess pieces according to a strict set of rules with a stated goal, just as a human player does. But presumably you would say that there is no interpreting mind here, and I would be inclined to agree, but only because of the consciousness thing, which you have rejected as the critical difference.

So what is the difference here? By what criteria do you claim that a pair of computers playing chess are only LizzieDeciding, whereas a human player playing chess is doing Real Deciding? In other words, what is it that I am ignoring? Both computer and human player have a goal, scripted in the case of the computer, which is to checkmate the opponent. Both make LizzieDecisions regarding the next move based on some kind of fuzzy logic that maximises the probability of success given the current state of the game, so both have LizzieIntentions.
What am I not accounting for here?
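
    For concreteness, here is a toy sketch of the kind of thing I mean by a computer “interpreting” positions according to strict rules with a stated goal. Chess is far too big to show here, so this uses the much simpler game of Nim (take 1–3 counters; whoever takes the last counter wins); the game and the function are stand-ins of my own devising, not real chess-engine code.

        # Toy minimax over Nim positions, as a stand-in for a chess engine.
        # A position is just a count of remaining counters.

        def best_move(counters, my_turn=True):
            # Returns (value, move): value is +1 if the root player can
            # force a win from this position, -1 otherwise.
            if counters == 0:
                # Whoever moved last took the final counter and won.
                return ((-1 if my_turn else 1), None)
            best = None
            for take in (1, 2, 3):
                if take > counters:
                    break
                value, _ = best_move(counters - take, not my_turn)
                if (best is None
                        or (my_turn and value > best[0])
                        or (not my_turn and value < best[0])):
                    best = (value, take)
            return best

        value, move = best_move(7)
        print("take", move, "->", "winning" if value > 0 else "losing")
        # -> take 3 -> winning

    The program weighs every legal continuation against the goal (taking the last counter) and opts for the move that best serves it – LizzieDeciding in miniature.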

    You say:

Aboutness, decision-making, etc., as common sense knows them, are specifically left out of a mechanistic materialist understanding of reality – and, when incorporated into that understanding, end up becoming ‘decisions’ by vast redefinitions, such that you can arguably say rocks decide to roll down stairs when kicked. It’s obfuscation, and in the end it’s either admitted that there is intentionality and aboutness in nature after all (and in a sense that is at odds with that materialist, mechanist understanding of nature), or that there isn’t (and that therefore there is no actual mechanistic, materialist ‘accounting for’ these things – they are simply eliminated.)

Well, I don’t think so – or, at least, first I’d like a clearer exposition of how LizzieDeciding (which wouldn’t, incidentally, include rocks rolling downstairs when kicked) differs from Real Deciding, and how LizzieIntention (which requires actions to be selected on the basis of the degree to which they further a goal) differs from Real Intention. Yes, I think there is “intentionality” in nature – I think nature is full of it, and it’s mostly found in animals (not so much in plants, although a few come close to fulfilling my minimal criteria) and also, increasingly, in human artefacts. So there’s a nice ID argument for you :) But it’s not an argument against a “materialistic mechanistic” account of nature, it’s an argument that mechanisms, including human-designed mechanisms as well as natural (living) mechanisms, are capable of decision-making and intentions.

But no materialist I know insists on that, and I certainly don’t. That’s because we also have neutrons and atoms and molecules and cells and organs and organisms and brains and selves.

    Then you need to meet more materialists. This is a little like being told ‘There are no materialists who deny that beliefs exist’, ignoring Alex Rosenberg and the Churchlands, etc. Or ‘there are no materialists who deny moral realism’, or ‘there are no mereological nihilists’. I mean, they’re inconvenient for the position, but they’re out there.

    OK. As long as you aren’t including Dennett.

    And what makes them more than their parts is the pattern they make, over both time and space

    Are those patterns intrinsic, or extrinsic? Are there real “patterns” in nature, independent of any mind evaluating them? Or is a pattern just an impression a mind applies to its representation of the world? This applies to that ‘extraordinary assemblage’ as well.

    And does the tree that falls in the forest make a sound? Did light exist in a universe without retinas? We call them patterns, because that’s how we, as human brain-possessors, parse them. And one of those patterns is the pattern we call the parser herself. I submit that if you exchange your binary categories (real/not real) for a Strange Loop, the mind-body problem makes a noise like a hoop and rolls away :)

  137. Something like your account of meaning would be fine if we all lived in a Fred Flintstone-like community of people who talked to each other using one-word utterances, like “Slab!” (to borrow an example from Wittgenstein’s Philosophical Investigations).

    Wittgenstein was a fan of the Flintstones!?

    vjtorley, have you read J.P. Moreland’s The Recalcitrant Imago Dei: Human Persons and the Failure of Naturalism?

138. Elizabeth Liddle

    Lamont @ #133
    Thanks for your very succinct and clear response! It has helped me pinpoint where I think I diverge from the OP.

There are two things that can be called a decision. One is the kind of thing that Elizabeth described, where the brain processes the available data and then produces some type of behavior. The other is when one understands the gravity of the situation, considers the possibilities, and then chooses to act based on either the passion one feels or the reasons one understands. The choice is therefore either rational and moral, or irrational and immoral. The point is that in both cases it is an act of the will.

    You distinguish between “two kinds of things that can be called a decision”.

The first, you describe as “where the brain processes the available data and then produces some type of behaviour”, as described by me, which is fine.

    The second, you describe as “when [someone] understands the gravity of the situation, considers the possibilities, and then chooses to act based on either the passion one feels, or the reasons one understands.”

    Where I diverge (or at least one sense in which I diverge) from you is that I do not see these as “two kinds of things that can be called a decision”. I see the second as a subset of the first.

    In other words, I regard “when [someone] understands the gravity of the situation [and] considers the possibilities” as a special case of “when the brain processes the available data”. Similarly, I regard “chooses to act based on either the passion one feels, or the reasons one understands” as a special case of “then produces some type of behaviour”.

    I do agree, however, that moral decision making involves different processes to non-moral decision-making (and, indeed, some of my colleagues are working on the brain processes specific to moral decision-making right now). So let’s take two scenarios:

    In the first, I am faced with the choice of having a snack in the Starbucks of my local supermarket before doing the shopping, or doing the shopping then having my snack. It’s not a moral decision in any sense (that I can think of), and either way I will end up both having had the snack and having done the shopping.

    In the second, I am faced with the choice of doing the shopping, then on the way home calling in on a tiresome but lonely friend to have a snack with her, or having my snack at Starbucks and going straight home. Definitely a moral choice – I really like the Starbucks coffee and brownies, and I know that if I forgo them, I will end up with Nescafe and at best, a stale Hobnob. On the other hand, my friend will hugely appreciate the visit.

In both cases, my brain “processes the available data and produces some kind of behaviour”. More specifically, in both cases, my brain cycles through the behavioural options (Starbucks now vs Starbucks later; Starbucks vs Nescafe and Hobnob) and outputs a behavioural sequence (e.g. shopping then Starbucks; shopping then friend). And in both cases, I propose (with a fair bit of scientific backing) that as my brain cycles through the behavioural options, at sub-execution threshold, it simulates the outcomes of each option (what we think of as “imagining the consequences”), and the desirability of each outcome (signalled by the degree to which our reward circuits are activated) feeds back into the motor program, until a winning program reaches execution threshold and we do that (what we describe to ourselves as “acting on our decision”).
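
    For anyone who likes these things spelled out, here is a deliberately crude Python sketch of that competition. The options, reward values, threshold and inhibition constant are all invented; the point is only the mechanism – each candidate motor program is boosted by the simulated reward of its outcome, laterally inhibits its rivals, and the first to cross the execution threshold gets executed.

        def compete(options, rewards, threshold=10.0, inhibition=0.2, steps=1000):
            # Each option accumulates activation in proportion to its
            # simulated reward, minus lateral inhibition from its rivals'
            # activation. The first to cross the execution threshold "wins".
            activation = {opt: 0.0 for opt in options}
            for _ in range(steps):
                for opt in options:
                    rivals = sum(activation[o] for o in options if o != opt)
                    activation[opt] += rewards[opt] - inhibition * rivals
                    activation[opt] = max(activation[opt], 0.0)  # no negative firing
                    if activation[opt] >= threshold:
                        return opt
            return None  # no decision reached within the time allowed

        options = ["shop then Starbucks", "Starbucks then shop"]
        rewards = {"shop then Starbucks": 1.0, "Starbucks then shop": 0.7}
        print(compete(options, rewards))  # -> shop then Starbucks

    A real brain is nothing like so tidy, of course, but the feedback-and-inhibition logic has this general shape.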

    In the first scenario, I suggest, what goes on is something like this: first I activate the Starbucks-then-shopping scenario, and my reward circuits get excited by the thought of immediate coffee and brownie, and, indeed, I may react physiologically with salivation and tummy rumbles; however, I also activate the consequences of then shopping, and the dreariness with which I tend to respond to food choices if my tummy is full (I tend to be a much more enthusiastic and imaginative food shopper if I shop hungry); then I simulate the consequences of arriving home with a less-than-full shopping bag, and having to make a second shopping trip before the week is up, and my reward circuits get less excited, and this motor program gets less of a boost. Then (or even in parallel) I consider shopping first, and the pleasure (as indicated by my reward circuits) I will get in choosing cool ingredients, despite my escalating tummy rumbles, and then the further pleasure I will get in enjoying my well-earned coffee and brownie. And that motor program gets more of a boost, and actually “laterally inhibits” the competing program. So I walk resolutely past Starbucks and head for the fruit aisle.

In the second scenario, something very similar happens, but in addition to the simulated experience of Starbucks coffee and brownie activating my reward circuits, and laterally inhibiting the motor program that will result in Nescafe and a stale Hobnob, the simulated experience of seeing the pleasure in my friend’s face also activates my reward circuits; and not only that, but my capacity to simulate how I would feel if a dear friend came for an unexpected visit also activates my reward circuits, and the degree to which my reward circuits are differentially activated by these simulated behavioural consequences will determine which of the mutually laterally inhibiting motor programs reaches execution threshold.
And I will express that to myself as “I was tempted to linger in Starbucks for a snack, but I decided to call on poor Mildred instead, dear old thing that she is”.

My point being that the algorithm is the same in both cases, even though the nature of the simulations is different, and this relates to the point I made above about morality and free will being related not to choosing what we should do over what we want to do, but to wanting good things rather than bad things.
    Which leads me to your next point:

So what is an act of the will? It is when the mind/soul decides either to follow the path that the brain has cranked out, or to follow the path that one understands is the right thing to do.

No, I don’t think an “act of will” is “when the mind/soul decides either to follow the path that the brain has cranked out, or to follow the path that one understands is the right thing to do” – I think that is a false distinction, and, in any case, the wrong distinction. I think, myself, it is more useful to think of an “act of will” as an act that follows the consideration of options, whether moral criteria are used or not. For instance, if I choose a strawberry rather than a chocolate ice-cream, that is, IMO, an act of will, even though it can also be described as what my “brain cranked out”. Whereas if I just blindly draw a packet from the freezer, I cannot describe what I end up eating as the result of an act of will – I left it to “chance”, not “will”. My brain simply did not “crank out” an answer to the question “which ice-cream do I want to eat?”

Conversely, if I decide to see my friend rather than go to Starbucks, that is certainly an act of will (I weighed up alternatives) but it is, additionally, I would argue, a moral choice. And, interestingly, if I am such a saintly person (which I am not!) that it does not even cross my mind to go to Starbucks when a visit to a tedious friend is a practical possibility, then I may do a moral act that is not, proximally, “an act of will” at all – although it may be in a distal sense, in that I have, through the wilful acquisition of altruistic habit throughout my saintly life, come to a state of grace wherein the joy of others is tantamount to my own.

Now if it is just a matter of the brain making a difficult decision, where does the conflict between mind and brain come from? Is it just that one part of the brain is in conflict with another part? If that were the case, the evidence should be obvious and undeniable. But if such evidence is out there, I have never seen it.

    I don’t think there is a “conflict between mind and brain”. I think there are lots of conflicts within the brain between alternative courses of action, and how these conflicts are resolved is an extremely interesting domain of neuroscience, and yes, there is lots of evidence supporting viable hypotheses. I’d be happy to expound more, but maybe later :)

    Apologies for the lack of succinctness in my response!
