
Minds, brains, computers and skunk butts


[This post will remain at the top of the page until 10:00 am EST tomorrow, May 22. For reader convenience, other coverage continues below. – UD News]

In a recent interview with The Guardian, Professor Stephen Hawking shared with us his thoughts on death:

I have lived with the prospect of an early death for the last 49 years. I’m not afraid of death, but I’m in no hurry to die. I have so much I want to do first. I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.

Now, Stephen Hawking is a physicist, not a biologist, so I can understand why he would compare the brain to a computer. Nevertheless, I was rather surprised that Professor Jerry Coyne, in a recent post on Hawking’s remarks, let the comparison slide without comment. Coyne should know that there are no fewer than ten major differences between brains and computers, a fact which vitiates Hawking’s analogy. (I’ll say more about these differences below.)

But Professor Coyne goes further: not only does he equate the human mind with the human brain (as Hawking does), but he also regards the evolution of human intelligence as no more remarkable than the evolution of skunk butts, according to a recent report by Faye Flam in The Philadelphia Inquirer:

Many biologists are not religious, and few see any evidence that the human mind is any less a product of evolution than anything else, said Chicago’s Coyne. Other animals have traits that set them apart, he said. A skunk has a special ability to squirt a caustic-smelling chemical from its anal glands.

Our special thing, in contrast, is intelligence, he said, and it came about through the same mechanism as the skunk’s odoriferous defense.

In a recent post, Coyne defiantly reiterated his point, declaring: “I absolutely stand by my words.”

So today, I thought I’d write about three things: why the brain is not like a computer, why the evolution of the brain is not like the evolution of the skunk’s butt, and why the human mind cannot be equated with the human brain. Of course, proving that the mind and the brain are not the same doesn’t establish that there is an afterlife; still, it leaves the door open to that possibility, particularly if you happen to believe in God.

Why the brain is not like a computer

For readers wishing to understand why the human brain is not like a computer, I would highly recommend a 2007 blog article entitled 10 Important Differences Between Brains and Computers, by Chris Chatham, then a second-year graduate student pursuing a Ph.D. in Cognitive Neuroscience at the University of Colorado, Boulder, over on his science blog, Developing Intelligence. Let me say at the outset that Chatham is a materialist who believes that the human mind supervenes upon the human brain. Nevertheless, he regards the brain-computer metaphor as being of very limited value, insofar as it obscures the many ways in which the human brain exceeds a computer in flexibility, parallel processing and raw computational power, not to mention the fact that the human brain is part of a living human body.

Anyway, here is a short, non-technical summary of the ten differences between brains and computers which are discussed by Chatham:

1. Brains are analogue; computers are digital.
Digital 0’s and 1’s are binary (“on-off”). However, the brain’s neuronal processing is directly influenced by processes that are continuous and non-linear. Because early computer models of the human brain overlooked this simple point, they severely underestimated the information processing power of the brain’s neural networks.

2. The brain uses content-addressable memory.
Computers have byte-addressable memory, which relies on information having a precise address. With the brain’s content-addressable memory, on the other hand, information can be accessed by “spreading activation” from closely-related concepts. As Chatham explains, your brain has a built-in Google, allowing an entire memory to be retrieved from just a few cues (key words). Computers can only replicate this feat by using massive indices.
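To make the contrast concrete, here is a toy sketch in Python (my own illustration, not Chatham’s). An exact-address lookup fails unless you know precisely where the data lives; a cue-overlap lookup, crudely standing in for spreading activation, retrieves the best-matching memory from just a few cues:

```python
# Toy illustration: byte-addressable lookup vs. content-addressable lookup.
memories = {
    "m1": {"cues": {"beach", "summer", "gelato"}, "content": "holiday in Nice"},
    "m2": {"cues": {"snow", "skis", "cocoa"}, "content": "winter trip to the Alps"},
}

# Computer-style, byte-addressable: you must know the exact address.
print(memories["m2"]["content"])  # -> "winter trip to the Alps"

# Brain-style, content-addressable: a few cues "activate" the best match.
def recall(cues):
    def overlap(m):
        return len(m["cues"] & cues)  # crude stand-in for spreading activation
    best = max(memories.values(), key=overlap)
    return best["content"] if overlap(best) else None

print(recall({"cocoa", "snow"}))  # -> "winter trip to the Alps"
```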

3. The brain is a massively parallel machine; computers are modular and serial.
Instead of having different modules for different capacities or functions, as a computer does, the brain often uses one and the same area for a multitude of functions. Chatham provides an example: the hippocampus is used not only for short-term memory, but also for imagination, for the creation of novel goals and for spatial navigation.

4. Processing speed is not fixed in the brain; there is no system clock.
Unlike a computer, the human brain has no central clock. Time-keeping in the brain is more like ripples on a pond than a standard digital clock. (To be fair, I should add that some CPUs, known as asynchronous processors, don’t use system clocks.)

5. Short-term memory is not like RAM.
As Chatham writes: “Short-term memory seems to hold only ’pointers’ to long term memory whereas RAM holds data that is isomorphic to that being held on the hard disk.” One advantage of this flexibility of the brain’s short-term memory is that its capacity limit is not fixed: it fluctuates over time, depending on the speed of neural processing, and an individual’s expertise and familiarity with the subject.
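The “pointer” idea can be sketched in the same spirit (again, my own toy example, not Chatham’s): working storage that holds mere keys into a long-term store, rather than a full copy of the data:

```python
# Toy example: long-term memory as a store; short-term memory as mere keys.
long_term = {
    "tomato": "red fruit with seeds inside",
    "relativity": "Einstein's theory of space, time and gravity",
}

# RAM-style working storage: a full copy, isomorphic to what is on "disk".
ram = dict(long_term)

# STM-style working storage: just pointers (keys), dereferenced on demand;
# its capacity is not fixed by the size of what it points to.
stm = ["tomato"]
print(long_term[stm[0]])  # -> "red fruit with seeds inside"
```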

6. No hardware/software distinction can be made with respect to the brain or mind.
The tired old metaphor of the mind as the software for the brain’s hardware overlooks the important point that the brain’s cognition is not a purely symbolic process: it requires a physical implementation. Some scientists believe that the inadequacy of the software metaphor for the mind was responsible for the embarrassing failure of symbolic AI.

7. Synapses are far more complex than electrical logic gates.
Because the signals which are propagated along axons are actually electrochemical in nature, they can be modulated in countless different ways, enhancing the complexity of the brain’s processing at each synapse. No computer even comes close to matching this feat.

8. In the brain, unlike in computers, processing and memory are performed by the same components.
In Chatham’s words: “Computers process information from memory using CPUs, and then write the results of that processing back to memory. No such distinction exists in the brain.” We can make our memories stronger by the simple act of retrieving them.

9. The brain is a self-organizing system.
Chatham explains:

…[E]xperience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit – something known as “trauma-induced plasticity” kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism), and others that can result in profound cognitive dysfunction…

Chatham argues that failure to take into account the brain’s “trauma-induced plasticity” is having an adverse impact on the emerging field of neuropsychology. A whole science is being stunted by a bad metaphor.

10. Brains have bodies.
Embodiment is a marvelous advantage for a brain. For instance, as Chatham points out, it allows the brain to “off-load” many of its memory requirements onto the body.

I would also add that since computers are physical but not embodied, they lack the built-in teleology of an organism.

As a bonus, Chatham adds an eleventh difference between brains and computers:

11. The brain is much, much bigger than any [current] computer.

Chatham writes:

Accurate biological models of the brain would have to include some 225,000,000,000,000,000 (225 million billion) interactions between cell types, neurotransmitters, neuromodulators, axonal branches and dendritic spines, and that doesn’t include the influences of dendritic geometry, or the approximately 1 trillion glial cells which may or may not be important for neural information processing. Because the brain is nonlinear, and because it is so much larger than all current computers, it seems likely that it functions in a completely different fashion.

Readers may ask why I am taking the trouble to point out the many differences between brains and computers, when both are, after all, physical systems with a finite lifespan. But the point I wish to make is that human beings are debased by Professor Stephen Hawking’s comparison of the human brain to a computer. The brain-computer metaphor is, as we have seen, a very poor one; using it as a rhetorical device to take pot shots at people who believe in immortality is a cheap trick. If Professor Hawking thinks that belief in immortality is scientifically or philosophically indefensible, then he should argue his case on its own merits, instead of resorting to vulgar characterizations.

Why the evolution of the brain is not like the evolution of the skunk’s butt

As we saw above, Professor Jerry Coyne maintains that human intelligence came about through the same mechanism as the skunk’s odoriferous defense. I presume he is talking about the human brain. However, there are solid biological grounds for believing that the brain is the outcome of a radically different kind of process from the one that led to the skunk’s defense system. I would argue that the brain is not the product of an undirected natural process, and that some Intelligence must have directed the evolution of the brain.

Skeptical? I’d like to refer readers to an online article by Steve Dorus et al., entitled Accelerated Evolution of Nervous System Genes in the Origin of Homo sapiens (Cell, Vol. 119, 1027–1040, December 29, 2004). Here’s an excerpt:

[T]he evolution of the brain in primates and particularly humans is likely contributed to by a large number of mutations in the coding regions of many underlying genes, especially genes with developmentally biased functions.

In summary, our study revealed the following broad themes that characterize the molecular evolution of the nervous system in primates and particularly in humans. First, genes underlying nervous system biology exhibit higher average rate of protein evolution as scaled to neutral divergence in primates than in rodents. Second, such a trend is contributed to by a large number of genes. Third, this trend is most prominent for genes implicated in the development of the nervous system. Fourth, within primates, the evolution of these genes is especially accelerated in the lineage leading to humans. Based on these themes, we argue that accelerated protein evolution in a large cohort of nervous system genes, which is particularly pronounced for genes involved in nervous system development, represents a salient genetic correlate to the profound changes in brain size and complexity during primate evolution, especially along the lineage leading to Homo sapiens. (Emphases mine – VJT.)

Here’s the link to a press release relating to the same article:

Human cognitive abilities resulted from intense evolutionary selection, says Lahn by Catherine Gianaro, in The University of Chicago Chronicle, January 6, 2005, Vol. 24, no. 7.

University researchers have reported new findings that show genes that regulate brain development and function evolved much more rapidly in humans than in nonhuman primates and other mammals because of natural selection processes unique to the human lineage.

The researchers, led by Bruce Lahn, Assistant Professor in Human Genetics and an investigator in the Howard Hughes Medical Institute, reported the findings in the cover article of the Dec. 29, 2004 issue of the journal Cell.

“Humans evolved their cognitive abilities not due to a few accidental mutations, but rather from an enormous number of mutations acquired through exceptionally intense selection favoring more complex cognitive abilities,” said Lahn. “We tend to think of our own species as categorically different – being on the top of the food chain,” Lahn said. “There is some justification for that.”

From a genetic point of view, some scientists thought human evolution might be a recapitulation of the typical molecular evolutionary process, he said. For example, the evolution of the larger brain might be due to the same processes that led to the evolution of a larger antler or a longer tusk.

“We’ve proven that there is a big distinction. Human evolution is, in fact, a privileged process because it involves a large number of mutations in a large number of genes,” Lahn said.
“To accomplish so much in so little evolutionary time – a few tens of millions of years – requires a selective process that is perhaps categorically different from the typical processes of acquiring new biological traits.” (Emphases mine – VJT.)

Professor Lahn’s remarks on elephants’ tusks apply equally to the evolution of skunk butts. Professor Jerry Coyne’s comparison of the evolution of human intelligence to the evolution of the skunk’s defense system therefore misses the mark. The two cases do not parallel one another.

Finally, here’s an excerpt from another recent science article: Gene Expression Differs in Human and Chimp Brains by Dennis Normile, in “Science” (6 April 2001, pp. 44-45):

“I’m not interested in what I share with the mouse; I’m interested in how I differ from our closest relatives, chimpanzees,” says Svante Paabo, a geneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Such comparisons, he argues, are the only way to understand “the genetic underpinnings of what makes humans human.” With the human genome virtually in hand, many researchers are now beginning to make those comparisons. At a meeting here last month, Paabo presented work by his team based on samples of three kinds of tissue – brain cortex, liver, and blood – from humans, chimps, and rhesus macaques. Paabo and his colleagues pooled messenger RNA from individuals within each species to get rid of intraspecies variation and ran the samples through a microarray filter carrying 20,000 human cDNAs to determine the level of gene expression. The researchers identified 165 genes that showed significant differences between at least two of the three species, and in at least one type of tissue. The brain contained the greatest percentage of such genes, about 1.3%. It also produced the clearest evidence of what may separate humans from other primates. Gene expression in liver and blood tissue is very similar in chimps and humans, and markedly different from that in rhesus macaques. But the picture is quite different for the cerebral cortex. “In the brain, the expression profiles of the chimps and macaques are actually more similar to each other than to humans,” Paabo said at the workshop. The analysis shows that the human brain has undergone three to four times the amount of change in genes and expression levels than the chimpanzee brain… “Among these three tissues, it seems that the brain is really special in that humans have accelerated patterns of gene activity,” Paabo says. (Emphasis mine – VJT.)

I would argue that these changes that have occurred in the human brain are unlikely to be natural, because most mutations are deleterious and because of the extensive complexity and integration of the biological systems that make up the human brain. If anything, unguided evolution at this pace should have been catastrophic.

We should remember that the human brain is easily the most complex machine known to exist in the universe. If the brain’s evolution did not require intelligent guidance, then nothing did.

As to when the intelligently directed manipulation of the brain’s evolution took place, my guess would be that it started around 30 million years ago when monkeys first appeared, but became much more pronounced after humans split off from apes around 6 million years ago.

Why the human mind cannot be equated with the human brain

The most serious defect of a materialist account of mind is that it fails to explain the most fundamental feature of mind itself: intentionality. Professor Edward Feser, who has written several books on the philosophy of mind, defines intentionality as “the mind’s capacity to represent, refer, or point beyond itself” (Aquinas, 2009, Oneworld, Oxford, p. 50). For example, when we entertain a concept of something, our mind points at a certain class of things, and it points at the conclusion of an argument when we reason, at some state of affairs when we desire something, and at some person (or animal) when we love someone.

Feser points out that our mental acts – especially our thoughts – typically possess an inherent meaning, which lies beyond themselves. However, brain processes cannot possess this kind of meaning, because physical states of affairs have no inherent meaning as such. Hence our thoughts cannot be the same as our brain processes. As Professor Edward Feser puts it in a recent blog post (September 2008):

Now the puzzle intentionality poses for materialism can be summarized this way: Brain processes, like ink marks, sound waves, the motion of water molecules, electrical current, and any other physical phenomenon you can think of, seem clearly devoid of any inherent meaning. By themselves they are simply meaningless patterns of electrochemical activity. Yet our thoughts do have inherent meaning – that’s how they are able to impart it to otherwise meaningless ink marks, sound waves, etc. In that case, though, it seems that our thoughts cannot possibly be identified with any physical processes in the brain. In short: Thoughts and the like possess inherent meaning or intentionality; brain processes, like ink marks, sound waves, and the like, are utterly devoid of any inherent meaning or intentionality; so thoughts and the like cannot possibly be identified with brain processes.

Four points need to be made here, about the foregoing argument. First, Professor Feser’s argument does not apply to all mental states as such, but to mental acts – specifically, those mental acts (such as thoughts) which possess inherent meaning. My seeing a red patch here now would qualify as a mental state, but since it is not inherently meaningful, it is not covered by Feser’s argument. However, if I think to myself, “That red thing is a tomato” while looking at a red patch, then I am thinking something meaningful. (The reader will probably be wondering, “What about an animal which recognizes a tomato but lacks the linguistic wherewithal to say to itself, ‘This is a tomato’?” Is recognition inherently meaningful? The answer, as I shall argue in part (b) below, depends on whether the animal has a concept of a tomato which is governed by a rule or rules, which it considers normative and tries to follow – e.g. “This red thing is juicy but has no seeds on the inside, so it can’t be a tomato but might be a strawberry; however, that green thing with seeds on the inside could be a tomato.”)

Second, Professor Feser’s formulation of the argument from the intentionality of mental acts is very carefully worded. Some philosophers have suggested that the characteristic feature of mental acts is their “aboutness”: thoughts, arguments, desires and passions in general are about something. But this is surely too vague, as DNA is “about” something too: the proteins it codes for. We can even say that DNA possesses functionality, which is certainly a form of “aboutness.” What it does not possess, however, is inherent meaning, which is a distinctive feature of mental acts. DNA is a molecule that does a job, but it does not and cannot “mean” anything, in and of itself. If (as I maintain) DNA was originally designed, then it was meant by its Designer to do something, but this meaning would be something extrinsic to it. Its functionality, on the other hand, would be something intrinsic to it.

Third, it is extremely difficult to disagree with Feser’s premise that thoughts possess inherent meaning. To do that, one would have to either deny that there are any such things as thoughts, or one would need to locate inherent meaning somewhere else, outside the domain of the mental.

There are a few materialists, known as eliminative materialists, who deny the very existence of mental processes such as thoughts, beliefs and desires. The reason why I cannot take eliminative materialism seriously is that any successful attempt to argue for the truth of eliminative materialism – or, indeed, for the truth of any theory – would defeat eliminative materialism, since argument is, by definition, an attempt to change the beliefs of one’s audience, and eliminative materialism says we have none. If eliminative materialism is true, then argumentation of any kind, about any subject, is always a pointless pursuit, as argumentation is defined as an attempt to change people’s beliefs, and neither attempts nor beliefs refer to anything, on an eliminative materialist account.

The other way of arguing against the premise that thoughts possess inherent meaning would be to claim that inherent meaning attaches primarily to something outside the domain of the mental, rather than to our innermost thoughts as we have supposed. But what might this “something” be? The best candidate would be public acts, such as wedding vows, the signing of contracts, initiation ceremonies and funerals. Because these acts are public, one might argue that they are meaningful in their own right. But this will not do. We can still ask: what is it about these acts and ceremonies that makes them meaningful? (A visiting alien might find them utterly meaningless.) And in the end, the only satisfactory answer we can give is: the cultural fact that within our community, we all agree that these acts are meaningful (which presupposes a mental act of assent on the part of each and every one of us), coupled with the psychological fact that the participants are capable of the requisite mental acts needed to perform these acts properly (for instance, someone who is getting married must be capable of understanding the nature of the marriage contract, and of publicly affirming that he/she is acting freely). Thus even an account of meaning which ascribes meaning primarily to public acts still presupposes the occurrence of mental acts which possess meaning in their own right.

Fourth, it should be noted that Professor Feser’s argument works against any materialist account of the mind which identifies mental acts with physical processes (no matter what sort of processes they may be) – regardless of whether this identification is made at the generic (“type-type”) level or the individual (“token-token”) level. The reason is that there is a fundamental difference between mental acts and physical processes: the former possess an inherent meaning, while the latter are incapable of doing so.

Of course, the mere fact that mental acts and physical processes possess mutually incompatible properties does not prove that they are fundamentally different. To use a well-worn example, the morning star has the property of appearing only in the east, while the evening star has the property of appearing only in the west, yet they are one and the same object (the planet Venus). Or again: Superman has the property of being loved by Lois Lane, but Clark Kent does not; yet in the comic book story, they are one and the same person.

However, neither of these examples is pertinent to the case we are considering here, since the meaning which attaches to mental acts is inherent. Hence it must be an intrinsic feature of mental acts, rather than an extrinsic one, like the difference between the morning star and the evening star. As for Superman’s property of being loved by Lois Lane: this is not a real property, but a mere Cambridge property, to use a term coined by the philosopher Peter Geach: in this case, the love inheres in Lois Lane, not Superman. (By contrast, if Superman loves Lois, then the same is also true of Clark Kent. This love is an example of a real property, since it inheres in Superman.)

The difference between mental acts and physical processes does not merely depend on one’s perspective or viewpoint; it is an intrinsic difference, not an extrinsic one. Moreover, it is a real difference, since the property of having an inherent meaning is a real property, and not a Cambridge property. Since mental acts possess a real, intrinsic property which physical processes lack, we may legitimately conclude that mental acts are distinct from physical processes. (Of course, “distinct from” does not mean “independent of”.)

A general refutation of materialism

Feser’s argument can be extended to refute all materialistic accounts of mental acts. Any genuinely materialistic account of mental acts must be capable of explaining them in terms of physical processes. There are only three plausible ways to do this: (a) identifying mental acts with physical processes, (b) showing how mental acts are caused by physical processes, and (c) showing how mental acts are logically entailed by physical processes. No other way of explaining mental acts in terms of physical processes seems conceivable.

The first option, as we have seen, is ruled out: as we saw earlier, mental acts cannot be equated with physical processes, because the former possess inherent meaning as a real, intrinsic property, while the latter do not.

The second option is also impossible, for two reasons. Firstly, if the causal law is to count as a genuine explanation of mental acts, then it must account for their intentionality, or inherent meaningfulness. In other words, we would need a causal law that not only links physical processes to mental acts, but a causal law that links physical processes to meanings. However, meaningfulness is a semantic property, whereas the properties picked out by laws of nature are physical properties. To suppose that there are laws linking physical processes and mental acts, one would have to suppose the existence of a new class of laws of nature: physico-semantic laws.

Secondly, we know for a fact that there are some physical processes (e.g. precipitation) which are incapable of generating meaning: they are inadequate for the task at hand. If we are to suppose that certain other physical processes are capable of generating meaning, then we must believe that these processes are causally adequate for the task of generating meaning, while physical processes such as precipitation are not. But this only invites the further question: why? We might be told that causally inadequate processes lack some physical property (call it F) which causally adequate processes possess – but once again, we can ask: why is physical property F relevant to the task of generating meaning, while other physical properties are not?

So much for the first and second options, then. Mental acts which possess inherent meaning are neither identifiable with physical processes, nor caused by them. The third option is to postulate that mental acts are logically entailed by physical processes. This option is even less promising than the first two: for in order to show that physical processes logically entail mental acts, we would have to show that physical properties logically entail semantic properties. But if we cannot even show that they are causally related, then it will surely be impossible for us to show that they are logically connected. Certainly, the fact that an animal (e.g. a human being) has the property of having a large brain with complex inter-connections that can store a lot of information does not logically entail that this animal – or its brain, or its neural processes, or its bodily movements – has the property of having an inherent meaning.

Hence not only are mental acts distinct from brain processes, but they are incapable of being caused by or logically entailed by brain processes. Since these are the only modes of explanation open to us, it follows that mental acts are incapable of being explained in terms of physical processes.

Let us recapitulate. We have argued that eliminative materialism is false, as well as any version of materialism which identifies mental acts with physical processes, and also any version of materialism in which mental acts supervene upon brain processes (either by being caused by these processes or logically entailed by them). Are there any versions of materialism left for us to consider?

It may be objected that some version of monism, in which one and the same entity has both physical and mental properties, remains viable. Quite so; but monism is not materialism.

We may therefore state the case against materialism as follows:

1. Mental acts are real.
(Denial of this premise entails denying that there can be successful arguments, for an argument is an attempt to change the thoughts and beliefs of the listener, and if there are no mental acts then there are no thoughts and beliefs.)

2. At least some mental acts – e.g. thoughts – have the real, intrinsic property of being inherently meaningful.
(Justification: it is impossible to account for the meaningfulness of any act, event or process without presupposing the existence of inherently meaningful thoughts.)

3. Physical processes are not inherently meaningful.

4. If process X has a real, intrinsic property F which process Y lacks, then X cannot be identified with Y.

5. By 2, 3 and 4, physical processes cannot be identified with inherently meaningful mental acts.

6. Physical processes are only capable of causing other processes if there is some law of nature linking the former to the latter.

7. Laws of nature are only explanatory of the respective properties they invoke, for the processes they link.
(More precisely: if a law of nature links property F of process X with property G of process Y, then the law explains properties F and G, but not property H which also attaches to process Y. To explain that, one would need another law.)

8. The property of having an inherent meaning is a semantic property.

9. There are not, and there cannot be, laws of nature linking physical properties to semantic properties.
(Justification: No such “physico-semantic” laws have ever been observed; and in any case, semantic properties are not reducible to physical ones.)

10. By 6, 7, 8 and 9, physical processes are incapable of causing inherently meaningful mental acts.

11. Physical processes do not logically entail the occurrence of inherently meaningful mental acts.

12. If inherently meaningful mental acts exist, and if physical processes cannot be identified with them and are incapable of causing them or logically entailing all of them, then materialism is false.
(Justification: materialism is an attempt to account for mental states in physical terms. This means that physical processes must be explanatorily prior to, or identical with, the mental events they purport to explain. Unless physical processes are capable of logically or causally generating mental states, then it is hard to see how they can be said to be capable of explaining them.)

13. Hence by 1, 5, 10, 11 and 12, materialism is false.
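For readers who like to see an argument’s skeleton laid bare, the final inference can be transcribed into the Lean proof assistant (my own transcription, offered purely as a check on the logical form; every numbered claim is taken as a hypothesis, since the philosophical work lies in justifying the premises, not in the closing step):

```lean
-- Propositional skeleton of steps 1, 5, 10, 11 and 12 above.
-- Each numbered claim is a hypothesis; Lean checks only that step 13 follows.
variable (MentalActsReal NotIdentical NotCaused NotEntailed Materialism : Prop)

theorem materialism_false
    (p1  : MentalActsReal)                              -- step 1
    (p5  : NotIdentical)                                -- step 5
    (p10 : NotCaused)                                   -- step 10
    (p11 : NotEntailed)                                 -- step 11
    (p12 : MentalActsReal → NotIdentical → NotCaused →
           NotEntailed → ¬ Materialism) :               -- step 12
    ¬ Materialism :=                                    -- step 13
  p12 p1 p5 p10 p11
```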

Why doesn’t the mind remain sober when the body is drunk?

The celebrated author Mark Twain (1835-1910) was an avowed materialist, as is shown by the following witty exchange he penned:

Old man (sarcastically): Being spiritual, the mind cannot be affected by physical influences?
Young man: No.
Old man: Does the mind remain sober when the body is drunk?

Drunkenness does indeed pose a genuine problem for substance dualism, or the view that mind and body are two distinct things. For even if the mind (which thinks) required sensory input from the body, this would only explain why a physical malady or ailment would shut off the flow of thought. What it would not explain is the peculiar, erratic thinking of the drunkard.

However, the view I am defending here is not Cartesian substance dualism, but a kind of “dual-operation monism”: each of us is one being (a human being), who is capable of a whole host of bodily operations (nutrition, growth, reproduction and movement, as well as sensing and feeling), as well as a few special operations (e.g. following rules and making rational choices) which we perform, but not with our bodies. That doesn’t mean that we perform these acts with some spooky non-material thing hovering 10 centimeters above our heads (a Cartesian soul, which is totally distinct from the body). It just means that not every act performed by a human animal is a bodily act. For rule-following acts, the question, “Which agent did that?” is meaningful; but the question, “Which body part performed the act of following the rule?” is not. Body parts don’t follow rules; people do.

Now, it might be objected that the act of following a rule must be a material act, because we are unable to follow rules when our neuronal firing is disrupted: as Twain pointed out, drunks can’t think straight. But this objection merely shows that certain physical processes in the brain are necessary, in order for rational thought to occur. What it does not show is that these neural processes are sufficient to generate rational thought. As the research of the late Wilder Penfield showed, neurologists’ attempts to produce thoughts or decisions by stimulating people’s brains were a total failure: while stimulation could induce flashbacks and vividly evoke old memories, it never generated thoughts or choices. On other occasions, Penfield was able to make a patient’s arm go up by stimulating a region of his/her brain, but the patient always denied responsibility for this movement, saying: “I didn’t do that. You did.” In other words, Penfield was able to induce bodily movements, but not the choices that accompany them when we act freely.

Nevertheless, the reader might reasonably ask: if the rational act of following a rule is not a bodily act, then why are certain bodily processes required in order for it to occur? For instance, why can’t drunks think straight? The reason, I would suggest, is that whenever we follow an abstract rule, a host of subsidiary physical processes need to take place in the brain, which enable us to recall the objects covered by that rule, and also to track our progress in following the rule, if it is a complicated one, involving a sequence of steps. Disruption of neuronal firing interferes with these subsidiary processes. However, while these neural processes are presupposed by the mental act of following a rule, they do not constitute the rule itself. In other words, all that the foregoing objection shows is that for humans, the act of rule-following is extrinsically dependent on physical events such as neuronal firing. What the objection does not show is that the human act of following or attending to a rule is intrinsically or essentially dependent on physical processes occurring in the brain. Indeed, if the arguments against materialism which I put forward above are correct, the mental act of following a rule cannot be intrinsically dependent on brain processes: for the mental act of following a rule is governed by its inherent meaning, which is something that physical processes necessarily lack.

I conclude, then, that attempts to explain rational choices made by human beings in terms of purely material processes taking place in their brains are doomed to failure, and that whenever we follow a rule (e.g. when we engage in rational thought) our mental act of doing so is an immaterial, non-bodily act.

Implications for immortality

The fact that rational choices cannot be identified with, caused by or otherwise explained by material processes does not imply that we will continue to be capable of making these choices after our bodies die. But what it does show is that the death of the body, per se, does not entail the death of the human person it belongs to. We should also remember that it is in God that we live and move and have our being (Acts 17:28). If the same God who made us wishes us to survive bodily death, and wishes to keep our minds functioning after our bodies have ceased to do so, then assuredly He can. And if this same God wishes us to partake of the fullness of bodily life once again by resurrecting our old bodies, in some manner which is at present incomprehensible to us, then He can do that too. This is God’s universe, not ours. He wrote the rules; our job as human beings is to discover them and to follow them, insofar as they apply to our own lives.

Comments
Lamont @ #133 Thanks for your very succinct and clear response! It has helped me pinpoint where I think I diverge from the OP.
There are two things that can be called a decision: one is the kind of thing that Elizabeth described, where the brain processes the available data and then produces some type of behavior. The other is when one understands the gravity of the situation, considers the possibilities, and then chooses to act based on either the passion one feels or the reasons one understands. The choice is therefore either rational and moral, or irrational and immoral. The point is that in both cases it is an act of the will.
You distinguish between “two kinds of things that can be called a decision”. The first, you describe as “where the brain processes the available data and then produces some type of behaviour”, as described by me, which is fine. The second, you describe as “when [someone] understands the gravity of the situation, considers the possibilities, and then chooses to act based on either the passion one feels, or the reasons one understands.” Where I diverge (or at least one sense in which I diverge) from you is that I do not see these as “two kinds of things that can be called a decision”. I see the second as a subset of the first. In other words, I regard “when [someone] understands the gravity of the situation [and] considers the possibilities” as a special case of “when the brain processes the available data”. Similarly, I regard “chooses to act based on either the passion one feels, or the reasons one understands” as a special case of “then produces some type of behaviour”. I do agree, however, that moral decision making involves different processes to non-moral decision-making (and, indeed, some of my colleagues are working on the brain processes specific to moral decision-making right now). So let’s take two scenarios: In the first, I am faced with the choice of having a snack in the Starbucks of my local supermarket before doing the shopping, or doing the shopping then having my snack. It’s not a moral decision in any sense (that I can think of), and either way I will end up both having had the snack and having done the shopping. In the second, I am faced with the choice of doing the shopping, then on the way home calling in on a tiresome but lonely friend to have a snack with her, or having my snack at Starbucks and going straight home. Definitely a moral choice – I really like the Starbucks coffee and brownies, and I know that if I forgo them, I will end up with Nescafe and at best, a stale Hobnob. On the other hand, my friend will hugely appreciate the visit. In both cases, my brain “processes the available data and produces some kind of behaviour”. More specifically, in both cases, my brain cycles through the behavioral options (Starbucks now vs Starbucks later; Starbucks vs Nescafe and Hobnob) and outputs a behavioural sequence (e.g. shopping then Starbucks; shopping then friend). And in both cases, I propose (with a fair bit of scientific backing) that, as my brain cycles through the behavioural options at sub-execution threshold, it simulates the outcomes of each option (what we think of as “imagining the consequences”), and the desirability of each outcome (signalled by the degree to which our reward circuits are activated) feeds back into the motor-program until a winning program reaches execution threshold and we do that (what we describe to ourselves as “acting on our decision”).
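[For concreteness, here is a minimal sketch in Python of the winner-take-all process described above – an editorial illustration with invented numbers and thresholds, not the commenter’s own model:]

```python
# Two candidate "motor programs" accumulate reward-weighted evidence and
# laterally inhibit each other until one crosses an execution threshold.
# All values here are invented for illustration.
import random

activation = {"starbucks_first": 0.0, "shopping_first": 0.0}
reward = {"starbucks_first": 0.9, "shopping_first": 1.1}  # simulated outcome value
THRESHOLD, INHIBITION = 5.0, 0.2

while max(activation.values()) < THRESHOLD:
    for name in activation:
        # noisy feedback from simulating each option's consequences
        activation[name] += reward[name] * random.uniform(0.5, 1.0)
    a, b = activation["starbucks_first"], activation["shopping_first"]
    # lateral inhibition: each program suppresses its competitor
    activation["starbucks_first"] = max(0.0, a - INHIBITION * b)
    activation["shopping_first"] = max(0.0, b - INHIBITION * a)

print("executed:", max(activation, key=activation.get))
```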
In the first scenario, I suggest, what goes on is something like this: first I activate the Starbucks-then-shopping scenario, and my reward circuits get excited by the thought of immediate coffee and brownie, and, indeed, I may react physiologically with salivation and tummy rumbles; however, I also activate the consequences of then shopping, and the dreariness with which I tend to respond to food choices if my tummy is full (I tend to be a much more enthusiastic and imaginative food shopper if I shop hungry); then I simulate the consequences of arriving home with a less-than-full shopping bag, and having to make a second shopping trip before the week is up, and my reward circuits get less excited, and this motor program gets less of a boost. Then (or even in parallel) I consider shopping first, and the pleasure (as indicated by my reward circuits) I will get in choosing cool ingredients, despite my escalating tummy rumbles, and then the further pleasure I will get in enjoying my well-earned coffee and brownie. And that motor program gets more of a boost, and actually “laterally inhibits” the competing program. So I walk resolutely past Starbucks and head for the fruit aisle. In the second scenario, something very similar happens, but in addition to simulated experience of Starbucks coffee and brownie activating my reward circuits, and laterally inhibiting the motor program that will result in Nescafe and a stale hobnob, the simulated experience of seeing the pleasure in my friend’s face also activates my reward circuits, and not only that, but my capacity to simulate how I would feel if a dear friend came for an unexpected visit also activates my reward circuits, and the degree to which my reward circuits are differentially activated by these simulated behavioural consequences will determine which of the mutually laterally inhibiting motor programs reaches execution threshold. And I will express that to myself as “I was tempted to linger in Starbucks for a snack, but I decided to call on poor Mildred instead, dear old thing that she is”. My point being that the algorithm is the same in both cases, even though the nature of the simulations is different, and this relates to the point I made above about morality and free-will being related not to choosing what we should do over what we want to do, but to wanting good things rather than bad things. Which leads me to your next point:
So what is an act of the will? It is when the mind/soul decides either to follow the path that the brain has cranked out, or to follow the path that one understands is the right thing to do.
No, I don’t think an “act of will” is “when the mind/soul decides either to follow the path that the brain has cranked out, or to follow the path that one understands is the right thing to do” – I think that is a false distinction, and, in any case, the wrong distinction. I think myself, it is more useful to think of an “act of will” as an act that follows the consideration of options, whether moral criteria are used or not. For instance, if I choose a strawberry rather than a chocolate ice-cream, that is, IMO, an act of will, even though it can also be described as what my “brain cranked out”. Whereas if I just blindly draw a packet from the freezer, I cannot describe what I end up eating as the result of an act of will – I left it to “chance” not “will”. My brain simply did not “crank out” an answer to the question “which ice-cream do I want to eat?” Conversely, if I decide to see my friend rather than go to Starbucks, that is certainly an act of will (I weighed up alternatives) but it is, additionally, I would argue, a moral choice. And, interestingly, if I am such a saintly person (which I am not!) that it does not even cross my mind to go to Starbucks when fitting in a visit to a tedious friend is a practical possibility, then I may do a moral act that is not, proximally, “an act of will” at all, although it may be in a distal sense, in that I have, through the will-ful acquisition of altruistic habit throughout my saintly life, come to a state of grace wherein the joy of others is tantamount to my own.
Now if it is just a matter of the brain making a difficult decision, where does the conflict between mind and brain come from? Is it just that one part of the brain is in conflict with another part? If that were the case the evidence should be obvious and undeniable. But if such evidence is out there I have never seen it.
I don’t think there is a “conflict between mind and brain”. I think there are lots of conflicts within the brain between alternative courses of action, and how these conflicts are resolved is an extremely interesting domain of neuroscience, and yes, there is lots of evidence supporting viable hypotheses. I’d be happy to expound more, but maybe later :) Apologies for the lack of succinctness in my response!
Elizabeth Liddle
May 28, 2011, 04:17 AM PDT
Something like your account of meaning would be fine if we all lived in a Fred Flintstone-like community of people who talked to each other using one-word utterances, like “Slab!” (to borrow an example from Wittgenstein’s Philosophical Investigations).
Wittgenstein was a fan of the Flintstones!? vjtorley, have you read J.P. Moreland's The Recalcitrant Imago Dei: Human Persons and the Failure of Naturalism?
Mung
May 27, 2011, 04:55 PM PDT
nullasalus @ 121 Sorry this has taken me a while to get to. OK, here is a crux in our discussion:
I have some programming experience – I can code up a program where various outcomes are possible depending on which variables are in what state at a given time. The computer/software isn’t ‘deciding’ which outcome to produce, anymore than bowling balls decide to roll down stairs when kicked. Or rather, as much of a decision process is present in one as in the other.
I would say that your computer program is deciding, because that’s what I normally consider deciding is – weighing up contextual factors and opting for the action that best matches a goal. So we need to figure out what it is that you think Real Deciding is :) Because as far as I am concerned, that’s exactly how brains produce a decision, and although the logic is fuzzier, we can easily implement fuzzy logic on a computer (I’m sure you can, and so can I). So the extra factor that you think my Deciding (let’s call it LizzieDeciding, because I haven’t figured out how to do subscripts in html) lacks, and that Real Deciding has, isn’t fuzziness, right? I rather assumed consciousness, but when I suggest that, you say:
... that the real problem here is not in accounting for either decision-making, or even intention as in goal-directed decision making but in accounting for consciousness. Amirite?
You said:
Nah, you’re not.
So I’m not yet clear what the difference is between LizzieDeciding (i.e. an action selected as the one most likely to achieve a goal given contextual inputs) and Real Deciding. So let’s take your next point:
Qualia is a distinct question from intentionality or “aboutness”. And more an issue for yourself than myself, since you’re back to the computer metaphors, and are vacillating between treating decision-making and intentions as ‘nicknames for blind, mechanical causation’ and ‘nicknames for what you judge computers and programs to do’, ignoring the fact that computers “play chess” the same way arrangements of sticks and rocks “act as maps” – that is, in virtue of an interpreting mind.
No, I’m not vacillating, or, at least, I’m only vacillating if you are making the very distinction that I am not making, and indeed, claiming is not a distinction! I am indeed “treating decision-making and intentions” as events that are mechanistically accountable, but not – by definition not – blind. I am, as I said above, defining deciding, at least LizzieDeciding, as a process by which an action is selected as the one most likely to achieve a given goal in a given context. Which can’t be “blind” because of that goal - it is that matching-to-goal part of the process that I am defining as “intention” (let’s call it LizzieIntention, for now). So they aren’t “nicknames” at all – that’s what I think they are. So let’s figure out what, according to you, they aren’t: You say that I am vacillating between that and “what [I] judge computers do”. Well, I think chess-playing computers do exactly that LizzieDeciding thing, as you rightly observe. But you think I am wrong because I am “ignoring the fact that computers ‘play chess’ the same way arrangements of sticks and rocks “act as maps” – that is, in virtue of an interpreting mind. Hmmm. Tbh, I’m not quite sure what that means, or at least, if it means what I think it means, I think it’s wrong, but interestingly wrong. I’m trying to put my finger on just what is wrong! An arrangement of pebbles left by a tramp to point at a house where a nice lady will give you a drink of water clearly only has meaning if a) it was actually left by the tramp and not inadvertently kicked into position by a passing horse, and b) someone understands the intentions behind the arrangement, right? Is that the kind of pebble-map you were thinking of? And in that scenario, therefore, I would entirely agree that the arrangement of pebbles only has meaning “in virtue of an interpreting mind”. Now, if we take a chess game (and I’m no chess player, I’m afraid), again the arrangement of pieces on the chess board only has meaning “in virtue of an interpreting” something, where that something is normally the minds of two human players. Now, are you claiming that if one or both of the “players” is a computer program that no game of chess is being played? Clearly at some level, the computer or computers “interpret” the positions of the chess pieces according to a strict set of rules with a stated goal, just as a human player does. But presumably you would say that there is no interpreting mind here, and I would be inclined to agree, but only because of the consciousness thing, which you have rejected as the critical difference. So what is the difference here? By what criteria do you claim that a pair of computers playing chess are only LizzieDeciding, whereas a human player playing chess is doing Real Deciding? In other words, what is it that I am ignoring? Both computer and human player have a goal, scripted in the case of the computer, which is to checkmate the opponent. Both make LizzieDecisions regarding the next move based on some kind of fuzzy logic that maximises the probability of success given the current state of the game, so both have LizzieIntentions. What am I not accounting for here? You say:
Aboutness, decision-making, etc, as common sense knows them, are specifically left out of a mechanistic materialist understanding of reality – and, when incorporated into that understanding, end up becoming ‘decisions’ by vast redefinitions, such that you can arguably say rocks decide to roll down stairs when kicked. It’s obfuscation, and in the end it’s either admitted that there is intentionality and aboutness in nature after all (and in a sense that is at odds with that materialist, mechanist understanding of nature), or that there isn’t (and that therefore there is no actual mechanistic, materialist ‘accounting for’ these things – they are simply eliminated.)
Well, I don’t think so – or, at least, first I’d like a clearer exposition of how LizzieDeciding (which wouldn’t, incidentally, include rocks rolling downstairs when kicked) differs from Real Deciding, and how LizzieIntention (which requires actions to be selected on the basis of the degree to which they further a goal) differs from Real Intention. Yes, I think there is “intentionality” in nature – I think nature is full of it, and it’s mostly found in animals (not so much in plants, although a few come close to fulfilling my minimal criteria) and also, increasingly, in human artefacts. So there’s a nice ID argument for you :) But it’s not an argument against a “materialistic mechanistic” account of nature, it’s an argument that mechanisms, including human-designed mechanisms as well as natural (living) mechanisms, are capable of decision-making and intentions.
But no materialist I know insists on that, and I certainly don’t. That’s because we also have neutrons and atoms and molecules and cells and organs and organisms and brains and selves.
Then you need to meet more materialists. This is a little like being told ‘There are no materialists who deny that beliefs exist’, ignoring Alex Rosenberg and the Churchlands, etc. Or ‘there are no materialists who deny moral realism’, or ‘there are no mereological nihilists’. I mean, they’re inconvenient for the position, but they’re out there.
OK. As long as you aren’t including Dennett.
And what makes them more than their parts is the pattern they make, over both time and space
Are those patterns intrinsic, or extrinsic? Are there real “patterns” in nature, independent of any mind evaluating them? Or is a pattern just an impression a mind applies to its representation of the world? This applies to that ‘extraordinary assemblage’ as well.
And does the tree that falls in the forest make a sound? Did light exist in a universe without retinas? We call them patterns, because that’s how we, as human brain-possessors, parse them. And one of those patterns is the pattern we call the parser herself. I submit that if you exchange your binary categories (real/not real) for a Strange Loop, the mind-body problem makes a noise like a hoop and rolls away :)
Elizabeth Liddle
May 27, 2011, 10:43 AM PDT
vj
The key to this is whether beliefs can be “cashed out” as dispositions to behave in certain ways (and by behaviour I include internal rehearsals and mimicry of external behaviour, such as seeing an imagined series of events or running words in one’s head). You clearly do not think that something like believing in the theory of relativity can be accounted for this way. Before going any further I would like to identify what types of beliefs (if any) you think can be accounted for this way. You seem to accept that the pigeon’s belief that it is about to be fed can be seen as a behavioural disposition. So presumably a person’s belief that he is about to be fed can also be cashed out behaviourally. How about these beliefs:
* That tigers are dangerous
* That it will rain soon
* That it will rain tomorrow
I am trying to pin down why some beliefs require a “proposition” whereas others don’t.
markf
May 27, 2011, 03:48 AM PDT
Hi markf (#134) Thank you for your post. Your attempt to ground propositional meaning in intentions only succeeds in cases where the intention can be expressed non-propositionally. This is rarely the case when we utter propositions: typically it applies to those simple cases where what is sought is merely a change in an individual's behavior. In these cases, language may not even be required. A boss who wants a tardy employee to change his ways may say "You are early" sarcastically, or she may just tap the dial of her watch and give the employee a black look. But if I am trying to change your beliefs, as Sean Carroll attempted to do recently when he wrote his blog post in "Scientific American" on "Physics and the Immortality of the Soul" (see http://www.scientificamerican.com/blog/post.cfm?id=physics-and-the-immortality-of-the-2011-05-23 , and see here for a follow-up discussion: http://www.physicsforums.com/showthread.php?t=501665 ), then my intention has to be cashed out propositionally. In Sean Carroll's case, there was a proposition P that he wanted to communicate, discuss, and get his readers to deny: namely, that there is a non-material spirit that drives around the particles in our brains. Here the intention is linguistic: it presupposes a proposition P that needs to be understood and shared between sender and receiver. You argue that beliefs can be cashed out as dispositions to behave. A pigeon's belief that it is about to be fed can indeed be cashed out like that, but pigeons do not engage in critical thinking and debating, as rational animals like ourselves do. Most of our beliefs are incapable of being cashed out in behavioral terms, and any given behavior may be compatible with a multitude of different beliefs and intentions. (Think of standing bare-headed in the rain, and Samuel Johnson's penance.) I might add that your claim that the belief that X can be explained as the disposition to behave as though X fails to achieve the desired reduction, in any case: it still leaves the variable X unexplained. And in the vast majority of cases, as I argued above, "behaving as though X" has no clear content. Something like your account of meaning would be fine if we all lived in a Fred Flintstone-like community of people who talked to each other using one-word utterances, like "Slab!" (to borrow an example from Wittgenstein's Philosophical Investigations). In simple cases, the meaning of an utterance does equate to its use, which in turn equates to its function. But if we lived our lives like that, we wouldn't need language at all - and we wouldn't need propositions, either. (Vervet monkeys seem to do well without them.) Lastly, one need not look far for examples which contradict the "meaning = use = function" theory. Any science textbook will do.
vjtorley
May 27, 2011, 02:56 AM PDT
vj and Elizabeth

vj has a point when distinguishing meaning through mere contingency, such as "black clouds mean rain", and meaning arising through intentions (which is often associated with language). This is Grice's distinction between natural and non-natural meaning, and it is quite fundamental. It is also true that only minds, possibly only human minds, can give something non-natural meaning - because it is related to intentions, and only minds can have intentions. But this isn't a problem for materialists if they can show that intentions are material (which we can by recognising them to be dispositions).

There is more of a problem with "propositional meaning". vj (and Feser) wants to give this some kind of fundamental status and then show that it only applies to mental things. But there is a strong (and I believe correct) philosophical tradition which regards propositional meaning as a construct derived from intentions and not inherent as vj would have it. We are confused by our conventional use of words and forget the fundamentals of communication. In a very potted version it goes something like this:

* The propositional meaning of "You are early." is the set of conditions under which that sentence is true, i.e. the person being addressed arriving before the set time.
* We know those conditions because of the conventional role of the words in the sentence, which we can look up in a dictionary.
* But those conventions must have been established at some time. The words (or their predecessors) must have been uttered without the conventions (and without a dictionary) - so we could learn the role they are meant to play. People must have communicated without the benefit of convention.
* So there must be something else more fundamental that establishes the truth conditions for an utterance. Indeed, the truth conditions of a real utterance often do not coincide with the conventional use of the words (e.g. a sarcastic "You are early" addressed to a latecomer).
* How do we establish the truth conditions? Through the intention of the utterer. It is the utterer's intentions that establish the truth conditions. In this case, the intention to get the latecomer to realise they have arrived after the agreed time.
* So propositional meaning also comes back to intentions. In this case, the intention to get someone to believe something.

One objection is that to believe something implies propositional meaning - the truth conditions of the belief. But the belief that X can be explained in materialist terms. It is the disposition to behave as though X. Even quite a simple animal such as a pigeon can believe it is about to be fed.

markf
May 26, 2011, 11:23 PM PDT
There are two things that can be called a decision. One is the kind of thing that Elizabeth described, where the brain processes the available data and then produces some type of behavior. The other is when one understands the gravity of the situation, considers the possibilities, and then chooses to act based on either the passion one feels or the reasons one understands. The choice is therefore either rational and moral, or irrational and immoral. The point is that in both cases it is an act of the will.

So what is an act of the will? It is when the mind/soul decides either to follow the path that the brain has cranked out, or to follow the path that one understands is the right thing to do.

Now if it is just a matter of the brain making a difficult decision, where does the conflict between mind and brain come from? Is it just that one part of the brain is in conflict with another part? If that were the case, the evidence should be obvious and undeniable. But if such evidence is out there, I have never seen it.

Lamont
May 26, 2011, 03:06 PM PDT
Elizabeth Liddle (#131)

Thank you very much for your post. Regarding your example of "Black clouds mean rain," I suggest you have a look at my earlier post (#31), where I addressed this very example when discussing Grice's theory of natural meaning. I concluded:
“Natural meaning” is, it seems, a derived rather than a primitive usage of the term “meaning”: it assumes the existence of a community of observers who possess a stock of shared scientific knowledge.
I don't think qualia constitute a very good argument for the immateriality of the mind, so I have no objection in principle to your example of how an animal might decide on eating a peach instead of a banana. Where I would object is when you make a leap of logic to claim that in a language-using animal, this decision would be magically "propositionalized" as: "Hmmm... I think I fancy a peach." For what is at stake here is precisely how language, which is inherently propositional, arises in the first place. To assume that it is simply there is to beg the question.

You mention "subvocalizations and eye movements." The latter are not propositional. Incidentally, I think eye movements could be very useful as a diagnostic tool for deciding whether an animal is conscious or not, although I'm not sure whether the eye movements of language-using animals like ourselves would be different from those of chimps. I suspect that they would - although I would add that this is a product of the fact that they think propositionally, not a cause. (The notion of getting from darting eyes to fully-fledged propositions sounds like a comical endeavor to me!) As for subvocalizations, they are already propositional, in human beings. The question is: how did they get that way?

There is an ocean of difference between the language-using animal who thinks "Mmm. I had a banana last time but it was very ho-hum, and this isn't the best season for them anyway. On top of that, I want to prepare a dish that will pleasantly surprise the palates of my guests, who will be having dinner with me. Mmm. I think deep dish peach pie should do the trick. I think they'll like that one, although I'd better ask Sally first as I know she's allergic to some foods" and the animal whose brain automatically weighs up the pros and cons of eating a peach versus eating a banana, and selects the former. The first animal means something by his/her decision to eat a peach; the second does not, although I'd be prepared to say that its movements have a (natural) meaning.

In short, I believe that the attempt to generate linguistic meaning from an organism's behavior (including forward models) is doomed to failure. Must run now. Will be back later.

vjtorley
May 26, 2011, 03:05 PM PDT
Hi, vjtorley

Thanks again for this conversation, which I am very much enjoying :) In response to this (#101):
Hi Markf and Elizabeth Lidddle, Thank you both for your very thoughtful (!) posts. The care with which you composed them illustrates the very point that I wanted to convey, which is that meaning – and here I’m talking about inherent meaning – is, in paradigm cases, propositional. The meaning I want to convey when I wave my hands frantically while I’m in the sea is “I’m drowning” and I select the motor pattern precisely because I think it’s an excellent way to get other people to understand the proposition I wanted to communicate. They do so because they are beings like myself who are capable of putting themselves in my shoes and inferring that the only good explanation for the frantic, insistent waving they observe is an urgent need to communicate the single proposition: “I’m drowning.” Thus in the above example it is the proposition that does all the work of explaining the meaningfulness of the action sequence I engage in. The fact that I can preview it in my head is all well and fine, but that does not make it propositionally meaningful. In the absence of propositional language, previewing an action in one’s head might make it useful or practical at best. In these posts, however, we are engaging in a discussion which has no practical value whatsoever – unlike the drowning case. All of us are perfectly capable of meeting our practical wants. Our discussion pertains to the meaning of what it is to have a thought. A more theoretical discussion would be difficult to imagine. Any meaning, at this level of communication, is inherently propositional. Now here’s my point: bodily movements are not inherently propositional. It takes a good deal of careful selection to come up with a body movement that conveys a proposition per se, and even when it does, it’s a very simple one at that (“I’m drowning.”) In the vast majority of cases, when we communicate, the person communicating doesn’t mean what they mean simply because they’ve previewed these movements. Rather, the meaning logically precedes the movements. Bodily movements, even previewed ones, are simply incapable of accounting for the meaningfulness of the vast range of propositions we are capable of entertaining. And if propositional meaning does not inhere in bodily movements, then we have to look beyond them to find meaning.
I think I am in fairly fundamental disagreement with you here, but it may be difficult for me to explain why, although I'll do my best. But I do actually dispute your premise - I think propositional meaning does "inhere in bodily movements", or, at least, I think it is created by nested and re-entrant programs for bodily movements, where those bodily movements are not necessarily executed, and where they include subvocalizations and eye movements.

However, I think we may be getting partly at cross purposes over the way we are thinking about "meaning". I'm using the word fairly literally, or, at least, in a fairly literal common usage sense, as I said above, as in "black clouds mean rain" or "a rising temperature means the thermostat will cut out". In other words I'm taking, as a kind of base unit (metaphor warning), meaning as contingency. So, for a very simple organism, a chemical signal emitted by a food source may "mean" "swim towards the signal source", and is implemented by a simple stimulus-response circuit. For a more complex organism, the circuit may have many more contingencies, and only be executed if a whole series of logic gates (not a metaphor in this instance) sum to action threshold. Inputs to those gates will include signals indicating the energy reserves of the organism (is it hungry?), the learned probability that the signal in question is from prey rather than a predator (learning by means of weights on the neural connections affecting the summation), the probability that a bigger food source will yield more calories for less energy expenditure, etc. And as we nest these contingencies, and incorporate feedback loops (swim towards the signal, resample the signal; make a new forward model in light of the new data, etc.), we start to get the beginnings, I would say, of propositional logic, all implemented via signals that ultimately result, if executed, in bodily movements.

In fact, this, I suggest, is how we can account for "qualia", although I am aware that that is opening a large can of worms. But I suggest that the "qualia" of a peach, for instance - the peachiness of a peach (and yes, I've chosen a fairly complex stimulus) - inheres in programs for a series of bodily movements and forward models, including models for reaching for a peach, grasping it, activating the neurons that would respond to its fuzziness; the model of raising it to your nose, activating the neurons that would respond to its smell; the model of taking a bite, activating the neurons that would respond to its taste, etc. And when we decide between, say, a peach and a banana, we are activating, I suggest, a propositional logic system that decides, on the basis of a kind of truth table, whether the motor program that will result in the ingestion of the banana or the motor program that will result in the ingestion of the peach will best match an intended end-state, which in turn is the result of a whole other series of forward models.

At the level of the brain user, as it were (no, I'm not letting a homunculus in there; the user herself is a model generated by the brain, which doesn't make her not real - the only access to reality we have, I suggest, is models), especially if that brain user is a language user, the decision will be expressed, linguistically, probably, as "hmmm... I think I fancy a peach". But even that linguistic thought, whether uttered or not, will emerge from the activation of motor programs involved in the vocalisation of those words (whether at execution level or not).

Does that make any sense?
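To make the "logic gates summing to action threshold" bit less hand-wavy, here is a minimal sketch in Python of the kind of weighted-summation decision circuit I have in mind. Every input, weight and threshold is invented purely for illustration - it is not a model of any real neuron:

```python
# Toy "decision circuit": weighted evidence is summed and compared
# against an action threshold, as described above.
# All numbers here are invented purely for illustration.

def decide(inputs, weights, threshold=1.0):
    """Trigger the action if the weighted sum crosses the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return activation >= threshold

# Hypothetical inputs: signal strength, hunger level,
# learned probability that the signal comes from prey.
inputs = [0.8, 0.9, 0.7]
weights = [0.5, 0.6, 0.4]   # "learned" connection weights

if decide(inputs, weights):
    print("swim towards the signal source")
else:
    print("stay put")
```

Nest enough of these, and feed their outputs back in as inputs, and you get the contingency structure I am gesturing at.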
To nullasalus: I've read your posts with interest, and I'll try to get a response to you at the weekend, if the thread is still going! Not sure how long threads live for at UD. Anyway, thanks for your responses.

Elizabeth Liddle
May 26, 2011, 02:16 PM PDT
#129 Lamont

I was asking for a description of a decision or an intention - not mind in general. Importantly, this description needs to be in terms of properties that are mutually observable - otherwise it is a description of one unobservable in terms of another, which is just playing with words (different beetles in boxes, to stretch Wittgenstein's metaphor).

markf
May 26, 2011, 08:42 AM PDT
Markf,

The description of the mind you ask for is that it has two main powers - intellect and will. The intellect understands and the will decides. If you want more details, read Aquinas or Husserl. What you seem to really want, however, is analysis and not description. (See my comment #2.) We can analyze how the brain works, and that is wonderful. But as to the mind, it is simple - a unitary whole that disappears as soon as you try to take it apart. You can complain about that all you want, but it will not change anything.

Lamont
May 26, 2011, 08:35 AM PDT
#127 nullasalus

I agree we are done here. I don't think my position can be accurately described as either scepticism or solipsism. I am not doubting that the external world, other people, or other minds exist. I just disagree about the nature of all our minds. You are utterly convinced there is something else, but as you cannot describe it (if you can - go ahead and do so), it becomes very hard to discuss it.

markf
May 26, 2011, 03:42 AM PDT
markf,

"The only reason for supposing something else in addition to a materialist explanation for a decision is your own experience of this indescribable something else which you assume everyone else has."

Indescribable? Funny, people seemed plenty able to understand what everyone else meant by 'making decisions' or other mental claims far in advance of materialism. As for assuming that everyone else has these - yes, I'm not a solipsist.

"Wouldn't it be just as logical to deduce that Elizabeth and I are in some way deficient"

I'm trying to be polite, Mark. ;)

"How do you know that your own irreducible experience of decision making is the same as anyone else's? i.e. that anyone is talking about the same thing as you?"

So, your gamut is solipsism and radical skepticism. In the words of Cave Johnson: We're done here.

nullasalus
May 26, 2011, 02:01 AM PDT
nullasalus

I note that for you decisions and intentions are irreducible and therefore presumably not definable or describable. There is an important difference between radiation and mental constructs like these. The various types of radiation are the best current explanation for mutually observable effects such as burning and X-rays. That is our evidence for it. It is an explanation because, although radiation is not directly observable, we can describe certain hypothetical properties of radiation which are in the mutually observable world, such as wavelength, particle size and energy, and work out how they lead to the effects.

Materialists have a (very incomplete) explanation for the behaviour we can all observe when people consider alternatives and act. That is the evidence for the various brain structures Elizabeth describes. You propose an alternative explanation, but it is irreducible, so it has no describable properties and it is only experienced internally - it is not mutually observable and it has no properties that are mutually observable. The only reason for supposing something else in addition to a materialist explanation for a decision is your own experience of this indescribable something else, which you assume everyone else has.

So effectively your argument comes down to "They do exist (although I can't tell you anything about them). If you offer another explanation for the behaviour we see, you must be talking about something else." And I guess you assume Elizabeth and I have similar experiences of decision making and are either lying or deluding ourselves. But that raises some interesting problems:

Wouldn't it be just as logical to deduce that Elizabeth and I are in some way deficient and don't have the experience you have when making a decision? If that were true, what difference would it make to your life or ours?

How do you know that your own irreducible experience of decision making is the same as anyone else's? i.e. that anyone is talking about the same thing as you?

markf
May 26, 2011, 01:14 AM PDT
markf,

"You appear to mean something different by "decision" from Elizabeth. She has given a rather detailed definition. Why don't you define what you mean?"

Her definition of "decision" is, essentially, raw material input/output devoid of meaning or intention except as a useful fiction by a third party. As I said, bowling balls apparently "decide" to roll down stairs when kicked. I regard intentionality and meaning as real and irreducible. It's not particularly 'mysterious' to anyone except a materialist. In the same way, an idealist would find matter downright mysterious.

"Imagine that someone held a dualist view of chess."

Why? All that counts as a chess board, chess pieces, the rules of chess, etc. is that which our minds project to begin with. Really, we went over this already - even with chess as an example. The analogy fails, because chess involves artifacts we project or token via our minds to begin with. Explaining our minds as brains the same way leads to some obvious problems.

But if you want an analogy, here's one that's more apt: Imagine that someone has a metaphysics that rejects the existence of, say... radiation. One day, they encounter what we'd call irradiated things. So, they start to say 'that's not really radiation, it's just...' and give explanations which tend to be vague and make heavy use of metaphor. But whenever a concrete explanation is pressed for, either they deny the obvious (in this case, radiation - it's all a big misunderstanding, an illusion, an artifact of how unguided evolution has wired our minds) or they admit to it in some esoteric way ('that's not radiation, that's gravity. A special kind of gravity. A kind of gravity that really, really looks like radiation, right down to banal description.') They can go on that way if they want. But the reasonable thing to do would be to say 'alright - I suppose radiation exists after all, and I'll have to abandon my previous position.'

"It is just a mysterious something extra."

On materialist metaphysics, mind is downright mysterious if it's even taken to exist - that much I can agree on.

nullasalus
May 25, 2011, 11:18 PM PDT
nullasalus

You appear to mean something different by "decision" from Elizabeth. She has given a rather detailed definition. Why don't you define what you mean? I think you will struggle, and I will try to explain why through an analogy. Imagine that someone held a dualist view of chess. They believed that the concepts of chess existed in some immaterial way (maybe you do?). Elizabeth defines "checkmate" in terms of the rules of chess and what particular configurations of pieces might achieve. You respond by saying that that is not real checkmate, that checkmate cannot be reduced to mere wooden pieces on a board, and that she is just assuming her metaphysics by defining it that way. The trouble is there is no way to discuss the immaterial checkmate. It is just a mysterious something extra.

markf
May 25, 2011, 10:36 PM PDT
vj #119

I am not clear what you mean by "propositional". But I agree our discussion relies on the conventional use of words and it has no immediate practical value. I would argue that both these attributes are dependent on a more fundamental use of language for meaning. Remember Wittgenstein and his language games. Words can only acquire their conventional meaning through frequent use to do something. The first users of language did not have a dictionary to turn to. The first dictionaries were descriptive, not prescriptive. And that meaning must have arisen because the speakers were trying to achieve an effect on the listener. They can't just have aired their views on philosophy of mind! Among the effects they will have wanted to achieve would be to get the listener to believe things, e.g. there is good hunting over the hill (maybe this is what you mean by "propositional"?). But both the belief and the intention to get someone to believe can be explained in materialist terms - specifically, a configuration of the brain that results in a disposition to behave in a certain way. However, it takes a very sophisticated use of language to develop the dispositions that correspond to beliefs in dualism etc., based on tens of thousands of years of language development.

markf
May 25, 2011, 10:13 PM PDT
And what makes them more than their parts is the pattern they make, over both time and space
Ah, and that's where the intentional magic "emerges." When you can't explain something, go ahead and jack the scale up to awesome heights and then bluff. It works in poker!

mike1962
May 25, 2011, 05:38 PM PDT
Elizabeth Liddle,

"But, equally, if I attempt to show that decision-making and intention are explicable in mechanistic terms then clearly I am defining decision-making and intention in terms that can be explained mechanistically!"

Another way of putting that is this: "If I assume a mechanistic materialistic point of view, and I stipulate that any definition of meaning, intention or decision must be consistent with this view (even if consistency means 'non-existent' in any common sense meaning of the terms), then - lo and behold - I can account for these things within my perspective." Sure. And so long as you let me redefine "playing chess" to mean "something a dog is capable of doing", it may well be easy to produce a dog playing chess.

"I'm regarding "decision-making" as a process by which something (typically an animal, but conceivably, if I'm right, an artefact - even a designed artefact - hey, if ID is true, wouldn't you expect people to be able to design decision-making and intentional artefacts? But I digress...) has several options for action (things it is capable of doing) and is able to execute the one that is most appropriate given the context."

So you're assuming the truth of your metaphysics to begin with, and then stipulating that viable explanations must conform to those metaphysics? Alright. Why not go the whole nine yards and define decision-making as any mechanical output that was dependent on some initial input? Bowling balls decide to tumble down stairs when pushed. Not to mention, what standard is used for 'most appropriate given the context'? The only way to determine who or what is appropriate given context, on your view, is to have a third party consider it and judge what they're seeing. And that they are considering and judging what they are seeing would itself require more third party judgment, etc. Unless, of course, that aboutness and directionality is intrinsic, but then...

"In fact, a chess-playing computer program is probably a good example of a mechanical decision maker."

And once again: Computers only play chess by virtue of human convention. I could keep the entire program intact while interpreting the rules and outputs differently, and it would be equally true that the computer is playing whatever game I've now decided on. This is pretty common in computer games, in fact. I have some programming experience - I can code up a program where various outcomes are possible depending on which variables are in what state at a given time. The computer/software isn't 'deciding' which outcome to produce, any more than bowling balls decide to roll down stairs when kicked. Or rather, as much of a decision process is present in one as in the other. (See the sketch just after this comment.)

"But here I anticipate (hey, I just made a forward model!) that we will have a problem - you will not be happy to call this "intention" because the computer program, you will insist (and rightly, IMO), is not "conscious". And I will accuse you of moving the goal posts"

Moving the goalposts? Now, you're telling me "I'm going to stipulate that my metaphysics are true for this conversation. Therefore, I'm going to define decision-making in the only way it can be true given that assumption, even if it does violence to the words 'decision-making' in any common sense use of the terms. Then I'm going to point at this or that, and say - with all those assumptions intact - that that is decision making. Object, and I'm going to say you're moving the goalposts." Yeah, color me unimpressed.
And I'm sure you'd be unimpressed if I mirrored this move, applying my metaphysics in advance of yours, and changed definitions around as quickly, then announced that disagreeing was a move of the goalposts. Nice try, though. :)

"Well, no I won't, because having made my forward model I can deal with it in advance. But I will say that the real problem here is not in accounting for either decision-making, or even intention as in goal-directed decision making but in accounting for consciousness. Amirite?"

Nah, you're not. Qualia are a distinct question from intentionality or "aboutness". And more an issue for yourself than myself, since you're back to the computer metaphors, and are vacillating between treating decision-making and intentions as 'nicknames for blind, mechanical causation' and 'nicknames for what you judge computers and programs to do, ignoring the fact that computers "play chess" the same way arrangements of sticks and rocks "act as maps" - that is, in virtue of an interpreting mind.' 'Aboutness', intentionality or proto-intentionality does not require consciousness on a number of alternate metaphysics (Aristotelianism / Thomism being one). Now, conceivably someone can argue the qualia/consciousness line from a different perspective, but I'm not doing that here.

"Well, my point about the wave is that it isn't "ultimately constituted by smaller physical things". The wave isn't a property of those smaller things at all - it's a property of the interface (it's why I chose an ocean wave, rather than a sound wave for instance). The water can be travelling West, the air North East, and the wave travelling South."

And like I said, this still doesn't capture the difference between the cases. Let me put it another way: Aboutness, decision-making, etc., as common sense knows them, are specifically left out of a mechanistic materialist understanding of reality - and, when incorporated into that understanding, end up becoming 'decisions' by vast redefinitions, such that you can arguably say rocks decide to roll down stairs when kicked. It's obfuscation, and in the end it's either admitted that there is intentionality and aboutness in nature after all (and in a sense that is at odds with that materialist, mechanist understanding of nature), or that there isn't (and that therefore there is no actual mechanistic, materialist 'accounting for' these things - they are simply eliminated.)

"But no materialist I know insists on that, and I certainly don't. That's because we also have neutrons and atoms and molecules and cells and organs and organisms and brains and selves."

Then you need to meet more materialists. This is a little like being told 'There are no materialists who deny that beliefs exist', ignoring Alex Rosenberg and the Churchlands, etc. Or 'there are no materialists who deny moral realism', or 'there are no mereological nihilists'. I mean, they're inconvenient for the position, but they're out there.

"And what makes them more than their parts is the pattern they make, over both time and space"

Are those patterns intrinsic, or extrinsic? Are there real "patterns" in nature, independent of any mind evaluating them? Or is a pattern just an impression a mind applies to its representation of the world? This applies to that 'extraordinary assemblage' as well.

nullasalus
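A minimal sketch, in Python, of the state-machine point made above: the same trivial program under two rival interpretations. All the names and "games" here are invented for illustration; this is nobody's actual code.

```python
# The same state machine under two interpretations: nothing in the
# code itself fixes which "game" is being played - that is supplied
# by the interpreting mind. All names are invented for illustration.

TRANSITIONS = {
    ("start", "a"): "state1",
    ("start", "b"): "state2",
}

def step(state, symbol):
    """Advance the machine; unknown inputs leave the state unchanged."""
    return TRANSITIONS.get((state, symbol), state)

# Interpretation 1: a chess opening picker ("a" = play e4, "b" = play d4).
# Interpretation 2: a maze walker ("a" = turn left, "b" = turn right).
print(step("start", "a"))  # "state1" - what it *means* is up to us
```

Relabel the comments and the very same execution trace "plays" a different game, which is the reinterpretation point at issue.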
May 25, 2011, 05:08 PM PDT
Nullasalus @#104
No, I'm not saying "there are no thoughts or intentions" - just because I think they can be accounted for by observable mechanisms doesn't mean I think they don't exist! Yes, the "material state" is "blind" but, as I tried to make clear, that doesn't mean that the person is blind, because the person exists at a higher level of analysis than a given "material state".
The only possible way to make sense of a “higher level of analysis” in this context would be either A) as a useful fiction (and if it’s a fiction, it’s not going to be explanatory), B) in terms of weak emergence (in which case intention and meaning is ‘nothing but’ operation by that which is devoid of meaning and intention, and thus ultimately eliminative), or C) strong emergence (in which case appeals to the material constituents will not be explanatory, even if they are in some sense required – there’s something above and beyond those constituents in play that they themselves don’t explain, or our understanding of said constituents is incomplete, and there’s more to the physical than materialism and mechanism supposed.) If there’s another option, you’re going to have to outline it – “higher level of analysis” in and of itself isn’t very helpful.
And the name I give to the kind of decision-making in which I weigh up various options and decide, with malice (or otherwise) aforethought, on a specific course of action, is "intending". And I experience it as "deciding". But what happens in my brain when I do that deciding, I would contend, is that a series of "blind mechanical material states" chunter through a series of operations the final output of which is "my" decision. And I call it mine because I consider myself incorporated (again I use the word absolutely literally) in that neural machinery.
As I said above, this either results in your explanation of meaning and/or intention as a useful fiction (and ultimately non-explanatory), weakly emergent (and thus eliminative), or strongly emergent (and thus the material isn't what we thought it was after all.) Let me put this another way: Let's say a person claims that they have a dog who can play chess. I ask to see the evidence, and they show me a dog who, when faced with a chess board, will pick up some of the pieces in his mouth and gnaw at them. It does no good for that person to tell me, "See, I call what the dog is doing 'playing chess'." Likewise, if you commit yourself to the view that all that exists is a material world, blindly and deterministically churning out results without thought or intention, it does little good to point at one or another particular bit of churning and say "I'm going to call this 'decision-making'!" The matter is making decisions the way a dog plays chess.
I think we are in danger of running in non-overlapping circles here. If "decision-making" or "intention" in your view must involve a non-mechanistic component, then clearly, any attempt I make to describe either in terms of mechanisms isn't going to satisfy you. But, equally, if I attempt to show that decision-making and intention are explicable in mechanistic terms then clearly I am defining decision-making and intention in terms that can be explained mechanistically!

So let's agree some ground rules: I'm regarding "decision-making" as a process by which something (typically an animal, but conceivably, if I'm right, an artefact - even a designed artefact - hey, if ID is true, wouldn't you expect people to be able to design decision-making and intentional artefacts? But I digress...) has several options for action (things it is capable of doing) and is able to execute the one that is most appropriate given the context. For example, a bit of energy-saving software that is able to select the optimum setting for a thermostat given the amount of solar energy it is getting, the number of occupants of the house, and the time of day. Or something. (A minimal sketch of what I mean follows below.) I'm not talking about intention here, just decision making. Well, I'm sure the smart engineers around here could design such a "decision-maker" fairly easily (I could probably even have a decent shot at programming it myself). That, IMO, isn't comparable to a dog chewing the chess-pieces and calling it a dog playing chess; it's more comparable to a computer playing me at chess and winning (not actually hard for a computer to do). In fact, a chess-playing computer program is probably a good example of a mechanical decision maker. So I hope we can agree that decision-making, in terms that we would recognise as quite humanoid, can be at least replicated by a mechanistic algorithm. And we also know the kinds of algorithms our networks of neurons use to make decisions, and we can even emulate them.

So what about intention? It occurs to me that a possible confound here is the issue of consciousness. I am describing intention simply in terms of a decision-making process that selects actions so as to maximise the chance of achieving a pre-defined goal. In that sense, my chess-playing computer is an intentional system - its goal is to check-mate the human player, and it selects its moves in such a way that at any given point in the game, its chances of achieving that goal are maximised, while of course constantly having to update its plans (I use the word advisedly) in light of the human player's moves. So it plans, and replans, by making a constantly updated forward model (simulation) of likely outcomes of its next move, and feeding each output back in as input to the move-selection process.

But here I anticipate (hey, I just made a forward model!) that we will have a problem - you will not be happy to call this "intention" because the computer program, you will insist (and rightly, IMO), is not "conscious". And I will accuse you of moving the goal posts :) Well, no I won't, because having made my forward model I can deal with it in advance. But I will say that the real problem here is not in accounting for either decision-making, or even intention as in goal-directed decision-making, but in accounting for consciousness. Amirite? My account of intention does not satisfy you because I have not even mentioned the issue of consciousness. And that's a fair cop. But I think it's worth identifying what the cop actually is :)
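To make the thermostat example concrete, here is roughly the kind of thing I have in mind, as a minimal Python sketch. The settings, scoring rules and numbers are all invented for illustration - it is a toy, not a real controller:

```python
# Toy "mechanical decision-maker": several candidate actions
# (thermostat settings), one selected as most appropriate given the
# context. All rules and constants are invented for illustration.

def score(setting, solar_kw, occupants, hour):
    """Trade comfort against energy cost; higher is better."""
    comfort = -abs(setting - 21)                   # prefer about 21 C
    solar_discount = 0.5 if solar_kw > 1.0 else 1.0
    energy_cost = -max(0, setting - 18) * solar_discount
    occupied = 7 <= hour <= 22                     # anyone home and awake?
    return energy_cost + (comfort * occupants if occupied else 0)

def choose_setting(solar_kw, occupants, hour):
    candidates = range(15, 25)                     # settings in degrees C
    return max(candidates, key=lambda s: score(s, solar_kw, occupants, hour))

print(choose_setting(solar_kw=1.5, occupants=3, hour=19))   # prints 21
```

It has options, weighs them against the context, and executes the best one - decision-making in my sense, with no whiff of consciousness anywhere.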
The example of the wave doesn’t work, because there’s no need to dispute that some physical thing X is ultimately constituted by a number of smaller physical things Y. Put another way, just because a bowling ball really is just a conglomeration of smaller material things (though whether it’s even right to call them ‘material’ anymore, given quantum physics, is an open question) poses no problem here, precisely because a “bowling ball” as a useful fiction, or only ‘really’ existing relative to a mind, isn’t terribly controversial to most people. Just as the same knife can be ‘a piece of cutlery’, ‘an antique’, ‘a weapon’, etc relative to a mind, though most everyone would agree that the knife is just a collection of atoms, etc, in this or that arrangement, which we call various things in different contexts and as shorthand.
Well, my point about the wave is that it isn't "ultimately constituted by smaller physical things". The wave isn't a property of those smaller things at all - it's a property of the interface (it's why I chose an ocean wave, rather than a sound wave, for instance). The water can be travelling West, the air North East, and the wave travelling South. It is simply not possible to capture the behaviour of the wave in terms of the subunits of material either side of the interface, because the wave's behaviour is a property of the interface, not of either material.

OK, we are back in metaphorland, so let's return to neurons: neurons fire discretely, but populations of neurons oscillate, and those oscillations can only be accounted for in terms of the population dynamics. They exist only at a higher level of analysis than the individual neuron. And yet, as with the ocean wave, which would not exist without the water or the air, but is composed of neither and is a property of neither, oscillations in neural populations wouldn't exist if the individual neurons didn't fire. And so on upwards. Networks of neural populations also oscillate, and do so in a chaotic (in the technical sense) manner: they can "flip" from state to state, depending on the inputs from the contributing populations, and those network oscillations are again only accountable for at the network level. And it is these chaotically oscillating networks of neurons that finally determine the executed action, which in turn will bring in new data (an eye movement, for instance, results in a whole new cascade of neural firing as new patterns are cast on the retina), so not only is a brain a decision-maker (in my sense), its decisions include decisions to acquire new data relevant to the current goal (which makes it a lot cleverer than the chess program).
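If a toy model helps: here is a minimal Python sketch of "discrete units, oscillating population". Each unit fires all-or-nothing, but delayed inhibitory feedback makes the population rate wax and wane. Every constant is invented for illustration; this is not a model of any real circuit.

```python
# Discrete spikes at the unit level, oscillation at the population
# level: the rhythm exists only in the population dynamics.
# All constants are invented purely for illustration.
import random

N, drive, gain, delay = 200, 0.5, 1.2, 5
rates = [0.0] * delay          # seed the rate history with silence

for t in range(60):
    # Firing probability = excitatory drive minus delayed inhibition.
    p = min(1.0, max(0.0, drive - gain * rates[-delay]))
    fired = sum(random.random() < p for _ in range(N))   # all-or-nothing
    rates.append(fired / N)                              # population rate

print(" ".join("%.2f" % r for r in rates[delay:delay + 30]))
# The printed rates rise and fall in waves, though no single unit does.
```

No individual unit oscillates - each just fires or doesn't - yet the population rate swings. That is the point about levels of analysis.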
Strip away the metaphors, the useful fictions and the poetic language when talking about intention and meaning (and even consciousness and experience) in a materialist world and there’s just not much left.
Well, that's really my point. Take apart a clock, and there's not much left. That's because a clock isn't an itemised list of parts, it's an assembly - it exists at a higher level of analysis than the parts - it's an emergent property of its parts, if you like. Sure, if you insist that the material world is no more than the hadrons and leptons of which it is comprised (or whatever the current fundamental particles are these days) then "there's not much left". But no materialist I know insists on that, and I certainly don't. That's because we also have neutrons and atoms and molecules and cells and organs and organisms and brains and selves. And what makes them more than their parts is the pattern they make, over both time and space - how they relate to one another, just as the ocean wave is the pattern of the interface between two elements (in the Greek sense), not a property of either of the two elements separately. To say an ocean wave is made of water and air is not to eliminate the wave, nor to reduce it to "merely" water and air. It is simply to state a fact that misses most of what the wave is.

Yes, I think that brains are extraordinary assemblages of neurons, made of molecules and atoms and neutrons and quarks and leptons. But the "extraordinary assemblage" belongs in that list - to leave it out would make me an eliminativist, and I'm not :) I'm interested in that extraordinary assemblage, and how, inter alia, it produces what we call consciousness. But that might have to wait for another OP :)

Elizabeth Liddle
May 25, 2011, 04:03 PM PDT
Hi markf and Elizabeth Liddle,

Thank you both for your very thoughtful (!) posts. The care with which you composed them illustrates the very point that I wanted to convey, which is that meaning - and here I'm talking about inherent meaning - is, in paradigm cases, propositional. The meaning I want to convey when I wave my hands frantically while I'm in the sea is "I'm drowning" and I select the motor pattern precisely because I think it's an excellent way to get other people to understand the proposition I wanted to communicate. They do so because they are beings like myself who are capable of putting themselves in my shoes and inferring that the only good explanation for the frantic, insistent waving they observe is an urgent need to communicate the single proposition: "I'm drowning." Thus in the above example it is the proposition that does all the work of explaining the meaningfulness of the action sequence I engage in. The fact that I can preview it in my head is all well and fine, but that does not make it propositionally meaningful. In the absence of propositional language, previewing an action in one's head might make it useful or practical at best.

In these posts, however, we are engaging in a discussion which has no practical value whatsoever - unlike the drowning case. All of us are perfectly capable of meeting our practical wants. Our discussion pertains to the meaning of what it is to have a thought. A more theoretical discussion would be difficult to imagine. Any meaning, at this level of communication, is inherently propositional.

Now here's my point: bodily movements are not inherently propositional. It takes a good deal of careful selection to come up with a body movement that conveys a proposition per se, and even when it does, it's a very simple one at that ("I'm drowning.") In the vast majority of cases, when we communicate, the person communicating doesn't mean what they mean simply because they've previewed these movements. Rather, the meaning logically precedes the movements. Bodily movements, even previewed ones, are simply incapable of accounting for the meaningfulness of the vast range of propositions we are capable of entertaining. And if propositional meaning does not inhere in bodily movements, then we have to look beyond them to find meaning.

vjtorley
May 25, 2011, 03:13 PM PDT
:-)

ellazimm
May 25, 2011, 01:42 PM PDT
"Every day, I find new ways to make mistakes." Yes, it's the thing that makes us most common. laterUpright BiPed
May 25, 2011, 01:26 PM PDT
Upright: I'll do my best, in my stupidity, to be open minded. But DON'T hold your breath. Every day, I find new ways to make mistakes.

ellazimm
May 25, 2011, 01:07 PM PDT
Ella,

You couldn't have missed my point more completely, even with twice the comedic relief. But at least you got your own point: you are correct, people who believe in free will are labeled "stupid" on a regular basis. Next time you say that you've never come into contact with any information that would cause you to believe in the authenticity of the mind, perhaps you could stop and think it through...

Cheers :)

Upright BiPed
May 25, 2011, 12:58 PM PDT
Upright: Cool. When do I get to be rich and famous? :-)

The whole free will debate has, frankly, baffled me. I suppose it's just me being stupid and not getting the point but . . . I just can't get worked up about it. But, as I said, on a daily, minute-to-minute basis I act in a way that implies free will. And I think we all do. Really. Whether that is true or not I leave to others.

ellazimm
May 25, 2011, 12:05 PM PDT
EZ: Hey, I don’t want to admit that my mind is merely a product of my neurons firing but I have yet to see any evidence that convinces me otherwise.
By any account, your neurons firing thrust you to the very pinnacle of Life on earth, and there they give you the unique capacity for a distinct form of symbolic representation. Yet you did not invent this form of symbolic representation; it existed as the basis of Life long before you arrived.

Upright BiPed
May 25, 2011, 10:18 AM PDT
Eric: You're welcome, but I'm only reacting out of my own head. I have no training, no expertise, no claim to know any more than the next guy. I will say that, even though I am much more in the materialist camp, I kind of sort of assume I have free will. I can't justify it within my non-theistic paradigm, but it feels like that's the way things work. I won't pretend to be able to defend that view . . . I think I make a difference in the world and that I have a choice in what that difference is, and that's good enough for me.

KF: I'm really sorry I didn't acknowledge your link earlier, I just noticed it!! I'll have a good look later (took a quick glance now) as I'm just starting my family time of the evening. BBQ chicken soaked in sweet and sour sauce I think . . . and a big bowl of salad. :-)

ellazimm
May 25, 2011, 09:31 AM PDT
ellazimm @ 106

Thanks. The uncoupling is necessary in #2 to distinguish it from #1. In other words, either all actions/choices are simply and only the result of material processes (#1), or they involve some form of consciousness/free will. The only way consciousness/free will can exist (in the materialist view) is if it somehow originally arose from material processes, but then took on a "life of its own", so to speak. I think if most materialists think about it carefully, they'd find (as you did) that they are more in the #2 camp than #1 (because #1 is self-refuting and useless as a practical life view, although some folks like to argue for it, I think many times just to be stubborn).

Eric Anderson
May 25, 2011, 09:12 AM PDT
EZ: Why not start from here on? GEM of TKI

kairosfocus
May 25, 2011, 03:49 AM PDT