
Linguist Noel Rude on that new “metaphor” theory of consciousness

As in “Another new theory of consciousness: Your brain on metaphors” (“It is unlikely that idioms are imagined in the brain. Most people are not in fact aware, most of the time, what common idioms reference”).

From Noel Rude, an expert in American Indian languages:

I say it’s just another attempt to limit the mind to a tangle of neurons woven together by chance and necessity.

The gist of this piece is that we use concrete imagery for abstract thought and never really escape the concrete. If we say that prices have risen, we cannot escape the concrete sense of “risen”. I say this is bunk on several counts.

It overlooks the fact that the most fundamental semantic concepts are volition and consciousness. These are so basic to our experience that they are implicit in a child’s grammar before he ever resorts to metaphor. Of course we know that in official materialist dogma there is no place for volition and consciousness, and therefore I should not bring this up. I would wager that no reputable linguist would dare bring it up in disputing the premise of this piece.

Also, words and expressions that are, or that originated as, metaphors are not all equal. Even if they are able to show (which I doubt) that when we hear or say the word “understand” our brainwaves betray some echo of “stand” and “under”, there are other words where the etymology of the original metaphor is opaque to all but linguists. Remember that the word “science” shares a common origin with the Nordic word “ski” and our Anglo-Saxon four-letter “sh*t”–the ancient root being *skei- ‘cut’.

Are we to assume that there is no difference between the “embodied” concreteness of a metaphor and the abstraction to which it refers? Even after the original metaphor has become opaque?

This article (“Your Brain on Metaphors”) makes no mention of what linguists call “grammaticalization”–something that should be put center stage.

Grammaticalization is not exactly a Darwinian process. It is not an account of the evolution of language from non-language, but rather the refurbishing of language with grammatical distinctions as those distinctions are lost. The process is both phonological and semantic. What begins as a full word becomes an affix, and what begins as a metaphor goes through a process of “bleaching” until the concreteness is lost and only the grammatical concept remains. Such is the true history of all grammatical affixes (suffixes, prefixes).

Would they say that, because grammar arises via metaphor, it remains so, and that there are no abstract grammatical categories?

George Lakoff’s embodiment thesis may be “as anti-Platonic as it’s possible to get”–which explains his and Rafael Núñez’s (2001) Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Somehow I doubt that Eugene Wigner (“The Unreasonable Effectiveness of Mathematics in the Natural Sciences”) or Roger Penrose would be impressed. Man’s thought really can reach great heights of abstraction. Man really can comprehend certain deep aspects of reality–this even when he does it via metaphor. Surely language gives us an edge over the beasts.

Lakoff’s earlier work on metaphor was well received, and I have benefited from his insights. I think there are two extremes. One is that “embodiment” is all there is–it’s all neurons and no mind. Another extreme might be to deny that embodiment and subjective qualia have any bearing on understanding (but who would do this?). Those born blind, for example, cannot fully understand what “red” means. Yet the blind can speak, and over the phone you might not know you are speaking to a blind person. Helen Keller wrote quite intelligently.

So there you have it–my two cents (is that a metaphor?).

See also: What great physicists have said about immateriality and consciousness

Follow UD News at Twitter!

Comments
ot-ish but more relevant to this thread than the whale one: http://youtu.be/oSdPmxRCWws
JGuy
September 10, 2014, 04:48 AM PDT
To continue on from the other thread:
"I would say the elephant in the living room question that they forgot to ask (in this study) is, ‘Is it even possible for information to ‘emerge’ from a material basis?’. And the answer to that most important of foundational questions the answer is a resounding NO!",,,, https://uncommondescent.com/neuroscience/another-new-theory-of-consciousness-your-brain-on-metaphors/#comment-513609
To continue on from that line of thought, it is interesting to learn why it is impossible for a material computer to create new information:
Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information. http://cires.colorado.edu/~doug/philosophy/info8.pdf
It is also interesting to learn what the biggest failure in Artificial Intelligence is: 'context'.
What Is a Mind? More Hype from Big Data - Erik J. Larson - May 6, 2014 Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, "Understanding Natural Language," about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland's article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required "holistic interpretation." That is, the ambiguities weren't resolvable except by taking a broader context into account. The words by themselves weren't enough. Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal "test" to see if his claims were still valid today. ... Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland's point about machine translation made afresh, in 2014. Erik J. Larson - Founder and CEO of a software company in Austin, Texas http://www.evolutionnews.org/2014/05/what_is_a_mind085251.html
And since a material computer cannot create information, nor take context into consideration, one simple way of defeating the Turing test is to invent and tell a new joke. Such as this joke:
Turing Test Extra Credit – Convince The Examiner That He’s The Computer – cartoon http://imgs.xkcd.com/comics/turing_test.png
The reason why a new joke will consistently trip the computer up is 'context':
“(a computer) lacks the ability to distinguish between language and meta-language. ... As known, jokes are difficult to understand and even more difficult to invent, given their subtle semantic traps and their complex linguistic squirms. The judge can reliably tell the human (from the computer)” Per niwrad https://uncommondescent.com/intelligent-design/artificial-intelligence-or-intelligent-artifices/
'Context' is a far harder problem for reductive materialism to account for than most people realize. I think Pastor Joe Boot, in the following quote, illustrates very well the insurmountable problem that ‘context dependency’ places on reductive materialism:
“If you have no God, then you have no design plan for the universe. You have no preexisting structure to the universe. ... As the ancient Greeks held, like Democritus and others, the universe is flux. It’s just matter in motion. Now on that basis all you are confronted with is innumerable brute facts that are unrelated pieces of data. They have no meaningful connection to each other because there is no overall structure. There’s no design plan. It’s like my kids do ‘join the dots’ puzzles. It’s just dots, but when you join the dots there is a structure, and a picture emerges. Well, the atheist is without that (final picture). There is no preestablished pattern (to connect the facts given atheism).” – Pastor Joe Boot, at the 13:20 minute mark of the video Defending the Christian Faith: http://www.youtube.com/watch?v=wqE5_ZOAnKo
In regard to the importance of context, I highly recommend Wiker & Witt’s book “A Meaningful World”, in which they show, using the “Methinks it is like a weasel” phrase (a phrase that Richard Dawkins took from Shakespeare’s play ‘Hamlet’ to try to illustrate the feasibility of Evolutionary Algorithms), that the problem is much worse for Darwinists than just finding the “Methinks it is like a weasel” phrase by an unguided search. This is because the “Methinks it is like a weasel” phrase doesn’t make any sense unless the entire context of the play Hamlet, and of its culture, is taken into consideration:
A Meaningful World: How the Arts and Sciences Reveal the Genius of Nature – Book Review Excerpt: They focus instead on what “Methinks it is like a weasel” really means. In isolation, in fact, it means almost nothing. Who said it? Why? What does the “it” refer to? What does it reveal about the characters? How does it advance the plot? In the context of the entire play, and of Elizabethan culture, this brief line takes on significance of surprising depth. The whole is required to give meaning to the part. http://www.thinkingchristian.net/C228303755/E20060821202417/
In fact, it is humorous to note the overall context in which “Methinks it is like a weasel” is used in the play. The phrase appears in an exchange that illustrates the spineless nature of one of the play’s characters, Polonius, and how easily Hamlet can lead him to say anything Hamlet wants him to say:
Ham. Do you see yonder cloud that’s almost in shape of a camel?
Pol. By the mass, and ’tis like a camel, indeed.
Ham. Methinks it is like a weasel.
Pol. It is backed like a weasel.
Ham. Or like a whale?
Pol. Very like a whale.
http://www.bartleby.com/100/138.32.147.html
After realizing what the context of ‘Methinks it is like a weasel’ actually was, I remember thinking to myself that it was perhaps the worst possible phrase Dawkins could have chosen to illustrate his point, since the phrase, when taken in context, actually illustrates that the person saying it (Hamlet) was manipulating the other character into saying that a cloud looked like a weasel. Deception and manipulation are, I am sure, hardly the ideas that Dawkins was trying to convey with his ‘Weasel’ example.
Quote and Music:
"I believe in Christianity as I believe that the sun has risen: not only because I see it, but because by it I see everything else." – C. S. Lewis
Sidewalk Prophets - "The Words I Would Say" https://www.youtube.com/watch?v=8t9u-LOa3OI
bornagain77
September 9, 2014, 04:39 PM PDT
"So there you have it–my two cents (is that a metaphor?)." If it is it's not worth the paper it's written on.Mung
September 9, 2014, 01:32 PM PDT
