Uncommon Descent: Serving The Intelligent Design Community

Human evolution: Ardipithecus, humans, and chimps


Someone wrote to me recently, asking

Ever since the reporting of Ardi, I expected a commentary on it on your blog, but so far I have found none, unless I missed it. I'm curious as to what you have to say about it, since the researchers of Ardi claim chimps may have descended from us. That being the case, evolution's tree of life would have to be reimagined. Thoughts?

I think Jean Auel did the best job in Clan of the Cave Bear, and she even admitted that she was writing fiction. Which, in my view, puts her way ahead of dozens of profs who can tell me exactly how long-dead people – who never left any writings behind – thought about stuff like religion and family life.

Anyway, I am putting this to the commenters. What do you think about Ardipithecus? An ancestor of us? Of chimps? Both?

By the way, this blog is a volunteer enterprise. Unlike the Darwinists, we are not part of your tax burden. If you feel like contributing financially, don’t let me deter you. We could expand our coverage and services if we had more resources. Otherwise, you see only what a volunteer found time to contribute. That’s hardly a quarter of what we could say.

Comments
PaV: If you mean these to be Homo neanderthalensis, Homo erectus, etc., that's fine. ... And, of course, they're nowhere to be found.
You're being a bit inconsistent. Clearly, there are primitive hominids that predate modern hominids.
Zachriel
January 4, 2010 at 05:27 PM PDT
Voice Coil (#55) You argue that my views on the radical cognitive discontinuity between humans and other animals are incompatible with my affirmation of common descent:
This view is beset with a contradiction, as you attribute this transition [from non-human to human - VJT] both to “improvements in brain architecture that occurred over a period of millions of years,” and to changes that occurred in a literal 24 hour period and were not physical at all, but instead a sudden “ensoulment.”
Just to be clear: (i) the neural changes in our ancestors' brains were a necessary but not sufficient condition for the transition from non-human to human; (ii) I do not think that the neural changes occurred as a result of an undirected process, as the brain is the most complex structure known to exist; (iii) at the present time, I have no idea how many mutations were required to effect this transformation, and I don't know anyone else who would know, either.
vjtorley
January 4, 2010 at 04:53 PM PDT
Many of the relevant mutations are neutral (and of those, many have no phenotypic effect whatsoever). Hence, they are precisely like shuffling a deck and dealing a typical hand.
Zachriel
January 4, 2010 at 03:32 PM PDT
Consider a baby. Even assuming that each mating was destined by fate, there are millions of sperm with each coupling. Looking back over just the last few generations, the probability of this particular baby with this particular genome being born is astronomically small. Yet babies are born every day.
Zachriel: You're still assuming that there had to be a particular result. There didn't. We know there were lots of branchings and many possible avenues evolution could have taken.
PaV: But let's point out the problem with this: namely, that if you invoke these other races as intermediates on the way to humankind, then this restricts the amount of time between each such intermediate.
You are correct that the presence of intermediates doesn't reduce the amount of total change required. Nevertheless, your calculation assumes a specific result. The Theory of Evolution posits that it is just one of many possible paths that could have been taken. The correct calculation is that the rate at which mutations accumulate must be sufficient to account for the differences in the genomes. You start some place, you move about, and you end up someplace else.
Zachriel
January 4, 2010 at 03:29 PM PDT
Zachriel:
Somewhat more than your original calculation (#51) of 10^5.
Have you ever heard of the expression: He strains at gnats and swallows elephants? Well, what does 10^5, or even 10^27, matter as compared to 10^70,000? If you put thirty zeroes on each of thirty lines on a piece of paper, then 10^27 would consist of a 1 followed by 27 zeroes, all fitting on the first line of the first page, whereas 10^70,000 would consist of a 1 followed by roughly 75 pages completely filled with zeroes. Each card hand can be played; but only the rarest of mutations will be both beneficial and fixed. And there simply aren't enough of them. Under one of the most optimistic scenarios imaginable for RM+NS, the likelihood of chimps becoming humans by mere chance is 1 in 10^70,000. How in the world is it that anyone takes Darwinism seriously anymore?
PaV: This means that improbabilities arise ONLY when some particular combination is required.
Zachriel: That's right. And humans are not required.
Only humans understand improbabilities. Only humans play bridge. As to your larger point that improbabilities arise in nature without human assistance, I would wholeheartedly agree. It is the task of NS to overcome these improbabilities. And, as my previous calculations have demonstrated, NS is not up to the task. Now, if you had 52 cards, face-up, on a table, and you had a chimp separate them randomly into four hands of 13 cards each, with each hand consisting of one of the four suits of cards, and with them ordered from the Ace down to the two, then, on average, it would take 53 octillion tries for the chimp to do it randomly. But I assure you, I could do it on my first attempt. It's amazing, isn't it, what improbabilities intelligence can make go away. In reference to my calculations, you replied thusly:
You’re still assuming that there had to be a particular result. There didn’t. We know there were lots of branchings and many possible avenues evolution could have taken.
My friend, Zachriel, where are these "intermediates"? If you mean these to be Homo neanderthalensis, Homo erectus, etc., that's fine. But let's point out the problem with this: namely, that if you invoke these other races as intermediates on the way to humankind, then this restricts the amount of time between each such intermediate. If there is not enough time (meaning enough mutations) between chimps and humans to account for genetic differences, well then, while fewer mutations may be needed to go from a chimp to some putative intermediate, nonetheless, a lesser amount of time is available. It seems like that puts us right back where we started from, unless you can find lots more "intermediates". Ironically, every attempt to justify Darwinism, starting with Darwin himself, ends up relying on the presence of intermediates. And, of course, they're nowhere to be found. [RM+NS is not an optimization program, but a stabilization program, around which adaptation can occur. But, of course, this is the bill of goods that Darwinists sell everywhere.]
PaV
January 4, 2010 at 02:37 PM PDT
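As a sanity check on PaV's page-of-zeroes comparison above, here is a minimal Python sketch of that arithmetic (the 30-digits-by-30-lines page layout is his assumption, not a standard figure):

    import math

    # PaV's layout: 30 digits per line, 30 lines per page.
    digits_per_page = 30 * 30

    # 10^27 written out is a 1 followed by 27 zeroes: 28 digits, one line.
    print(len(str(10**27)))                     # 28

    # 10^70,000 has 70,001 digits; at 900 digits per page that is:
    print(math.ceil(70001 / digits_per_page))   # 78 pages, close to PaV's "roughly 75"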
52!/(13!^4) = 53 octillion is the number of deals. 52!/13!/39! = 600 billion is the number of individual hands. These are the same numbers originally posted above (#56).
StephenB: Anyways, it’s a large number.
Somewhat more than your original calculation (#51) of 10^5.
Zachriel: The point is that any particular hand is incredibly unlikely.
PaV: No, that's not true. Any bridge hand is likely. The probability of dealing a bridge hand is 100% = 1.0.
The probability of any *particular* hand is vanishingly small. The probability of *some* bridge hand is 1.
PaV: This means that improbabilities arise ONLY when some particular combination is required.
That's right. And humans are not required. They're just one hand of many that could have been dealt. Just within the human family tree, there are many branches that could be occupying the niche now occupied by Homo sapiens, e.g. Homo neanderthalensis.
PaV: The mechanisms that Darwinian evolutionists propose don’t have the power to bring about these “particular” changes via random, and/or, guided processes (assuming that NS is a ‘guiding’ process).
You're still assuming that there had to be a particular result. There didn't. We know there were lots of branchings and many possible avenues evolution could have taken. It's not a particular hand, it's some hand that was dealt (only some of which were then subject to selection). Even within extant humans there can be millions of differences in genomes, including copy number variations.
Zachriel
December 30, 2009 at 10:28 AM PDT
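Zachriel's two card-counting figures above can be verified directly; a minimal Python sketch (not part of the original thread):

    from math import comb, factorial

    deals = factorial(52) // factorial(13)**4   # ways to split the deck into 4 hands of 13
    hands = comb(52, 13)                        # distinct single 13-card hands

    print(f"{deals:.3e}")   # ~5.364e+28, the "53 octillion" deals
    print(f"{hands:,}")     # 635,013,559,600, i.e. roughly the "600 billion" hands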
Zachriel [60]: The number of permutations of 100 objects is 100! = roughly 10^158 per my calculations. But, if you get up to 150!, I'm sure that your number of 10^200 is exceeded. But, again, unless you're considering a "particular" sequence, this improbability doesn't apply. Now, if YOU are doing the specifying, then your very act of specifying a sequence brings the sequence into existence, and the probability of its existence is 1.0. However, if your friend wrote down a sequence and asked you to guess what it is, then your guess would have a probability of being correct of 1 in 10^158. Well, "nature" is doing the specifying in the case of genomes. And that is why the improbabilities apply. The specification exists because species A and B both exist. And any natural explanation for this transformation has to overcome the improbabilities that these specifications bring into existence.
PaV
December 30, 2009 at 10:17 AM PDT
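PaV's estimate of 100! above checks out; a minimal sketch using Python's arbitrary-precision integers:

    from math import factorial, log10

    print(len(str(factorial(100))))   # 158 digits
    print(log10(factorial(100)))      # ~157.97, so 100! is roughly 10^158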
The 52!/(13!)^4 is the multinomial coefficient, and would include the various orderings of a bridge hand. But when we play bridge, whether the ace of Spades is dealt to you first or last doesn't really matter. So in terms of your normal bridge hands, the binomial coefficient works, which is 635 billion or so. Anyways, it's a large number.
The point is that any particular hand is incredibly unlikely
No, that's not true. Any bridge hand is likely. The probability of dealing a bridge hand is 100% = 1.0. When you shuffle and deal a bridge hand, you can immediately play. Why? Because you're not looking for any particular hand. But when you say that you can't play until someone gets a hand with nothing other than one suit of cards, then what happens? You end up having to shuffle and deal for a long time.

What if you want to generate a sequence of integers 1 to 100 randomly, and you're not concerned with a particular sequence? Well, then, every generation of such a sequence will suffice. But if you wanted a particular sequence, then it would require, per your numbers, 10^200 generations of sequences before you could be assured of arriving at that particular sequence. Does it matter which sequence you select? No, it can be any one of the 10^200 possibilities. This means that improbabilities arise ONLY when some particular combination is required. So, for example, with state lotteries, the reason that it becomes an improbable event is because a particular sequence of numbers is selected, and your guess has to match that. If no particular sequence is required, then everyone wins. All you have to do is pick six numbers.

So, too, with the genome: if particular nucleotides differ between species A and species B, given the huge length of the genome, and the relatively low rate of mutation, and with the further complications of a needed change spreading through a population without it first getting eliminated because of some deleterious mutation someplace else on the genome, it becomes mathematically improbable---to the point of impossibility---to arrive at species B from species A if the differences are great enough. My calculation of 10^70,000 makes that point. The mechanisms that Darwinian evolutionists propose don't have the power to bring about these "particular" changes via random, and/or, guided processes (assuming that NS is a 'guiding' process). Unless a mechanism can be proposed that overcomes the difficulties that "particular" nucleotide sequences present, then we have to look for some other mechanism.

Behe's EoE points out the limitations of Darwinian mechanisms. Can so-called Darwinian mechanisms bring about changes in the genome? Yes. But they're extremely limited, almost to the point of being completely trivial. Please feel free to enlighten me, but unless viable mechanisms are proposed, I remain a Darwinian skeptic. [BTW, I tried to get Kimura's book out of the library, but it appears it's been stolen, so I can't get you the quote I was hoping for.]
PaV
December 30, 2009 at 09:34 AM PDT
PaV: Let's assume 53 x 10^27 is accurate (I suspect this number is 52!) ... Your figure of 53 octillion probably is derived by multiplying C_52,13 by (13!)^4.
It's 52!/(13!)^4. The point is that any particular hand is incredibly unlikely.
Zachriel
December 29, 2009 at 10:59 AM PDT
Zachriel: I just checked online at two blogs and the number I gave is the correct number, which turns out to be 635,013,559,600. Using 100,000 people shuffling and dealing twice a minute, 24-7, it would take six years, on average, to come up with a particular deal. Ouch! IOW, if everyone in the world shuffled and dealt once every Sunday, it would take less than 3 years to get your specific deal. A bit deflating, isn't it?
PaV
December 29, 2009 at 10:44 AM PDT
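Both time estimates in the exchange above check out; a minimal Python sketch, taking 6.8 billion as a round figure for the 2009 world population (an assumption, not a number from the thread):

    HANDS = 635_013_559_600                   # C(52, 13)

    # 100,000 people dealing twice a minute, around the clock:
    deals_per_year = 100_000 * 2 * 60 * 24 * 365
    print(HANDS / deals_per_year)             # ~6.0 years

    # Everyone on Earth dealing once every Sunday:
    world_pop = 6_800_000_000                 # assumed 2009 world population
    print(HANDS / (world_pop * 52))           # ~1.8 years, "less than 3 years"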
PaV: So, again, the calculation runs this way: ...
It was suggested above that you avoid the error of claiming a typical bridge deal was so rare as to not plausibly occur. Look at this series of numbers between 1 and 100:

95 51 29 49 77 4 19 89 14 45 49 75 33 55 75 85 95 5 90 48 17 74 52 81 2 97 46 56 29 80 79 39 52 14 64 36 48 86 21 25 43 51 41 2 28 8 92 83 57 24 45 64 40 1 72 65 99 29 10 70 46 58 25 19 12 13 86 78 19 94 12 89 74 44 46 43 95 28 89 69 37 57 15 49 1 32 5 43 16 81 67 28 95 32 76 53 61 75 87 7

The chances of this sequence occurring by chance are just 1 in 10^200, way past the universal probability bound. Yet, it did occur by chance. The divergence of an evolutionary lineage can follow any number of paths. It happened to follow a particular path, much or most of which was neutral evolution. Assuming that only the one particular path was possible leads to a lot of numbers with no relevance whatsoever.
Zachriel
December 29, 2009 at 10:43 AM PDT
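The 1-in-10^200 figure above follows from 100 independent draws, each from 100 possibilities (repeats allowed, as in the list); a one-line check:

    # 100 positions, 100 choices each: 100^100 equally likely sequences.
    print(100**100 == 10**200)   # True, so any one sequence has probability 10^-200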
As to 53 octillion, the number of combinations of 52 taken 13 at a time is 52!/13!39!, and since the order of the cards reaching a hand is unimportant, we don't need to take into account the various permutations of the hands. So C_52,13 should do the trick. Your figure of 53 octillion probably is derived by multiplying C_52,13 by (13!)^4. But this, I believe, counts the permutations twice, and so is wrong.
PaV
December 29, 2009 at 10:32 AM PDT
Zachriel [56]:
You’re doing a bunch of calculations to arrive at something that requires only a single step. If mutations are evenly distributed across the genome, and the global rate is 3%, then that would mean 3% of any segment, including coding sequences.
You're not thinking through my numbers. The 10% represents the probability of 150 mutations becoming "fixed" in an individual gene in 10 million years, replacing one of the 1500 bp in the typical gene. Obviously 10% is higher than the 3% that is the known difference between chimps and humans, and, thus, we know it's unrealistically high; but I'm doing this to give evolution an even greater chance of doing something than it has a right to deserve. Now the other favorable assumption I'm making is that only ONE mutation per gene is needed, which, of course, compared to the 3% = 45 mutations/gene that mark the actual difference between chimps and humans, is an extremely favorable assumption.

So, again, the calculation runs this way: you need 1 mutation somewhere along the length of the gene, and, per neutral evolution, this gives us a 150/1500 = 10% chance of getting it through a random process. But, there are 70,000 genes, which can be assumed to be replicated independently of one another, and hence the probability of getting all the needed changes in each of the 70,000 genes, with these incredibly favorable assumptions (assumptions which favor Darwinism), is 1 in 10^70,000. Tell me, what is the difference between this number and the number ZERO, as in, ZERO probability of these changes occurring at random?
If any single-nucleotide mutation is beneficial, they would likely be tried, and are much more likely to become fixed than neutral mutations. And there are other mechanisms in play, including gene duplication.
If you want to assume that NS will "fix" a gene faster than neutral evolution, that's perfectly fine. With a population of 10^5, it will take 10^4 years before we can be certain that the entire genome has had a mutation produced at each of its sites. Well, that means, in a sense, that ALL of the needed mutations are in place somewhere in the population. But each of the genomes in the population has, on average, 10 mutations, which are scattered over the entire genome, and, so, only ONE of them will be a beneficial mutation in some particular gene. Now the question is, which of the 10^5 genomes will natural selection select? If we have Gene 1 that will become fixed, then what will happen to all of the other genomes where favorable mutations occur? Fixation means that ONE genome will sweep through the population, and that ONE genome will have on it only ONE needed mutation. At that rate, we can expect 100 mutations to fix every million years (10^6 years / 10^4 years per fixed mutation), giving a grand total of 550 over a 5.5 million year period (the actual time distance), or 1,000 fixed mutations over 10 million years, meaning that only ONE out of seventy genes will have ONE fixed mutation. That certainly can't account for what we see.

If you then say, well, NS can "fix" more than one mutation at a time, then there is the problem that Nachman's Paradox proposes, and the problem of deleterious mutations. For NS to work, then obviously the mutation is not neutral, and so its selection coefficient is likely that of 0.1 or higher, but certainly higher than 0.01. Since the fitness of the population in one generation is roughly equal to the fitness in the previous generation raised to the 1-s power, if NS tries to "fix" 10 such mutations all at once (let's say Genes 1-10), then if s = 0.1 for each, the population will completely die off trying to save all 10 mutated genes. If s = 0.01, then saving 10 of these at once means that in twenty generations of trying to "fix" these 10 mutations (and remember that Haldane gives 300 generations as the minimum time needed to fix one mutation), the population will have fallen off by (0.99^10)^20 = approx. 87%, leaving roughly 0.13 x 10^5 = 1.3 x 10^4 members. For a population size this small, it would take almost half a million years to again "try" all 3.0 x 10^9 bp of the genome. At this rate, only 200 mutations are fixed.

But, at the same time, one beneficial mutation is also connected, via the genome, to any deleterious mutations that randomly occur. And the number of deleterious mutations per beneficial one is quite high, meaning, then, that one beneficial mutation on a genome is much more likely to be lost to the population than it is to be "fixed". So, the above calculations for NS end up being highly optimistic. Thus, neutral evolution gives the best result, and is why population genetics accepts it as the mode of evolution. But, of course, we've already seen how inadequate the neutral theory is to explain molecular/biological evolution. So now what do you propose?
There are 600 billion individual bridge hands, somewhat more than 10^5. There are 53 octillion possible bridge deals.
And compared to 10^70,000 is this supposed to be big? Let's assume 53 x 10^27 is accurate (I suspect this number is 52!); then if we shuffled and dealt a bridge hand every 30 seconds, it would take 5 billion persons shuffling and dealing non-stop roughly 10 trillion years to come up with a specific deal. But what are the odds of getting a bridge hand that can be played? It's 1.0. So, to play bridge, all you have to do is shuffle and deal the cards. Evolution doesn't work this way. To get something that can begin the process of Darwinian evolution you need something that happens with a probability of 10^-9, not 1.0. And you need a lot of these to accumulate independently. I still haven't been shown how this can be done given the biological improbabilities known to exist.
PaV
December 29, 2009 at 10:22 AM PDT
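For reference, the arithmetic behind the two headline numbers in PaV's comment above; a minimal Python sketch of his stated assumptions (it is the assumptions, not the multiplication, that Zachriel disputes):

    from math import log10

    # One "needed" fixation per gene, probability 150/1500, 70,000 independent genes:
    genes = 70_000
    p_per_gene = 150 / 1500
    print(genes * log10(p_per_gene))   # -70000.0, i.e. 1 in 10^70,000

    # Selection-cost aside: fixing 10 mutations of s = 0.01 simultaneously
    # multiplies fitness by (0.99)^10 per generation; after 20 generations:
    print(0.99**200)                   # ~0.13, a decline of roughly 87%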
All this talk about mutations, and still not one person on this planet knows whether or not the transformations required are even possible given any amount of mutational accumulation. What does that tell you about the theory of evolution? (hint: its grand claims cannot be tested)
Joseph
December 26, 2009 at 06:53 AM PDT
PaV: 150 divided by 1500 is obviously one tenth. Where in the world did you get 3%?
You're doing a bunch of calculations to arrive at something that requires only a single step. If mutations are evenly distributed across the genome, and the global rate is 3%, then that would mean 3% of any segment, including coding sequences.
PaV: Per Nachman’s paper, a little less than ten percent of the human genome codes for proteins, and there are 70,000 genes found in the genome.
But to belabor the point, the human genome is ~3*10^9 in length. If there are 70000 genes with an average length of 1500 then the entire length of nonsynonymous coding sequences of the genome is roughly 10^8, or ~3% of the total genome.
PaV: Yes, every possible mutation has occurred, but just because it occurred doesn’t mean it became fixed.
If any single-nucleotide mutation is beneficial, they would likely be tried, and are much more likely to become fixed than neutral mutations. And there are other mechanisms in play, including gene duplication.
PaV: The odds of an everyday bridge hand would be the binomial coefficient of 52!/4!48!, which equals (52 x 51 x 50 x 49)/(4 x 3 x 2 x 1) = (roughly) 10^5 possible hands.
There are 600 billion individual bridge hands, somewhat more than 10^5. There are 53 octillion possible bridge deals.
PaV: You’re not taking the problem Nachman is posing seriously either.
Of course we are. You are not responding to the argument. Nachman tested a simple selection model, found it wanting, and suggested improvements to the model. If Nachman's paradox meant that humans weren't genetically viable, it would mean the population of humans would be declining. It's not. Not even close.
Zachriel
December 25, 2009 at 06:10 AM PDT
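Zachriel's 3% figure above is a one-line computation; a minimal sketch:

    genes = 70_000        # gene count used in the thread (from Nachman, as cited)
    avg_len = 1500        # average coding length per gene, in bp
    genome = 3 * 10**9    # human genome length, in bp

    coding = genes * avg_len
    print(coding, coding / genome)   # 105,000,000 bp, ~0.035 of the genome (~3%)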
VJ:
What I wanted to point out was that the hypothesis of a radical discontinuity is a scientifically respectable point of view which merits serious consideration.
Not in the sense you intend: a transformation that occurred within a 24-hour period reflecting the action of deus ex machina, as you have elsewhere characterized the putative transition:
When I say “literally overnight” I mean literally overnight. I have no doubt that improvements in brain architecture occurred over a period of millions of years, but I would contend that at a critical point in evolutionary history, when the brains of our forebears became complex enough to be able to integrate information in the way that people need to in their everyday lives, our ancestors acquired an immaterial capacity to form abstract concepts – and in so doing, became true human beings.
https://uncommondescent.com/intelligent-design/humans-are-unique-get-used-to-it-or-get-therapy-do-not-get-a-chimpanzee/#comment-333570

This view is beset with a contradiction, as you attribute this transition both to "improvements in brain architecture that occurred over a period of millions of years," and to changes that occurred in a literal 24 hour period and were not physical at all, but instead a sudden "ensoulment."

Further, in stating "our ancestors acquired an immaterial capacity to form abstract concepts – and in so doing, became true human beings" you omit mention that they also became "true Scotsmen." The phrase "true human beings" lays the groundwork for arbitrarily denying continuity as our understanding of human evolution attains finer and finer resolution.

The alternative view is not to deny that there is a cognitive chasm between human beings and our extant closest relatives, but rather to argue that these differences were attained when small evolutionary changes, attainable by Darwinian processes, resulted in hugely significant consequences (evolutionary tipping points, as it were) which in turn, due to their powerful adaptive consequences, were quickly elaborated by further selection - "overnight" in the sense of a few tens or hundreds of thousands of years.
Voice Coil
December 25, 2009 at 05:16 AM PDT
Voice Coil (#43) Thank you for the long list of citations. You are quite right to assert that many scientists remain unconvinced of the existence of a great cognitive divide between humans and other animals. What I wanted to point out was that the hypothesis of a radical discontinuity is a scientifically respectable point of view which merits serious consideration. Penn, Holyoak and Povinelli are not its only defenders. I could also mention a recent article entitled "Origin of the Mind" by Professor Marc Hauser, published in Scientific American, September 2009 (unfortunately only the first page is online). Marc Hauser is a professor of psychology, human evolutionary biology, and organismic and evolutionary biology at Harvard University. Here's an excerpt from his article:
"[M]ounting evidence indicates that, in contrast to Darwin's theory of a continuity of mind between humans and other species, a profound gap separates our intellect from the animal kind. This is not to say that our mental faculties sprang fully formed out of nowhere. Researchers have found some of the building blocks of human cognition in other species. But these building blocks make up only the cement footprint of the skyscraper that is the human mind... Recently the author identified four unique aspects of human cognition... [These are:]" * "Generative computation," that allows us to "create a virtual limitless variety of words, concepts and things." * "Promiscuous combination of ideas," meaning the ability to mingle "different domains of knowledge," e.g., art, sex, causality, etc. * "Mental symbols" allow us to enjoy a "rich and complex system of communication." * "Abstract thought," which "permits contemplation of things beyond what we can see, hear, touch, taste or smell." "What we can say with utmost confidence is that all people, from the hunter-gatherers on the African savanna to the traders on Wall Street, are born with the four ingredients of humaniqueness (Hauser's term for "human uniqueness" - VJT). How these ingredients are added to the recipe for creating culture varies considerably from group to group, however... No other animal exhibits such variation in lifestyle. Looked at in this way, a chimpanzee is a cultural nonstarter... "Although anthropologists disagree about exactly when the modern human mind took shape, it is clear from the archaeological record that a major transformation occurred during a relatively brief period of evolutionary history, starting approximately 800,000 years ago in the Paleolithic era and crescendoing around 45,000 to 50,000 years ago... "[Other animals'] uses of symbols are unlike ours in five essential ways: they are triggered only by real objects or events, never imagined ones; they are restricted to the present; they are not part of a more abstract classification scheme, such as those that organize our words into nouns, verbs and adjectives; they are rarely combined with other symbols, and when they are, the combinations are limited to a string of two, with no rules; and they are fixed to particular contexts... "Still, for now we have little choice but to admit that our mind is different from that of even our closest primate relatives and that we do not know much about how that difference came to be. Could a chimpanzee think up an experiment to test humans? Could a chimpanzee imagine what it would be like for us to solve one of their problems? No and no. Although chimpanzees can see what we do, they cannot imagine what we think or feel because they lack the requisite machinery. Although chimpanzees and other animals appear to develop plans and consider both past experiences and future options, there is no evidence that they think in terms of counterfactuals - imagining worlds that have been against those that could be. We humans do this all the time and have done so since our distinctive genome gave birth to our distinctive minds. Our moral systems are premised on this mental capacity.
The fact that these differences between humans and other animals are difficult to quantify, as some critics have pointed out, does not make them any the less real. Merry Christmas.
vjtorley
December 24, 2009 at 08:21 PM PDT
IrynaB (#44) Thank you for your post. You write:
Sooner or later computers will be more powerful than the brain in all aspects. And then what? Does the brain stop being designed? Will we have surpassed the Designer’s capabilities?
Should it ever happen that computers surpassed us in all aspects and then started turning on us, their makers, that would certainly disprove the hypothesis that we were designed by a benevolent cosmic Designer. However, this bleak eventuality would not disprove the more pessimistic hypothesis that a bunch of mischievous or malevolent aliens designed DNA on earth four billion years ago, foreseeing the possibility that it would evolve into an intelligent life-form that would subsequently be gobbled up by its own technological creations.

Personally, the Terminator scenario leaves me unfazed. There is a long trail of predictions made by leading computer scientists that computers would outclass the human mind. Curiously, this apocalyptic event was always supposed to happen within the lifetime of the technological guru making the prediction. I wonder what that says about gurus. If the world was made by God, as I believe, surely He must have foreseen what we'd get up to, and therefore designed the cosmos so that computers never could turn on their makers en masse and destroy humanity.

You write that computers are getting better and better. Well, yes, but our knowledge of the human brain is getting deeper and deeper at the same time. We have so much to learn about it. The distance between the computer and the brain is not shrinking, unless you look at very superficial comparisons like MIPS. The fact that a computer managed to beat Kasparov at chess proves nothing more than the fact that an optimal strategy in chess can be computed, given enough processing resources. In other words, chess is just a glorified game of noughts and crosses. But not all games are like that.

In the meantime, I'm sitting in front of a computer, and somehow I feel underwhelmed. The silly things don't impress me any more than they did ten years ago, when I was a computer programmer. In fact, calling them things is crediting them with too much. They're assemblages of parts, lacking intrinsic finality. I'd be much more impressed if they built a computer with a stomach than if they built one as fast as a human brain.

Let me conclude with an anecdote. There was a philosopher (Stuart Sutherland) who once remarked that he'd believe a computer was conscious when one of them ran off with his wife. A wise observation, if you ask me. Merry Christmas.
vjtorley
December 24, 2009 at 07:47 PM PDT
Zachriel (#42) Thank you for your post. I have tried to make myself clear, so I shall say this one more time: I am not disputing common descent. That includes the common descent of humans and apes. Consequently, when you write:
Common Descent is essential to understanding evolution. The evidence is pervasive and not reasonably subject to dispute....
... you won't get any argument from me. Nor am I proposing, as you seem to think, that "Common Descent applies to everything in biology but the human brain." What I do vehemently dispute, however, is your assertion:
Common Descent is the single most important unifying pattern in biology.
The bare hypothesis of common descent explains nothing without a mechanism for explaining the genetic diversity that we see in living organisms today. The prevailing scientific hypothesis is that a combination of chance and necessity (random variation plus selection) can explain all of the features of living organisms today. I reject that hypothesis as empirically highly dubious. I may be mistaken, but I think the onus is on you to justify such an outlandish hypothesis. Which brings us to Hawks's 2000 paper at http://mbe.oxfordjournals.org/cgi/content/full/17/1/2 ("Population Bottlenecks and Pleistocene Human Evolution," by John Hawks, Keith Hunley, Sang-Hee Lee and Milford Wolpoff. In Molecular Biology and Evolution 17:2-22 (2000)). You write:
Yes, there is evidence for a bottleneck in human evolution about 2 million years ago, a common natural occurrence. The phylogenetic changes are well-within known rates of evolution. The author completely and adamantly rejects your conclusions....[H]is idea of "sudden" is well-within the norms of evolutionary theory.
"Completely and adamantly"? Were we reading the same paper? Here's what Hawks et al. actually had to say:
A hominid speciation is documented with paleoanthropological data at about 2 MYA [million years ago - VJT] by significant and simultaneous changes in cranial capacity and both cranial and postcranial characters. This marks the earliest known appearance of our direct ancestors. The new species has been called Homo erectus or Homo ergaster by some authors. Following others (Jelinek 1978; Aguirre 1994; Wolpoff et al. 1994), we call this emerging evolutionary species early Homo sapiens, as it begins an unbroken lineage leading directly to living human populations. The first specimens are humanity’s earliest known direct ancestors. We, like many others, interpret the anatomical evidence to show that early H. sapiens was significantly and dramatically different from earlier and penecontemporary australopithecines in virtually every element of its skeleton (fig. 1) and every remnant of its behavior (Gamble 1994; Wolpoff and Caspari 1997; Asfaw et al. 1999; Wood and Collard 1999). Its appearance reflects a real acceleration of evolutionary change from the more slowly changing pace of australopithecine evolution.... ...These consecutive species samples are about half a million years apart, but the amounts of change between them are quite different. From the earlier to later australopithecine species, cranial capacity (approximate midsex average) goes from 450 cm3 [cubic centimeters - VJT] to 475 cm3, while from A. africanus to the earliest African H. sapiens sample the change is much greater: 860 cm3... Yet, brain size is only one of the evolving systems reflected in early H. sapiens anatomy. There are four interrelated complexes of changes at the very beginning of H. sapiens (Wolpoff 1999): (1) changing brain size (larger, especially longer vault, with a broad frontal bone and an expanded parietal association area; neural canal expansion); (2) changing dental function (more anterior tooth use, greater emphasis on grinding and less on crunching) as reflected in broader faces and larger nuchal areas; (3) development of a cranial buttressing system to strengthen the vault, including vault bone thickening and prominent tori; and (4) dramatic expansion of body height (estimated average weights double) and numerous changes in proportions (fig. 1). These, and other changes involving the visual and respiratory systems, reflect significant adaptive differences for the new species and give us important insight into the mode of speciation because they seem to happen all together, at the time of its origin. A Genetic Revolution If we assume these earlier australopithecines are a group of very closely related species, for instance, nearer to each other than Pan and Homo, we can expect that they differ much more in allele frequencies than in the presence or absence of specific genes for these features. Therefore, a reshuffling of existing alleles could result in the frequencies of features we observe in early H. sapiens. Thus, our second question is about this reshuffling, whether early H. sapiens is a consequence of rapid speciation with significant founder effect or the result of a long, gradual process of anagenic change. The first explanation, cladogenesis, is suggested by the fact that no gradual series of changes in earlier australopithecine populations clearly leads to the new species, and no australopithecine species is obviously transitional... In sum, the earliest H. sapiens remains differ significantly from australopithecines in both size and anatomical details. 
Insofar as we can tell, the changes were sudden and not gradual... Behavioral Changes This section addresses a second reason for suspecting there was a bottleneck and a genetic reorganization at the beginning of H. sapiens evolution. The characteristic early H. sapiens features denote a new adaptive pattern that many describe as the first true hunting, gathering, and scavenging adaptation and that we believe may be uniquely associated with the Oldowan archaeological occurrences. These facts provide insight into what some of the sources of selection promoting the new species might have been... Body size is a key element in the behavioral changes reflected at the earliest H. sapiens archaeological sites because of the locomotor changes that large body size denotes and the increased metabolic resources it requires. Moreover, the marked increase in brain size for early H. sapiens has significant metabolic consequences, because the human brain, which is 2% of the body weight, uses some 20%–25% of its metabolic energy. Larger brain size evolved in spite of these increased energy requirements, but the additional energy had to come from somewhere, and the answer must certainly lie in meat (Milton 1999). Larger body size in nonhuman primates is associated with the consumption of increasing amounts of low quality foods, and an increase in the amount of time and energy spent eating. The greater human body mass, and especially the longer legs, reflected a new foraging strategy related to this, in which, as Leonard and Robertson (1996) note: "large day ranges, increased meat consumption, division of foraging activities, and sharing of resources ... may have both necessitated and allowed for a higher-quality diet."... These behavioral changes are far more massive and sudden than any earlier changes known for hominids. They combine with the anatomical evidence to suggest significant genetic reorganization at the origin of H. sapiens, and from this genetic reorganization, we deduce that H. sapiens evolved from a small isolated australopithecine population and that small population size played a significant role in this evolution... All the currently available genetic, paleontological, and archaeological data are consistent with a bottleneck in our lineage more or less at about 2 MYA. At the moment, genetic data cannot disprove a simple model of exponential population growth following such a bottleneck and extending through the Pleistocene. Archaeological and paleontological data indicate that this model is too oversimplified to be an accurate reflection of detailed population history, and therefore we conclude that genetic data lack the resolution to validly reflect many details of Pleistocene human population change. (Emphases mine - VJT.)
From all this, you conclude that "The phylogenetic changes are well-within known rates of evolution," but the authors nowhere say this. Nor do they assert the contrary. Indeed, what struck me about the article was its refreshingly honest, non-dogmatic tone. The authors simply assert that the change from Australopithecus to Homo erectus (or early sapiens, as they refer to him) was relatively sudden, that it occurred over a period of no more than half a million years, and that it was associated with a dietary change to meat. Contrary to what you assert, the authors do not "completely and adamantly reject" anything, except the hypothesis of a recent genetic bottleneck. They make no attempt to explain the transformation from Australopithecus to Homo erectus, as they are more interested in the implications for ancient population sizes. They provide no calculations to support your claim that the changes in the human lineage were "well-within" normal rates of evolutionary change.

You are perfectly free, if you wish, to latch on to the words "meat diet" and "half a million years" (which are in the article) and pretend that you have magically solved the paleoanthropological puzzle of how a systematic anatomical and behavioral revolution occurred in the human lineage. But you haven't solved anything. Describing what happened (e.g. people started eating meat), when it happened (e.g. 2 million years ago) and putting a lower limit on how fast it happened (e.g. over no more than 500,000 years) is not the same thing as explaining how it happened. I shouldn't have to belabor this basic point, so I won't.

The hypothesis that the anatomical and behavioral changes that took place about two million years ago in the human lineage cannot be explained as the outcome of chance plus necessity is a scientific one. It stands or falls on the evidence. If it dies, then so be it. But before you pronounce it dead, ask yourself: what kind of evidence would be needed to destroy it? At the very least, we would need to know how many extra genetic instructions were required to transform the body of an australopithecine into that of a human being, whether a viable pathway existed which would enable a transformation to occur from one to the other, and whether the sequence of mutations required to effect this transformation was reasonably probable, given the existence of Australopithecus. By "reasonably probable" I mean: not astronomically improbable. That should be a hurdle that Darwinists can clear. Recent discoveries of Neanderthal DNA and of the specific role played by the various genes that distinguish humans from chimps may make this scientific question a tractable one within the next few decades. Yes, it will take a lot of spadework, but that can't be helped. If you're putting forward a speculative hypothesis (that unguided evolution explains everything), you have to establish it properly. Digging up a few fossils will impress no one.
vjtorley
December 24, 2009 at 07:23 PM PDT
Zachriel: 150 divided by 1500 is obviously one tenth. Where in the world did you get 3%? You give a citation that talks about 35-40 million accumulated mutations. But I used a figure of 100 million. It's still not enough. Yes, every possible mutation has occurred, but just because it occurred doesn't mean it became fixed. If you invoke NS to help the fixation process, you run into Nachman's Paradox. Zachriel:
Assuming many of the changes are neutral, then you are calculating the odds of an everyday bridge hand. Every hand is incredibly unlikely, but some hand is inevitable.
The odds of an everyday bridge hand would be the binomial coefficient 52!/4!48!, which equals (52 x 51 x 50 x 49)/(4 x 3 x 2 x 1) = (roughly) 10^5 possible hands. You're not seriously considering comparing that to 10^70,000, are you? This is a rather glib response. You're not taking the problem Nachman is posing seriously either. It was precisely this problem that led Kimura to his Neutral Theory. It's Christmas. See you in a few days. Merry Christmas everyone.
PaV
December 24, 2009 at 06:54 PM PDT
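The source of the disagreement over "10^5 possible hands" is visible in the formulas themselves: 52!/4!48! counts hands of 4 cards, not 13. A minimal Python check:

    from math import comb

    print(comb(52, 4))    # 270,725 -- PaV's 52!/(4!48!), "(roughly) 10^5"
    print(comb(52, 13))   # 635,013,559,600 -- a 13-card bridge hand, Zachriel's figure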
IrynaB, you don't have an actual argument.
Joseph
December 24, 2009 at 03:28 PM PDT
PaV: So, 150/1500 = 10%, or 1 in 10.
You need to work on your arithmetic. If the mutations are distributed evenly across the genome, then it would be 3% of each gene. (Notice the brevity of the calculation.)
PaV: Does anyone out there in Darwinland still want to maintain that this could happen by chance?
The total number of mutations tried over the relevant history (given your assumptions) is population (10^5) * mutations per year per individual (10) * years (10^7) = a lot (10^13). As the genome is about three billion in length (~3 x 10^9), that means every point on the genome has been tested bunches of times. Assuming neutrality, the number of mutations that become fixed is mutations per year per individual (10) * years (10^7) = more than enough (10^8).
PaV: So, why don’t we look at actual numbers and actual probabilities.
Let's.
Chimpanzee Sequencing and Analysis Consortium: Through comparison with the human genome, we have generated a largely complete catalogue of the genetic differences that have accumulated since the human and chimpanzee species diverged from our common ancestor, constituting approximately thirty-five million single-nucleotide changes, five million insertion/deletion events, and various chromosomal rearrangements.
So our rough estimate is sufficient to account for the point mutations, even if we assume neutrality. Of course, as the authors state, there are other mechanisms at work. Gene duplication and selection can work much faster.
PaV: The odds so calculated: 10^-70,000;
Assuming many of the changes are neutral, then you are calculating the odds of an everyday bridge hand. Every hand is incredibly unlikely, but some hand is inevitable.
PaV: Now, there is a further point to be made here: the neutral theory is being invoked as the means by which the needed ‘evolutionary’ changes come about.
I would have thought gene duplication and selection would have been important, though many changes are clearly neutral.
PaV: Well, if you invoke the neutral theory, you’re then discounting any role for NS.
Uh, no. Just because many of the changes are neutral doesn't mean they're all neutral or that selection is unimportant.
PaV: Indeed, he would admit, mutations occur randomly, but then NS comes along and in some mysterious way, guided by still unknown forces of nature acting via preferential death, “endless forms most beautiful” come about.
It's not all that mysterious. Quite a lot is known about evolution—though certainly not everything.
PaV: ... then how will you deal with the genetic load calculations that Nachman and Crowell say presents a “paradox”.
Nachman calculates 3 deleterious mutations per genome per generation. This is only an issue in certain slow reproducers (such as humans). In other words, the entire world of biology works just fine. It is reasonable, given the vast evidence supporting evolution, that these slow reproducers would not have evolved if they couldn't persist. Nachman tests a particular selection model of evolution. If Nachman's paradox were a problem other than a defect in the model, then the human population would be rapidly decreasing. It isn't. It's a defect in the model. Nachman suggested synergistic epistasis as a plausible and testable modification of the model. Other mechanisms include loss of egg or sperm before fertilization or spontaneous abortion.
Zachriel
December 24, 2009 at 03:28 PM PDT
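Zachriel's back-of-envelope numbers above, restated as a sketch (the inputs are the thread's assumed round figures, not measured values):

    population = 10**5     # assumed population size
    mu = 10                # assumed mutations per individual per year
    years = 10**7

    tried = population * mu * years   # total mutations arising in the population
    fixed = mu * years                # neutral theory: fixation rate equals mutation rate

    print(f"{tried:.0e}")   # 1e+13 mutations tried
    print(f"{fixed:.0e}")   # 1e+08 fixed, vs ~3.5e+07 observed single-nucleotide changes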
Zachriel:
You have to grapple with the extensive evidence for Common Descent. It doesn’t go away because ID pretends it isn’t there.
1. The evidence for Common Descent isn't extensive. As a matter of fact, the vast majority of the fossil record - the marine invertebrates - does not support the premise.
2. That evidence can be used as evidence for Common Design.
3. ID doesn't say anything about it.
4. There isn't any evidence that the transformations required are even possible.
IOW, Common Descent can't be too essential for anything but to promote a non-specific PoV.
Joseph
December 24, 2009 at 03:25 PM PDT
Zachriel:
The number of mutations are more than sufficient to account for the changes.
Since you don't even know what the changes were, you can't know the number of mutations needed. Since you can't know the number of mutations needed, you cannot know that there was a sufficient number. More science, less handwaving.
Mung
December 24, 2009 at 11:36 AM PDT
Zachriel [38]:
The number of mutations are more than sufficient to account for the changes. Even in a moderate size population, every mutation is tried every few thousand years.
This is a claim you are making without examination. As far as you're concerned, this just seems sufficient. So, why don't we look at actual numbers and actual probabilities.

Let's assume that we're dealing with 10 million years of neutral drift. Let's assume that there are 100 million fixations; and, let's assume that none of the mutations reverts. Per Nachman's paper, a little less than ten percent of the human genome codes for proteins, and there are 70,000 genes found in the genome. How many mutations/substitutions will occur per gene? Of the hundred million fixations = substitutions, on average, 10 million will occur in genes. [This amounts to about 1/3 of a percent of the genome, while the difference between chimps and humans is now thought to be about 3%. We'll revisit this.] Thus, 10^7 substitutions, divided by 7 x 10^4 genes, means that, on average, 150 substitutions occur per gene. Nachman tells us that the average gene consists of 1500 bp = nucleotides. So, 150/1500 = 10%, or 1 in 10.

Let's assume the minimal situation where only one SNP = substitution = mutation = fixation (whatever term you want to use) is needed to distinguish a chimp gene from a human gene. (Very likely we will need, on average, 6 or 7 substitutions. Why? Because, as calculated above, these 100 million substitutions represent only 0.3% of the genome, whereas chimps and humans, though formerly thought to be distinguished by a 1% difference, are now considered to differ by up to 3%.) So, again, this represents a very conservative view of things. [Let me add that if we want to say that some of the genes don't change, then this only means that others have to change a lot more, and the odds I'm about to calculate end up being just pushed around. That is, the odds will end up being the same no matter how you cut things.]

Now there is a 1 in 10 chance of the "correct" mutation taking place along the 1500 bp length of each gene. So, what are the odds of all these "random" mutations [remember, these are all "neutral" mutations] occurring? Well, because these individual gene lengths are independent of each other, quite simply, we multiply the odds of one gene converting from chimp to human by the odds of the next gene converting from chimp to human, and that product by the odds of the very next gene converting, and so forth. The odds so calculated: 10^-70,000; that is, 1 in 10 raised to the 70,000 power.

Here is a figure that assumes all the mutations fixed never revert, using a time frame that is much longer than any thought to exist between chimps and humans (Ardi pushes it to what, 6.5 mya?), and assuming only the most minimal of differences between genes, and still we come up with a figure that is absolutely astronomical in its improbability. Does anyone out there in Darwinland still want to maintain that this could happen by chance? [N.B. Now, Nachman's figure of 176 mutations/genome/replication is, I believe, for a diploid genome. Normally you would deal with the number only on one strand since the coding direction means only half of the genome is used to code genes. So, it could be justified to halve the number of mutations, which would have the effect of squaring the improbability.]

Now, there is a further point to be made here: the neutral theory is being invoked as the means by which the needed 'evolutionary' changes come about. Well, if you invoke the neutral theory, you're then discounting any role for NS.
Now, how can you justify using the neutral theory to account for all the needed mutations (but, of course, the above calculation shows just how woefully inadequate the neutral theory is to account for the changed basepairs) and then not agree with the design argument? What I mean is this: Dawkins asserts that evolution is really not a "blind chance" process. Indeed, he would admit, mutations occur randomly, but then NS comes along and in some mysterious way, guided by still unknown forces of nature acting via preferential death, "endless forms most beautiful" come about. So, he would tell us, through NS an otherwise random process gives the "appearance of design". He tells us upfront that nature "appears" designed.

So, if you want to account for random mutations via the neutral theory---thus leaving NS behind---then you should, per Dawkins, believe not only that life "appears" designed, but actually IS designed. If, in reaction to this argument, you are then going to tell me that NS is really at work---somehow!!---then how will you deal with the genetic load calculations that Nachman and Crowell say present a "paradox"? Hasn't Darwinism really painted itself into a corner?
PaV
December 24, 2009 at 11:34 AM PDT
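The intermediate step in PaV's calculation above, as a sketch (all inputs are his stated assumptions, carried over verbatim):

    fixations = 10**8       # assumed fixed substitutions over 10 million years
    coding_share = 0.10     # ~10% of the genome codes for protein (Nachman, as cited)
    genes = 70_000
    gene_len = 1500         # average gene length, in bp

    in_genes = fixations * coding_share    # 10^7 substitutions landing in genes
    per_gene = in_genes / genes            # ~143, which PaV rounds to 150
    print(per_gene, per_gene / gene_len)   # ~0.095, the "1 in 10" per-gene chance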
IrynaB: Sooner or later computers will be more powerful than the brain in all aspects. And then what?
It will be a John Henry moment. Once upon a time, chess was considered the ultimate test of human intelligence. Now, humans strengthen their chess-playing abilities with computers, just like they strengthen their bodies with machines.
Zachriel
December 24, 2009 at 08:15 AM PDT
vjtorley:
The human brain is the most complex machine known in the universe. It is orders of magnitude more powerful than the world’s best computers, which are the product of design.
This is only partially true. Computers have long surpassed the brain's capacity in many areas. That's why I use Mathematica to solve systems of algebraic equations. Sooner or later computers will be more powerful than the brain in all aspects. And then what? Does the brain stop being designed? Will we have surpassed the Designer's capabilities?
IrynaB
December 24, 2009 at 12:54 AM PDT
VJtorley: The Penn, Holyoak and Povinelli article you cite was published in "Behavioral and Brain Sciences," one of my favorite venues due to its format of a target article followed by numerous invited responses. Here are a number of excerpts from those responses, which collectively suggest that an embrace of their work may be premature.

Out of their heads: Turning relational reinterpretation inside out (Louise Barrett):

By being in thrall to a representational theory of mind based on the computer metaphor, Penn et al. are obliged to draw a representational line in the sand that animals are unable to cross in order to account satisfactorily for the differences between ourselves and other animals. The suggestion here is that, if Penn et al. step back from this computational model and survey the problem more broadly, they may recognize the appeal of an embodied, embedded approach, where the ability of humans to outstrip other species may be a consequence of how we exploit the elaborate structures we construct in the world, rather than the exploitation of more elaborate structures inside our heads... From a purely internal perspective, then, the cognitive processes of humans and other animals may well be quite similar. The difference, paradoxically, may lie in our ability to create and exploit external structures in ways that allow us to augment, enhance, and support these rather mundane internal processes.

The reinterpretation hypothesis: Explanation or redescription? (José Luis Bermúdez):

One obvious way of answering these questions is to highlight the distinctiveness of human linguistic abilities – either by way of the "rewiring hypothesis" proposed by Dennett (1996), Mithen (1996), and Bermúdez (2003; 2005), or by Carruthers's appeal to the role of representations in logical form in domain-general, abstract thinking (Carruthers 2002). Penn et al. reject these proposals. Whatever their ultimate merits, however, these proposals quite plainly offer explanatory hypotheses. If Penn et al. are to offer a genuine alternative, they need to make clear just how their account is an explanation of the uniqueness of human cognition, rather than simply a description of that uniqueness.

Darwin's last word: How words changed cognition (Derek Bickerton):

The capacity to perceive and exploit higher-order relations between mental representations depends crucially on having the right kind of mental representations to begin with, a kind that can be manipulated, concatenated, hierarchically structured, linked at different levels of abstraction, and used to build structured chains of thought. Are nonhuman representations of this kind? If they are not, Penn et al.'s problem disappears: other animals lack the cognitive powers of humans simply because they have no units over which higher-order mechanisms could operate. The question then becomes how we acquired the right kind of representations.

The role of motor-sensory feedback in the evolution of mind (Bruce Bridgeman):

Both Darwin and Penn et al. are correct. There are enormous differences between human and animal minds, but enormous differences can arise from seemingly subtle changes in mental function. An example is the use of motor-sensory feedback to elaborate human thinking, based on plans that can circulate through the human brain repeatedly.... Did Darwin make a mistake? I do not think so. Any mistakes lie elsewhere.

Imaginative scrub-jays, causal rooks, and a liberal application of Occam's aftershave (Nathan J. Emery and Nicola S. Clayton):

The cognitive differences between human and nonhuman animal minds suggested by Penn et al. are without exception impossible to quantify because of the reliance on language in experiments of human cognition... As recent studies in scrub-jays and apes have suggested (Correia et al. 2007; Mulcahy & Call 2006a; Raby et al. 2007), nonhuman animals may think about alternative futures outside the realm of perception. We believe that these complex processes should not be neglected in the type of cognitive architectures discussed by Penn et al.; indeed, we have argued that planning, imagination, and prospection can be included in such models (Emery & Clayton, in press).

Comparative intelligence and intelligent comparisons (Allen Gardner):

Oddly, a wave of recent claims of evidence for noncontinuity fail to use any controls for experimenter hints. This failure of method is apparent in virtually all of the experimental evidence that Penn et al. cite. Herrmann et al. (2007) is a very recent example. Fortunately, an online video published by Science clearly shows that experimenters were in full view of the children and chimpanzees they tested. Differences in experimenter expectations or rapport between experimenter and subject easily account for all results.

Relational language supports relational cognition in humans and apes? (Dedre Gentner and Stella Christie):

Darwin was not so wrong. We agree with Penn et al. that relational ability is central to the human cognitive advantage. But the possession of language and other symbol systems is equally important. Without linguistic input to suggest relational concepts and combinatorial structures to use in conjoining them, a human child must invent her own verbs and prepositions, not to mention the vast array of relational nouns used in logic (contradiction, converse), science (momentum, limit, contagion) and everyday life (gift, deadline). Thus, whereas Penn et al. argue for a vast discontinuity between humans and nonhuman animals, we see a graded difference that becomes large through human learning and enculturation. Humans are born with the potential for relational thought, but language and culture are required to fully realize this potential.

Bottlenose dolphins understand relationships between concepts (Louis M. Herman, Robert K. Uyeyama, and Adam A. Pack):

The studies Penn et al. critique to discount nonhuman animal relational competencies are heavily weighted toward primates and birds, plus a few additional citations on bees, fish, a sea lion, and dolphins. Cognitive differences among nonhuman species are largely ignored, as if all were cut from the same mental cloth.... Penn et al. make a top-down claim for genetic pre-specification in humans alone of a module for higher-order cognition. However, bottom-up theories may offer better paths to understanding nonhuman animal cognitive potential...

Taking symbols for granted? Is the discontinuity between human and nonhuman minds the product of external symbol systems? (Gary Lupyan):

The human ability to reason about unobservable causes, to draw inferences based on hierarchical and logical relations, and to formulate highly abstract rules is not in dispute. Much of this thinking is compatible on an intuitive level with Penn et al.'s RR hypothesis. But although it is indeed "highly unlikely that the human ability to reason about higher-order relations evolved de novo and independently with each distinctively human cognitive capability" (sect. 11, para. 7), it is not unlikely that such uniquely human abilities depend on the use of external symbol systems... Although the authors provide a compelling demonstration for an insensitivity to structural relations and the use of symbols by nonhuman animals, in taking for granted the biological basis for these abilities in human animals, the very premise of a biologically based fundamental discontinuity between human and nonhuman minds remains unfulfilled.

An amicus for the defense: Relational reasoning magnifies the behavioral differences between humans and nonhumans (Arthur B. Markman and C. Hunt Stilwell):

As argued by the target article, role-governed categories and analogical reasoning are a result of straightforward differences in representational capacity between human and nonhuman animals. We suggest that these abilities serve to magnify the apparent cognitive differences between human and nonhuman animals, because they are crucial for the development of cultural systems that increase in complexity across generations... This view helps to explain how the cognitive abilities of human and nonhuman animals could simultaneously appear to be very similar and very different. Small differences in representation ability support large differences in the available knowledge base that humans and nonhuman animals have to reason with. What this work does not explain is how the leap from feature-based representations to relational representations is made.

Putting Descartes before the horse (again!) (Brendan McGonigle and Margaret Chalmers):

One difficulty in following this thesis is that when espousing their case for human structural superiority, the authors veer between task criteria which are adult end-state, context free, and formal, such as "systematicity," omnidirectionality, and "analogy" considered in isolation from content – and those which are embedded in "world knowledge" – such as functional analogy, theory of mind (ToM), and higher-order structural apprehension of perceived relations of similarity and difference... This not only conflates private with cultural constructions as templates for the individual mind, it also ditches in the process those elements of human cognition regarded by many as core and normative, namely, commonsense reasoning, bounded rationality, choice transitivity, and subjective scales of judgement (based on adaptive value rather than truth), as well as other sources of knowledge derived from perception and action – all of which are subject to principled influences of learning and development. In contrast, Penn et al.'s own characterisation of human cognition both diminishes the role of development and eliminates completely the role of learning. This is despite the fact that many of the authors they cite are at pains to point out that the human competences they describe are often the product of many years of human development (Halford 1993; Piaget 1970) and/or considerable explicit tuition (Kotovsky & Gentner 1996; Siegal et al. 2001) within a physical and social environment.... In an exciting area still largely in a vacuum created more by experimental neglect than animal failures, this rush to judgement by Penn et al. will put this fragile yet exciting new comparative agenda at risk.

Difficulties with "humaniqueness" (Irene M. Pepperberg):

In sum, although Penn et al. do indeed present cases for which no good data as yet exist to demonstrate equivalent capacities for humans and nonhumans, I disagree with their insistence that the present lack of such data leads to a theoretical stance requiring a sharp divide between human and nonhuman capacities. Absence of evidence is not a sure argument for evidence of absence. A continuum appears to exist for many behavior patterns once thought to provide critical distinctions between humans and nonhumans; I discuss some such instances missed by Penn et al., others also exist, and I suspect that, over time, researchers will find more continua in other behavior patterns. Moreover, although I suspect that some of the papers that I cite were not published when this target article was written, their recent appearance only supports my point – that new data may require a reappraisal of purported certainties. One may argue about definitions of discontinuity – for example, how to reconcile some societies' advanced tool creation and use with those of primitive societies whose tools are not much better than those of corvids (Everett 2005; Hunt & Grey 2007) – and I do not deny the many differences that indeed exist between humans and nonhumans, but I believe future research likely will show these to be of degree rather than of kind.

Quotidian cognition and the human-nonhuman "divide": Just more or less of a good thing? (Drew Rendall, John R. Vokey, and Hugh Notman):

Ultimately, then, we completely agree with Penn et al. that the current zeitgeist in comparative cognition is wrong; however, the mistake maybe lies not in emphasizing mental continuity, but rather in the kind of mental continuity emphasized. Animals and humans are probably similar: however, similar not because animals are regularly doing cognitively sophisticated things, but because humans are probably doing cognitively rather mundane things more often than we think.

Explaining human cognitive autapomorphies (Thomas Suddendorf):

Abstract: The real reason for the apparent discontinuity between human and nonhuman minds is that all closely related hominids have become extinct. Nonetheless, I agree with Penn et al. that comparative psychology should aim to establish what cognitive traits humans share with other animals and what traits they do not share, because this could make profound contributions to genetics and neuroscience. There is, however, no consensus yet, and Penn et al.'s conclusion that it all comes down to one trait is premature.

Languages of thought need to be distinguished from learning mechanisms, and nothing yet rules out multiple distinctively human learning systems (Michael Tetzlaff and Peter Carruthers):

Abstract: We distinguish the question whether only human minds are equipped with a language of thought (LoT) from the question whether human minds employ a single uniquely human learning mechanism. Thus separated, our answer to both questions is negative. Even very simple minds employ a LoT. And the comparative data reviewed by Penn et al. actually suggest that there are many distinctively human learning mechanisms.

Analogical apes and paleological monkeys revisited (Roger K. R. Thompson and Timothy M. Flemming):

Penn et al. suggest that, in part, the ability to label relational information is unique to the human mind and responsible for the discontinuity implicated by the relational reinterpretation (RR) hypothesis. In fact, we believe there is comparative evidence to suggest that similar symbolic systems also apply to our nearest primate relatives. In the case of other animals, like monkeys, however, no evidence as yet indicates that a conditional cue can acquire the full status of a symbolic label, although it would seem that symmetric treatment of a conditional cue lays the foundation for a recoding of relational information as set forth by the RR hypothesis.

On possible discontinuities between human and nonhuman minds (Edward A. Wasserman):

The history of comparative psychology is replete with proclamations of human uniqueness. Locke and Morgan denied animals relational thought; Darwin opened the door to that possibility. Penn et al. may be too quick to dismiss the cognitive competences of animals. The developmental precursors to relational thought in humans are not yet known; providing animals those prerequisite experiences may promote more advanced relational thought.

Voice Coil
December 23, 2009 07:57 PM PDT
vjtorley: You may be unaware of the fact that the intelligent design movement as such does not question common descent –
Common Descent is the single most important unifying pattern in biology.
vjtorley: indeed, the question of common descent is orthogonal to its concerns.
Common Descent is essential to understanding evolution. The evidence is pervasive and not reasonably subject to dispute. In large part, humans, like every other organism, are what they are because of what they once were.
vjtorley: The human brain is the most complex machine known in the universe.
So Common Descent applies to everything in biology but the human brain? Or are you picking a thread while ignoring the tapestry? You have to grapple with the extensive evidence for Common Descent. It doesn't go away because ID pretends it isn't there.
vjtorley: Hence my interest in the paper by Hawks et al., “Population Bottlenecks and Pleistocene Human Evolution” in Molecular Biology and Evolution
Yes, there is evidence for a bottleneck in human evolution about 2 million years ago, a common natural occurrence. The phylogenetic changes are well within known rates of evolution. The author completely and adamantly rejects your conclusions.
vjtorley: Hawks is, as you point out, an orthodox evolutionist, yet he is quite open about the occurrence of a sudden anatomical change at the time when Homo erectus (referred to as early Homo sapiens in Hawks’ 2000 paper) first appeared:
Of course he is. Because his idea of "sudden" is well within the norms of evolutionary theory.
vjtorley: For the time being, the tentative hypothesis I shall adopt is that the time when Homo erectus first appeared in Africa is when the human brain underwent its "quantum leap."
You can hypothesize what you want, but all you have is unsupported speculation, while the evidence supports the natural evolutionary history of Common Descent.

Zachriel
December 23, 2009 06:52 PM PDT
Zachriel (#19, 20) Thank you for your posts. Concerning the article I linked to on the weblog of paleoanthropologist John Hawks, you write:
Nice blog. But what is your interest in the difficulty of reconstructing a relatively minor detail of common descent? Hawks certainly agrees that the overall pattern of evidence strongly supports Common Descent, the question being where to fit this particular organism.
You may be unaware of the fact that the intelligent design movement as such does not question common descent - indeed, the question of common descent is orthogonal to its concerns. ID is the scientific quest for patterns in nature which are best explained as the result of intelligent agency. The human brain is the most complex machine known in the universe. It is orders of magnitude more powerful than the world's best computers, which are the product of design. There is also abundant empirical evidence of a sharp cognitive discontinuity between humans and their nearest genetic relatives, the apes, according to a recent paper by Derek C. Penn, Keith J. Holyoak and Daniel J. Povinelli, entitled Darwin's mistake: Explaining the discontinuity between human and nonhuman minds in Behavioral and Brain Sciences (2008), 31(2): 109-178. Let me emphasize that in matters pertaining to biology, the authors of the paper are all orthodox evolutionists, even if they strongly disagree with Darwin's views on psychology:
Darwin was mistaken: the profound biological continuity between human and nonhuman animals masks an equally profound discontinuity between human and nonhuman minds.
Now, you are of course free to believe that the human brain, which controls the body of the primate Homo sapiens, evolved gradually through an unguided process of random variation combined with periodic non-random winnowing (selection), and the authors of the paper I cited would also agree with you. For my part, however, I would call this a fanciful hypothesis, which flies in the face of everything we know about machines and how they improve.

In my opinion, the main reason such an outlandish hypothesis is still taken seriously in the sciences is the mistaken perception that the alternative hypothesis of design is a "science-stopper," and that it would stymie research if adopted. Nothing could be farther from the truth. A more useful quest, in my opinion, would be to identify the time in the fossil record when the human brain underwent an abrupt change in its processing capacity, which would be one tell-tale signature of guided evolution.

Hence my interest in the paper by Hawks et al., "Population Bottlenecks and Pleistocene Human Evolution," in Molecular Biology and Evolution 17:2-22 (2000) at http://mbe.oxfordjournals.org/cgi/content/full/17/1/2 . This paper presents evidence of an abrupt jump about 2 million years ago. (Since then, Hawks has modified the date slightly: he now dates the emergence of Homo erectus in Africa to 1.65 million years ago, as some of the papers I cited above show.) Hawks is, as you point out, an orthodox evolutionist, yet he is quite open about the occurrence of a sudden anatomical change at the time when Homo erectus (referred to as early Homo sapiens in Hawks' 2000 paper) first appeared:
We, like many others, interpret the anatomical evidence to show that early H. sapiens was significantly and dramatically different from earlier and penecontemporary australopithecines in virtually every element of its skeleton (fig. 1) and every remnant of its behavior (Gamble 1994; Wolpoff and Caspari 1997; Asfaw et al. 1999; Wood and Collard 1999). Its appearance reflects a real acceleration of evolutionary change from the more slowly changing pace of australopithecine evolution.
For the time being, the tentative hypothesis I shall adopt is that the time when Homo erectus first appeared in Africa is when the human brain underwent its "quantum leap." I would also expect further research in the field of genetics to reveal a sharp discontinuity in the number of genetic instructions required to build a human brain (as opposed to a chimp's), and that this hurdle was crossed about 1.65 million years ago.

Finally, I speculate that the magnitude of the informational hurdle that was crossed during the transition from the common ancestor of humans and apes to the first human being may even compare with the magnitude of the informational hurdle that was crossed at the beginning of the Cambrian period, which is described in Dr. Stephen Meyer's paper, The Cambrian Explosion: Biology's Big Bang, at http://www.discovery.org/a/1772 . See also the resource page Darwin's Dilemma: The Mystery of the Cambrian Explosion, which has lots of helpful, up-to-date articles.

Lastly, the reason I cited Hawks' blog post, The trouble about Kenyanthropus and Ardi, is that it serves as a useful antidote to scientific hubris. What it shows is that our picture of hominin evolution beyond the critical 4-million-year stage (near the point when humans and chimps are supposed to have diverged) remains murky and speculative – a situation not helped by the fact that the scientists who possess the hominin fossils from that time refuse to make them available to the scientific community at large. Now that's a science-stopper.

vjtorley
December 23, 2009 05:32 PM PDT