Human evolution: Ardipithecus, humans, and chimps

Someone wrote to me recently, asking

Ever since the reporting of Ardi, I expected a commentary on it on your blog, but so far I have found none, unless I missed it. I’m curious as to what you have to say about it, since the researchers of Ardi claim chimps may have descended from us. That being the case, evolution’s tree of life would have to be reimagined. Thoughts?

I think Jean Auel did the best job in Clan of the Cave Bear, and she even admitted that she was writing fiction. Which, in my view, puts her way ahead of dozens of profs who can tell me exactly how long-dead people – who never left any writings behind – thought about stuff like religion and family life.

Anyway, I am putting this to the commenters. What do you think about Ardipithecus? An ancestor of us? Of chimps? Both?

By the way, this blog is a volunteer enterprise. Unlike the Darwinists, we are not part of your tax burden. If you feel like contributing financially, don’t let me deter you. We could expand our coverage and services if we had more resources. Otherwise, you see only what a volunteer found time to contribute. That’s hardly a quarter of what we could say.

70 Responses to Human evolution: Ardipithecus, humans, and chimps

  1. Which, in my view, puts her way ahead of dozens of profs who can tell me exactly how long-dead people – who never left any writings behind – thought about stuff like religion and family life.

    Can you tell us who some of these “dozens of profs” are, along with evidence that they do indeed try to say exactly how pre-humans thought?

    And then can you explain how this relates to Ardi? I wasn’t aware of any discussion of behaviour, although I admit I didn’t follow the reporting very closely.

  2. I was very intrigued by Ardi. I had heard about the discovery in the past, and they finally released everything in great detail. I watched the three-hour specials on TV and researched Ardi online.

     I was surprised to see that Ardi was only about 4 ft something tall. Overall it seemed to resemble an orangutan, or perhaps it is just an extinct species. The pelvis was the only thing that seemed to separate it a bit from other extinct apes.

     One thing that did bother me was the artist renderings. There is no way they know what Ardi looked like: whether or not it had hair, where it had hair, etc. Now it probably did, since it seems to be some sort of extinct ape, but they should not throw that out in public.

     Overall, the more I watched the specials and the more I learned, the less impressed I was, specifically because of the claims that Ardi was “the missing link” or in the chain with humans. But that doesn’t seem to be the case. It really just seems like an extinct ape. Look at the bone structure and look at the feet. Like I said, the pelvis is the only thing that jumps out.

     Also, this is a fairly complete skeleton, more so than Lucy, but I believe they found fragments at various sites in the same area. They did not just get them all in one spot. Correct me if I am wrong on this. But that opens the door for some issues as well.

    So I guess it is a good historical find, but I don’t see how you can call it a link or ancestor of humans. It appears to be an extinct ape.

  3. The apparent importance of Ardipithecus is that it implies that the ape lineage and the hominid lineage broke off from one another earlier than thought. However, when we consider the tremendous anatomical differences between apes/chimps/orangutans and humans, per Darwinian theory, a huge amount of time would be necessary to account for the differences we see, not just the few million years this find might add to the discussion.

     In the minds of these scientists, I presume, this helps. But let’s just consider that from what is generally known concerning mutations, a mutation of some sort—a nucleotide change, recombination, insertion, deletion, etc—occurs with a frequency of about 1 in 10^9. For a primitive population of a thousand, with a generation time of several years, this would require several million years of reproduction to produce just one mutation. Even with this added time since the split of lineages, there is just no way to account for the huge differences between man and ape using traditional neo-Darwinian mechanisms.

     This is like a 12-year-old showing his license to a liquor store owner so that he can buy some whiskey. The owner says: “According to your license, you were born in April of 1997. You’re not old enough to buy whiskey.” To which the kid says: “Oh, no. I was really born in March of 1997.”

    We’re dealing here with a problem that is orders of magnitude apart from theory. Adding a few million years is like trying to whittle down our national debt by having a garage sale.
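PaV’s waiting-time claim is concrete enough to check numerically. This is a minimal sketch using his stated figures (a per-site rate of 1 in 10^9 per generation, a population of 1,000, and a five-year generation taken as a round value for “several years”); the per-genome count at the end anticipates the replies below.

```python
# Sketch of the waiting time for one pre-specified single-nucleotide mutation,
# using the commenter's assumed figures (not a vetted empirical rate).
MU_PER_SITE = 1e-9   # mutations per nucleotide per generation (assumed)
POP_SIZE = 1_000     # diploid individuals (assumed)
GEN_YEARS = 5        # "several years" per generation (assumed round value)

new_copies_per_gen = 2 * POP_SIZE * MU_PER_SITE   # 2N gene copies, each mutating at mu
gens_until_seen = 1 / new_copies_per_gen          # expected generations until it appears
years_until_seen = gens_until_seen * GEN_YEARS
print(round(years_until_seen))   # 2500000 years for one pre-specified site

# Summed over a whole diploid genome (~7e9 sites), each offspring still
# carries several new mutations somewhere:
DIPLOID_SITES = 7e9
print(round(MU_PER_SITE * DIPLOID_SITES))   # 7 new mutations per offspring
```

The contrast between the two printouts is the per-site versus per-genome distinction that the replies in this thread turn on.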

  4. PaV said:

    But let’s just consider that from what is generally known concerning mutations, a mutation of some sort—a nucleotide change, recombination, insertion, deletion, etc—occur with a frequency of about 1 in 10^9. For a primitive population of a thousand, with a generation time of several years, this would require several millions of years of reproduction to produce just one mutation.

    Worth noting, I believe, is that the mutation rate quoted deals with that per *nucleotide*, not per genome. With a genome of several million nucleotides, a mutation will most definitely appear in a shorter amount of time. If you are referring to a mutation fixating, that is a different story.

  5. PaV:

    But let’s just consider that from what is generally known concerning mutations, a mutation of some sort—a nucleotide change, recombination, insertion, deletion, etc—occur with a frequency of about 1 in 10^9. For a primitive population of a thousand, with a generation time of several years, this would require several millions of years of reproduction to produce just one mutation. Even with this added time since the split of lineages, there is just no way to account for the huge differences between man and ape using traditional neo-Darwinian mechanisms.

    Actually, the number of new mutations per individual offspring is estimated to be about 150 in humans, off the top of my head. You’re off by 11 orders of magnitude.

    Also off the top of my head, humans produce about 400 novel gene products compared to our nearest relatives. Factor in the fact that regulatory genes can have large downstream effects, do you still feel that the “huge anatomical differences” (which I don’t find so huge at all) cannot be accounted for?
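The roughly 150 figure IrynaB recalls can be recovered from Nachman and Crowell’s published estimates, which come up later in the thread. A minimal sketch (both numbers are theirs: a per-site rate of about 2.5 x 10^-8 per generation and roughly 7 x 10^9 nucleotides per diploid genome):

```python
# New mutations per offspring = per-site rate x number of sites in a diploid genome.
MU_PER_SITE = 2.5e-8    # per nucleotide per generation (Nachman & Crowell 2000)
DIPLOID_SITES = 7e9     # ~2 x 3.5e9 nucleotides

print(round(MU_PER_SITE * DIPLOID_SITES))   # 175 new mutations per offspring
```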

  6. Let’s be honest, Darwinists have been screwing up their own science since Saint Charles wrote Origin.

    They still do not know what the alleged human ancestor is and never will. That’s because the oldest ancestor of man was a man.

    Every year or so they come up with some alleged ancestor and it always turns out to be problematic to say the least. So they downplay the failures and hail some new fossil as the real one until the new goes down the tubes and ad infinitum – they never learn because they do not want to.
    When the whole of your existence (job, reputation, career, friendships, etc.) depends on the truth of your origins theory, you will not likely be willing to change it at any cost. It won’t matter how futile and inane your theory must become.

    That pretty much describes the current state of Darwinist thinking. Dumb and dumber, or as Sir Fred Hoyle stated in ‘The Mathematics of Evolution’, “mentally ill”.

     Persecute, ostracise and belittle your opponent to keep your psychological security in your worldview intact. That’s the current tactic.

     Ardi is just another example of Darwinist waste of time and funds – just look at the details of its restoration. Do you believe they restored that thing perfectly? Do you really trust that? Well I certainly don’t.

    The bones were so “crushed”, “chalky”, “fragmented”, “squished” and “Irish stew” like that it took them years to reconstruct. I wouldn’t put a whole lot of confidence in such a reconstruction in the first place.

    Science reports some serious scientific skepticism about A. ramidus being bipedal:

    However, several researchers aren’t so sure about these inferences. Some are skeptical that the crushed pelvis really shows the anatomical details needed to demonstrate bipedality. The pelvis is “suggestive” of bipedality but not conclusive, says paleoanthropologist Carol Ward of the University of Missouri, Columbia. Also, Ar. ramidus “does not appear to have had its knee placed over the ankle, which means that when walking bipedally, it would have had to shift its weight to the side,” she says. Paleoanthropologist William Jungers of Stony Brook University in New York state is also not sure that the skeleton was bipedal. “Believe me, it’s a unique form of bipedalism,” he says. “The postcranium alone would not unequivocally signal hominin status, in my opinion.” Paleoanthropologist Bernard Wood of George Washington University in Washington, D.C., agrees. Looking at the skeleton as a whole, he says, “I think the head is consistent with it being a hominin, … but the rest of the body is much more questionable.” (Ann Gibbons, “A New Kind of Ancestor: Ardipithecus Unveiled,” Science, Vol. 326:36-40 (Oct. 2, 2009).)

     IOW, Ardi is another future failure.

    Lucy was debunked back in 1998 by scientists around the world (but chiefly Europe) so it still amazes me to see that brought up again.

    “The success of Darwinism was accompanied by a decline in scientific integrity. …To establish the continuity required by the theory, historical arguments are invoked even though historical evidence is lacking.” -W. R. Thompson, PhD

    Gee that reminds me of Climategate! ;-)

  7. Let’s be honest, Darwinists have been screwing up their own science since Saint Charles wrote Origin.

    They still do not know what the alleged human ancestor is and never will. That’s because the oldest ancestor of man was a man.

     Let’s compare the evidence, then. Why is your claim that the oldest ancestor of man was a man (not a woman, of course) more convincing?

  8. Actually, the number of new mutations per individual offspring is estimated to be about 150 in humans, off the top of my head. You’re off by 11 orders of magnitude.

    Yes.

    Nachman and Crowell (2000) produced this well-known estimate. It means that across the global human metapopulation there is enough mutation to replace every single nucleotide in a period of about 20 years (i.e. each generation). So a lack of mutation to generate variation is a poor argument indeed.

    A series of terrible analogies about whisky and garage sales from PaV does not change this basic fact.

    I agree with IrynaB – let’s see the balance of evidence here. The morphological and genomic data strongly indicate common descent and the differences between humans and apes can be explained by the types of microevolution that ID advocates seem most happy to accept. So there would need to be some strong counterevidence to suggest otherwise.

    The absence of a particular intermediate fossil is hardly conclusive proof – this assumes the fossil record conveniently provides specimens of each species that ever lived!
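paulmc’s metapopulation point can be made concrete with rough numbers. A sketch assuming about 130 million births per year worldwide (a round illustrative figure, not from the thread) and the 175-mutations-per-offspring estimate discussed below:

```python
MUTS_PER_OFFSPRING = 175   # Nachman & Crowell's per-diploid-genome estimate
BIRTHS_PER_YEAR = 130e6    # rough global figure (assumed for illustration)
GEN_YEARS = 20

births_per_gen = BIRTHS_PER_YEAR * GEN_YEARS             # ~2.6e9 offspring per generation
new_muts_per_gen = MUTS_PER_OFFSPRING * births_per_gen   # ~4.55e11 new mutations

DIPLOID_SITES = 7e9
print(round(new_muts_per_gen / DIPLOID_SITES))   # 65 hits per site per generation
```

On those assumptions, every site in the genome is mutated dozens of times somewhere in the population each generation, which is the sense in which mutation supply is not limiting.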

  9. Paulmc,

    But you’ve got the burden of proving your theory correct. The lack of evidence means that skeptics can remain skeptical.

     Also, the difference between Ardi and humans is not, I think, explainable by microevolutionary changes alone. I may be wrong, but isn’t it true that Ardi had opposable big toes? That does not change by random mutation, imo.

  10. But you’ve got the burden of proving your theory correct.

    Collin: Do you reject common descent as a whole – i.e. are you suggesting each species is specially created? If not, then the most parsimonious explanation is that, like every other species, humans descended from an ancestral species.

    The lack of evidence means that skeptics can remain skeptical.

    Skepticism is one thing, stating “That’s because the oldest ancestor of man was a man” is rather another. That is a statement that requires some evidence.

    Also, the difference between Ardi and humans is not, I think, explainable by microevolutionary changes alone. I may be wrong, but isn’t it true that Ardi had opposable big toes? That does not change by random mutuation imo.

     Of course, Ardi may or may not be a direct human ancestor. That surely remains open. It is also quite possible that no all-answering ancestral fossil will ever be found; one may never have been preserved. With that said, large opposable toes are hardly an insurmountable problem for microevolutionary processes – what is your basis for saying they are?

    Overall, the differences between ape and human are relatively modest, on both a morphological and a genomic level – good evidence for common descent.

    If humans are specially created, then let us see the evidence that disproves common descent.

  11. Hi everyone,

     Here’s an illuminating post on Ardipithecus from a paleoanthropologist named John Hawks. It makes for VERY interesting reading.

     The trouble about Kenyanthropus and Ardi.

    There are three skulls from putative “hominins” that date to 3.5 million years or earlier. Every one of these skulls is known now from extensive reconstruction or correction for distortion in the original.

    By itself, the extensive reconstruction might not be a problem. But as Tim White has repeatedly shown, the specialists on these crania actively and vociferously disagree about the basic anatomy due to problems reconstructing them. (Emphases mine – VJT.)

    It gets better:

    We can’t see the scans, no independent reconstructions are possible, and the people who can see the scans refuse to present comparisons of these three skulls that together represent the supposed origin of the hominin lineage.

    By the way, the emphasis and italics in the above quote are John Hawks’, not mine.

     Watch out for the AAAARRRRGGHHHH! near the end of the post. All of us have fresh memories of the HarryReadMe file in the Climategate scandal. One lesson we all learned from that is that when a scientist says Aaarrgghh!!, you know there’s a major problem.

    Here are some other articles by John Hawks that may be germane to the debate on whether Homo could have evolved gradually:

    John Hawks, Keith Hunley, Sang-Hee Lee and Milford Wolpoff. Population Bottlenecks and Pleistocene Human Evolution at http://mbe.oxfordjournals.org/.....ull/17/1/2 . In Molecular Biology and Evolution 17:2-22 (2000).

     A revised chronology for early Homo.

     Is a lack of fossils the problem with early Homo?

     News flash: Dmanisi hominids were not short.

    Enjoy!

  12. Oops, that last post had the most incredible argument for intelligent design ever conceived, but I accidentally deleted it and now I forgot it. Oh well. :)

    Paul, when you say, “Skepticism is one thing, stating “That’s because the oldest ancestor of man was a man” is rather another. That is a statement that requires some evidence” you are right.

  13. Paul, when you say, “Skepticism is one thing, stating “That’s because the oldest ancestor of man was a man” is rather another. That is a statement that requires some evidence” you are right.

    Well it’s nice to know we agree on that much! Who knows about the rest…

  14. If we descended from apes, why did we lose our hair? What advantage does hairlessness convey? Is it because we discovered fire and how to make clothing to keep warm? Did fur have a tendency to catch fire so our ancestors with less hair survived more than the ones who had fur? Why did we retain eyebrows and hair on our heads? Didn’t we know how to make hats? Eyebrows can be helpful for keeping debris out of our eyes, so did our ancestors who lost their eyebrows get killed more by predators and other enemies because they got stuff in their eyes at crucial moments, therefore losing eyebrows was an evolutionary disadvantage?

  15. paulmc:

    It means that across the global human metapopulation there is enough mutation to replace every single nucleotide in a period of about 20 years (i.e. each generation).

    And yet every single nucleotide is not replaced every 20 years, or every generation, so apparently there are in actuality not enough mutations across the human metapopulation to replace every single nucleotide in a period of about 20 years (i.e. each generation). Quite the conundrum.

    And if there were, it would play heck with trying to reconstruct phylogenetic trees, wouldn’t it?

    Can’t have too much mutational noise or the signal would get wiped out.

    So a lack of mutation to generate variation is a poor argument indeed.

    He was assuming a much smaller population size. Of course, I think the trade off there is with a smaller population size a mutation can be fixed much more quickly.

  16. Mung:

    And yet every single nucleotide is not replaced every 20 years, or every generation, so apparently there are in actuality not enough mutations across the human metapopulation to replace every single nucleotide in a period of about 20 years (i.e. each generation). Quite the conundrum.

    The word replace was a bit unfortunate. Most conceivable single nucleotide mutations occur within a relatively short time.

    Since the probability of fixation of a beneficial mutation is roughly 2s I think, where s is the selective advantage, only a small fraction of beneficial mutations will become fixed. For neutral mutations that’s 1/N and for deleterious mutations even smaller. Assuming a distribution of fitness effects of mutations (based on estimates, if available), isn’t it possible to work out how many beneficial mutations have accumulated since the split-off some 6 or whatever millions of years ago? Doesn’t sound like too much work, so I guess someone already did that.
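The probabilities IrynaB quotes come from Kimura’s diffusion approximation for the fixation probability of a new mutation. A minimal sketch (note that the textbook neutral value for a diploid population of size N is 1/(2N); 1/N is the haploid case):

```python
import math

def p_fix(s, N):
    """Kimura's approximation for the fixation probability of a new mutation
    with selection coefficient s in a diploid population of size N
    (assuming effective size equals census size)."""
    if s == 0:
        return 1 / (2 * N)   # neutral limit
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000
print(p_fix(0.0, N))      # 5e-05, i.e. 1/(2N)
print(p_fix(0.01, N))     # ~0.0198, close to 2s for a beneficial mutation
print(p_fix(-0.001, N))   # vanishingly small for a deleterious mutation
```

This is consistent with the thread: most beneficial mutations (fixation probability roughly 2s) are lost, and deleterious ones essentially never fix in a large population.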

  17. vjtorley: Here’s an illuminating paper on Ardpithecus from a paleoanthropologist named John Hawks.

    Nice blog. But what is your interest in the difficulty of reconstructing a relatively minor detail of common descent? Hawks certainly agrees that the overall pattern of evidence strongly supports Common Descent, the question being where to fit this particular organism. His request for better access to the original evidence is reasonable. Regardless, it doesn’t call into question evolutionary theory, just some details of a lineage that you seem particularly interested in (perhaps because you share a possible family relationship).

    The paleoanthropologist named John Hawks also thinks ID goes well beyond fantasy into the realm of delusion.

  18. vjtorley: Here are some other articles by John Hawks that may be germane to the debate on whether Homo could have evolved gradually:

    John Hawks, Keith Hunley, Sang-Hee Lee and Milford Wolpoff. Population Bottlenecks and Pleistocene Human Evolution at http://mbe.oxfordjournals.org/…..ull/17/1/2 . In Molecular Biology and Evolution 17:2-22 (2000).

    Everything in the paper you cite is based in and lends support to the Theory of Evolution, that is, evolution by natural variation and selection. From the abstract:

    Although significant population size fluctuations and contractions occurred, none has left a singular mark on our genetic heritage. Instead, while isolation by distance across the network of population interactions allowed differences to persist, and with selection, local adaptations were able to develop, evolution through selection, along with gene flow, has promoted the spread of morphological and behavioral changes across the human range. It is this pattern of shared ancestry that has left its signature in the variation that we observe today.

    The study indicates a bottleneck two million years ago, no subsequent bottleneck, and insufficient resolution in genetic data to provide a more detailed look at fluctuations in population.

  19. Mung: And yet every single nucleotide is not replaced every 20 years, or every generation, so apparently there are in actuality not enough mutations across the human metapopulation to replace every single nucleotide in a period of about 20 years (i.e. each generation). Quite the conundrum.

    No conundrum. The mechanism is called selection (Darwin, Wallace 1858).

    Mung: Can’t have too much mutational noise or the signal would get wiped out.

    Quite so, but not an issue. No single individual has every mutation, rather every single-point mutation is represented in the population over time. With a population of a few million, every mutation would be tried every few thousand years. There is sufficient variation to explain extensive changes in populations, which was the question raised.

    -
    Plasma storms are apparently still causing a delay in Zachriel's comments appearing on this blog. Our teams are working to resolve the problem.

  20. IrynaB,

    What is the scientific data that demonstrates the transformations required are even possible?

    Also it is most likely that even the most beneficial mutation will get lost, as opposed to becoming fixed.

     BTW “beneficial” is relative: what is beneficial for one generation in one environment may not be so in the next generation or in another environment.

     Also, there isn’t just one beneficial mutation, meaning there could very well end up being competing beneficial mutations.

  21. Why do some people want to be related to chimps?

  22. StateMachine [4]:

    Worth noting, I believe, is that the mutation rate quoted deals with that per *nucleotide*, not per genome. With a genome of several million nucleotides, a mutation will most definitely appear in a shorter amount of time. If you are referring to a mutation fixating, that is a different story.

     Actually, the mutation rate, per year, is rather constant, and is about 1 x 10^-9 per nucleotide. In my response below I’ll spell that out. As to genomes of “several million nucleotides”, this does not apply to human evolution: the human genome has about 3 x 10^9 nucleotides. And, since we’re dealing with small effective population sizes, the time needed to arrive at the needed mutation is the bottleneck, not the fixation time.

    IrynaB [5]:

    Actually, the number of new mutations per individual offspring is estimated to be about 150 in humans, off the top of my head. You’re off by 11 orders of magnitude.

    Also off the top of my head, humans produce about 400 novel gene products compared to our nearest relatives. Factor in the fact that regulatory genes can have large downstream effects, do you still feel that the “huge anatomical differences” (which I don’t find so huge at all) cannot be accounted for?

     I’ve just finished reading the article. The actual number is 175 per “diploid” genome. [I suppose what is implicit in the “per diploid genome” is our understanding that “anti-sense” transcription takes place, which means we should divide this number by two. But let’s just leave the number alone for right now.] This number is calculated using a generation time of 20 years, and rather low effective population sizes. The actual equation is: mu = k/(2t + 4Ne) = mutation rate. You can see that by increasing the effective population size, Ne, the result is a lower rate of mutation. But we’ll leave that to one side as well.

     Now, typically, mutation rates are given as mutations/generation while assuming one year as the average generation time. The typical rate, as I stated in the post, is 1 x 10^-9 mutations/generation = 1 x 10^-9 mutations/year, which over a 20-year generation equals 20 x 10^-9 = 2.0 x 10^-8, which is the exact figure that Nachman and Crowell give. This means, then, that for an effective population size of 10,000 to 100,000, mutations will occur, on a per year basis, at a per-site rate of 10^-9 x 10^5 = 10^-4, and will become fixed at a rate of 1/(4Ne), or 1 fixation for every 400,000 years. This means that it will take, on average, 10,000 years to get a particular needed mutation, and 400,000 years to fix it. IOW, we can expect roughly 2.5 mutations to become fixed every million years of evolution. Nachman and Crowell use a figure of 2.75 million years, based on the last common ancestor having lived 5.5 mya (we divide the time by two since both lines are mutating simultaneously [N.B. N&C were comparing humans to chimps], thus giving twice the mutational difference). So, how many mutations of all sorts can we expect? It’s 2.5 x 2.75 = 7.925. Let’s round that off to 8 mutations that are fixed. Now there have been a huge number of mutations that will have occurred, but only 8 will have found their way through the entire population.

     One would think this would have the effect of humbling Darwinists, but we know that it doesn’t. There is the dogma to preserve, you know!

    So, my basic argument still stands. The whole ‘excitement’ about Ardi is because it pushed the 5.5 mya back a bit. Yet, again, we’re dealing with orders of magnitude of difference between what is needed to be explained and what Darwinism, specifically neo-Darwinian population genetics, can explain.

     Let me add these two further points: (1) I believe it is the small effective population size and the very long generation time used for humans that give these larger-than-average numbers for mutation rates. With this in mind, I’ll make the second point. (2) As I’ve just demonstrated, correcting for different generation times makes the “rate” of human evolution the same as that of other organisms. Yet we hear that human evolution has “speeded up”. I believe this is entirely due to using numbers without making the needed corrections. My second point, then, is that, quite typically, we who challenge the Darwinian orthodoxy are forced to wade through the addled thinking of Darwinists. This muddled thinking arises because words are simply invented by the Darwinists to force-fit the data into Darwinian thought. We are then forced to unravel their muddled thinking to get to the bottom of things.

    paulmc [8]

    Nachman and Crowell (2000) produced this well-known estimate. It means that across the global human metapopulation there is enough mutation to replace every single nucleotide in a period of about 20 years (i.e. each generation). So a lack of mutation to generate variation is a poor argument indeed.

     Well, first of all, your claim depends on what number you use for the effective population size of humans. Your claim that in one generation every nucleotide can be replaced means that 7.0 x 10^9 nucleotides (diploid genome) must be replaced in one generation with a mutation rate of 175 mutations per diploid genome. Let’s assume 200 mutations per diploid genome just to make the calculation straightforward. This means the effective population size of the ancestral originator has to be 3.5 x 10^7, that is, 35 million strong. Given that Nachman and Crowell use population sizes of 10^4 and 10^5, your statement is off by at least a factor of 350. Second, as I’ve pointed out, it’s not just a matter of getting the mutation, but of its fixing. With a population size of 35 million, it would take 140 million years for a single mutation to become fixed.

    Isn’t it obvious that the numbers just don’t add up?

    paulmc [8]

    A series of terrible analogies about whisky and garage sales from PaV does not change this basic fact.

    Do my analogies seem so terrible now?

    Having answered the challenges made, let me note how NACHMAN AND CROWELL’S PAPER ends:

     The high deleterious mutation rate in humans presents a paradox. If mutations interact multiplicatively, the genetic load associated with such a high U would be intolerable in species with a low rate of reproduction (Muller 1950; Wallace 1981; Crow 1993; Kondrashov 1995; Eyre-Walker and Keightley 1999). The reduction in fitness (i.e., the genetic load) due to deleterious mutations with multiplicative effects is given by 1-e^-U (Kimura and Maruyama 1966). For U = 3, the average fitness is reduced to 0.05, or put differently, each female would need to produce 40 offspring to survive and maintain the population at constant size. This assumes that all mortality is due to selection and so the actual number of offspring required to maintain a constant population size is probably higher. . . .

    The problem can be mitigated somewhat by soft selection (Wallace 1991) or by selection early in development (e.g., in utero). However, many mutations are unconditionally deleterious and it is improbable that the reproductive potential on average for human females can approach 40 zygotes.

     So, there you have it. A paper that ends up presenting Darwinism with a “paradox”—which is no more than Haldane’s Dilemma revisited—is quoted as a source telling us just how many mutations are possible. So a paper that should have Darwinists shaking their heads is used as a debating device in their favor, and, it appears, because they are misinterpreting it under the guise of “sped-up” human evolution.

     Finally, let me note that Kimura, in his book The Neutral Theory of Molecular Evolution, says that when he first proposed the “neutral theory,” he did so based on this very problem: that is, the problem of genetic load. But he said there was another reason why he was proposing it, which he left unstated: the problem of the huge number of beneficial mutations needed to account for the polymorphism found in human DNA. Because most mutations are deleterious, a huge number of mutations is needed to winnow out the few good ones that go into any kind of progressive evolution of the genome.
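The arithmetic in the quoted ending of Nachman and Crowell’s paper checks out directly. A minimal sketch of the multiplicative genetic-load calculation (U = 3 is their estimate of the deleterious mutation rate per diploid genome per generation):

```python
import math

U = 3.0                       # deleterious mutations per diploid genome per generation
mean_fitness = math.exp(-U)   # mean fitness under multiplicative effects
load = 1 - mean_fitness       # the genetic load

# Each female needs enough offspring for two to survive and replace the parents:
offspring_needed = 2 / mean_fitness

print(round(mean_fitness, 2))    # 0.05
print(round(offspring_needed))   # 40
```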

  23. The Riddled Chain
    Pure. Dumb. Luck.

  24. PaV,

     Thank you very much for your detailed reply. It’s good to see a pro-ID commenter here take the time to seriously work the numbers.

     But (you knew there was a but coming) your calculations are still way off, because you calculate on a per-nucleotide basis — not a per-genome basis.

     The typical rate, as I stated in the post, is 1 x 10^-9 mutations/generation = 1 x 10^-9 mutations/year, which over a 20-year generation equals 20 x 10^-9 = 2.0 x 10^-8, which is the exact figure that Nachman and Crowell give. This means, then, that for an effective population size of 10,000 to 100,000, mutations will occur, on a per year basis, at a per-site rate of 10^-9 x 10^5 = 10^-4, and will become fixed at a rate of 1/(4Ne), or 1 fixation for every 400,000 years. This means that it will take, on average, 10,000 years to get a particular needed mutation, and 400,000 years to fix it. IOW, we can expect roughly 2.5 mutations to become fixed every million years of evolution.

     Here’s my calculation: 175 new mutations per individual, for a population size of Ne = 10^5, means 2×10^7 new mutations per generation of 20 years, which equals 10^6 new mutations per year in the population. If 1/Ne is the probability of fixation (which assumes neutrality), then out of 10^6 mutations per year, 10 will become fixed. That means 10 million mutations become fixed every million years. A factor of 10^7 higher than your estimate of 2.5. That’s better than your previous error of 10^11. Four more posts, and we might agree!

    I will address the rest of your post later.
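For the record, IrynaB’s numbers reproduce step by step under her stated assumptions (Ne = 10^5, 175 new mutations per offspring, 20-year generations, and a neutral fixation probability taken as 1/Ne):

```python
NE = 1e5               # effective population size (her assumption)
MUTS_PER_IND = 175     # new mutations per offspring
GEN_YEARS = 20

per_generation = MUTS_PER_IND * NE      # 1.75e7, which she rounds to 2e7
per_year = per_generation / GEN_YEARS   # 8.75e5, which she rounds to 1e6
fixed_per_year = per_year * (1 / NE)    # applying her fixation probability of 1/Ne
fixed_per_myr = fixed_per_year * 1e6

print(round(fixed_per_myr))   # 8750000, i.e. her "10 million per million years"
```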

  25. PaV, really good stuff.

    Haldane’s Dilemma is alive and well, and evolutionists continue to pretend it doesn’t exist.

  26. IrynaB [26]:

     Here’s my calculation: 175 new mutations per individual, for a population size of Ne = 10^5, means 2×10^7 new mutations per generation of 20 years, which equals 10^6 new mutations per year in the population. If 1/Ne is the probability of fixation (which assumes neutrality), then out of 10^6 mutations per year, 10 will become fixed. That means 10 million mutations become fixed every million years. A factor of 10^7 higher than your estimate of 2.5. That’s better than your previous error of 10^11. Four more posts, and we might agree!

    The point that you’re missing is that not just any old mutation will do. You need a specific mutation at a specific location along the length of the genome. So, at some specific spot, lineage A diverges from lineage B. Any mutation somewhere else won’t do because the likelihood of the mutation being deleterious is much greater than it’s being beneficial. So, the question then becomes this: to get a particular mutation at a particular point (thus rendering it ‘beneficial’), how many mutations do we need? Well, the probability of getting a particular mutation at a particular site along the length of the genome is 1 divided by the total number of nucleotides. Now the 175 mutations are for a ‘diploid’ genome. So the odds of a particular mutation at a particular site are 1/(7 x 10^9) ≈ 1.4 x 10^-10. Thus, 1.4 x 10^10 mutations have to occur before we can be sure that a particular mutation—let’s call it mutation A—occurs. Now, if 10^6 mutations are produced each year by the population, then 1.4 x 10^4 years are needed for mutation A to occur. Now simply occurring is not sufficient; it has to become fixed. If we need mutation B elsewhere on the genome if some new species is to arise, then mutation B has to show up on a genome that already has mutation A on it. This means that, roughly, mutations A,B,C,D ….. and so forth, need to be added sequentially (note that I said roughly, for on average this is true, but not strictly true). The fixation rate for a particular mutation is 1/(4Ne), with Ne considered by Nachman and Crowell to be 10^4 or 10^5. You’ve used 10^5. So, it takes 14,000 years for mutation A to arise, and then it takes mutation A 4 x 10^5 years to become fixed. Total number of years is 414,000 years for the first beneficial mutation to occur and become fixed. Now the genome must search for mutation B. Well, of course, this will take another 414,000 years.
    So, take the 2.75 million years since lineage A and lineage B diverged, and this means, on average, we would expect 2.75 x 10^6 years/one fixed mutation/4.1 x 10^5 years = 6.6 fixed, beneficial mutations. I guess my 8 fixed mutations figure was overstating it!
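    For what it’s worth, the arithmetic can be re-run exactly as stated; this sketch only checks the stated numbers and does not settle the model itself, which other commenters in this thread dispute:

```python
# Re-running the arithmetic exactly as stated in the comment above.
# This only verifies the stated numbers; the underlying model is disputed
# elsewhere in the thread.
new_per_year = 1e6        # mutations produced per year in the population (stated)
Ne = 1e5                  # effective population size (stated)

mutations_needed = 1.4e10                      # stated count to hit one specific site
wait_years = mutations_needed / new_per_year   # 14,000 years for mutation A to arise
fixation_years = 4 * Ne                        # counted as years in the comment
total_per_mutation = wait_years + fixation_years
fixed_in_divergence = 2.75e6 / total_per_mutation

print(f"{total_per_mutation:,.0f} years per fixed mutation; "
      f"{fixed_in_divergence:.1f} fixed in 2.75 Myr")
```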

    Your calculation of 10 million mutations becoming fixed only points out the true function of natural selection: the elimination of deleterious mutations. If, indeed, all 10 million mutations are truly “neutral”, then let’s look at another number Nachman and Crowell give us: 70,000 genes. Now, on average, these 37.5 million mutations, all fixed per your calculation, would amount to 37.5 x 10^6 mutations/7 x 10^4 genes, equals, roughly, 5 x 10^2 mutations per gene. N&C use 1,500 basepairs as the average size of a gene, and they calculate that the total number of coding basepairs is about 10% of the genome. Your figure of 10 million mutations fixed per million years amounts then to a heterozygosity of around 50 mutations per 1500 basepairs, which is roughly 3%. This is perfectly in line with heterozygosities for mammalian species. But we’ve calculated the heterozygosity per one million years. The highest heterozygosities occur in insect species. For mammals H is in the range of 6 to 13%. So, any species that diverged more than 8 million years ago (I’m doubling here because of the split lineages) couldn’t tolerate this amount of heterozygosity, and that would mean that natural selection would have to kill off members of the species as a way of eliminating the accumulated number of mutations. And, again, Nachman and Crowell end their paper with the problem of genetic load. Translated, this means that to keep harmful mutations away, assuming Darwinian mechanisms, “intolerable” amounts of offspring would have to die. So, what does this say about Darwinian mechanisms? First we had Haldane’s Dilemma—which basically led Kimura to his Neutral Theory, and, now, we have Nachman and Crowell’s Paradox. You see, RM + NS just doesn’t add up. So, what’s going on? Or, better yet, what went on?

  27. IrynaB: If 1/Ne is the probability of fixation (which assumes neutrality), then out of 10^6 mutations per year, 10 will become fixed.

    Which is consistent with an even simpler calculation. Assuming neutrality, the rate of fixation is approximately equal to the rate of mutation, independent of population size. The rate is 175 mutations per genome per generation or roughly 10 mutations fixed per year.

    PaV: Typical rates, as I stated in the post, is 1 x 10^9 mutations/generation= 1 x 10^9 mutations/year, which equals 20 x 10^9/20 years = 2.0 x 10^8, which is the exact figure that Nachman and Crowell give.

    Are those supposed to be negative exponents? Nachman’s estimated rate of mutations per nucleotide is 2.5*10^-8 per generation.

    PaV: This means, then, that for an effective population size of 10,000 to 100, 000, mutations will occur, on a per year basis, at the rate of 10^9/10^5,

    It would be the yearly rate of mutations per diploid genome (10) * population (10^5).

    PaV: This means, then, that for an effective population size of 10,000 to 100, 000, mutations will occur, on a per year basis, at the rate of 10^9/10^5, and will become fixed at a rate of 1/4Ne, or 1 fixation for every 400,000 years.

    Doesn’t matter because new mutations are constantly being added and constantly being fixed. In any case, beneficial mutations fix much more rapidly.

  28. PaV:

    The point that you’re missing is that not just any old mutation will do. You need a specific mutation at a specific location along the length of the genome.

    No, we’re not talking about specific mutations — we’re talking about the total number of mutations that have accumulated in the genome for the last few million years. Focusing on a specific mutation is like asking what the odds are that a specific person wins the lottery, while here we are talking about the odds that someone — anyone — will win the lottery. The latter odds are of course much higher.

    It just so happens that my estimate coincides nicely with independently estimated divergence of humans and chimps, whereas your estimate is way off.

    Your figure of 10 million mutations fixed per million years amounts then to a heterozygosity of around 50 mutations per 1500 basepairs, which is roughly 3%. This is perfectly in line with heterozygosities for mammalian species.

    I agree. This supports my calculation.

    But we’ve calculated the heterozygosity per one million years. The highest heterozygosities occur in insect species. For mammals H is in the range of 6 to 13%. So, any species that diverged more than 8 million years ago (I’m doubling here because of the split lineages) couldn’t tolerate this amount of heterozygosity, and that would mean that natural selection would have to kill off members of the species as a way of eliminating the accumulated number of mutations.

    No, because our calculations so far assume neutral mutations.

    And, again, Nachman and Crowell end their paper with the problem of genetic load. Translated, this means that to keep harmful mutations away, assuming Darwinian mechanisms, “intolerable” amounts of offspring would have to die. So, what does this say about Darwinian mechanisms? First we had Haldane’s Dilemma—which basically led Kimura to his Neutral Theory, and, now, we have Nachman and Crowell’s Paradox. You see, RM + NS just doesn’t add up. So, what’s going on? Or, better yet, what went on?

    It seems to add up just fine if you do the calculations correctly. Haldane’s dilemma has been refuted a long time ago, but that’s for another post.

  29. PaV: The point that you’re missing is that not just any old mutation will do. You need a specific mutation at a specific location along the length of the genome.

    That’s a profound misunderstanding of evolution. There isn’t a specific adaptation that evolution has “in mind.” Rather, it’s an opportunistic process. If a particular mutation is beneficial, it will tend to propagate through the population.

    PaV: Total number of years is 414,000 years for the first beneficial mutation to occur and become fixed.

    Your calculation of fixation is incorrect. The presumption is that critical mutations are beneficial, and beneficial mutations fix faster than neutral ones. If the population has structure (subdivisions, as are commonly found in natural populations), then a mutation can fix even faster.

    PaV: Your calculation of 10 million mutations becoming fixed only points out the true function of natural selection:

    You keep conflating neutral and non-neutral mutations. Natural selection doesn’t influence neutral mutations. If a mutation is beneficial, it will be more likely to fix. If it is detrimental, it will be more likely to go extinct.

    Under selection, it’s P ≈ 2s / (1 − e^(−4Ns)). For small s and large N, that approximates to 2s.

    P, probability
    N, effective population
    s, selection coefficient

    If there are large numbers of neutral mutations per individual in each generation, then we expect large numbers to accumulate. A mutation will be effectively neutral if s is much less than 1/2N.
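    The quoted formula can be sketched numerically; the function and parameter values below are mine, not from the comment. For small s and large N the expression collapses to 2s, and as s shrinks toward zero it approaches the neutral value 1/(2N):

```python
import math

# A sketch of the fixation-probability formula quoted above,
# P = 2s / (1 - exp(-4*N*s)); function name and test values are mine.
def p_fix(s, N):
    """Approximate fixation probability of a new mutation with
    selection coefficient s in an effective population of size N."""
    if s == 0:
        return 1 / (2 * N)            # neutral limit of the formula
    return 2 * s / (1 - math.exp(-4 * N * s))

N = 1e5
print(p_fix(0.01, N))    # strongly beneficial: ~2s = 0.02
print(p_fix(1e-8, N))    # effectively neutral: ~1/(2N) = 5e-6
```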

  30. IrynaB [30],

    If I compare species A to species B, then I notice that specific mutations at specific locations along the genome are part of the causes bringing about the difference between species A and species B. So to talk about it not mattering what kind of mutation we’re considering, or where a mutation occurs, is to ignore the actual fact of genomic differences. You are thinking as though any mutation, anywhere along the length of the genome, can be tolerated. Well, we know that just one mutation in the wrong spot can cause the death of an individual organism. So it is wrong to think so many mutations can occur without any damage to the organism. When we invoke the neutral theory, we’re simply assuming randomness in how the mutations come about; we’re not assuming mutation is itself neutral, though a great number of them are.

    Getting back to the problem at hand, again, specific nucleotide substitutions at specific locations are what separate species. What is needed, therefore, is a sequence of specific mutations. If you take species A and species B, and gene A1 differs from gene B1 at three specific locations, then you need three specific mutations. There are 70,000 genes in humans, and my calculation shows that only three of these 70,000 genes can be helped along by accepted Darwinian mechanisms. Obviously, this isn’t sufficient.

    Zachriel [31],

    You’re making the same mistake that IrynaB is making. If gene A is 1500 bp long, and three nucleotide changes are needed, then this 1500 bp section, surrounded as it is by 3.5 x 10^9 bp, has the probability of all three mutations occurring of (1500/3.5 x 10^9)^3, which is on the order of 10^-19, a vanishingly small chance of happening. That’s where fixation helps you. But if a mutation that has no effect on a protein fixes (a neutral mutation), then this is no help whatsoever. Only when one of the three critically needed nucleotide mutations occurs and fixes is any progress made. But the probability of this—not just any old mutation—is quite low as the above calculation demonstrates.
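    Evaluating the probability as written (the variable names are mine):

```python
# Evaluating the probability as stated in the comment above:
# a 1500 bp gene inside a 3.5e9 bp genome, three required hits.
p_one_hit = 1500 / 3.5e9       # chance a random point mutation lands in the gene
p_three_hits = p_one_hit ** 3
print(f"{p_three_hits:.2e}")   # ~7.9e-20
```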

    When you invoke positive selection, you are simply invoking the mechanism that involves the very paradox that Nachman and Crowell present: positive selection is the flip side of genetic load—and they see a great big problem there. As should you.

  31. PaV,

    I generally agree with you, but isn’t their argument that if the mutations that separate us from apes were not there, there would be different mutations that separate us from… whatever other species would exist had evolution worked differently. In other words, it is highly unlikely that a certain mutation would occur, but it is not quite as unlikely that any beneficial or neutral mutation would occur, whatever mutation that might be.

    I think that ID’s best arguments concerning DNA include those that show that mutations are almost always deleterious and are inadequate in explaining complex structures, whether they be human or ape.

  32. Zachriel:

    The rate is 175 mutations per genome per generation or roughly 10 mutations fixed per year.

    Being “fixed” means that everyone in that population has it.

    And in a sexually reproducing population even the most beneficial mutation has a better chance of getting lost than it does at becoming fixed.

  33. IrynaB:

    Haldane’s dilemma has been refuted a long time ago, but that’s for another post.

    That is false- the dilemma stands firm.

    Also there isn’t any evidence that demonstrates any amount of mutational accumulation can account for the transformations required.

    IOW you don’t have anything beyond imagination.

    Is that how your “science” operates- via imagination?

    Strange…

  34. Collin: I generally agree with you, but isn’t their argument that if the mutations that separate us from apes were not there, there would be different mutations that separate us from… whatever other species would exist had evolution worked differently. In other words, it is highly unlikely that a certain mutation would occur, but it is not quite as unlikely that any beneficial or neutral mutation would occur, whatever mutation that might be.

    Quite so. What we expect is a radiating pattern and diversity. A population may diverge to become humans or chimps. A population can be highly diverse. Indeed, nearly every human being has a unique genome. Leaving aside selection, if we divide a population, drift alone will assure divergence.

    Joseph: Being “fixed” means that everyone in that population has it.

    That’s correct. And the rate of fixation for neutral mutations is approximately equal to the neutral mutation rate, regardless of population size. (2Nm) · (1/2N) = m.
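    The identity (2Nm) · (1/2N) = m can be illustrated with a quick simulation; this is only a sketch under assumed toy parameters of my own (N = 50, 20,000 replicates), and it uses NumPy:

```python
import numpy as np

# A minimal Wright-Fisher sketch (toy parameters, not from the thread)
# illustrating why (2Nm) * (1/2N) = m: a single new neutral copy in a
# diploid population of N individuals fixes with probability ~1/(2N).
rng = np.random.default_rng(0)
N = 50                     # diploid individuals, so 2N = 100 gene copies
reps = 20000
fixed = 0
for _ in range(reps):
    copies = 1                         # one brand-new mutant copy
    while 0 < copies < 2 * N:
        # each generation, resample 2N copies binomially at current frequency
        copies = rng.binomial(2 * N, copies / (2 * N))
    if copies == 2 * N:
        fixed += 1
print(fixed / reps)        # expected near 1/(2N) = 0.01
```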

    Thus far we have only considered point mutations, but there are many other sources of genetic novelty.

    Joseph: And in a sexually reproducing population even the most beneficial mutation has a better chance of getting lost than it does at becoming fixed.

    That depends on the selection coefficient. The probability for a specific mutation being fixed is about 2s / ( 1 – e^( -4 Ns) ). For small s and large N, that approximates to 2s, and is therefore much more likely to fix than a neutral mutation.

  35. Collin and Zachriel:

    The mutations we’re speaking about have specifically to do with the differences between chimps and humans. The genetic difference between humans and other species is far greater still.

    Zachriel, your understanding of neutral theory is wrong. I’m going to get Kimura’s “The Neutral Theory of Molecular Evolution” and quote some of his exact numbers. I’m sure you’re not going to trust my opinion, so I’ll give you the opinion of the granddaddy of neutral theory.

  36. PaV: The mutations we’re speaking about have specifically to do with the differences between chimps and humans.

    As long as you don’t try to argue that an ordinary hand of bridge is so unlikely as to be implausible (1 in 600 billion). Any single genome is incredibly unlikely.

    The number of mutations is more than sufficient to account for the changes. Even in a moderate-size population, every mutation is tried every few thousand years.

    10^9 genome / (10 mutations per year * 10^5 population)

    PaV: Zachriel, your understanding of neutral theory is wrong.

    Meanwhile, neutral mutations fix at approximately the rate they occur.

  37. Joseph:

    That is false- the dilemma stands firm.

    No, it doesn’t.

    Also there isn’t any evidence that demonstrates any amount of mutational accumulation can account for the transformations required.

    IOW you don’t have anything beyond imagination.

    Is that how your “science” operates- via imagination?

    I don’t like your tone very much, Joseph. Unless you make an actual argument, I will refrain from responding to your insulting remarks.

  38. Haldane’s dilemma – the cost of selection – is not alive and well.

    This idea is based on the following simplifying premises:

    1) Species are optimised for their environments, and therefore any mutation is deleterious until the (biotic or abiotic) environment changes.

    In this sense, Haldane excludes the possibility of positive selection. Selection is only purifying and the cost of selection is the cost of being out of step with the environment. It is a measure of how much change can occur in an environment before a species becomes extinct.

    However, as we now understand that environments are dynamic, there is substantial scope for positive mutations in natural populations. Soft selection and hard selection both occur, a point Haldane was unaware of.

    2) Species use fixed, delineated resources.

    However, species can encroach on the resources of other species (i.e. the basis for Van Valen’s Red Queen hypothesis). Because of this, differential intrapopulation selection cannot be invoked to set an upper limit on the rate of molecular evolution.

    3) Mutations fix individually and independently.

    This is the most important assumption in my opinion. The main stochastic force in a sufficiently large population is genetic draft (hitchhiking). This is the primary assumption which discounts the generality of the neutral theory. An interesting discussion of the genome-wide reach of this can be found here (Hahn, 2008). Even though a large proportion of mutations being fixed in populations are neutral as Kimura correctly predicted, they are not being fixed by drift.

    Further, a series of positive mutations at linked loci are able to fix concurrently, provided meiotic recombination breaks the linkage.

    This is the most important point because it is the same mistake that PaV @ 28 makes when stating:

    it takes 14,000 years for mutation A to arise, and then it takes mutation A 4 x 10^5 years to become fixed. Total number of years is 414,000 years for the first beneficial mutation to occur and become fixed. Now the genome must search for mutation B. Well, of course, this will take another 414,000 years.

    A small amount of knowledge is a dangerous thing.

  39. Zachriel (#19, 20)

    Thank you for your posts. Concerning the article I linked to on the weblog of paleoanthropologist John Hawks, you write:

    Nice blog. But what is your interest in the difficulty of reconstructing a relatively minor detail of common descent? Hawks certainly agrees that the overall pattern of evidence strongly supports Common Descent, the question being where to fit this particular organism.

    You may be unaware of the fact that the intelligent design movement as such does not question common descent – indeed, the question of common descent is orthogonal to its concerns. ID is the scientific quest for patterns in nature which are best explained as the result of intelligent agency.

    The human brain is the most complex machine known in the universe. It is orders of magnitude more powerful than the world’s best computers, which are the product of design. There is also abundant empirical evidence of a sharp cognitive discontinuity between humans and their nearest genetic relatives, the apes, according to a recent paper by Derek C. Penn, Keith J. Holyoak and Daniel J. Povinelli, entitled Darwin’s mistake: Explaining the discontinuity between human and nonhuman minds in Behavioral and Brain Sciences (2008), 31(2): 109-178. Let me emphasize that in matters pertaining to biology, the authors of the paper are all orthodox evolutionists, even if they strongly disagree with Darwin’s views on psychology:

    Darwin was mistaken: the profound biological continuity between human and nonhuman animals masks an equally profound discontinuity between human and nonhuman minds.

    Now, you are of course free to believe that the human brain, which controls the body of the primate Homo sapiens, evolved gradually through an unguided process of random variation combined with periodic non-random winnowing (selection), and the authors of the paper I cited would also agree with you. For my part, however, I would call this a fanciful hypothesis, which flies in the face of everything we know about machines and how they improve. In my opinion, the main reason why such an outlandish hypothesis is still taken seriously in the sciences is because of the mistaken perception that the alternative hypothesis of design is a “science-stopper”, and that it would stymie research if adopted. Nothing could be farther from the truth.

    A more useful quest, in my opinion, would be to identify the time in the fossil record when the human brain underwent an abrupt change in its processing capacity, which would be one tell-tale signature of guided evolution. Hence my interest in the paper by Hawks et al., “Population Bottlenecks and Pleistocene Human Evolution” in Molecular Biology and Evolution 17:2-22 (2000) at http://mbe.oxfordjournals.org/.....ull/17/1/2 . This paper presents evidence of an abrupt jump at 2 million years ago. (Since then, Hawks has modified the date slightly: he now dates the emergence of Homo erectus in Africa to 1.65 million years ago, as some of the papers I cited above show.)

    Hawks is, as you point out, an orthodox evolutionist, yet he is quite open about the occurrence of a sudden anatomical change at the time when Homo erectus (referred to as early Homo sapiens in Hawks’ 2000 paper) first appeared:

    We, like many others, interpret the anatomical evidence to show that early H. sapiens was significantly and dramatically different from earlier and penecontemporary australopithecines in virtually every element of its skeleton (fig. 1) and every remnant of its behavior (Gamble 1994; Wolpoff and Caspari 1997; Asfaw et al. 1999; Wood and Collard 1999). Its appearance reflects a real acceleration of evolutionary change from the more slowly changing pace of australopithecine evolution.

    For the time being, the tentative hypothesis I shall adopt is that the time when Homo erectus first appeared in Africa is when the human brain underwent its “quantum leap.” I would also expect that further research in the field of genetics should reveal a sharp discontinuity in the number of genetic instructions required to build a human brain (as opposed to a chimp’s), and that this hurdle was crossed about 1.65 million years ago. Finally, I speculate that the magnitude of the informational hurdle that was crossed during the transition from the common ancestor of humans and apes to the first human being may even compare with the magnitude of the informational hurdle that occurred at the beginning of the Cambrian period, and which is described in Dr. Stephen Meyer’s paper, The Cambrian Explosion: Biology’s Big Bang at http://www.discovery.org/a/1772 . See also this resource page:

    Darwin’s Dilemma: The Mystery of the Cambrian Explosion which has lots of helpful, up-to-date articles.

    Lastly, the reason why I cited Hawks’ blog post, The trouble about Kenyanthropus and Ardi is that it serves as a useful antidote to scientific hubris. What it shows is that our picture of hominin evolution beyond the critical 4-million-year stage (near the point when humans and chimps are supposed to have diverged) remains murky and speculative – a situation which is not helped by the fact that the scientists who possess the hominin fossils from that time refuse to make them available to the scientific community at large. Now that’s a science-stopper.

  40. vjtorley: You may be unaware of the fact that the intelligent design movement as such does not question common descent –

    Common Descent is the single most important unifying pattern in biology.

    vjtorley: indeed, the question of common descent is orthogonal to its concerns.

    Common Descent is essential to understanding evolution. The evidence is pervasive and not reasonably subject to dispute. In large part, humans, like every other organism, are what they are because of what they once were.

    vjtorley: The human brain is the most complex machine known in the universe.

    So Common Descent applies to everything in biology but the human brain? Or are you picking a thread while ignoring the tapestry? You have to grapple with the extensive evidence for Common Descent. It doesn’t go away because ID pretends it isn’t there.

    vjtorley: Hence my interest in the paper by Hawks et al., “Population Bottlenecks and Pleistocene Human Evolution” in Molecular Biology and Evolution

    Yes, there is evidence for a bottleneck in human evolution about 2 million years ago, a common natural occurrence. The phylogenetic changes are well within known rates of evolution. The author completely and adamantly rejects your conclusions.

    vjtorley: Hawks is, as you point out, an orthodox evolutionist, yet he is quite open about the occurrence of a sudden anatomical change at the time when Homo erectus (referred to as early Homo sapiens in Hawks’ 2000 paper) first appeared:

    Of course he is. Because his idea of “sudden” is well within the norms of evolutionary theory.

    vjtorley: For the time being, the tentative hypothesis I shall adopt is that the time when Homo erectus first appeared in Africa is when the human brain underwent its “quantum leap.”

    You can hypothesize what you want, but all you have is unsupported speculation, while the evidence supports the natural evolutionary history of Common Descent.

  41. VJtorley:

    The Penn, Holyoak and Povinelli article you cite was published in “Behavioral and Brain Sciences,” one of my favorite venues due to its format of target article followed by numerous invited responses. Here are a number of excerpts from those responses, which collectively suggest that an embrace of their work may be premature.

    Out of their heads: Turning relational reinterpretation inside out

    Louise Barrett

    By being in thrall to a representational theory of mind based on the computer metaphor, Penn et al. are obliged to draw a representational line in the sand that animals are unable to cross in order to account satisfactorily for the differences between ourselves and other animals. The suggestion here is that, if Penn et al. step back from this computational model and survey the problem more broadly, they may recognize the appeal of an embodied, embedded approach, where the ability of humans to outstrip other species may be a consequence of how we exploit the elaborate structures we construct in the world, rather than the exploitation of more elaborate structures inside our heads…From a purely internal perspective, then, the cognitive processes of humans and other animals may well be quite similar. The difference, paradoxically, may lie in our ability to create and exploit external structures in ways that allow us to augment, enhance, and support these rather mundane internal processes.

    The reinterpretation hypothesis: Explanation or redescription?

    José Luis Bermúdez

    One obvious way of answering these questions is to highlight the distinctiveness of human linguistic abilities – either by way of the “rewiring hypothesis” proposed by Dennett (1996), Mithen (1996), and Bermúdez (2003; 2005) or by Carruthers’s appeal to the role of representations in logical form in domain general, abstract thinking (Carruthers 2002). Penn et al. reject these proposals. Whatever their ultimate merits, however, these proposals quite plainly offer explanatory hypotheses. If Penn et al. are to offer a genuine alternative, they need to make clear just how their account is an explanation of the uniqueness of human cognition, rather than simply a description of that uniqueness.

    Darwin’s last word: How words changed cognition

    Derek Bickerton

    The capacity to perceive and exploit higher-order relations between mental representations depends crucially on having the right kind of mental representations to begin with, a kind that can be manipulated, concatenated, hierarchically structured, linked at different levels of abstraction, and used to build structured chains of thought. Are nonhuman representations of this kind? If they are not, Penn et al.’s problem disappears: Other animals lack the cognitive powers of humans simply because they have no units over which higher-order mechanisms could operate. The question then becomes how we acquired the right kind of representations.

    The role of motor-sensory feedback in the evolution of mind

    Bruce Bridgeman

    Both Darwin and Penn et al. are correct. There are enormous differences between human and animal minds, but enormous differences can arise from seemingly subtle changes in mental function. An example is the use of motor-sensory feedback to elaborate human thinking, based on plans that can circulate through the human brain repeatedly….Did Darwin make a mistake? I do not think so. Any mistakes lie elsewhere.

    Imaginative scrub-jays, causal rooks, and a liberal application of Occam’s aftershave

    Nathan J. Emery and Nicola S. Clayton

    The cognitive differences between human and nonhuman animal minds suggested by Penn et al. are without exception impossible to quantify because of the reliance on language in experiments of human cognition…As recent studies in scrub-jays and apes have suggested (Correia et al. 2007; Mulcahy & Call 2006a; Raby et al. 2007), nonhuman animals may think about alternative futures outside the realm of perception. We believe that these complex processes should not be neglected in the type of cognitive architectures discussed by Penn et al.; indeed, we have argued that planning, imagination, and prospection can be included in such models (Emery & Clayton, in press).

    Comparative intelligence and intelligent comparisons

    Allen Gardner

    Oddly, a wave of recent claims of evidence for noncontinuity fail to use any controls for experimenter hints. This failure of method is apparent in virtually all of the experimental evidence that Penn et al. cite. Herrmann et al. (2007) is a very recent example. Fortunately, an online video published by Science clearly shows that experimenters were in full view of the children and chimpanzees they tested. Differences in experimenter expectations or rapport between experimenter and subject easily account for all results.

    Relational language supports relational cognition in humans and apes?

    Dedre Gentner and Stella Christie

    Darwin was not so wrong. We agree with Penn et al. that relational ability is central to the human cognitive advantage. But the possession of language and other symbol systems is equally important. Without linguistic input to suggest relational concepts and combinatorial structures to use in conjoining them, a human child must invent her own verbs and prepositions, not to mention the vast array of relational nouns used in logic (contradiction, converse), science (momentum, limit, contagion) and everyday life (gift, deadline). Thus, whereas Penn et al. argue for a vast discontinuity between humans and nonhuman animals, we see a graded difference that becomes large through human learning and enculturation. Humans are born with the potential for relational thought, but language and culture are required to fully realize this potential.

    Bottlenose dolphins understand relationships between concepts

    Louis M. Herman, Robert K. Uyeyama, and Adam A. Pack

    The studies Penn et al. critique to discount nonhuman animal relational competencies are heavily weighted toward primates and birds, plus a few additional citations on bees, fish, a sea lion, and dolphins. Cognitive differences among nonhuman species are largely ignored, as if all were cut from the same mental cloth….Penn et al. make a top-down claim for genetic pre-specification in humans alone of a module for higher-order cognition. However, bottom-up theories may offer better paths to understanding nonhuman animal cognitive potential…

    Taking symbols for granted? Is the discontinuity between human and nonhuman minds the product of external symbol systems?

    Gary Lupyan

    The human ability to reason about unobservable causes, to draw inferences based on hierarchical and logical relations, and to formulate highly abstract rules is not in dispute. Much of this thinking is compatible on an intuitive level with Penn et al.’s RR hypothesis. But although it is indeed “highly unlikely that the human ability to reason about higher-order relations evolved de novo and independently with each distinctively human cognitive capability” (sect. 11, para. 7), it is not unlikely that such uniquely human abilities depend on the use of external symbol systems…Although the authors provide a compelling demonstration for an insensitivity to structural relations and the use of symbols by nonhuman animals, in taking for granted the biological basis for these abilities in human animals, the very premise of a biologically based fundamental discontinuity between human and nonhuman minds remains unfulfilled.

    An amicus for the defense: Relational reasoning magnifies the behavioral differences between humans and nonhumans

    Arthur B. Markman and C. Hunt Stilwell

    As argued by the target article, role-governed categories and analogical reasoning are a result of straightforward differences in representational capacity between human and nonhuman animals. We suggest that these abilities serve to magnify the apparent cognitive differences between human and nonhuman animals, because they are crucial for the development of cultural systems that increase in complexity across generations…This view helps to explain how the cognitive abilities of human and nonhuman animals could simultaneously appear to be very similar and very different. Small differences in representation ability support large differences in the available knowledge base that humans and nonhuman animals have to reason with. What this work does not explain is how the leap from feature-based representations to relational representations is made.

    Putting Descartes before the horse (again!)

    Brendan McGonigle and Margaret Chalmers

    One difficulty in following this thesis is that when espousing their case for human structural superiority, the authors veer between task criteria which are adult end-state, context free, and formal, such as “systematicity,” omnidirectionality, and “analogy” considered in isolation from content – and those which are embedded in “world knowledge” – such as functional analogy, theory of mind (ToM), and higher-order structural apprehension of perceived relations of similarity and difference…This not only conflates private with cultural constructions as templates for the individual mind, it also ditches in the process those elements of human cognition regarded by many as core and normative, namely, commonsense reasoning, bounded rationality, choice transitivity, and subjective scales of judgement (based on adaptive value rather than truth) as well as other sources of knowledge derived from perception and action – all of which are subject to principled influences of learning and development. In contrast, Penn et al.’s own characterisation of human cognition both diminishes the role of development and eliminates completely the role of learning. This is despite the fact that many of the authors they cite are at pains to point out that the human competences they describe are often the product of many years of human development (Halford 1993; Piaget 1970) and/or considerable explicit tuition (Kotovsky & Gentner 1996; Siegal et al. 2001) within a physical and social environment.

    ….In an exciting area still largely in a vacuum created more by experimental neglect than animal failures, this rush to judgement by Penn et al. will put this fragile yet exciting new comparative agenda at risk.

    Difficulties with “humaniqueness”

    Irene M. Pepperberg

    In sum, although Penn et al. do indeed present cases for which no good data as yet exist to demonstrate equivalent capacities for humans and nonhumans, I disagree with their insistence that the present lack of such data leads to a theoretical stance requiring a sharp divide between human and nonhuman capacities. Absence of evidence is not a sure argument for evidence of absence. A continuum appears to exist for many behavior patterns once thought to provide critical distinctions between humans and nonhumans; I discuss some such instances missed by Penn et al.; others also exist, and I suspect that, over time, researchers will find more continua in other behavior patterns. Moreover, although I suspect that some of the papers that I cite were not published when this target article was written, their recent appearance only supports my point – that new data may require a reappraisal of purported certainties. One may argue about definitions of discontinuity – for example, how to reconcile some societies’ advanced tool creation and use with those of primitive societies whose tools are not much better than those of corvids (Everett 2005; Hunt & Grey 2007) – and I do not deny the many differences that indeed exist between humans and nonhumans, but I believe future research likely will show these to be of degree rather than of kind.

    Quotidian cognition and the human-nonhuman “divide”: Just more or less of a good thing?

    Drew Rendall, John R. Vokey, and Hugh Notman

    Ultimately, then, we completely agree with Penn et al. that the current zeitgeist in comparative cognition is wrong; however, the mistake may lie not in emphasizing mental continuity, but rather in the kind of mental continuity emphasized. Animals and humans are probably similar: however, similar not because animals are regularly doing cognitively sophisticated things, but because humans are probably doing cognitively rather mundane things more often than we think.

    Explaining human cognitive autapomorphies

    Thomas Suddendorf

    Abstract: The real reason for the apparent discontinuity between human and nonhuman minds is that all closely related hominids have become extinct. Nonetheless, I agree with Penn et al. that comparative psychology should aim to establish what cognitive traits humans share with other animals and what traits they do not share, because this could make profound contributions to genetics and neuroscience. There is, however, no consensus yet, and Penn et al.’s conclusion that it all comes down to one trait is premature.

    Languages of thought need to be distinguished from learning mechanisms, and nothing yet rules out multiple distinctively human learning systems

    Michael Tetzlaff and Peter Carruthers

    Abstract: We distinguish the question whether only human minds are equipped with a language of thought (LoT) from the question whether human minds employ a single uniquely human learning mechanism. Thus separated, our answer to both questions is negative. Even very simple minds employ a LoT. And the comparative data reviewed by Penn et al. actually suggest that there are many distinctively human learning mechanisms.

    Analogical apes and paleological monkeys revisited

    Roger K. R. Thompson and Timothy M. Flemming

    Penn et al. suggest that, in part, the ability to label relational information is unique to the human mind and responsible for the discontinuity implicated by the relational reinterpretation (RR) hypothesis. In fact, we believe there is comparative evidence to suggest that similar symbolic systems also apply to our nearest primate relatives. In the case of other animals, like monkeys, however, no evidence as yet indicates that a conditional cue can acquire the full status of a symbolic label, although it would seem that symmetric treatment of a conditional cue lays the foundation for a recoding of relational information as set forth by the RR hypothesis.

    On possible discontinuities between human and nonhuman minds

    Edward A. Wasserman

    The history of comparative psychology is replete with proclamations of human uniqueness. Locke and Morgan denied animals relational thought; Darwin opened the door to that possibility. Penn et al. may be too quick to dismiss the cognitive competences of animals. The developmental precursors to relational thought in humans are not yet known; providing animals those prerequisite experiences may promote more advanced relational thought. Here follow some excerpts:

  42. vjtorley:

    The human brain is the most complex machine known in the universe. It is orders of magnitude more powerful than the world’s best computers, which are the product of design.

    This is only partially true. Computers have long surpassed the brain’s capacity in many areas. That’s why I use Mathematica to solve systems of algebraic equations.

    Sooner or later computers will be more powerful than the brain in all aspects. And then what? Does the brain stop being designed? Will we have surpassed the Designer’s capabilities?

  43. IrynaB: Sooner or later computers will be more powerful than the brain in all aspects. And then what?

    It will be a John Henry moment.

    Once upon a time, chess was considered the ultimate test of human intelligence. Now, humans strengthen their chess-playing abilities with computers, just like they strengthen their bodies with machines.

  44. Zachriel [38]:

    The number of mutations are more than sufficient to account for the changes. Even in a moderate size population, every mutation is tried every few thousand years.

    This is a claim you are making without examination. As far as you’re concerned, this just seems sufficient. So, why don’t we look at actual numbers and actual probabilities.

    Let’s assume that we’re dealing with 10 million years of neutral drift. Let’s assume that there are 100 million fixations; and, let’s assume that none of the mutations reverts. Per Nachman’s paper, a little less than ten percent of the human genome codes for proteins, and there are 70,000 genes found in the genome.

    How many mutations/substitutions will occur per gene? Of the hundred million fixations = substitutions, on average, 10 million will occur in the coding portion of the genome. [This amounts to about 1/3 of a percent of the genome, while the difference between chimps and humans is now thought to be about 3%. We’ll revisit this.] Thus, 10^7 substitutions, divided by 7 x 10^4 genes, means that, on average, 150 substitutions occur per gene. Nachman tells us that the average gene consists of 1500 bp = nucleotides. So, 150/1500 = 10%, or 1 in 10.

    Let’s assume the minimal situation where only one SNP = substitution = mutation = fixation (whatever term you want to use) is needed to distinguish a chimp gene from a human gene. (Very likely we will need, on average, 6 or 7 substitutions. Why? Because, as calculated above, these 100 million substitutions represent only 0.3% of the genome, whereas chimps and humans, though formerly thought to be distinguished by a 1% difference, are now considered to differ by up to 3%.) So, again, this represents a very conservative view of things. [Let me add that if we want to say that some of the genes don’t change, then this only means that others have to change a lot more, and the odds I’m about to calculate just end up being pushed around. That is, the odds will end up the same no matter how you cut things.]

    Now there is a 1 in 10 chance of the “correct” mutation taking place along the 1500 bp length of each gene. So, what are the odds of all these “random” mutations [remember, these are all “neutral” mutations] occurring? Well, because these individual gene lengths are independent of each other, quite simply, we multiply the odds of one gene converting from chimp to human by the odds of the next gene converting from chimp to human, and that product by the odds of the very next gene converting, and so forth. The odds so calculated: 10^-70,000; that is, 1 in 10 raised to the 70,000th power.

    Here is a figure that assumes the fixed mutations never revert, uses a time frame that is much longer than any thought to exist between chimps and humans (Ardi pushes it to what, 6.5 mya?), and assumes only the most minimal of differences between genes, and still we come up with a figure that is absolutely astronomical in its improbability. Does anyone out there in Darwinland still want to maintain that this could happen by chance?

    [N.B. Now, Nachman’s figure of 176 mutations/genome/replication is, I believe, for a diploid genome. Normally you would deal with the number only on one strand since the coding direction means only half of the genome is used to code genes. So, it could be justified to halve the number of mutations, which would have the effect of squaring the improbability.]
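    The back-of-envelope arithmetic above can be reproduced in a few lines of Python. This is only a sketch of the comment’s own figures (the fixation count, gene count, gene length, and coding fraction are the assumptions stated above, not independently verified values):

```python
from fractions import Fraction

# Figures assumed in the comment above (not independently verified):
fixations = 10**8                  # assumed neutral fixations over ~10 My
coding_fraction = Fraction(1, 10)  # ~10% of the genome codes for proteins
genes = 7 * 10**4                  # gene count cited from Nachman
gene_length = 1500                 # average gene length in bp, cited from Nachman

coding_subs = fixations * coding_fraction  # 10^7 substitutions in coding DNA
subs_per_gene = coding_subs / genes        # ~143, rounded to 150 in the comment
per_gene_odds = Fraction(150, gene_length) # 150/1500 = 1/10

# Multiplying 1-in-10 odds independently across all 70,000 genes
# gives one factor of 10 per gene, i.e. the 10^-70,000 figure quoted above.
print(coding_subs, float(subs_per_gene), per_gene_odds, f"10^-{genes}")
```

    Whether multiplying those per-gene odds together is the right probability model is, of course, exactly what the replies below dispute.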

    Now, there is a further point to be made here: the neutral theory is being invoked as the means by which the needed ‘evolutionary’ changes come about. Well, if you invoke the neutral theory, you’re then discounting any role for NS. Now, how can you justify using the neutral theory to account for all the needed mutations (but, of course, the above calculation shows just how woefully inadequate the neutral theory is to account for the changed basepairs) and then not agree with the design argument? What I mean is this: Dawkins asserts that evolution is really not a “blind chance” process. Indeed, he would admit, mutations occur randomly, but then NS comes along and in some mysterious way, guided by still unknown forces of nature acting via preferential death, “endless forms most beautiful” come about. So, he would tell us, through NS an otherwise random process gives the “appearance of design”. He tells us upfront that nature “appears” designed. So, if you want to account for random mutations via the neutral theory—thus leaving NS behind—then you should, per Dawkins, believe not only that life “appears” designed, but actually IS designed.

    If, in reaction to this argument, you are then going to tell me that NS is really at work—somehow!!—then how will you deal with the genetic load calculations that Nachman and Crowell say present a “paradox”? Hasn’t Darwinism really painted itself into a corner?

  45. Zachriel:

    The number of mutations are more than sufficient to account for the changes.

    Since you don’t even know what the changes were, you can’t know the number of mutations needed.

    Since you can’t know the number of mutations needed, you cannot know that there was a sufficient number.

    More science, less handwaving.

  46. Zachriel:

    You have to grapple with the extensive evidence for Common Descent. It doesn’t go away because ID pretends it isn’t there.

    1- The evidence for Common Descent isn’t extensive

    As a matter of fact the vast majority of the fossil record- the marine invertebrates- does not support the premise.

    2- That evidence can be used as evidence for Common Design

    3- ID doesn’t say anything about it

    4- There isn’t any evidence that the transformations required are even possible

    IOW Common Descent can’t be too essential for anything but to promote a non-specific PoV.

  47. PaV: So, 150/1500 = 10%, or 1 in 10.

    You need to work on your arithmetic. If the mutations are distributed evenly across the genome, then it would be 3% of each gene. (Notice the brevity of the calculation.)

    PaV: Does anyone out there in Darwinland still want to maintain that this could happen by chance?

    The total number of mutations tried over the relevant history (given your assumptions) is population (10^5) * mutations per year per individual (10) * years (10^7) = a lot (10^13). As the genome is about a billion in length (~10^9), that means every point on the genome has been tested bunches of times.

    Assuming neutrality, the number of mutations that become fixed is mutations per year per individual (10) * years (10^7) = more than enough (10^8).
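    A minimal Python sketch of this order-of-magnitude arithmetic, using only the round figures assumed above (population size, mutation rate, timescale, and genome length are rough inputs, not measured values):

```python
# Round figures assumed above (order-of-magnitude only):
population = 10**5  # breeding population size
mu = 10             # mutations per individual per year
years = 10**7       # time available
genome = 10**9      # approximate genome length in bp

mutations_tried = population * mu * years    # 10^13 mutations tried in total
trials_per_site = mutations_tried // genome  # ~10^4 tries per genome position

# Under neutrality the substitution rate equals the per-individual mutation
# rate (population size cancels out), so fixations accumulate at mu per year:
fixed = mu * years                           # 10^8 neutral fixations
print(mutations_tried, trials_per_site, fixed)
```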

    PaV: So, why don’t we look at actual numbers and actual probabilities.

    Let’s.

    Chimpanzee Sequencing and Analysis Consortium: Through comparison with the human genome, we have generated a largely complete catalogue of the genetic differences that have accumulated since the human and chimpanzee species diverged from our common ancestor, constituting approximately thirty-five million single-nucleotide changes, five million insertion/deletion events, and various chromosomal rearrangements.

    So our rough estimate is sufficient to account for the point mutations, even if we assume neutrality. Of course, as the authors state, there are other mechanisms at work. Gene duplication and selection can work much faster.

    PaV: The odds so calculated: 10^-70,000;

    Assuming many of the changes are neutral, then you are calculating the odds of an everyday bridge hand. Every hand is incredibly unlikely, but some hand is inevitable.

    PaV: Now, there is a further point to be made here: the neutral theory is being invoked as the means by which the needed ‘evolutionary’ changes come about.

    I would have thought gene duplication and selection would have been important, though many changes are clearly neutral.

    PaV: Well, if you invoke the neutral theory, you’re then discounting any role for NS.

    Uh, no. Just because many of the changes are neutral doesn’t mean they’re all neutral or that selection is unimportant.

    PaV: Indeed, he would admit, mutations occur randomly, but then NS comes along and in some mysterious way, guided by still unknown forces of nature acting via preferential death, “endless forms most beautiful” come about.

    It’s not all that mysterious. Quite a lot is known about evolution—though certainly not everything.

    PaV: … then how will you deal with the genetic load calculations that Nachman and Crowell say presents a “paradox”.

    Nachman calculates 3 deleterious mutations per genome per individual per generation. This is only an issue in certain slow reproducers (such as humans). In other words, the entire world of biology works just fine. It is reasonable, given the vast evidence supporting evolution, that these slow reproducers would not have evolved if they couldn’t persist.

    Nachman tests a particular selection model of evolution. If Nachman’s paradox were a problem other than a defect in the model, then the human population would be rapidly decreasing. It isn’t. It’s a defect in the model. Nachman suggested synergistic epistasis as a plausible and testable modification of the model. Other mechanisms include loss of egg or sperm before fertilization or spontaneous abortion.

  48. IrynaB,

    You don’t have an actual argument.

  49. Zachriel:

    150 divided by 1500 is obviously one tenth. Where in the world did you get 3%?

    You give a citation that talks about 35-40 million accumulated mutations. But I used a figure of 100 million. It’s still not enough. Yes, every possible mutation has occurred, but just because it occurred doesn’t mean it became fixed. If you invoke NS to help the fixation process, you run into Nachman’s Paradox.

    Zachriel:

    Assuming many of the changes are neutral, then you are calculating the odds of an everyday bridge hand. Every hand is incredibly unlikely, but some hand is inevitable.

    The odds of being dealt any particular bridge hand are 1 in 52!/(13! 39!), which works out to roughly 6.35 x 10^11 possible hands. You’re not seriously comparing that to 10^70,000, are you? This is a rather glib response.
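    The bridge-hand combinatorics are easy to check directly (a one-line Python sketch, nothing more):

```python
import math

# Number of distinct 13-card bridge hands from a 52-card deck:
hands = math.comb(52, 13)
print(hands)  # 635013559600, roughly 6.35 x 10^11
```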

    You’re not taking the problem Nachman is posing seriously either. It was precisely this problem that led Kimura to his Neutral Theory.

    It’s Christmas. See you in a few days.

    Merry Christmas everyone.

  50. Zachriel (#42)

    Thank you for your post. I have tried to make myself clear, so I shall say this one more time: I am not disputing common descent. That includes the common descent of humans and apes. Consequently, when you write:

    Common Descent is essential to understanding evolution. The evidence is pervasive and not reasonably subject to dispute….

    … you won’t get any argument from me. Nor am I proposing, as you seem to think, that “Common Descent applies to everything in biology but the human brain.”

    What I do vehemently dispute, however, is your assertion:

    Common Descent is the single most important unifying pattern in biology.

    The bare hypothesis of common descent explains nothing without a mechanism for explaining the genetic diversity that we see in living organisms today. The prevailing scientific hypothesis is that a combination of chance and necessity (random variation plus selection) can explain all of the features of living organisms today. I reject that hypothesis as empirically highly dubious. I may be mistaken, but I think the onus is on you to justify such an outlandish hypothesis.

    Which brings us to Hawks’s 2000 paper at http://mbe.oxfordjournals.org/.....ull/17/1/2 (“Population Bottlenecks and Pleistocene Human Evolution,” by John Hawks, Keith Hunley, Sang-Hee Lee and Milford Wolpoff. In Molecular Biology and Evolution 17:2-22 (2000)).

    You write:

    Yes, there is evidence for a bottleneck in human evolution about 2 million years ago, a common natural occurrence. The phylogenetic changes are well-within known rates of evolution. The author completely and adamantly rejects your conclusions….[H]is idea of “sudden” is well-within the norms of evolutionary theory.

    “Completely and adamantly”? Were we reading the same paper? Here’s what Hawks et al. actually had to say:

    A hominid speciation is documented with paleoanthropological data at about 2 MYA [million years ago - VJT] by significant and simultaneous changes in cranial capacity and both cranial and postcranial characters. This marks the earliest known appearance of our direct ancestors. The new species has been called Homo erectus or Homo ergaster by some authors. Following others (Jelinek 1978; Aguirre 1994; Wolpoff et al. 1994), we call this emerging evolutionary species early Homo sapiens, as it begins an unbroken lineage leading directly to living human populations. The first specimens are humanity’s earliest known direct ancestors.

    We, like many others, interpret the anatomical evidence to show that early H. sapiens was significantly and dramatically different from earlier and penecontemporary australopithecines in virtually every element of its skeleton (fig. 1) and every remnant of its behavior (Gamble 1994; Wolpoff and Caspari 1997; Asfaw et al. 1999; Wood and Collard 1999). Its appearance reflects a real acceleration of evolutionary change from the more slowly changing pace of australopithecine evolution….

    …These consecutive species samples are about half a million years apart, but the amounts of change between them are quite different. From the earlier to later australopithecine species, cranial capacity (approximate midsex average) goes from 450 cm3 [cubic centimeters - VJT] to 475 cm3, while from A. africanus to the earliest African H. sapiens sample the change is much greater: 860 cm3…

    Yet, brain size is only one of the evolving systems reflected in early H. sapiens anatomy. There are four interrelated complexes of changes at the very beginning of H. sapiens (Wolpoff 1999): (1) changing brain size (larger, especially longer vault, with a broad frontal bone and an expanded parietal association area; neural canal expansion); (2) changing dental function (more anterior tooth use, greater emphasis on grinding and less on crunching) as reflected in broader faces and larger nuchal areas; (3) development of a cranial buttressing system to strengthen the vault, including vault bone thickening and prominent tori; and (4) dramatic expansion of body height (estimated average weights double) and numerous changes in proportions (fig. 1). These, and other changes involving the visual and respiratory systems, reflect significant adaptive differences for the new species and give us important insight into the mode of speciation because they seem to happen all together, at the time of its origin.

    A Genetic Revolution
    If we assume these earlier australopithecines are a group of very closely related species, for instance, nearer to each other than Pan and Homo, we can expect that they differ much more in allele frequencies than in the presence or absence of specific genes for these features. Therefore, a reshuffling of existing alleles could result in the frequencies of features we observe in early H. sapiens. Thus, our second question is about this reshuffling, whether early H. sapiens is a consequence of rapid speciation with significant founder effect or the result of a long, gradual process of anagenic change. The first explanation, cladogenesis, is suggested by the fact that no gradual series of changes in earlier australopithecine populations clearly leads to the new species, and no australopithecine species is obviously transitional

    In sum, the earliest H. sapiens remains differ significantly from australopithecines in both size and anatomical details. Insofar as we can tell, the changes were sudden and not gradual

    Behavioral Changes
    This section addresses a second reason for suspecting there was a bottleneck and a genetic reorganization at the beginning of H. sapiens evolution. The characteristic early H. sapiens features denote a new adaptive pattern that many describe as the first true hunting, gathering, and scavenging adaptation and that we believe may be uniquely associated with the Oldowan archaeological occurrences. These facts provide insight into what some of the sources of selection promoting the new species might have been…

    Body size is a key element in the behavioral changes reflected at the earliest H. sapiens archaeological sites because of the locomotor changes that large body size denotes and the increased metabolic resources it requires. Moreover, the marked increase in brain size for early H. sapiens has significant metabolic consequences, because the human brain, which is 2% of the body weight, uses some 20%–25% of its metabolic energy. Larger brain size evolved in spite of these increased energy requirements, but the additional energy had to come from somewhere, and the answer must certainly lie in meat (Milton 1999). Larger body size in nonhuman primates is associated with the consumption of increasing amounts of low quality foods, and an increase in the amount of time and energy spent eating. The greater human body mass, and especially the longer legs, reflected a new foraging strategy related to this, in which, as Leonard and Robertson (1996) note: “large day ranges, increased meat consumption, division of foraging activities, and sharing of resources … may have both necessitated and allowed for a higher-quality diet.”…

    These behavioral changes are far more massive and sudden than any earlier changes known for hominids. They combine with the anatomical evidence to suggest significant genetic reorganization at the origin of H. sapiens, and from this genetic reorganization, we deduce that H. sapiens evolved from a small isolated australopithecine population and that small population size played a significant role in this evolution…

    All the currently available genetic, paleontological, and archaeological data are consistent with a bottleneck in our lineage more or less at about 2 MYA. At the moment, genetic data cannot disprove a simple model of exponential population growth following such a bottleneck and extending through the Pleistocene. Archaeological and paleontological data indicate that this model is too oversimplified to be an accurate reflection of detailed population history, and therefore we conclude that genetic data lack the resolution to validly reflect many details of Pleistocene human population change. (Emphases mine – VJT.)

    From all this, you conclude that “The phylogenetic changes are well-within known rates of evolution,” but the authors nowhere say this. Nor do they assert the contrary. Indeed, what struck me about the article was its refreshingly honest, non-dogmatic tone. The authors simply assert that the change from Australopithecus to Homo erectus (or early sapiens, as they refer to him) was relatively sudden, that it occurred over a period of no more than half a million years, and that it was associated with a dietary change to meat.

    Contrary to what you assert, the authors do not “completely and adamantly reject” anything, except the hypothesis of a recent genetic bottleneck. They make no attempt to explain the transformation from Australopithecus to Homo erectus, as they are more interested in the implications for ancient population sizes. They provide no calculations to support your claim that the changes in the human lineage were “well-within” normal rates of evolutionary change.

    You are perfectly free, if you wish, to latch on to the words “meat diet” and “half a million years” (which are in the article) and pretend that you have magically solved the paleoanthropological puzzle of how a systematic anatomical and behavioral revolution occurred in the human lineage. But you haven’t solved anything. Describing what happened (e.g. people started eating meat), when it happened (e.g. 2 million years ago) and putting a lower limit on how fast it happened (e.g. over no more than 500,000 years) is not the same thing as explaining how it happened. I shouldn’t have to belabor this basic point, so I won’t.

    The hypothesis that the anatomical and behavioral changes that took place about two million years ago in the human lineage cannot be explained as the outcome of chance plus necessity is a scientific one. It stands or falls on the evidence. If it dies, then so be it. But before you pronounce it dead, ask yourself: what kind of evidence would be needed to destroy it?

    At the very least, we would need to know how many extra genetic instructions were required to transform the body of an australopithecine into that of a human being, whether a viable pathway existed which would enable a transformation to occur from one to the other, and whether the sequence of mutations required to effect this transformation was reasonably probable, given the existence of Australopithecus. By “reasonably probable” I mean: not astronomically improbable. That should be a hurdle that Darwinists can clear.

    Recent discoveries of Neanderthal DNA and of the specific role played by the various genes that distinguish humans from chimps may make this scientific question a tractable one within the next few decades. Yes, it will take a lot of spadework, but that can’t be helped. If you’re putting forward a speculative hypothesis (that unguided evolution explains everything), you have to establish it properly. Digging up a few fossils will impress no-one.

  51. IrynaB (#44)

    Thank you for your post. You write:

    Sooner or later computers will be more powerful than the brain in all aspects. And then what? Does the brain stop being designed? Will we have surpassed the Designer’s capabilities?

    Should it ever happen that computers surpassed us in all aspects and then started turning on us, their makers, that would certainly disprove the hypothesis that we were designed by a benevolent cosmic Designer.

    However, this bleak eventuality would not disprove the more pessimistic hypothesis that a bunch of mischievous or malevolent aliens designed DNA on earth four billion years ago, foreseeing the possibility that it would evolve into an intelligent life-form that would subsequently be gobbled up by its own technological creations.

    Personally, the Terminator scenario leaves me unfazed. There is a long trail of predictions by leading computer scientists that computers would outclass the human mind. Curiously, this apocalyptic event was always supposed to happen within the lifetime of the technological guru making the prediction. I wonder what that says about gurus.

    If the world were made by God, as I believe, surely He must have foreseen what we’d get up to, and therefore designed the cosmos so that computers could never turn on their makers en masse and destroy humanity.

    You write that computers are getting better and better. Well, yes, but our knowledge of the human brain is getting deeper and deeper at the same time. We have so much to learn about it. The distance between the computer and the brain is not shrinking, unless you look at very superficial comparisons like MIPS.

    The fact that a computer managed to beat Kasparov at chess proves nothing more than the fact that an optimal strategy in chess can be computed, given enough processing resources. In other words, chess is just a glorified game of noughts and crosses. But not all games are like that.

    In the meantime, I’m sitting in front of a computer, and somehow I feel underwhelmed. The silly things don’t impress me any more than they did ten years ago, when I was a computer programmer. In fact, calling them things is crediting them with too much. They’re assemblages of parts, lacking intrinsic finality. I’d be much more impressed if they built a computer with a stomach than if they built one as fast as a human brain.

    Let me conclude with an anecdote. There was a philosopher (Stuart Sutherland) who once remarked that he’d believe a computer was conscious when one of them ran off with his wife. A wise observation, if you ask me.

    Merry Christmas.

  52. Voice Coil (#43)

    Thank you for the long list of citations. You are quite right to assert that many scientists remain unconvinced of the existence of a great cognitive divide between humans and other animals. What I wanted to point out was that the hypothesis of a radical discontinuity is a scientifically respectable point of view which merits serious consideration. Penn, Holyoak and Povinelli are not its only defenders. I could also mention a recent article entitled “Origin of the Mind” by Professor Marc Hauser, in Scientific American, September 2009 (unfortunately, only the first page is online).
    Marc Hauser is a professor of psychology, human evolutionary biology, and organismic and evolutionary biology at Harvard University. Here’s an excerpt from his article:

    “[M]ounting evidence indicates that, in contrast to Darwin’s theory of a continuity of mind between humans and other species, a profound gap separates our intellect from the animal kind. This is not to say that our mental faculties sprang fully formed out of nowhere. Researchers have found some of the building blocks of human cognition in other species. But these building blocks make up only the cement footprint of the skyscraper that is the human mind… Recently the author identified four unique aspects of human cognition… [These are:]”

    * “Generative computation,” which allows us to “create a virtually limitless variety of words, concepts and things.”
    * “Promiscuous combination of ideas,” meaning the ability to mingle “different domains of knowledge,” e.g., art, sex, causality, etc.
    * “Mental symbols,” which allow us to enjoy a “rich and complex system of communication.”
    * “Abstract thought,” which “permits contemplation of things beyond what we can see, hear, touch, taste or smell.”

    “What we can say with utmost confidence is that all people, from the hunter-gatherers on the African savanna to the traders on Wall Street, are born with the four ingredients of humaniqueness (Hauser’s term for “human uniqueness” – VJT). How these ingredients are added to the recipe for creating culture varies considerably from group to group, however… No other animal exhibits such variation in lifestyle. Looked at in this way, a chimpanzee is a cultural nonstarter…

    “Although anthropologists disagree about exactly when the modern human mind took shape, it is clear from the archaeological record that a major transformation occurred during a relatively brief period of evolutionary history, starting approximately 800,000 years ago in the Paleolithic era and crescendoing around 45,000 to 50,000 years ago…

    “[Other animals'] uses of symbols are unlike ours in five essential ways: they are triggered only by real objects or events, never imagined ones; they are restricted to the present; they are not part of a more abstract classification scheme, such as those that organize our words into nouns, verbs and adjectives; they are rarely combined with other symbols, and when they are, the combinations are limited to a string of two, with no rules; and they are fixed to particular contexts…

    “Still, for now we have little choice but to admit that our mind is different from that of even our closest primate relatives and that we do not know much about how that difference came to be. Could a chimpanzee think up an experiment to test humans? Could a chimpanzee imagine what it would be like for us to solve one of their problems? No and no. Although chimpanzees can see what we do, they cannot imagine what we think or feel because they lack the requisite machinery. Although chimpanzees and other animals appear to develop plans and consider both past experiences and future options, there is no evidence that they think in terms of counterfactuals – imagining worlds that have been against those that could be. We humans do this all the time and have done so since our distinctive genome gave birth to our distinctive minds. Our moral systems are premised on this mental capacity.

    The fact that these differences between humans and other animals are difficult to quantify, as some critics have pointed out, does not make them any the less real.

    Merry Christmas.

  53. VJ:

    What I wanted to point out was that the hypothesis of a radical discontinuity is a scientifically respectable point of view which merits serious consideration.

    Not in the sense you intend: a transformation that occurred within a 24-hour period, reflecting the action of a deus ex machina, as you have elsewhere characterized the putative transition:

    When I say “literally overnight” I mean literally overnight. I have no doubt that improvements in brain architecture occurred over a period of millions of years, but I would contend that at a critical point in evolutionary history, when the brains of our forebears became complex enough to be able to integrate information in the way that people need to in their everyday lives, our ancestors acquired an immaterial capacity to form abstract concepts – and in so doing, became true human beings.

    http://www.uncommondescent.com.....ent-333570

    This view is beset with a contradiction, as you attribute this transition both to “improvements in brain architecture that occurred over a period of millions of years,” and to changes that occurred in a literal 24-hour period and were not physical at all, but instead a sudden “ensoulment.”

    Further, in stating “our ancestors acquired an immaterial capacity to form abstract concepts – and in so doing, became true human beings” you omit mention that they also became “true Scotsmen.” The phrase “true human beings” lays the groundwork for arbitrarily denying continuity as our understanding of human evolution attains finer and finer resolution.

    The alternative view is not to deny that there is a cognitive chasm between human beings and our extant closest relatives, but rather to argue that these differences were attained when small evolutionary changes, attainable by Darwinian processes, resulted in hugely significant consequences (evolutionary tipping points, as it were) which in turn, due to their powerful adaptive consequences, were quickly elaborated by further selection – “overnight” in the sense of a few tens or hundreds of thousands of years.

  54. PaV: 150 divided by 1500 is obviously one tenth. Where in the world did you get 3%?

    You’re doing a bunch of calculations to arrive at something that requires only a single step. If mutations are evenly distributed across the genome, and the global rate is 3%, then that would mean 3% of any segment, including coding sequences.

    PaV: Per Nachman’s paper, a little less than ten percent of the human genome codes for proteins, and there are 70,000 genes found in the genome.

    But to belabor the point, the human genome is ~3*10^9 in length. If there are 70000 genes with an average length of 1500 then the entire length of nonsynonymous coding sequences of the genome is roughly 10^8, or ~3% of the total genome.
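    For what it’s worth, that back-of-the-envelope figure is easy to check in a few lines of Python (the gene count and average length are this thread’s working assumptions, not current estimates):

```python
# Coding fraction of the genome, using the figures assumed in this
# thread (70,000 genes of ~1,500 bp each; modern gene counts are lower).
genome_length = 3.0e9        # base pairs in the human genome
num_genes = 70_000           # gene count assumed from Nachman-era sources
avg_gene_length = 1_500      # average coding length per gene, in bp

coding_length = num_genes * avg_gene_length   # ~1.05e8 bp
fraction = coding_length / genome_length      # ~0.035, i.e. the ~3% quoted
print(f"coding fraction ~ {fraction:.1%}")
```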

    PaV: Yes, every possible mutation has occurred, but just because it occurred doesn’t mean it became fixed.

    If any single-nucleotide mutations are beneficial, they would likely be tried, and are much more likely to become fixed than neutral mutations. And there are other mechanisms in play, including gene duplication.

    PaV: The odds of an everyday bridge hand would be the binomial coefficient of 52!/4!48!, which equals (52 x 51 x 50 x 49)/4 x 3 x 2 x 1 = (roughly) 10^5 possible hands.

    There are 600 billion individual bridge hands, somewhat more than 10^5. There are 53 octillion possible bridge deals.
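    Both counts can be verified exactly with Python’s integer combinatorics (a quick sketch; the variable names are mine):

```python
from math import comb, factorial

# Distinct 13-card bridge hands: C(52, 13)
hands = comb(52, 13)
print(hands)              # 635013559600, i.e. ~600 billion

# Distinct deals (four seats of 13 cards each): the multinomial
# coefficient 52! / (13!)^4
deals = factorial(52) // factorial(13) ** 4
print(f"{deals:.2e}")     # ~5.36e28, i.e. ~53 octillion
```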

    PaV: You’re not taking the problem Nachman is posing seriously either.

    Of course we are. You are not responding to the argument. Nachman tested a simple selection model, found it wanting, and suggested improvements to the model. If Nachman’s paradox meant that humans weren’t genetically viable, it would mean the population of humans would be declining. It’s not. Not even close.

  55. All this talk about mutations, and still not one person on this planet knows whether or not the transformations required are even possible given any amount of mutational accumulation.

    What does that tell you about the theory of evolution? (Hint: its grand claims cannot be tested.)

  56. Zachriel [56]:

    You’re doing a bunch of calculations to arrive at something that requires only a single step. If mutations are evenly distributed across the genome, and the global rate is 3%, then that would mean 3% of any segment, including coding sequences.

    You’re not thinking through my numbers. The 10% represents the probability of 150 mutations becoming “fixed” in an individual gene in 10 million years, replacing one of the 1500 bp in the typical gene.

    Obviously 10% is higher than the 3% that is the known difference between chimps and humans, and, thus, we know it’s unrealistically high; but I’m doing this to give evolution an even greater chance of doing something than it has a right to deserve.

    Now the other favorable assumption I’m making is that only ONE mutation per gene is needed, which, of course, compared to the 3% = 45 mutations/gene that mark the actual difference betw/ chimps and human, is an extremely favorable assumption.

    So, again, the calculation runs this way: you need 1 mutation somewhere along the length of the gene, and, per neutral evolution, this gives us a 150/1500 = 10% chance of getting it through a random process. But, there are 70,000 genes, which can be assumed to be replicated independently of one another, and hence the probability of getting all the needed changes in each of the 70,000 genes, with these incredibly favorable assumptions (assumptions which favor Darwinism), is 1 in 10^70,000. Tell me, what is the difference between this number and the number ZERO, as in, ZERO probability of these changes occurring at random?
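    Whatever one makes of the independence assumption it rests on, the arithmetic behind the 10^70,000 figure is a one-liner. As a quick sketch (using only the numbers stated in the comment): 0.1^70,000 underflows ordinary floating point, so the computation is done in log space.

```python
import math

p_per_gene = 150 / 1500   # assumed chance of the needed mutation per gene
num_genes = 70_000        # assumed number of genes, each needing one hit

# 0.1 ** 70000 underflows to 0.0 in floating point, so use log10 instead.
log10_p = num_genes * math.log10(p_per_gene)
print(log10_p)            # about -70000, i.e. a probability of 10^-70,000
```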

    If any single-nucleotide mutations are beneficial, they would likely be tried, and are much more likely to become fixed than neutral mutations. And there are other mechanisms in play, including gene duplication.

    If you want to assume that NS will “fix” a gene faster than neutral evolution, that’s perfectly fine. With a population of 10^5, it will take 10^4 years before we can be certain that the entire genome has had a mutation produced at each of its sites. Well, that means, in a sense, that ALL of the needed mutations are in place somewhere in the population. But each of the genomes in the population has, on average, 10 mutations, which are scattered over the entire genome, and, so, only ONE of them will be a beneficial mutation in some particular gene. Now the question is, Which of the 10^5 genomes will natural selection select? If we have Gene 1 that will become fixed, then what will happen to all of the other genomes where favorable mutations occur? Fixation means that ONE genome will sweep through the population, and that ONE genome will have on it only ONE needed mutation. At that rate, we can expect 100 mutations to fix every million years (10^6 years / 10^4 years per fixed mutation), giving a grand total of 550 over a 5.5 million year period (actual time distance), or 1,000 fixed mutations over 10 million years, meaning that only ONE out of seventy genes will have ONE fixed mutation. That certainly can’t account for what we see.

    If you then say, well, NS can “fix” more than one mutation at a time, then there is the problem that Nachman’s Paradox proposes, and the problem of deleterious mutations. For NS to work, the mutation is obviously not neutral, and so its selection coefficient is likely 0.1 or higher, but certainly higher than 0.01. Since the population fitness in one generation is roughly the previous generation’s fitness multiplied by (1-s), if NS tries to “fix” 10 such mutations all at once (let’s say Genes 1-10), then if s = 0.1 for each, the population will completely die off trying to save all 10 mutated genes. If s = 0.01, then saving 10 of these at once means that in twenty generations of trying to “fix” these 10 mutations (and remember that Haldane gives 300 generations as the minimum time needed to fix one mutation), the population will have fallen off by roughly 88%, since (0.9)^20 = approx. 0.12, leaving about 1.2 x 10^4 members. For a population size this small, it would take roughly a hundred thousand years to again “try” all 3.0 x 10^9 bp of the genome. At this rate, only on the order of a hundred mutations are fixed.
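    The multiplicative fitness cost invoked here can be sketched in a few lines (values taken from the comment; note that (0.9)^20 works out to about 0.12):

```python
# Fraction of a population remaining after paying a combined selective
# cost of s per generation for n generations: (1 - s) ** n.
def remaining(s: float, n: int) -> float:
    return (1.0 - s) ** n

# Ten mutations at s = 0.01 each, i.e. a combined cost of 0.1 per
# generation, sustained over 20 generations:
frac = remaining(0.1, 20)
print(f"{frac:.3f}")          # ~0.122 of the population remains
print(f"{frac * 1e5:.0f}")    # ~12158 individuals out of 10^5
```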

    But, at the same time one beneficial mutation is also connected, via the genome, to any deleterious mutations that randomly occur. And the number of deleterious mutations per beneficial one is quite high, meaning, then, that one beneficial mutation on a genome is much more likely to be lost to the population than it is to be “fixed”. So, the above calculations for NS end up being highly optimistic. Thus, neutral evolution gives the best result, and is why population genetics accepts it as the mode of evolution. But, of course, we’ve already seen how inadequate the neutral theory is to explain molecular/biological evolution. So now what do you propose?

    There are 600 billion individual bridge hands, somewhat more than 10^5. There are 53 octillion possible bridge deals.

    And compared to 10^70,000 is this supposed to be big? Let’s assume 53 x 10^27 is accurate (I suspect this number is 52!); then if we shuffled and dealt a bridge hand every 30 seconds, it would take 5 billion persons shuffling and dealing non-stop roughly 10 trillion years to come up with a specific deal. But what are the odds of getting a bridge hand that can be played? It’s 1.0. So, to play bridge, all you have to do is shuffle and deal the cards. Evolution doesn’t work this way. To get something that can begin the process of Darwinian evolution you need something that happens with a probability of 10^-9, not 1.0. And you need a lot of these to accumulate independently. I still haven’t been shown how this can be done given the biological improbabilities known to exist.

  57. As to 53 octillion, the number of combinations of 52 taken 13 at a time is 52!/(13!39!), and since the order of the cards reaching a hand is unimportant, we don’t need to take into account the various permutations of the hands. So C_52,13 should do the trick. Your figure of 53 octillion probably is derived by multiplying C_52,13 by (13!)^4. But this, I believe, counts the permutations twice, and so is wrong.

  58. PaV: So, again, the calculation runs this way: …

    It was suggested above that you avoid the error of claiming a typical bridge deal was so rare as to not plausibly occur.

    Look at this series of numbers between 1 and 100:

    95 51 29 49 77
    4 19 89 14 45
    49 75 33 55 75
    85 95 5 90 48
    17 74 52 81 2
    97 46 56 29 80
    79 39 52 14 64
    36 48 86 21 25
    43 51 41 2 28
    8 92 83 57 24
    45 64 40 1 72
    65 99 29 10 70
    46 58 25 19 12
    13 86 78 19 94
    12 89 74 44 46
    43 95 28 89 69
    37 57 15 49 1
    32 5 43 16 81
    67 28 95 32 76
    53 61 75 87 7

    The chances of this sequence occurring by chance are just 1 in 10^200, way past the universal probability bound. Yet, it did occur by chance.

    The divergence of an evolutionary lineage can follow any number of paths. It happened to follow a particular path, much or most of which was neutral evolution. Assuming that only that one particular path was possible leads to a lot of numbers with no relevance whatsoever.
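    The point is easy to reproduce: generate a fresh sequence like the table above and it, too, is a 1-in-10^200 event in retrospect. A minimal sketch:

```python
import random

# 100 draws from 1..100, like the table of numbers above.
seq = [random.randint(1, 100) for _ in range(100)]
print(seq[:10])

# Each draw has probability 1/100, so this exact sequence has
# probability (1/100)^100 = 10^-200 -- yet it just occurred.
print("log10 probability of this exact sequence:", -2 * len(seq))
```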

  59. Zachriel:

    I just checked online at two blogs and the number I gave is the correct number, which turns out to be 635,013,559,600. Using 100,000 people shuffling and dealing twice a minute, 24-7, it would take six years, on average, to come up with a particular hand. Ouch! IOW, if everyone in the world shuffled and dealt once every Sunday, it would take less than 3 years to get your specific hand. A bit deflating, isn’t it?
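    The six-year figure checks out under the stated assumptions (100,000 dealers, two deals a minute, round the clock); a quick sketch:

```python
hands = 635_013_559_600                       # C(52, 13) distinct 13-card hands
deals_per_year = 100_000 * 2 * 60 * 24 * 365  # ~1.05e11 deals per year
years = hands / deals_per_year
print(f"{years:.1f} years")                   # ~6.0 years on average
```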

  60. PaV: Let’s assume 53 x 10^27 is accurate ( I suspect this number is 52!) …Your figure of 53 octillion probably is derived by multiplying C_52,13 by (13!)^4.

    It’s 52!/(13!^4). The point is that any particular hand is incredibly unlikely.

  61. The 52!/(13!)^4 is the multinomial coefficient, and would include the various orderings of a bridge hand. But when we play bridge, whether the ace of Spades is dealt to you first or last doesn’t really matter. So in terms of your normal bridge hands, the binomial coefficient works, which is 635 billion or so. Anyways, it’s a large number.

    The point is that any particular hand is incredibly unlikely

    No, that’s not true. Any bridge hand is likely. The probability of dealing a bridge hand is 100% = 1.0.

    When you shuffle and deal a bridge hand, you can immediately play. Why? Because you’re not looking for any particular hand. But when you say that you can’t play until someone gets a hand with nothing other than one suit of cards, then what happens? You end up having to shuffle and deal for a long time. What if you want to generate a sequence of integers 1 to 100 randomly, and you’re not concerned with a particular sequence. Well, then, every generation of such a sequence will suffice. But if you wanted a particular sequence, then it would require, per your numbers, 10^200 generations of sequences before you could be assured of arriving at that particular sequence. Does it matter which sequence you select? No, it can be any one of the 10^200 possibilities. This means that improbabilities arise ONLY when some particular combination is required.

    So, for example, with state lotteries, the reason it becomes an improbable event is that a particular sequence of numbers is selected, and your guess has to match that. If no particular sequence is required, then everyone wins. All you have to do is pick six numbers. So, too, with the genome: if particular nucleotides differ between species A and species B, given the huge length of the genome, and the relatively low rate of mutation, and with the further complications of a needed change spreading through a population without it first getting eliminated because of some deleterious mutation someplace else on the genome, it becomes mathematically improbable—to the point of impossibility—to arrive at species B from species A if the differences are great enough. My calculation of 10^70,000 makes that point.

    The mechanisms that Darwinian evolutionists propose don’t have the power to bring about these “particular” changes via random, and/or, guided processes (assuming that NS is a ‘guiding’ process). Unless a mechanism can be proposed that overcomes the difficulties that “particular” nucleotide sequences present, then we have to look for some other mechanism. Behe’s EoE points out the limitations of Darwinian mechanisms. Can so-called Darwinian mechanisms bring about changes in the genome? Yes. But they’re extremely limited, almost to the point of being completely trivial. Please feel free to enlighten me, but unless viable mechanisms are proposed, I remain a Darwinian skeptic.

    [BTW, I tried to get Kimura's book out of the library, but it appears it's been stolen, so I can't get you the quote I was hoping for.]

  62. Zachriel [60]:

    The number of permutations of 100 objects is 100! = roughly 10^158 per my calculations. But, if you get up to 150!, I’m sure that your number of 10^200 is exceeded. But, again, unless you’re considering a “particular” sequence, this improbability doesn’t apply. Now, if YOU are doing the specifying, then your very act of specifying a sequence brings the sequence into existence, and the probability of its existence is 1.0. However, if your friend wrote down a sequence and asked you to guess what it is, then your effort at guessing it would have the probability of being correct of 1 in 10^158. Well, “nature” is doing the specifying in the case of genomes. And that is why the improbabilities apply. The specification exists because species A and B both exist. And any natural explanation for this transformation has to overcome the improbabilities that these specifications bring into existence.
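    The 100! estimate is right; Python’s arbitrary-precision integers make it easy to confirm:

```python
from math import factorial, log10

f = factorial(100)          # exact 158-digit integer
print(len(str(f)))          # 158 digits
print(round(log10(f), 2))   # ~157.97, so 100! is roughly 10^158
```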

  63. 52!/(13!^4) = 53 octillion is the number of deals. 52!/13!/39! = 600 billion is the number of individual hands. These are the same numbers originally posted above (#56).

    StephenB: Anyways, it’s a large number.

    Somewhat more than your original calculation (#51) of 10^5.

    Zachriel: The point is that any particular hand is incredibly unlikely

    PaV: No, that’s not true. Any bridge hand is likely. The probability of dealing a bridge hand is 100% = 1.0.

    The probability of any *particular* hand is vanishingly small. The probability of *some* bridge hand is 1.

    PaV: This means that improbabilities arise ONLY when some particular combination is required.

    That’s right. And humans are not required. They’re just one hand of many that could have been dealt. Just within the human family tree, there are many branches that could be occupying the niche now occupied by Homo sapiens, e.g. Homo neanderthalensis.

    PaV: The mechanisms that Darwinian evolutionists propose don’t have the power to bring about these “particular” changes via random, and/or, guided processes (assuming that NS is a ‘guiding’ process).

    You’re still assuming that there had to be a particular result. There didn’t. We know there were lots of branchings and many possible avenues evolution could have taken. It’s not a particular hand; it’s some hand that was dealt (only some of which were then subject to selection). Even within extant humans there can be millions of differences in genomes, including copy number variations.

  64. Zachriel:

    Somewhat more than your original calculation (#51) of 10^5.

    Have you ever heard of the expression: He strains at gnats and swallows elephants? Well, what does 10^5, or even 10^27, matter as compared to 10^70,000? If you put thirty zeroes on each of thirty lines of a piece of paper, then 10^27 (a 1 followed by 27 zeroes) would fit on the first line of the first page, whereas 10^70,000 would require a 1 followed by roughly 78 pages completely filled with zeroes. Each card hand can be played; but only the rarest of mutations will be both beneficial and fixed. And there simply aren’t enough of them. Under one of the most optimistic scenarios imaginable for RM+NS, the likelihood of chimps becoming humans by mere chance is 1 in 10^70,000. How in the world is it that anyone takes Darwinism seriously anymore?

    PaV: This means that improbabilities arise ONLY when some particular combination is required.

    Zachriel: That’s right. And humans are not required.

    Only humans understand improbabilities. Only humans play bridge.

    As to your larger point that improbabilities arise in nature without human assistance, I would wholeheartedly agree. It is the task of NS to overcome these improbabilities. And, as my previous calculations have demonstrated, NS is not up to the task. Now, if you had 52 cards, face-up, on a table, and you had a chimp separate them randomly into four hands of 13 cards each, with each hand consisting of one of the four suits of cards, and with them ordered from the Ace down to the two, then, on average, it would take 53 octillion tries for the chimp to do it randomly. But I assure you, I could do it on my first attempt. It’s amazing, isn’t it, what improbabilities intelligence can make go away.

    In reference to my calculations, you replied thusly:

    You’re still assuming that there had to be a particular result. There didn’t. We know there were lots of branchings and many possible avenues evolution could have taken.

    My friend, Zachriel, where are these “intermediates”? If you mean these to be Homo neanderthalensis, Homo erectus, etc., that’s fine. But let’s point out the problem with this: namely, that if you invoke these other species as intermediates on the way to humankind, then this restricts the amount of time between each such intermediate. If there is not enough time (meaning enough mutations) between chimps and humans to account for the genetic differences, well then, while fewer mutations may be needed to go from a chimp to some putative intermediate, nonetheless, a lesser amount of time is available. It seems like that puts us right back where we started from—unless you can find lots more “intermediates”. Ironically, every attempt to justify Darwinism, starting with Darwin himself, ends up relying on the presence of intermediates. And, of course, they’re nowhere to be found.

    [RM+NS is not an optimization program, but a stabilization program, around which adaptation can occur. But, of course, this is the bill of goods that Darwinists sell everywhere.]

  65. Consider a baby. Even assuming that each mating was destined by fate, there are millions of sperm involved in each coupling. Looking back over just the last few generations, the probability of this particular baby, with this particular genome, being born is astronomically small. Yet babies are born every day.

    Zachriel: You’re still assuming that there had to be a particular result. There didn’t. We know there were lots of branchings and many possible avenues evolution could have taken.

    PaV: But let’s point out the problem with this: namely, that if you invoke these other races as intermediates on the way to humankind, then this restricts the amount of time between each such intermediate.

    You are correct that the presence of intermediates doesn’t reduce the amount of total change required. Nevertheless, your calculation assumes a specific result. The Theory of Evolution posits that it is just one of many possible paths that could have been taken.

    The correct calculation is that the rate at which mutations accumulate must be sufficient to account for the differences in the genomes. You start some place, you move about, and you end up someplace else.
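    That calculation can be sketched with the standard neutral-theory result that the substitution rate equals the mutation rate; the per-site rate below is Nachman and Crowell’s estimate, and the split date and generation time are illustrative assumptions, not figures from this thread:

```python
# Neutral theory: expected per-site divergence between two lineages
# is 2 * mu * (number of generations since the split).
mu = 2.5e-8        # mutations/site/generation (Nachman & Crowell's estimate)
split_years = 6e6  # assumed human-chimp split time, in years
gen_time = 20      # assumed years per generation

generations = split_years / gen_time
divergence = 2 * mu * generations
print(f"expected divergence ~ {divergence:.1%}")   # ~1.5% per site
```

On these assumptions the accumulation rate is of the right order to account for the observed single-nucleotide divergence between the two genomes.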

  66. Many of the relevant mutations are neutral (and of those, many have no phenotypic effect whatsoever). Hence, they are precisely like shuffling a deck and dealing a typical hand.

  67. Voice Coil (#55)

    You argue that my views on the radical cognitive discontinuity between humans and other animals are incompatible with my affirmation of common descent:

    This view is beset with a contradiction, as you attribute this transition [from non-human to human - VJT] both to “improvements in brain architecture that occurred over a period of millions of years,” and to changes that occurred in a literal 24 hour period and were not physical at all, but instead a sudden “ensoulment.”

    Just to be clear: (i) the neural changes in our ancestors’ brains were a necessary but not sufficient condition for the transition from non-human to human; (ii) I do not think that the neural changes occurred as a result of an undirected process, as the brain is the most complex structure known to exist; (iii) at the present time, I have no idea how many mutations were required to effect this transformation, and I don’t know anyone else who would know, either.

  68. PaV: If you mean these to be Homo neanderthalensis, Homo erectus, etc., that’s fine. … And, of course, they’re nowhere to be found.

    You’re being a bit inconsistent. Clearly, there are primitive hominids that predate modern hominids.

Leave a Reply