
The Value Of Probabilistic Arguments In The Debate Over Evolution

A couple of months ago, I took an online “moodle” class with the Center for Inquiry on the topic of the evolution debate. The instructor was the renowned philosopher and evolutionary biologist Massimo Pigliucci. I engaged in a dialogue with Pigliucci and the other students about the evidence for the efficacy of naturalistic evolution, and presented some counter-arguments against it and in favor of ID.

During the course of our discussion, Pigliucci made some claims which astonished me — especially as arguments coming from a trained philosopher and world-renowned evolutionary theorist. To my surprise, when I articulated the numerous probabilistic hurdles — pervasive at every level — that Darwinism has to overcome in order to be considered a viable paradigm, he wrote,

No evolutionary biologist I know…actually attaches probabilities to specific evolutionary events of the type you are talking about. There is no way to do that. Similarly, there is no way to attach probabilities to the set of physical laws regulating our universe, for the simple reason that we have no sample population to draw from (which is why typically you estimate probabilities).

This struck me at the time as a very strange argument to make, given that many Darwinists (Dawkins & Futuyma spring to mind) say that the brilliance of Darwin was to reduce the improbability of getting complex, design-like systems. What was the whole point of “Climbing Mount Improbable”? The point was that evolution didn’t have to jump up the sheer face of the cliff; it could meander up the gently sloping rear side, in small probability increments. But if we can’t assign probabilities to the events, exactly what has Darwin’s theory done?

In response to Massimo, I cited several attempts by Darwinists — many of them published in the peer-reviewed literature — to demonstrate the efficacy of the Darwinian mechanism by means of probabilistic arguments. I wrote,

I am not convinced that we are reading the same literature. I am sure that the recent Wilf and Ewens PNAS paper cannot have escaped your notice (much was made of it by several prominent internet bloggers). The whole purpose of this paper was to demonstrate (unsuccessfully, in my opinion) that “There’s plenty of time for evolution” and it attempts to address probabilistic arguments against the efficacy of blind evolution.

Another such paper which springs to mind is the Durrett and Schmidt (2008) paper, which attempts to calculate the waiting time for a pair of mutations, the first of which inactivates an existing transcription factor binding site and the second of which creates a new one.

Moreover, if not to simulate the probabilistic feasibility of the Darwinian “search” function, what is the purpose of evolutionary computer algorithms such as Dawkins’ Weasel or Lenski’s Avida? The Evolutionary Informatics Lab lists several peer-reviewed publications which evaluate the probabilistic plausibility of Darwinian theory (and find it wanting).

Further, Sean B. Carroll makes a probabilistic argument in his book, The Making of the Fittest, in his discussion of evolutionary convergence. He begins by introducing “some hard evidence from the evolution of ultraviolet vision in birds.” He continues, “In four different orders, there are both ultraviolet sensing and violet-sensing species. This means that the switch between violet-sensing and UV-sensing capabilities must have evolved at least four separate times. The difference between birds is always correlated with a particular amino acid, at position 90 in their short wavelength (SWS) opsin; birds with a serine in this position are tuned to violet, birds with a cysteine here are tuned to UV.”

Carroll explains that “this amino acid is encoded by DNA positions 268-270 in the text of the birds’ SWS opsin genes. Close scrutiny of the DNA text of the birds’ SWS opsin gene reveals that the difference between serine and cysteine involves just a single letter of the DNA text at position 268.”

So, in the case of switching from a violet-sensing opsin to an ultraviolet-sensing opsin, you need a mutation at position 268, and this must have occurred independently at least four times. Carroll then reaches for the calculator in an attempt to reassure us that convergent evolution is not only probable, but “abundantly so”.

The figures used in the calculation are as follows:

Average per-site rate of mutation = 1 per 500,000,000 bases.

Number of copies of the gene = 2 copies

Number of offspring produced per year = at least 1 million (estimated).

Because there are 2 copies of the gene, we can cut the average per-site rate of mutation to 1 per 250,000,000 offspring.

There are three possible mutations at the locus (A to T, A to C, and A to G). Only A to T will create a UV-shifting cysteine. Assuming that each of the three mutations is equally likely, one out of three mutations at this locus will cause the switch. Thus, one A to T mutation will occur roughly once in every 750 million birds.

Carroll then factors in the number of offspring produced per year (taken as 1 million). Dividing this into the rate of one mutation per 750 million birds gives the result that the serine-to-cysteine switch will occur roughly once every 750 years. Thus, Carroll argues, Darwinism may be rendered a plausible explanation for such convergence.
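For the curious, Carroll’s back-of-the-envelope arithmetic is easy to reproduce. Here is a minimal sketch in Python using only the figures quoted above (the variable names are mine):

```python
# Carroll's figures, as quoted above
per_site_mutation_rate = 1 / 500_000_000  # mutations per site, per gene copy
gene_copies = 2                           # two copies of the SWS opsin gene
offspring_per_year = 1_000_000            # estimated offspring per year

# Two copies of the gene halve the waiting time:
# one mutation at position 268 per 250,000,000 offspring.
per_site_rate_per_bird = per_site_mutation_rate * gene_copies

# Only one of the three possible substitutions (A -> T) yields cysteine,
# so the UV-shifting mutation occurs about once per 750,000,000 birds.
switch_rate_per_bird = per_site_rate_per_bird / 3
birds_per_switch = 1 / switch_rate_per_bird

# At 1 million offspring per year, that is one switch per ~750 years.
years_per_switch = birds_per_switch / offspring_per_year

print(round(birds_per_switch))  # 750000000
print(round(years_per_switch))  # 750
```

The point of reproducing it is simply that this is, unambiguously, a probability calculation offered in support of an evolutionary claim.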

Is that not a probabilistic argument? I don’t find the argument very impressive (it cherry-picks), but it should be sufficient to refute your claim that evolutionary biologists are not interested in evaluating probabilistic feasibility.

To this, I received no response.

Indeed, the whole discipline of population genetics is predicated on evaluating probabilistic feasibility (e.g. “How long will a certain variant take to appear and be fixed in the population, given this population size and generation turnover time?”). But it doesn’t end there. Darwinists are accustomed to making probabilistic arguments all the time in order to establish common ancestry (e.g. “What are the chances of these same parallel substitutions or element insertions occurring by convergence in independent lineages?”). Of course, when shared similarities can be explained by common descent, those similarities are taken as evidence for the descent model. On the other hand, when shared similarities cannot be explained by common ancestry, they are taken as evidence for convergent evolution. In chapter 5 of The Myth of Junk DNA, Jonathan Wells highlighted two papers (Balakirev and Ayala 2003; Khachane and Harrison 2009) in which similarities in pseudogenes (which cannot be explained by descent) are taken as presumptive evidence that those pseudogenes are functional. The whole argument is thus rendered circular.
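The waiting-time question in the parenthesis above has a textbook answer in the simplest, neutral case, which shows what such a calculation looks like in practice. A minimal sketch, with illustrative values for the mutation rate and effective population size (both are assumptions for the example, not figures from any of the papers discussed):

```python
def origination_time(mu):
    """Expected generations until a neutral mutation destined for fixation
    first arises at a given site: 2N*mu new copies appear per generation,
    each fixing with probability 1/(2N), so successes occur at rate mu per
    generation and the population size cancels out (Kimura's classic
    neutral-theory result)."""
    return 1.0 / mu

def fixation_duration(n_e):
    """Once such a mutation arises, its mean time to drift to fixation is
    roughly 4*Ne generations (neutral diffusion approximation)."""
    return 4.0 * n_e

mu = 1e-8     # illustrative per-site mutation rate per generation
n_e = 10_000  # illustrative effective population size

print(origination_time(mu))    # 1e8 generations between fixed substitutions
print(fixation_duration(n_e))  # ~40,000 generations to reach fixation
```

Under selection the numbers change dramatically, which is precisely why waiting-time papers like Durrett and Schmidt (2008) exist.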


28 Responses to The Value Of Probabilistic Arguments In The Debate Over Evolution

  1. “To this, I received no response.”

    Of course not. The silence is revealing of a vulnerability.

  2. So he would have us believe that there is no way to tell if evolution could probably do what it claims it can do? Then why is Darwinism a science?

  3. JM:

    Another excellent article.

    GEM of TKI

  4. The Darwinists make the sweeping claim that they’ve got everything figured out in terms of evolution being driven by probabilistic mutations. Then they shriek “Creationist!” the instant anyone tries to calculate probabilities. No honest person who has studied the question in any depth could possibly take them seriously.

  5. This weasel attempt to get out from under the probabilistic hurdle is different from, but surprisingly closely related to, the position other anti-design folks take against design as it relates to life.

    Namely, they espouse a view that unless we have some prior knowledge about exactly what or how it occurred, we can’t calculate the odds of it happening and, therefore, cannot infer design. This, of course, is the whole point of the design inference: can we reliably infer design based on the system as it stands before us, without any knowledge of exactly how, when, why, or by whom it came about? Anyone who demands some prior knowledge is essentially making a circular argument: we can’t know if it was designed unless we have some prior knowledge that lets us know whether it (or at least things of a similar class) was designed. Such individuals do not engage the design argument on the merits.

    I’ve been thinking of writing up a more detailed discussion of this point, because it is a common tactic and quite central to the debate (as seen recently with Elizabeth Liddle on several threads). Maybe I’ll get some time soon to flesh this point out . . .

  6. So, what’s the probability of the Grand Canyon? Go on, calculate it.

  7. Please, please Pleeeeezzzzzzzzzze!

  8. Dr Matzke:

    Why do you insist on a strawman distortion?

    You know or should know the steps in the per-aspect explanatory filter, and that the Grand Canyon is easily established as a product of chance and mechanical necessity.

    Try the expression here for the simplified, reduced Chi_metric, which I will apply to the Grand Canyon:

    Chi_500 = I*S – 500, bits beyond the solar system PTQS threshold

    The relevant issue is S, the dummy variable on specificity, which here would be 0. So,

    Chi_500 [GC] = – 500, the lower limit.

    NOT recognisable as designed.

    As in: explainable on blind mechanical necessity and/or chance.

    A giant gully, in short.

    Complex, maybe, but not specified, as you could have easily seen from the principles laid out by Orgel in 1973:

    . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]

    Wicken is like unto that:

    ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

    These two foundational references are not exactly state secrets, and are 30–40 years old.

    Plain enough.

    Now, contrast say the value that could be derived for this post where we can identify a high value for I in bits — well beyond 72 ASCII characters, and we can see that the post is specified as a contextually responsive message in English.

    Detected as designed.

    Correctly, along with billions of other such cases.

    Then, when we look at the functionally specific digital code in the living cell, we see similarly, a high I value and S = 1, so the filter [tested reliable against known cases] points to design as best explanation.

    If you want to overturn this, you need to show adequate, material, observationally anchored evidence that points to the living cell as credibly produced by blind chance plus mechanical necessity. Not Lewontinian just-so stories backed up by a priori materialism dressed up in the holy lab coat.

    As this thread is currently demonstrating, you cannot do that, or you would have it in detail and broadcast all over the Internet.

    So, kindly, get over it.

    Unless and until you can SHOW that on observational evidence and evident reason, chance and necessity better explain OOL and OO body plans, the best explanation for same is design. Not that this is an inference to Goddidit, as ever since the first technical design theory work, TMLO in 1984, it has been acknowledged that designers within the cosmos could have created what we see. Venter’s recent work points to a proof of concept that shows that a molecular nanotech lab a few generations beyond where he is would be enough to explain what we see on earth. That tweredun does not warrant an immediate inference to whodunit.

    But the fact of warranted inference to design is enough to break up the monopoly of evolutionary materialism on origins science thought, which is why high priests of the reigning materialist orthodoxy seem to be in such a tizzy and to be sometimes frothing angry.

    Then, when we go on to look at the evidence of cosmological fine tuning that points to a cosmos set up carefully for C-chemistry, cell-based life, volcanic eruptions are heard behind the closed doors of the holy seminar rooms of the new magisterium. For, with the evidence of a cosmos with a beginning, rumour has it that astrophysicists are running across to the chapel of the First Church of God, Big Bang, to hear meditations by agnostic Sir Fred Hoyle on monkeying with the physics of the cosmos. Worse, many are lining up to get baptised!

    Horrors!

    I mean imagine an agnostic, Nobel-equivalent prize holder like Hoyle actually saying:

    From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.]

    I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [["The Universe: Past and Present Reflections." Engineering and Science, November, 1981. pp. 8–12]

    So, we need to ask a fairly direct question: why is it that, years after being NCSE’s point spokesman on this matter, your remarks show such a profound tendency to not accurately or fairly represent design thought?

    Cho, man, do betta dan dat!

    GEM of TKI

  9. The reason Darwinists won’t even attempt to tackle the probabilistic problems with Darwinian “theory” — or, when they do, they just make up fantastic stories that any fool can figure out are manifestations of The Darwin Delusion — is that the numbers can’t be reconciled with the theory, even given the most optimistic assumptions. It’s really just that simple.

    As a legitimate scientist and engineer, who pursues the evidence wherever it leads, I can smell a snake-oil pseudoscience con-artist and his wares from a mile away.

  10. Hmm, still no calculation of the probability of the Grand Canyon. If probability calculations are so easy, why don’t you guys do it?

  11. Hi Jonathan,

    I am guessing that you and Massimo are talking past each other. If your initial “assault” on Massimo (if I may be so bold as to playfully inject some color into the thread) was like your confrontation with PZ Myers, then Massimo probably heard a rapid-fire laundry list that bore little resemblance to the issue he is interested in. The specifics you give later prove this to be true. Think about it – Massimo is interested in biological evolution, and you are arguing about contrived computer simulations or simplified calculations. There is a pretty big gap here, which is why Massimo is correct in his assertion.

  12. Great tangential talking point on a strawman caricature. Meanwhile the material issue has been addressed and there is no sensible answer to it, especially the persistent willful misrepresentations. Sad, really.

  13. “The relevant issue is S, the dummy variable on specificity, which here would be 0.”

    “S for a dummy variable that is 1/0 accordingly as the information in I is empirically or otherwise shown to be specific, i.e. from a narrow target zone T, strongly UNREPRESENTATIVE of the bulk of the distribution of possible configurations, W])”

    So S gets set to 0 because of what you already have concluded about the specificity of the Grand Canyon? How do you demonstrate the Grand Canyon is not specific? What is the distribution of possible configurations of soil space, and why is the Grand Canyon representative of that sample?

  14. Nick and DrRec,

    Good grief, do you guys even understand what you write, or is this intentional misrepresentation?

  15. Eric,
    I’m sure they understand what they write. I understand what they write. More likely it’s just you that doesn’t understand what they write.

  16. Hmm, still no calculation of the probability of the Grand Canyon. If probability calculations are so easy, why don’t you guys do it?

    The probability of the Grand Canyon is 1.

  17. What is the distribution of possible configurations of soil space, and why is the Grand Canyon representative of that sample?

    The Grand Canyon is a large hole. It’s the absence of soil, so taking soil samples isn’t going to help. The probability of finding the Grand Canyon in soil space is 0.

  18. Hmm, still no one has produced a calculation of the probability of the Grand Canyon, except Mung who has answered both “0” and “1”, both without any calculations at all. I guess maybe these probability calculations aren’t so easy to produce after all…

  19. Of course some probability calculations “aren’t so easy.” What’s your point? Did I miss the memo from ID Theory 101 that all probability calculations are “a piece of cake?”

    How does that effect ID Theory? How would your ability, or inability, to calculate the difference in Shannon information between the Taj Mahal and the Eiffel Tower pose any problem to other applications of Shannon information?

  20. correction … *affect.*

  21. Similarities in pseudogenes (which cannot be explained by descent) are taken as presumptive evidence that those pseudogenes are functional.

    The idea that many pseudogenes are functional arises because they show specific patterns of nucleotide variability usually observed in functional genes. They also frequently do not show strong erosion of exon–intron structure, along with other features suggesting that selection is maintaining the structure of these pseudogenes.

  22. Of course some probability calculations “aren’t so easy.” What’s your point? Did I miss the memo from ID Theory 101 that all probability calculations are “a piece of cake?”

    The point, of course, is that “The Value Of Probabilistic Arguments In The Debate Over Evolution” is pretty low. We can’t calculate the probability of the Grand Canyon, due to the huge complexity of the interacting processes of tectonics, uplift, sedimentation, erosion, lava flows, etc. Yet we know that it has a natural explanation, and we can learn a huge amount about the processes involved, all despite not being able to do a calculation.

  23. Nick @13:

    Clue in. Let’s assume that the Grand Canyon was an exceedingly improbable event. So what? This thread is about whether probability is a real issue that needs to be addressed by evolution (which Pigliucci, committing an intellectual stumble, foolishly said it isn’t). Surely you know, after all these years of being reminded over and over again, that the design inference does not depend just upon improbability. The design inference is *not* that something is improbable; therefore it was designed.

    Of course, even your implication that the Grand Canyon is all that improbable may not hold. Let’s see: thousands of rivers coursing for thousands or millions of years; some through hard rock, some through desert sandstone; many, many of them carving canyons, some deeper than others, some longer than others . . . What are the odds that over this time period we see at least one river carve an exceptionally deep and long canyon? Probably not too long of a shot.

    The real issue, however, which you are carefully sidestepping, is that probability is a real issue facing evolutionary scenarios and can’t be ignored with the juvenile and facile response that, “gee, improbable things happen all the time.”

  24. Of course, even your implication that the Grand Canyon is all that improbable may not hold. Let’s see: thousands of rivers coursing for thousands or millions of years; some through hard rock, some through desert sandstone; many, many of them carving canyons, some deeper than others, some longer than others . . . What are the odds that over this time period we see at least one river carve an exceptionally deep and long canyon? Probably not too long of a shot.

    Ah, now you get it. Now you just have to be fair and allow the same, very reasonable, very rational considerations to apply to evolution. All of these sorts of considerations apply in exactly the same way to the evolution of complex systems — millions of species, millions of years, almost uncountable numbers of mutations and possible pathways, and no single “target” — we are impressed by a flagellum when we see it, but we would have been equally impressed with some other flagellum built with completely different proteins with completely unrelated sequences (which we know is possible, because we have the archaeal flagellum). So all calculations based on assuming a particular sequence or a particular configuration of proteins or a particular set of structures or whatever as as bogus as a calculation that assumes that the Grand Canyon is the only way to have a canyon.

  25. typo: or whatever as as bogus –> or whatever are as bogus

  26. You’re conflating molecular machines with water pushing away rock and dirt to make a hole. Neither makes a good illustration for the other.

    Second, you’re increasing your odds of hitting a tiny target by imagining more tiny targets. Your argument is circular. You use the assumption that ‘countless mutations’ produced a flagellum to conclude that any number of other configurations were about as likely. And then you use your countless possible configurations to overcome the probabilistic argument against the random evolution of that same flagellum.

  27. You aren’t understanding the basics, I’m afraid. “Countless mutations” happened, only some of them produced the flagellum.

    “You’re conflating molecular machines with water pushing away rock and dirt to make a hole. Neither makes a good illustration for the other.”

    The Grand Canyon isn’t just any hole. It’s a pretty specific kind of hole — specifically one for which the formation process was so complex you have no hope of calculating the probability in a valid way.

    “Second, you’re increasing your odds of hitting a tiny target by imagining more tiny targets.”

    Since there are likely an almost infinite number of amino acid sequences that would produce something that we would call a flagellum, this is a pretty dang valid consideration. If the flagellum never evolved, we’d marvel at one of the dozens of other neat motility systems which has also evolved in microbes.

    And you haven’t even passed Evolution 101 if you use the phrase “random evolution.” Natural selection is nonrandom. Use of the phrase “evolution says X evolved randomly” means you don’t know what you are talking about, and no actual scientist will have any reason to take you seriously. The evolution of the flagellum was no more random than the evolution of the Grand Canyon. They are both basically the product of millions of years of iteration of the process of chipping away at less persistent material.

  28. Nick, the fantastic improbabilities facing Darwinism in any search for any particular functional protein and/or gene sequence within the flagellum, or any other biological system, have been clearly reiterated to you before. Indeed, livingstonmorford even dismantled your T3SS-to-flagellum sequence similarity argument:

    Excerpt: I am convinced that the T3SS is almost certainly younger than the flagellum. If one aligns the amino acid sequences of the flagellar proteins (that have homologous counterparts in the T3SS), and if one also aligns the amino acid sequences of the T3SS proteins, one finds that the T3SS protein amino acid sequences are much more conserved than the amino acid sequences of the flagellar proteins.,,, – LivingstoneMorford – experimental scientist – UD blogger (Also notes in post 47 as to protein sequence homology being the result of De-Novo design)
    http://www.uncommondescent.com.....ent-389250

    And yet Nick, despite having no actual empirical (observational) evidence that the astronomical improbabilities against such Darwinian searches for any functional gene/protein sequences are incorrect, you simply ignore what the science is saying and pretend that your imagined similarity argument has not been severely compromised. That is called ‘blind faith’, Nick! So, as something else for you to completely ignore about the flagellum, something that severely compromises your belief (blind faith) that it is not the product of intelligent design, I present this recently discovered fact:

    INFORMATION AND ENERGETICS OF QUANTUM FLAGELLA MOTOR
    Hiroyuki Matsuura, Nobuo Noda, Kazuharu Koide, Tetsuya Nemoto and Yasumi Ito
    Excerpt from bottom page 7: Note that the physical principle of flagella motor does not belong to classical mechanics, but to quantum mechanics. When we can consider applying quantum physics to flagella motor, we can find out the shift of energetic state and coherent state.
    http://www2.ktokai-u.ac.jp/~shi/el08-046.pdf

    And yet Nick, though quantum mechanical principles are now shown to be necessary for the operation of the flagellum, it has also been shown that quantum coherence/entanglement cannot be reduced to the materialistic framework of neo-Darwinism.

    Quantum Entanglement – The Failure Of Local Realism – Materialism – Alain Aspect – video
    http://www.metacafe.com/w/4744145

    The falsification for local realism (materialism) was recently greatly strengthened:

    Physicists close two loopholes while violating local realism – November 2010
    Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview.
    http://www.physorg.com/news/20.....alism.html

    Quantum Measurements: Common Sense Is Not Enough, Physicists Show – July 2009
    Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables.
    http://www.sciencedaily.com/re.....142824.htm

    i.e. It is impossible, Nick, for you to ‘scientifically’ explain what we witness in the flagellum in neo-Darwinian terms! Perhaps you think this does not matter, but the ‘scientific’ fact is that until you can solidly refute, using your materialistic neo-Darwinian framework, the finding of quantum non-locality within the flagellum, you are falsified by yet another line of evidence indicating the intelligent design of the flagellum;

    further notes:

    Bacterial Flagellum – A Sheer Wonder Of Intelligent Design – video
    http://www.metacafe.com/watch/3994630

    Bacterial Flagellum: Visualizing the Complete Machine In Situ
    Excerpt: Electron tomography of frozen-hydrated bacteria, combined with single particle averaging, has produced stunning images of the intact bacterial flagellum, revealing features of the rotor, stator and export apparatus.
    http://www.sciencedirect.com/s.....tImgPref=F
