
Who Says Darwinists Don’t Make Predictions

. . . so long as the predicted event is safely 100,000 years in the future:

 Human race will split into two different species 

The human race will one day split into two separate species, an attractive, intelligent ruling elite and an underclass of dim-witted, ugly goblin-like creatures, according to a top scientist. 100,000 years into the future, sexual selection could mean that two distinct breeds of human will have developed. The alarming prediction comes from evolutionary theorist Oliver Curry . . . Dr Curry’s theory may strike a chord with readers who have read H G Wells’ classic novel The Time Machine, in particular his descriptions of the Eloi and the Morlock races.  In the 1895 book, the human race has evolved into two distinct species, the highly intelligent and wealthy Eloi and the frightening, animalistic Morlock who are destined to work underground to keep the Eloi happy

Now if only ID theorists would make a testable prediction; something like “over many thousands of generations natural selection will account for only extremely modest changes in the malaria parasite’s genes and will be unable to cause any increase in genetic information.”  Oh wait a minute, that prediction was made and confirmed.


52 Responses to Who Says Darwinists Don’t Make Predictions

  1. Is the malaria parasite gene prediction an ID prediction, or merely an anti-darwin/natural selection prediction? I mean, it is a great refutation of darwinism, but is it evidence for design?

  2. In the 1895 book, the human race has evolved into two distinct species, the highly intelligent and wealthy Eloi and the frightening, animalistic Morlock who are destined to work underground to keep the Eloi happy

    Niall Firth had better read The Time Machine again. The Eloi were rather pleasantly maintained sources of protein for the Morlocks. They may have been beautiful, but were perfectly useless otherwise. Just meat on the hoof (oops, foot) out to pasture.

  3. Collin writes: “I mean, it [i.e., Behe’s work] is a great refutation of darwinism, but is it evidence for design?”

    Yes. First, as a general matter, Darwinism and ID are the only two games in town. Evidence disconfirming one necessarily supports the other.

    In this particular case, ID posits that intelligent agency is the only known source of increases in complex specified information. A corollary to that assertion is that unguided natural forces [chance and necessity filtered by NS] are not capable of causing increases in complex specified information. Behe’s work is compelling evidence supporting this corollary.

    Another corollary to ID is that a particular organism’s genetic code will be relatively stable over many generations. Behe’s work confirms this prediction.

    Finally, a third corollary to ID is that random mutations can result in a degradation of genetic information, but not an increase in genetic [complex specified] information. Again, this prediction is confirmed.

  4. “Collin writes: “I mean, it [i.e., Behe’s work] is a great refutation of darwinism, but is it evidence for design?”

    Yes. First, as a general matter, Darwinism and ID are the only two games in town. Evidence disconfirming one necessarily supports the other.”

    No. As a general matter, Darwinism and ID do not disagree on every single point of evidence, or interpretation. That would simply be silly. Science doesn’t work that way; competing theories are not mirror images of each other. Logical absurdities do not lend credibility to the ID case. If we want ID to be regarded seriously, sweeping generalizations such as this must be avoided. We’re smarter than that.

  5. I’m a little new to the nuts and bolts, as it were, of ID science, so forgive me if there’s an obvious answer to this question. BarryA says,

    …random mutations can result in a degradation of genetic information, but not an increase in genetic [complex specified] information.

    This sounds like a tautology might be involved wrt “complex specified” information and the ability of random mutations. Are you saying that there can be increases in information as a result of random mutations, just not Complex Specified information?

  6. The splitting of humans into two separate species by natural means implies no interbreeding amongst different social classes for a long time, which would be necessary for certain genetic differences to become fixed. Certainly such a thing is possible, but how long would this social isolation have to persist for the two populations to drift apart genetically? Who knows where present trends will lead, but I am not so sure this would happen with today’s knowledge.

    This type of thinking might have had some traction 40-50 years ago, before the advent of birth control, but birth control tends to be the wild card because it may affect less functional humans more than those who can and want to raise children. There is some evidence that dysfunctional people might have fewer children, because they too like the good life, and who needs kids today when it is unlikely you can take care of them?

    So I am not sure what social scenario could lead to the isolation over thousands of years that would be necessary for one group to break off genetically from another. Many science fiction stories had worlds where the underclass essentially developed out of control and lived in ghettos of poverty-stricken, crime-infested areas, but birth control would be available in all of these potential worlds, and there would be no compulsion to have children when they were not necessary.

    Of course, all of this is trumped by genetic engineering of the genome, and I just read a serious discussion of medical care arguing it likely that the first person to live 1,000 years is probably alive now. If genetic engineering takes hold, I am afraid we will see several potential new species with consequences we cannot even dream of.

  7. MacT writes: “No. As a general matter, Darwinism and ID do not disagree on every single point of evidence, or interpretation.”

    MacT, you miss my point. I never said ID and Darwinism disagree on every point. But if there are two and only two credible theories (I don’t include panspermia among credible theories), it is practically a truism that to the extent one is disconfirmed the other is supported.

  8. Mickey writes: “Are you saying that there can be increases in information as a result of random mutations, just not Complex Specified information?”

    Yes; consider Scrabble letters. If you throw them in the air enough times, it is likely that very simple letter combinations will form (“it” “at” “so” perhaps even “cat”). That information is specified (it has meaning), but it is not complex. But there is a virtual certainty that you will never get a sentence like this one. The information in the preceding sentence was both specified and complex.
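A back-of-the-envelope calculation shows the scale gap BarryA is pointing at. This is a minimal sketch under a simplified model (uniform random letters, which is an assumption; real Scrabble tile frequencies differ):

```python
# Toy model: each character is drawn uniformly from the 26-letter alphabet.
# Only intended for orders of magnitude, not for exact Scrabble odds.

def chance_of_spelling(target: str) -> float:
    """Probability that one random draw of letters spells `target` exactly."""
    letters = target.replace(" ", "")  # ignore spaces in the toy model
    return (1 / 26) ** len(letters)

print(chance_of_spelling("cat"))
# ~5.7e-05: a three-letter word will show up quickly over repeated throws

sentence = "the information in this sentence is both specified and complex"
print(chance_of_spelling(sentence))
# on the order of 1e-75: no realistic number of throws will ever produce it
```

The point of the illustration is the ratio: adding a few dozen specified letters multiplies the improbability by some seventy orders of magnitude.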

  9. BarryA,

    You really need to do a little more reading in the field of logical fallacies.

    Just because you see only two possibilities does not necessarily mean there are in fact only two possibilities, especially in the case of the origin and history of organisms on this planet. Evolution could be wrong, but that does not, in itself, mean that ID is true. Your argument is most definitely a false dichotomy.

  10. terminiki

    You need to do a little more thinking. Here’s what you need to think about.

    Something can be designed.

    Something can be not designed.

    Use the logic skills you think you have and present us with a third option. Failing that, you’re out of here for belligerent stupidity.

  11. Barry

    With regard to your comment about ID predicting genomic stability over many generations – I don’t agree. I don’t see anything about ID that predicts stability. What is your basis for that claim?

    This got me thinking about a book I’m reading, J. Sanford’s Genetic Entropy. It would seem that P. falciparum’s stability over billions of trillions of generations is a direct refutation of Sanford’s genetic entropy hypothesis.

  12. Having demonstrated an inability to discriminate between a true dichotomy and a false dichotomy, terminiki has been terminated.

  13. BarryA said,

    Yes; consider Scrabble letters. If you throw them in the air enough times, it is likely that very simple letter combinations will form (“it” “at” “so” perhaps even “cat”). That information is specified (it has meaning), but it is not complex. But there is a virtual certainty that you will never get a sentence like this one. The information in the preceding sentence was both specified and complex.

    I’m still not quite getting it. You seem to be saying that if information is generated by random mutations it can’t be CSI by definition. How is this not tautological?

  14. DaveScott says: “With regard to your comment about ID predicting genomic stability over many generations – I don’t agree. I don’t see anything about ID that predicts stability. What is your basis for that claim?”

    When we observe known designers, we see that they often build in redundancy and error correction mechanisms in order to increase stability. We observe both redundancy and error correction in the genetic code, which results in stability over thousands of generations. I infer that an unknown designer built in the redundancy and error correction for the purpose of obtaining the stability we observe, similar to the way known designers achieve the same result.
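The kind of engineered error correction BarryA refers to can be illustrated with the simplest possible scheme, a triple-repetition code. This is a generic illustration of designed redundancy, not a model of the genetic code’s actual machinery:

```python
# Triple-repetition code: store each bit three times; decode by majority vote.
# A single corrupted copy of any bit is silently repaired on read.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        trio = coded[i:i + 3]
        out.append(1 if sum(trio) >= 2 else 0)  # majority vote
    return out

msg = [1, 0, 1, 1]
stored = encode(msg)
stored[4] ^= 1            # flip one stored bit (a "mutation")
print(decode(stored))     # [1, 0, 1, 1] -> original message recovered
```

The cost of the redundancy (3x storage) is the price a designer pays for stability; real error-correcting codes achieve the same protection far more efficiently.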

  15. Mickey Bitsko

    It’s not impossible for random mutation to generate complex specified information. Likewise it’s not impossible to shuffle a standard deck of 52 cards and have it come up perfectly ordered by suit and rank (that particular order of the deck is complex and specified). This is where statistics come into the picture. We can be almost certain that we will never observe that result from a random shuffle in a finite universe. An intelligent agent however can order a deck in that manner easily.

    This is where natural selection comes into play. It is supposed that random mutation filtered by natural selection makes complex specified outcomes not so improbable. In the real world however natural selection is overwhelmed by nearly neutral deleterious mutations. Beneficial random mutations are so rare and slightly deleterious (near neutral) mutations so frequent that natural selection can’t single out the beneficial mutations. Random events unrelated to genomic fitness also serve to thwart natural selection’s ability to select. Survival of the fittest would be more aptly called survival of the luckiest. The only thing natural selection is good at is culling the very deleterious mutations by killing the unfortunate mutant before it can reproduce. It is thus a conservative force which stabilizes a working genome but does little to nothing in the way of building novel complexity from random changes.

    I think what misleads so many into thinking that random mutation + natural selection can produce novel complexity is the variability displayed in complex genomes through recombination (sexual reproduction). Natural selection works well there, but it isn’t random mutation driving it. All the complexity is already there, and recombination simply suppresses or expresses preexisting traits. The variability we see in dogs is a good example. There’s a huge range in size and cosmetics, but there are limits which cannot be exceeded. The limits are established by the genome. You can’t breed dogs to the size of a mouse or elephant because the genome isn’t designed to support it, and neither random mutation + natural selection nor recombination is capable of going beyond those limits. Likewise you won’t ever see a cold-blooded dog, a dog with feathers or scales, or even retractable claws! Those options simply don’t exist in the canine genome. It’s easy to imagine there are no limits, but that’s never been demonstrated – it is only imagined – and it will never be anything but imagined if ID is true.

    Another thing that misleads is things breaking, and in the act of breaking it appears like an improvement. Say the lock on your car door jams and you can’t get into it. So you break the window and now you can drive it. That’s a big improvement over a car that is useless because you’re locked out of it, but it’s not as good as a car with working locks and no broken windows. This is how resistance to antibiotics, insecticides, and the like arises. We introduce something into the environment that jams a mechanism in the bug. Random mutation breaks something so that the jam is ineffective. The modified bug survives, and it looks like it improved, but it’s really not as fit as it was before.
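The deck-shuffle improbability DaveScot appeals to above is easy to put a number on. This sketch covers only the card arithmetic, not any of the biology:

```python
from math import factorial

# Number of distinct orderings of a standard 52-card deck
orderings = factorial(52)
print(f"{orderings:.3e}")   # ~8.066e+67 possible orderings

# Chance that one fair shuffle produces a single pre-specified order
# (e.g., perfectly sorted by suit and rank):
p = 1 / orderings
print(f"{p:.3e}")           # ~1.240e-68
```

This is why a pre-specified ordering is a practical (though not logical) impossibility under random shuffling, while remaining trivial for an intelligent agent to produce.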

  16. Mickey, I think what he is saying is that information that is added but doesn’t change anything can’t be counted as CSI. I don’t find that very controversial. You do see the difference between CSI and just simple information, right?

  17. Dave,

    This got me thinking about a book I’m reading, J. Sanford’s Genetic Entropy. It would seem that P. falciparum’s stability over billions of trillions of generations is a direct refutation of Sanford’s genetic entropy hypothesis.

    While not exactly p.falciparum, we did discuss a related topic recently:

    http://www.uncommondescent.com.....ell-cycle/

    Short version: High replicators avoid genetic entropy?

    This got me thinking about a book I’m reading, J. Sanford’s Genetic Entropy. It would seem that P. falciparum’s stability over billions of trillions of generations is a direct refutation of Sanford’s genetic entropy hypothesis.

    Dave,

    I asked this question a couple of months back, and did not get any discussion about it. I think you are right on. There are two possibilities:

    1. Sanford’s idea is refuted
    2. Sanford’s idea is not refuted because it hasn’t been long enough

    If #2 is correct, then Behe’s ideas, while not refuted, are irrelevant, because not enough time has passed for P. falciparum to have become extinct, and probably not enough time has passed for macroevolution to work itself out.

  19. Dave,

    One of the admissions that great_ape made here when he used to comment was that the diversity in a genome is antithetical to neo-Darwinism.

    By all accounts it should not exist. Neo-Darwinism winnows out variation, either through natural selection or genetic drift, so all the alleles in dogs that create the variety we witness should not have existed, or at least should be very limited. There were millions of years during which wolves and other canines in the wild could have eliminated most of this diversity.

    According to neo-Darwinism, diversity is eliminated over time, even though diversity is exactly what is needed to drive differences in new species.

  20. DaveScot,
    Though malaria is caused by a eukaryote,

    I’ve also noticed the surprising stability of simple life (bacteria) over long periods of time, yet this overall stability seems to be limited to “simple” asexual life. Remember, higher “complex” life forms have a fairly constant, unexplained extinction rate in the fossil record. Genetic Entropy would explain that very well!

    Is the fact that bacteria, or eukaryotes, stay stable over long periods of time overturning the principle of Genetic Entropy? Of course not. The principle of Genetic Entropy still has overwhelmingly convincing validity, since it is in fact based on foundational principles of science, whereas evolution is not based on any foundational principles of science!
    Genetic Entropy (GE) can be considered a foundational principle of science because it draws its inferences for biology directly from the marriage of a foundational principle of physics, the second law of thermodynamics (entropy), with a foundational principle of information theory (conservation of information). Conservation of information states that it is impossible to create complex specified information (CSI) in the universe by totally natural means (Werner Gitt, In the Beginning was Information, 2000; William Dembski, Intelligent Design: The Bridge Between Science and Theology, 1999).

    As a sidelight to the ancient DNA studies:
    The following is an interesting study of ancient DNA that caused quite a stir among evolutionists!
    (Vreeland, R.H., Rosenzweig, W.D., and Powers, D.W., 2000, Isolation of a 250-million-year-old halotolerant bacterium from a primary salt crystal: Nature)
    http://news.bbc.co.uk/1/hi/sci/tech/1375505.stm

    Evolutionists are in disbelief about these results, and other results similar to these, since they challenge basic presumptions of theirs, but Vreeland is adamant that his results are indeed valid. As he stated in their defense:
    “Strain 2-9-3 is not a contaminant. I estimate that its chances of being a contaminant are less than one in a million.” Vreeland.

    This ancient DNA study is also interesting,

    http://mbe.oxfordjournals.org/...../19/9/1637

    in which they admit:

    “Almost without exception, bacteria isolated from ancient material have proven to closely resemble modern bacteria at both morphological and molecular levels.”

    Their explanation for the similarity of ancient and modern bacteria turns out to be quite convoluted (I truly am impressed with their contortions to make it fit an evolutionary scenario), but I maintain that the evidence might be exactly what it is telling us it is: there is NO drastic change in DNA from ancient to modern bacteria!

    Does Dr. Sanford stretch the available evidence a bit too much to fit his preconceived bias? I certainly think he does. But that “stretching” does absolutely nothing to invalidate the principle of Genetic Entropy he relies on in the first place, since the principle of Genetic Entropy is drawn from first principles of science, not from the “impressive” evidence he cites.

  21. DaveScot said,

    It’s not impossible for random mutation to generate complex specified information. Likewise it’s not impossible to shuffle a standard deck of 52 cards and have it come up perfectly ordered by suit and rank (that particular order of the deck is complex and specified). This is where statistics come into the picture. We can be almost certain that we will never observe that result from a random shuffle in a finite universe. An intelligent agent however can order a deck in that manner easily.

    I understand your analogy, but don’t see how a group of 52 unique, unchanging objects equates to what goes on in the genome. We can calculate the probability of any given order of a deck of cards precisely, but that’s not possible with mutation and subsequent changes in the genome. Thus my question to BarryA still remains: if you identify CSI strictly by its complexity, how do you escape the tautology?

  22. bornagain,

    I don’t see how the second law of thermodynamics can be applied here. It is very specific to energy systems, and in order to use that concept in a different field I think it has to be shown how and why it should apply. Basically, Genetic Entropy is about information, while actual entropy is about the effect of heat on particles (very basically). To say that Genetic Entropy is based on foundational principles because it co-opts a term from a completely unrelated branch of science is ridiculous. To me, this is no more than “Fashionable Nonsense”

    http://en.wikipedia.org/wiki/F.....of_science

    of the kind Dr. Alan Sokal so easily dismissed.

    The evident absence of genetic entropy can’t be explained if the mutation rate Sanford uses is correct, but it can be easily explained if the mutation rate in eukaryotes is the commonly given one-in-one-billion chance per nucleotide. P. falciparum’s genome size is about 23 million nucleotides. Thus, with an error rate of 1 in 10^9, we can expect on average that 97% of all P. falciparum replications will be perfectly error-free copies. In mammalian cells, with genomes roughly 100 times larger, we can expect only 3% of replications to be perfect copies. This great disparity probably explains why P. falciparum’s genome is immune to genetic entropy. ID still explains why P. falciparum failed to evolve any novel complexity: intelligent agency is the only mechanism reasonably capable of generating novel biological complexity. P. falciparum did exactly what we expect in the absence of input from intelligent agency.
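The arithmetic in the comment above can be checked directly. This sketch uses the commenter’s own assumed figures (a 1-in-10^9 per-nucleotide error rate and a 23-million-base genome), not independently verified values:

```python
# Probability that an entire genome replicates with zero errors, assuming
# independent errors at a rate of 1e-9 per nucleotide (the comment's figure).
error_rate = 1e-9

def p_error_free(genome_size: int) -> float:
    return (1 - error_rate) ** genome_size

# P. falciparum: ~23 million nucleotides
print(round(p_error_free(23_000_000), 3))      # 0.977 -> ~97% perfect copies

# A genome exactly 100x larger:
print(round(p_error_free(2_300_000_000), 3))   # ~0.1  -> ~10% perfect copies
```

Note one wrinkle: at exactly 100x the genome size, this calculation gives about 10% error-free copies, not the 3% quoted; a 3% figure corresponds to a genome of roughly 3.5 billion bases, closer to an actual mammalian genome.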

  24. leo,
    Entropy applies to all complex material systems; i.e., all complex material systems will eventually degrade into equilibrium! In its broader meaning for what we are talking about, entropy means that complex things such as Space Shuttles and genomes will not assemble themselves. It is a commonsense inference, as well as a foundational one!
    I know evolutionists try to elude this direct inference by referring to the high rhetoric of closed and open systems. YET, since complex specified information is indeed encoded on a complex material system (DNA), scientifically, our first and foremost presumption will be that the complex specified information (CSI) will degrade in accordance with the material degradation (entropy) of the medium it is encoded on!

    This is a first-inference postulation from basic principles of science!

    The second law of thermodynamics and conservation of information (and thus Genetic Entropy) are overriding first principles of science that have primary authority; i.e., their inferences are considered valid and can only be overcome by hard evidence. You cannot refute a first principle of science by alluding to high rhetoric! Science runs on evidence, not high rhetoric!
    You MUST conclusively demonstrate the generation of complex specified information in a material system by totally natural means in order to overcome this inference to Genetic Entropy!

  25. DaveScot,
    I agree with you totally on your reasoning! I would love to see the foundational work done on Genetic Entropy, further clarifying what I truly believe is a foundational principle for biology.
    I believe once the mathematical models are refined for Genetic Entropy, this will clear up a lot of the garbage that evolution has generated and reveal important insights into biology!

  26. bornagain,

    You can’t simply state that entropy applies to complex material systems; you have to actually prove it. Entropy does not mean that complex things can’t self-assemble. Entropy is simply a MEASURE of disorder. In terms of DNA, entropy can apply to the physical medium (i.e., the degradation of the DNA can be measured in terms of entropy); that can be derived from the statistical equation. But to apply it to the information encoded therein is taking a term out of its intended context, and that cannot be derived from the definition (or at least, I have yet to see it; if you could show me, I would be more than happy to admit that I am wrong).

  27. Leo,
    http://www.iscid.org/papers/Se.....012304.pdf

    of special note:
    But the Earth is an open system, and it is often argued that any increase in order is allowed in an open system, as long as the increase is “compensated” somehow by a comparable or greater decrease outside the system. S. Angrist and L. Helper [3], for example, write, “In a certain sense the development of civilization may appear contradictory to the second law… Even though society can effect local reductions in entropy, the general and universal trend of entropy increase easily swamps the anomalous but important efforts of civilized man. Each localized, man-made or machine-made entropy decrease is accompanied by a greater increase in entropy of the surroundings, thereby maintaining the required increase in total entropy.”

    According to this logic, then, the second law does not prevent scrap metal from reorganizing itself into a computer in one room, as long as two computers in the next room are rusting into scrap metal–and the door is open. The spectacular increase in order seen here on Earth does not violate the second law because order is decreasing throughout the rest of this vast universe, so the total order in the universe is surely still decreasing.

    So I wrote a reply, “Can ANYTHING Happen in an Open System?” [4] to my critics, which was published in the Fall 2001 issue of The Mathematical Intelligencer. In that reply, I first showed (see Appendix) that the second law does not simply require that any increase in thermal order in an open system be compensated for by a decrease outside the system; it requires that the increase in thermal order be no greater than the thermal order entering the open system.

    Leo, he has written extensively on this if you want to check out his other writings:
    http://www.iscid.org/boards/ub.....00038.html

  28. leo, now can you demonstrate the generation of CSI by natural means, thus establishing evolution as valid?

    “In a certain sense the development of civilization may appear contradictory to the second law… ”

    But of course, in the REAL sense, it isn’t at all.

    “Each localized, man-made or machine-made entropy decrease is accompanied by a greater increase in entropy of the surroundings, thereby maintaining the required increase in total entropy.”

    Exactly.

    “According to this logic, then, the second law does not prevent scrap metal from reorganizing itself into a computer in one room, as long as two computers in the next room are rusting into scrap metal–and the door is open.”

    Actually, it is nothing like that. Comparing DNA to something like a computer, book, or airplane is comparing a single molecule to a mixture of many different types of molecules, something far more complex. Furthermore, no one knows if DNA was made in one step, as he is proposing, but books etc. are much more complex and took many, many more steps to form.

    He does a lot of equating thermal order with any order, and they are not the same thing.

    Now, the one interesting thing that he does say is

    “This is inexplicable–I don’t see any reason why all living organisms do not constantly decay into simpler components–as, in fact, they do as soon as they die.”

    Now, of course we all have mechanisms of self-repair, both macroscopic and microscopic. But the interesting question is how this occurred before such systems evolved. We would have to know the degradation rate of DNA (or whatever the system of inheritance was) and compare that to the rate of replication for that system. If the information could be passed on prior to degradation of the physical medium, then there would not be a problem.

    “Now can you demonstrate the generation of CSI by natural means thus establishing evolution as valid?”

    Can I ask: Whose definition are we talking about? Orgel or Dembski?

  30. DaveScott,

    As I was thinking about your comment in 11 above, it occurred to me that perhaps our difference lies in the fact that you overlooked my use of the term “relatively” in the following sentence:

    “Another corollary to ID is that a particular organism’s genetic code will be relatively stable over many generations.”

    A manmade system may incorporate redundancy and error correction mechanisms to promote stability. Nevertheless, the manmade system will always be only “relatively stable,” not absolutely stable. In the same way, the biological systems we observe that have redundancy and error correction mechanisms built in are also only “relatively stable.” I do not claim they are absolutely stable. In the larger picture, I see no conflict between my view stated here and genetic entropy.

    I think I am describing something real here and not simply erecting a semantic dodge to your criticism. Not being a specialist in the area, however, I am open to being shown I am wrong.

  31. Another thought comes to mind regarding the card deck analogy. Keep in mind that I’m an engineer, not a scientist, and my knowledge of science is predictably superficial, although probably better than the average layman’s. My primary exposure is through the Internet and the popular press (Scientific American, e.g.). I do work with statistics and probability, however.

    If we encounter a deck of cards (or any other group of 52 unique objects), the probability that it will be in *any* particular order is 1 in 52!. Thus it seems possible to say, on encountering even a well-shuffled deck, that some sort of miracle or intelligent agency must have been involved, because there was only a 1 in 52! chance that the cards could be in that particular order. In other words, it seems to me that the argument from probability is being misused here, because the order the cards are in doesn’t tell us anything about how they got that way. A seemingly random-ordered deck might have been deliberately arranged, and what appears to be a deliberately ordered deck might have happened randomly.

  32. Mickey, you are making a fairly common mistake.
    Go here to see the answer to your comment: http://www.uncommondescent.com.....-homework/

  33. Leo,
    Seeing as I am not that literate in math, could you just show me complexity being generated in the real world that would violate the concrete limit of two protein/protein binding sites set by Dr. Behe in The Edge of Evolution.

  34. Bah, 100,000 years in the future. Might as well say a zillion years in the future.

    We’ll all have been raptured by then, and only the servants of satan will be left behind.

  35. Though I haven’t read The Edge of Evolution, I was looking up this limit that he postulated, and the review in Science would likely do a better job (clearer, with more knowledge in that specific area) than I, a lowly cell biologist, could.

    http://www.sciencemag.org/cgi/.....1427#ref10

    Please, if this is not what you are looking for, let me know.

  36. BarryA,

    That post of Dr. Dembski’s doesn’t seem to address my question. While he references what he calls “probabilistic resources” in correcting a critic (and rightly so, it seems), my point in responding to DaveScot was that his card deck analogy didn’t seem apropos of my question to you, which is in comment #21 above.

  37. Leo,
    Behe’s response to Sean Carroll in Science:

    http://www.amazon.com/gp/blog/.....EF4DT51SV2

    Almost the same day that The Edge of Evolution was officially released Science published a long, lead review by evolutionary developmental biologist Sean Carroll, whose own work I discuss critically in Chapter 9. The review is three parts bluster to one part substance, which at least is more substance than Jerry Coyne’s essay.

    Here I’ll ignore the bluster and deal with the substantive points. Carroll first covers his rhetorical bases by warning readers that “Unfortunately, [Behe’s] errors are of a technical nature and will be difficult for lay readers, and even some scientists (those unfamiliar with molecular biology and evolutionary genetics), to detect. Some people will be hoodwinked. My goal here is to point out the critical flaws in Behe’s key arguments and to guide readers toward some references.” So, you see, if Carroll’s reasoning doesn’t sound right, well, maybe that’s because you, dear reader, are too slow to understand him. If that’s the case, you’re supposed to just take his word for it.

    Unfortunately, his word is demonstrably questionable. He claims that

    Behe’s chief error is minimizing the power of natural selection to act cumulatively… Behe states correctly [my emphasis] that in most species two adaptive mutations occurring instantaneously at two specific sites in one gene are very unlikely and that functional changes in proteins often involve two or more sites. But it is a non sequitur to leap to the conclusion, as Behe does, that such multiple amino acid replacements therefore can’t happen.

    But I certainly do not say that multiple amino acid replacements “can’t happen”. A centerpiece of The Edge of Evolution is that it can and did happen. I stress in Chapter 3 that in the case of malarial resistance to chloroquine, multiple necessary mutations did happen in the membrane protein PfCRT. I also of course emphasize that it took a huge population size, one that would not be available to larger organisms. But Carroll seems uninterested in making distinctions.

    Carroll cites several instances where multiple changes do accumulate gradually in proteins. (So do I. I discuss gradual evolution of antifreeze resistance, resistance to some insecticides by “tiny, incremental steps — amino acid by amino acid — leading from one biological level to another”, hemoglobin C-Harlem, and other examples, in order to make the critically important distinction between beneficial intermediate mutations and detrimental intermediate ones.) But, as Carroll might say, it is a non sequitur to leap to the conclusion that all biological features therefore can gradually accumulate. Incredibly, he ignores the book’s centerpiece example of chloroquine resistance, where beneficial changes do not accumulate gradually.

    As a “second blunder”, he asserts I overlook proteins that bind to “short linear peptide motifs” of two or three amino acids. I’ll get to that in a second. Notice, however, that here he is writing simply of a sub-class of protein binding sites, and never gets around to dealing with the question of how the majority of binding sites, those with interacting folded domains, developed. I assume that’s because he has no answer.

    Carroll lets his imagination run wild. He thinks it would be child’s play for random processes to develop binding sites, at least for the sub-category of short peptide motif binding:

    Very simple calculations indicate how easily such motifs evolve at random. If one assumes an average length of 400 amino acids for proteins and equal abundance of all amino acids, any given two amino acid motif is likely to occur at random in every protein in a cell.

    Wow, every protein in the cell will have a binding site! Methinks Carroll has just stumbled over an embarrassment of riches. If every protein (or even a large fraction of proteins) had such a binding site, then binding would essentially be nonspecific. (It would be much like, say, the case of the digestive enzyme trypsin, which binds and cuts proteins wherever there is the amino acid lysine or arginine.) As I make clear in The Edge of Evolution, the problem the cell faces is not just to have protein binding sites (which could simply be large hydrophobic patches), but to bind specifically to the right partner.

    In fact, if one takes the trouble to look up the references Carroll cites, one sees that a short amino acid motif is not enough for function in a cell. For example, Budovskaya et al. (Proc. Nat. Acad. Sci USA 102, 13933-8, 2005) show that the majority of proteins in the yeast Saccharomyces cerevisiae containing a motif recognized by a particular protein kinase were not phosphorylated by the enzyme. What does that mean? It just means that the simple motifs, while necessary for binding, are not sufficient. Other features of the proteins are necessary, too, features which Sean Carroll ignores.

    In his enthusiasm Carroll seems not to have noticed that, as I discuss at great length in my book, no protein binding sites — neither short linear peptide motifs nor any other — developed in a hundred billion billion (10^20) malarial cells. Or in HIV. Or E. coli. Or in human defenses against malaria, save that of sickle hemoglobin. Like Coyne, Carroll simply overlooks observational evidence that goes against Darwinian views. In fact, Carroll seems unable to separate Darwinian theory from data. He writes that “what [Behe] alleges to be beyond the limits of Darwinian evolution falls well within its demonstrated [my emphasis] powers”, and “Indeed, it has been demonstrated [my emphasis] that new protein interactions (10) and protein networks (11) can evolve fairly rapidly and are thus well within the limits of evolution.”

    Yet if one looks up the papers he cites, one finds no “demonstration” at all. Those papers show, respectively, that: A) different species have different protein binding sites (but, although the authors assume Darwinian processes, they demonstrate nothing about how the sites arose); or B) different species have different protein networks (but, again, the authors demonstrate nothing about how the networks arose). Like Jerry Coyne, Sean Carroll simply begs the question. Like Coyne, Carroll assumes whatever exists in biology arose by Darwinian processes. Apparently Darwinism has eroded Coyne’s and Carroll’s ability to separate data from theory.

    In fact, the data I cite in The Edge of Evolution is a real demonstration. While we have studied them, in a truly astronomical number of chances, a variety of microbes developed precisely none of the sophisticated cellular mechanisms that Darwinist imaginations ascribe to random mutation and selection. That data demonstrates random mutation doesn’t explain the elegance of cellular systems.

  38. Mickey, I’ll take one more run at answering your question.

    There are about 10^68 different combinations that you can make with a deck of cards. It is true that any particular shuffle will result in only one of those 10^68 combinations and is exceedingly unlikely. But that misses the point.

    R. Totten answers this objection this way:

    “The card-shuffling illustration assumes that basically ANY ordering of the cards is an acceptable outcome –and, comparing it to life-chemistry, this would be the equivalent of saying that almost any ordering of the amino acids would work to build a functional protein. So, whatever one might randomly come up with is basically “easy” to achieve –no matter how “unlikely” the probability calculations might make it seem.

    “However, the critic unwittingly brings out the correct perspective when he says we are basically looking for one “particular ordering of the cards” –because the research just previously cited in this article (esp. from Behe), points out that –in reality– only about one specific sequence of amino acids out of 10^60 possible sequences is adequate to produce a properly folding protein which could be used by actual life. The rest are junk, and useless to life.

    “Therefore –to more accurately represent the life-chemistry situation– the card-illustration should actually be restricted to say that there are only a few specific orderings of the cards which are the acceptable outcomes of the random shuffles of cards. That is, only about 24 out of the 10^68 possible outcomes will do. –For example, the only good outcomes in cards would be: a well-shuffled deck must randomly end up with all four suits in proper numerical order starting with the Ace, then the 2, then the 3, etc., on up through to the King. All four suits must be so ordered. –Specificity is required.”

    The whole article is here. http://www.geocities.com/Athen.....creat.html It is interesting reading.
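The quoted figures are easy to check: the 10^68 total is 52! (the number of distinct orderings of a 52-card deck), and the “about 24” acceptable outcomes are the 4! ways the four suits themselves can be ordered once each suit must run Ace through King. A quick sketch of the arithmetic in Python:

```python
import math

# Total distinct orderings of a standard 52-card deck: 52! ~ 8.07 x 10^67
total_orderings = math.factorial(52)

# Totten's restricted illustration: each suit must run Ace through King,
# but the four suits may come in any order relative to one another,
# giving 4! = 24 acceptable outcomes
acceptable = math.factorial(4)

print(f"52! = {total_orderings:.3e}")          # on the order of 10^68
print(f"acceptable outcomes = {acceptable}")   # 24
print(f"odds of an acceptable shuffle = 1 in {total_orderings // acceptable:.3e}")
```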

  39. Leo,

    As soon as I saw a Monty Python cartoon appearing in Sean Carroll’s review of The Edge of Evolution I stopped reading. Anyone who needs to resort to Monty Python in a scientific argument can be safely ignored as not having a leg to stand on.

  40. Leo [and others]:

    Re: Thermodynamics, information, entropy and bio-functional CSI

    Have a look at Appendix A [and its context and the onward links] in my always linked through my name, in the left column.

    I think you will find that since the nanotech of life is based on molecules which can potentially be in very large config spaces, statistical mechanical considerations — thus entropy etc — apply, and that when such systems are opened up to raw energy flows, that naturally tends to INCREASE their entropy.

    So, spontaneous origin of CSI as seen in life forms is statistically so unlikely on the gamut of the observed universe, that its probability is negligibly different from zero. The same holds for the increments in information and functionality required for the body-plan level biodiversity we observe. The odds that both originated by the sort of processes envisioned in evolutionary materialist mechanisms, are therefore so close to zero as makes no practical difference.

    So, on inference to best explanation relative to the world we actually observe [I am here underscoring that the speculative quasi-infinite cosmos as a whole models are metaphysics, not physics], agency is the best explanation of both life and biodiversity at body plan level. For, on routine and general observation, agents are the known cause of CSI.

    In turn, that traces to the classic trichotomy of causal forces as long since documented by Plato in Book X of his The Laws: chance, mechanical necessity, agency. (Excerpt is in Appendix B. I am now adding “mechanical” as I have always been a little uncomfortable with “necessity” alone. Not sure who it was I first saw using it here at UD.)

    Highly contingent situations are not dominated by [mechanical] necessity, and chance runs out of probabilistic resources once we see informational complexity greater than about 500 – 1,000 bits, as per a Dembski UPB type calculation.

    Even the genomes of the simplest life forms are about 1 Mbit long, and the human genome is about 6 gigabits, as each 4-state base pair holds up to about 2 bits of information.
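The bit figures follow from each 4-state base pair carrying at most log2(4) = 2 bits; combined with the commonly cited round figure of ~3 billion base pairs for the human genome (a figure I am supplying here, not stated above), that gives about 6 gigabits. A quick sketch:

```python
import math

# Each base pair takes one of 4 values, so it carries
# at most log2(4) = 2 bits of information
bits_per_base_pair = math.log2(4)

# Commonly cited round estimate for the human genome
human_base_pairs = 3_000_000_000

total_bits = human_base_pairs * bits_per_base_pair
print(bits_per_base_pair)             # 2.0
print(total_bits / 1e9, "gigabits")   # 6.0 gigabits
```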

    BTW, on Genetic Entropy, it seems to me that malaria parasites and bacteria replicate themselves in vast numbers and have very large populations in general, many of which will be genetically fairly close to “the original.”

    Winnowing out through the sort of functionality collapse that has been discussed would seem to be a mechanism for preserving the genome. That is, a population with the near-original information is likely to be preserved, and functionality- damaged variants which may survive for a time in niches, in the long run will not. [Cf. here the rise of hospital superbugs that can't compete with the originals in the wider world.]

    Hope that helps.

    GEM of TKI

  41. The interesting thing about the Darwinist commentators on Amazon is that they were so focused on “we must prove Behe to be wrong somehow” that they failed to realize they’re shooting themselves in the foot. If CQ resistance did indeed come about by a 2-part gradual scenario then all that does is make this example of the “all-mighty powers of Darwinian mechanisms” even more trivial than before! After all, a direct stepwise scenario is much more likely to occur than one that requires simultaneous changes or an indirect pathway. Yet even then Darwinian mechanisms have a hard time bringing about such a change even with the extremely high number of replications (in comparison to higher animals). (BTW, I would rank in order of difficulty from easiest to hardest: direct gradualist, indirect gradualist, direct multiple/simultaneous, and then a combination of gradual changes combined with indirect multiple/simultaneous)

    Now I have seen excerpts where scientists hypothesize gradualistic scenarios…

    Current evidence from transfection studies (71, 187) strongly suggests that the mechanism of P. falciparum resistance to CQ is linked to mutations in the pfcrt gene, especially the substitution of threonine for lysine at position 76. However, other mutations in the pfcrt gene at positions 72 to 78, 97, 220, 271, 326, 356, and 371, as well as mutations in other genes such as pfmdr1, might be involved in the modulation of resistance (173, 223). CQ resistance seems to involve a progressive accumulation of mutations in the pfcrt gene, and the mutation at position 76 seems to be the last in the long process leading to CQ clinical failure (53, 92).

    …but I have not seen a direct statement of certainty (anyone care to supply a link?). Yet in the minds of these Darwinists, the mere fact that other scientists are discussing other scenarios for generating CQ resistance “must” mean Behe is lying….

    This back and forth made me shake my head in exasperation:

    If you want to refute Behe on this point, what you need to prove is that (1) CQ resistance actually demands more than two mutations. (To show that potentially profitable mutations arise more frequently than Behe claims)

    A Darwinist responds:

    Which leads me to think you don’t understand what everybody is talking about in regards to CQ resistance. Behe’s false assertion is that CQ resistance requires two SIMULTANEOUS mutations to occur. The reality is that the published literature clearly shows that the mutations for CQ resistance occur gradually, one mutation at a time. No one has said anything about CQ resistance needing more than 2 mutations. For one thing, that would be HELPING Behe’s claim, not refuting it. As such, your claim that in order to “refute Behe” I would have to show that CQ resistance requires more than 2 mutations makes no sense. The whole point of this little exercise is to point out that Behe’s false assertion greatly exaggerates the difficulty in CQ resistance by claiming that both mutations have to happen simultaneously.

    A big “no duh” here…3 or more simultaneous mutations would put Darwinism in a better light.

    Another:

    However, Professor Behe does not offer a scientifically credible means of demonstrating how Intelligent Design could account for “common descent”.

    That was outside the scope of the EoE book, but I’m guessing that commentator has not bothered to read other ID writers.

    Otherwise, many of the commentators do not seem to have bothered to read what Behe had said previously:

    Incidentally, this bears on Coyne’s comment on Miller’s review that “one of the two mutations that Behe claims are ‘required’ for CQR is not actually required (Chen et al. 2003, reference accidentally omitted from Miller’s piece).” If you read that paper you see that, yes, A220S is not found in some resistant strains, as it is in most. (By the way, I was always quite careful in my book to state that A220S had been found in most strains, because I was quite aware of the several exceptions.) However, one also reads that the strains missing A220S have several other, novel mutations, which may be playing a comparable role in them that the mutation at position 220 plays in most other strains. My argument does not depend on exactly which changes are needed in the protein. Rather, the important point is that multiple changes appear to be required for resistance in the wild.

    bg77,

    could you just show me complexity being generated in the real world that would violate the concrete limit of 2 protein/protein binding sites set by Dr. Behe in Edge of Evolution.

    Concrete limit? Don’t be giving Darwinists more strawmen. It’s an “estimate” based upon observed evidence. Throughout the history of life there “might” have been instances of 3-6. Or there might be very limited scenarios where more can be accomplished. Saying it’s concrete in general goes too far.

    Oh, I noticed a basic error in the front page post:

    over many thousands of generations natural selection…will be unable to cause any increase in genetic information

    ID proponents should always be careful to not just say “information” or “complex information”. Yes, I know Barry meant complex specified information, but an increase in information in general can and does occur. Newcomers to ID not familiar with the language employed on UD might be put off by such a broad statement.

  42. Very good responses guys!

    Thanks for the correction, Patrick. I will be careful to say the tentative limit of 2 protein/protein binding sites set by Dr. Behe, and not a concrete limit.

    Thanks for the info on the Second Law, kairosfocus. I will dig through it, along with Granville Sewell’s work, later today to shore up my logic on Genetic Entropy.

    Thanks for the link to the site on abiogenesis, BarryA. There is a lot of good stuff in there that I will dig through and make use of as well.

  43. Leo,
    You called me to task to prove my assertion that Genetic Entropy is a foundational principle of science.

    To which I refer you to kairosfocus’s work On Thermodynamics, Information and Design
    http://www.angelfire.com/pro/k.....tm#thermod

    And I also refer you to Dr. Dembski’s work on Conservation of Information;
    http://cayman.globat.com/~trad.....veInfo.pdf

    I have to humbly admit that much of the math is beyond me, but I am sure that if you have any questions the authors themselves, or someone with a better grasp of the details than I, will be more than happy to answer them on this site!

  44.

    BarryA,@#38:

    We seem to be talking past one another, in that the argument you quote from R. Totten seems to assume its own conclusion, thus I still don’t know how the tautology may be logically escaped.

    If we find a deck of cards ordered by rank and suit, there is an assumption that they were ordered that way intentionally, but only because that particular order is meaningful to the observer. The cards being ordered by rank and suit is, in fact, a state that is no more or less likely than any random order.

    You have to understand that my personal faith as a Christian is not swayed in any way by my struggles to reconcile the reality of Intelligent Design with what appears to me to be attempts to force round pegs into square holes. If there’s a difference between what my faith tells me and what I actually observe, I know that what I’m *able* to observe is severely limited by my human condition.

  45. Mickey Bitsko,

    I think this may help you understand.

    What makes an event improbable in biology is that a particular order (shape space) in a particular protein is required to be generated to match the configuration of other protein shape spaces in order to accomplish a specific novel task.

    Maybe the following article will help you understand a bit better than that general description:

    The simplest bacterium ever found on earth is constructed with over a million protein molecules. Protein molecules are made from one dimensional sequences of the 20 different L-amino acids that can be used as building blocks for proteins. These one dimensional sequences of amino acids fold into complex three-dimensional structures. The proteins vary in length of sequences of amino acids. The average sequence of a typical protein is about 300 to 400 amino acids long. Yet many crucial proteins are thousands of amino acids long. Proteins do their work on the atomic scale. Therefore, proteins must be able to identify and precisely manipulate and interrelate with the many differently, and specifically, shaped atoms, atomic molecules and protein molecules at the same time to accomplish the construction, metabolism, structure and maintenance of the cell. Proteins are required to have the precisely correct shape to accomplish their specific function or functions in the cell. More than a slight variation in the precisely correct shape of a protein molecule type will be harmful to the life of the cell. It turns out there is some tolerance for error in the sequence of L-amino acids that make up some of the less crucial protein molecule types. These errors can occur without adversely affecting the precisely required shape of the protein molecule type. This would seem to give some wiggle room to the naturalists, but as the following quote indicates this wiggle room is an illusion.

    “A common rebuttal is that not all amino acids in organic molecules must be strictly sequenced. One can destroy or randomly replace about 1 amino acid out of 100 without doing damage to the function or shape of the molecule. This is vital since life necessarily exists in a “sequence—disrupting” radiation environment. However, this is equivalent to writing a computer program that will tolerate the destruction of 1 statement of code out of 1001. In other words, this error-handling ability of organic molecules constitutes a far more unlikely occurrence than strictly sequenced molecules.” Dr. Hugh Ross PhD.

    It is easily demonstrated mathematically that the entire universe does not even begin to come close to being old enough, nor large enough, to randomly generate just one small but precisely sequenced 100 amino acid protein (out of the over one million interdependent protein molecules of longer sequences that would be required to match the sequences of their particular protein types) in that very first living bacterium. If any combinations of the 20 L-amino acids that are used in constructing proteins are equally possible, then there are (20^100) = 1.3 x 10^130 possible amino acid sequences in proteins being composed of 100 amino acids. This impossibility, of finding even one “required” specifically sequenced protein, would still be true even if amino acids had a tendency to chemically bond with each other, which they don’t despite over fifty years of experimentation trying to get amino acids to bond naturally (The odds of a single 100 amino acid protein overcoming the impossibilities of chemical bonding and forming spontaneously have been calculated at less than 1 in 10^125 (Meyer, Evidence for Design, pg. 75)). The staggering impossibility found for the universe ever generating a “required” specifically sequenced 100 amino acid protein by chance would still be true even if we allowed that the entire universe, all 10^80 sub-atomic particles of it, were nothing but groups of 100 freely bonding amino acids, and we then tried a trillion unique combinations per second for all those 100 amino acid groups for 100 billion years! Even after 100 billion years of trying a trillion unique combinations per second, we still would have made only one billion-trillionth of the entire total combinations possible for a 100 amino acid protein during that 100 billion years of trying! Even a child knows you cannot put any piece of a puzzle anywhere in a puzzle. You must have the required piece in the required place! The simplest forms of life ever found on earth are exceedingly far more complicated jigsaw puzzles than any of the puzzles man has ever made. Yet to believe a naturalistic theory we would have to believe that this tremendously complex puzzle of millions of precisely shaped, and placed, protein molecules “just happened” to overcome the impossible hurdles of chemical bonding and probability and put itself together into the sheer wonder of immense complexity that we find in the cell.
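The 20^100 figure in the passage above is straightforward to verify with a quick check:

```python
# 20 possible L-amino acids at each of 100 positions in the sequence
sequences = 20 ** 100
print(f"{sequences:.2e}")  # ~1.27e+130, i.e. about 1.3 x 10^130
```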

    Instead of us just looking at the probability of a single protein molecule occurring (a solar system full of blind men solving the Rubik’s Cube simultaneously), let’s also look at the complexity that goes into crafting the shape of just one protein molecule. Complexity will give us a better indication of whether a protein molecule is, indeed, the handiwork of an infinitely powerful Creator.
    In the year 2000 IBM announced the development of a new super-computer, called Blue Gene, that is 500 times faster than any supercomputer built up until that time. It took 4-5 years to build. Blue Gene stands about six feet high, and occupies a floor space of 40 feet by 40 feet. It cost $100 million to build. It was built specifically to better enable computer simulations of molecular biology. The computer performs one quadrillion (one million billion) computations per second. Despite its speed, it is estimated it will take one entire year for it to analyze the mechanism by which JUST ONE “simple” protein will fold onto itself from its one-dimensional starting point to its final three-dimensional shape.

    “Blue Gene’s final product, due in four or five years, will be able to “fold” a protein made of 300 amino acids, but that job will take an entire year of full-time computing.” Paul Horn, senior vice president of IBM research, September 21, 2000
    http://www.news.com/2100-1001-233954.html

    In real life, the protein folds into its final shape in a fraction of a second! The computer would have to operate at least 33 million times faster to accomplish what the protein does in a fraction of a second. That is the complexity found for JUST ONE “simple” protein. Based on the total number of known life forms on earth, it is estimated that there are some 50 billion different types of unique proteins today. It is very possible the domain of the protein world may hold many trillions more completely distinct and different types of proteins. The simplest bacterium known to man has millions of protein molecules divided into, at bare minimum, several hundred distinct protein types. These millions of precisely shaped protein molecules are interwoven into the final structure of the bacterium. Numerous times specific proteins in a distinct protein type will have very specific modifications to a few of the amino acids, in their sequence, in order for them to more precisely accomplish their specific function or functions in the overall parent structure of their protein type. To think naturalists can account for such complexity by saying it “happened by chance” should be the very definition of “absurd” we find in dictionaries. Naturalists have absolutely no answers for how this complexity arose in the first living cell unless, of course, you can take their imagination as hard evidence. Yet the “real” evidence scientists have found overwhelmingly supports the anthropic hypothesis once again. It should be remembered that naturalism postulated a very simple “first cell”. Yet the simplest cell scientists have been able to find, or to even realistically theorize about, is vastly more complex than any machine man has ever made through concerted effort!! What makes matters much worse for naturalists is that naturalists try to assert that proteins of one function can easily mutate into other proteins of completely different functions by pure chance. Yet once again the empirical evidence we now have betrays the naturalists. Individual proteins have been experimentally proven to quickly lose their function in the cell with random point mutations. What are the odds of any functional protein in a cell mutating into any other functional folded protein, of very questionable value, by pure chance?

    “From actual experimental results it can easily be calculated that the odds of finding a folded protein (by random point mutations to an existing protein) are about 1 in 10 to the 65 power (Sauer, MIT). To put this fantastic number in perspective imagine that someone hid a grain of sand, marked with a tiny ‘X’, somewhere in the Sahara Desert. After wandering blindfolded for several years in the desert you reach down, pick up a grain of sand, take off your blindfold, and find it has a tiny ‘X’. Suspicious, you give the grain of sand to someone to hide again, again you wander blindfolded into the desert, bend down, and the grain you pick up again has an ‘X’. A third time you repeat this action and a third time you find the marked grain. The odds of finding that marked grain of sand in the Sahara Desert three times in a row are about the same as finding one new functional protein structure (from chance transmutation of an existing functional protein structure). Rather than accept the result as a lucky coincidence, most people would be certain that the game had been fixed.” Michael J. Behe, The Weekly Standard, June 7, 1999, Experimental Support for Regarding Functional Classes of Proteins to be Highly Isolated from Each Other

    “Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially… These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its ‘new duties’. It is at this point it will be destroyed – along with the organism carrying it.” Maxim D. Frank-Kamenetski, Unraveling DNA, 1997, p. 72. (Professor at Brown U. Center for Advanced Biotechnology and Biomedical Engineering)

    Even if evolution somehow managed to overcome the impossible hurdles for generating novel proteins by totally natural means, evolution would still face the monumental hurdles of generating complementary protein/protein binding sites by which the novel proteins could actually interface with each other in order to accomplish specific tasks in the cell (it is estimated that there are at least 10,000 different types of protein-protein binding sites in a “simple” cell). What does the recent hard evidence say about novel protein-protein binding site generation from what is actually observed to be occurring on the protein level of malaria and HIV since they have infected humans? Once again the naturalists are brutally betrayed by the hard evidence that science has recently uncovered!

    The likelihood of developing two binding sites in a protein complex would be the square of the probability of developing one: a double CCC (chloroquine complexity cluster), 10^20 times 10^20, which is 10^40. There have likely been fewer than 10^40 cells in the entire world in the past 4 billion years, so the odds are against a single event of this variety (just 2 binding sites being generated by chance) in the history of life. It is biologically unreasonable. Dr. Michael J. Behe PhD. (from page 146 of his book “Edge of Evolution”)
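The squaring step in the quoted estimate is just the multiplication rule for independent events: odds of 1 in 10^20 for one CCC-type event become 1 in 10^40 for two. A quick sketch:

```python
# Odds against one CCC-type event, per the quoted estimate: 1 in 10^20
odds_one = 10 ** 20

# Two independent such events: multiply the odds -> 1 in 10^40
odds_two = odds_one * odds_one
print(odds_two == 10 ** 40)  # True

# Behe's comparison point: fewer than ~10^40 cells in the history of life,
# so the expected number of such double events is at most about one
cells_ever = 10 ** 40
print(cells_ever // odds_two)  # 1
```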

    Mickey, I hope that helps explain why just any random event can’t be considered a complex specified event.

  46. Mickey said:

    “If we find a deck of cards ordered by rank and suit, there is an assumption that they were ordered that way intentionally, but only because that particular order is meaningful to the observer.”

    Exactly. If we find a deck of cards ordered by rank and suit, we can assume without doubt that they were ordered by agency, especially because that order is meaningful to the observer.

    The meaning isn’t arrived at after the fact. The arrangement conforms to a preexisting pattern. It’s not as if meaning is derived after the deck is shuffled.

    Only specific arrangements have meaning and could reasonably be attributed to agency. The fact that any arrangement is equally improbable is irrelevant.

    If we encounter a deck of cards (or any other group of 52 unique objects), the probability that they will be in *any* particular order is 1 in 52!. Thus it seems possible to say, in encountering even a well-shuffled deck, that some sort of miracle or intelligent agency must have been involved, because there is only a 1 in 52! chance that they could be in that particular order.

    The tautology is introduced by your imposition of a straw man. You don’t get to impose the pattern after the deck is shuffled and revealed. According to your wording of the issue, there is a probability of nearly 1 that the deck will be reordered after it’s shuffled. There’s no miracle there.

    “Sufficiently shuffling the deck will sufficiently randomize its order.”

    This is the tautology and thereby says nothing meaningful.

    “After the deck is sufficiently shuffled the deck will be ordered by rank and suit.”

    That’s the miracle, and the reason why this analogy is appropriate to CSI.

    Just to note, the rank/suit ordering of the deck is not only meaningful, it’s compressible and subject to simple semantic expression. These features are not shared by more than a few other arrangements.

  47.

    Hi Apollos–

    Suit/rank is only meaningful to an observer who recognizes its significance. Thus to say that “You don’t get to impose the pattern after the deck is shuffled and revealed” is correct, but misses the point. There must be prior knowledge of the significance of ordered relationships in order to be able to recognize them as ordered. This seems fundamental to me.

    To illustrate my point, let’s forget about decks of cards for the moment, and take some other group of 52 unique objects. Let’s say that in some isolated culture, a particular ordering of these things has some cultural or religious significance, and the members of the culture all recognize it as such. Now further suppose that some cultural anthropologist pays a visit and finds the objects lined up in a neat row. He may assume from the neatness of the display that it was deliberate, but he has no way of knowing anything about the order of the objects. With the evidence at hand, he can see a form of order–the alignment of the objects–but that’s all. He would have no basis for thinking that the placement of the objects was the result of any type of deliberate ordering–it appears random, even if it wasn’t, because he knows nothing about the significance of the order.

  48. Mickey,

    With the evidence at hand, he can see a form of order–the alignment of the objects–but that’s all. He would have no basis for thinking that the placement of the object was the result of any type of deliberate ordering–it appears to be random, even if it wasn’t, because he knows nothing about the significance of the order.

    Try reading Dembski’s books. That’s called a false negative, which is a valid minor issue with formalized design detection. But we’re really only concerned if there is a false positive.

    While some specifications are context-sensitive, other specifications are independent of culture and the like. The flagellum provides motility, for example.

  49. Mickey, but wouldn’t this arrangement still fall within a tiny minority of combinations that have significance? I think so. Your argument seems to redefine the discussion. Besides, the properties of your alter-cultural display still betray agency, even if the message is not understood.

    Whether or not some other arrangement might have a significance to another culture still doesn’t address that the probability of arriving at one of those arrangements randomly is vanishingly small. Also, I may not understand the meaning of the arrangement, but I could still identify the involvement of agency with astonishing reliability. Finding things in neat rows, when arrangement by row is not a property inherent to the objects, is a clear indicator of design.

    However, another thing to consider is the set of properties intrinsic to a rank/suit arrangement. I touched on this briefly in my previous post.

    Just to note, the rank/suit ordering of the deck is not only meaningful, it’s compressible and subject to simple semantic expression. These features are not shared by more than a few other arrangements.

    The rank/suit arrangement conforms to logical patterns that are a property of the deck’s design. A deck of cards has 4 categories of repeating indices from 1 to 13. This can be expressed this way:

    suit = "hearts", "diamonds", "spades", "clubs";
    rank = "ace", 2...10, "jack", "queen", "king";
    for(i=1; i<=4; i++)
      for (j=1; j<=13; j++)
        output(suit[i], rank[j]);

    Without logical arrangement, the expression of a deck of cards could not be reduced to code semantics. Therefore this arrangement is the logical expression of the deck's design, and is rare by nature, exhibiting properties unshared with other arrangements.

    A very small percentage of other meaningful arrangements could be expressed semantically, and are likewise compressible; however, the "random" combinations that make up the majority of possibilities will not exhibit these properties.

    This gives a tiny minority of patterns properties not shared by the majority, making design detection of these arrangements objectively possible without equivocation.

  50. reposting the code sample:

    suit = "hearts", "diamonds", "spades", "clubs";
    rank = "ace", 2...10, "jack", "queen", "king";
    for(i=1; i<=4; i++)
      for (j=1; j<=13; j++)
        output(suit[i], rank[j]);
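The same nested loop runs as written in Python, and the sketch below also makes the compressibility point explicit: the one-line rule is far shorter than the 52-card listing it regenerates (an illustrative sketch; the variable names are ad hoc, not from the comment):

```python
suits = ["hearts", "diamonds", "spades", "clubs"]
ranks = ["ace"] + [str(n) for n in range(2, 11)] + ["jack", "queen", "king"]

# One short rule regenerates the entire rank/suit-ordered deck.
ordered_deck = [f"{rank} of {suit}" for suit in suits for rank in ranks]

# The rule's text is far shorter than the listing it produces -- the
# sense in which this single arrangement is "compressible".
rule = '[f"{rank} of {suit}" for suit in suits for rank in ranks]'
print(len(ordered_deck))                          # 52
print(len(rule) < len(", ".join(ordered_deck)))   # True
```

No such short rule exists for a typical shuffled deck, which would have to be written out card by card.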

  51. Mickey:

    You say:

    “If we encounter a deck of cards (or any other group of 52 unique objects), the probability that they will be in *any* particular order is 1 in 52!. Thus it seems possible to say, in encountering even a well-shuffled deck, that some sort of miracle or intelligent agency must have been involved, because there is only a 1 in 52! chance that they could be in that particular order”

    Briefly, because this subject has been already discussed many times:
    the example of the deck of cards completely misses the point. The point of Dembski’s concept of CSI is: complexity “plus” specification. Each of the possible combinations of your deck of cards is a legitimate example of complexity, because each one is very unlikely, but only a tiny subset of combinations can be said to be specified in one way or another. Only that tiny subset exhibits CSI; the others are random.
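For scale, the 52! figure is easy to check in one line (a quick illustrative check, not part of the original comment):

```python
import math

# Number of distinct orderings of 52 unique cards; any single
# ordering therefore has probability 1/52!.
orderings = math.factorial(52)
print(orderings)  # roughly 8.07e67
```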

    Now, you can ask what specification means. That’s really the big question. The answer is not necessarily simple or final, and specification is often context-dependent, but that does not mean that clear answers have not been given. Please read Dembski, and especially his paper on specification, on his site.
    Again briefly, I’ll try to give here my simple personal view of specification, just to discuss.

    Specification is everything which allows us to “recognize” a subset of combinations of a system as not random. It has a strict relationship with the more general (and equally elusive) concept of meaning (at least in its cognitive sense).

    Specification can be of at least 3 different kinds:

    1) Pre-specification: we can recognize a pattern because we have seen it before. In this case, the pattern in itself is probably random, but its occurrence “after” a pre-specification is a sign of CSI (obviously provided that complexity is also present, but that stays true for each of the cases).

    2) Compressibility: some mathematical patterns of information are highly compressible, which means that they can be expressed in a much shorter sequence than their original form. That is the case, for instance, for numbers like 3333333333, which can be written as “10 times 3”. Such compressible patterns are usually recognizable by a conscious mind, for reasons that are probably much deeper than I can understand. In this case, specification is in some way intrinsic to the specific pattern of information; we could say that it is inherent in its mathematical properties.
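That “10 times 3” description is essentially run-length encoding, which can be sketched in a few lines of Python (illustrative only):

```python
from itertools import groupby

def run_length(s):
    """Collapse a string into (character, run-length) pairs."""
    return [(ch, len(list(group))) for ch, group in groupby(s)]

print(run_length("3333333333"))  # [('3', 10)] -- i.e. "10 times 3"
print(run_length("3141592653"))  # ten length-1 runs: no compression gained
```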

    3) Finally, there is perhaps the most important form of specification, at least for our purposes: specification by function. A few patterns of information are specified because they can “do” something very specific, in the right context. That’s the case of proteins, obviously, but also of computer programs, or in general of algorithms. In this case specification is not so much a characteristic of the mathematical properties of the sequence, but rather of what the sequence can realize in a particular physical context: for example, the sequence of an enzyme is meaningless in itself, but it becomes very powerful if it is used to guide the synthesis of the real protein, and if the real protein can exist in a context where it can fulfill its function. Function is very objective evidence of specification, because it does not depend on pre-conceptions of the observer (at least, not more than any other concept in human knowledge).

    So, this is the theoretic frame of CSI: complexity “plus” specification. And, obviously, the absence of any known mechanical explanation of the specific specified pattern in terms of necessity (that is, we are observing apparently random phenomena). The summary is:

    a) If you have a very complex pattern (very unlikely) and

    b) If no explanation of that pattern is known in terms of necessity on the basis of physical laws (in other words, if that pattern is equally likely as all other possible patterns, in terms of physical laws, and is therefore potentially random) and

    c) If that pattern is recognizable as specified, in any of the ways I have previously described:

    then

    we are witnessing CSI, and the best empirical explanation for that is an intelligent agent.

    That’s just that simple.
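Read as a checklist, the a/b/c summary above is just a three-way conjunction; here is a toy restatement in Python (my own paraphrase of the comment, not a formal definition from Dembski):

```python
def exhibits_csi(very_unlikely, explained_by_necessity, specified):
    """The a/b/c filter sketched above: complexity, no known necessity
    explanation, and specification must all hold at once."""
    return very_unlikely and not explained_by_necessity and specified

print(exhibits_csi(True, False, True))   # True  -- all three conditions met
print(exhibits_csi(True, False, False))  # False -- complex but unspecified
```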

  52. Mickey:

    Now further suppose that some cultural anthropologist pays a visit and finds the objects lined up in a neat row. He may assume from the neatness of the display that it was deliberate, but he has no way of knowing anything about the order of the objects. With the evidence at hand, he can see a form of order–the alignment of the objects–but that’s all. He would have no basis for thinking that the placement of the object was the result of any type of deliberate ordering–it appears to be random, even if it wasn’t, because he knows nothing about the significance of the order.

    Seems to me this is entirely possible and consistent with the idea of specified complexity. In your thought experiment, the arrangement is specified but the anthropologist is not able to recognize it as such. Therefore he is unable to infer intelligent agency (at least not as much as a cultural insider would).

    But as others have pointed out, in biological systems, we *do* know some things about what patterns will be meaningful: for example, arrangements that function well; and especially, ones that require all their parts to work well. So we are unlike the anthropologist; we *can* recognize the specificity of certain patterns (but there may be patterns whose meaning we’re unaware of).

Leave a Reply