Dawkins Weasel vs. Blind Search — simplified illustration of No Free Lunch theorems

I once offered to donate $100 to Darwinist Dave Thomas’ favorite Darwinist organization if he could write a genetic algorithm to solve a password. I wrote a 40-character password on paper and stored it in a safe place. To get the $100, his genetic algorithm would have to figure out what the password was. I was even willing to let him have more than a few shots at it. That is, he could write an algorithm which would propose a password, it would connect to my computer, and my computer, which had a copy of the password, would simply say “pass or fail.” My computer wouldn’t say “you’re getting closer or farther” from the solution; it would merely say “pass or fail.” But he wasn’t willing even to go that far. He declined my generous offer. :-)
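For the record, here is a hypothetical Python sketch of what the challenge amounted to (the names and parameters are mine, not part of the original offer). With a pass-or-fail oracle there is no gradient to climb, so any strategy, genetic or otherwise, degenerates into blind enumeration:

```python
import random
import string

# Hypothetical reconstruction of the challenge; SECRET stands in for
# the password I kept on paper. The oracle answers only "pass or fail",
# never "closer or farther".
ALPHABET = string.ascii_letters + string.digits  # 62 case-sensitive symbols
SECRET = "".join(random.choice(ALPHABET) for _ in range(40))

def oracle(guess: str) -> bool:
    """All my computer would reveal: pass or fail."""
    return guess == SECRET

def blind_search(attempts: int) -> bool:
    """Without "getting closer" feedback, a GA can do no better than this."""
    for _ in range(attempts):
        guess = "".join(random.choice(ALPHABET) for _ in range(40))
        if oracle(guess):
            return True
    return False

# Almost surely False: 10,000 tries against 62**40 possibilities.
print(blind_search(10_000))
```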

Dave Thomas, like Richard Dawkins, advertises the supposed mighty power of genetic algorithms, but when pressed to solve the sort of problems that are relevant to evolution, these algorithms are nowhere to be seen.

Complex functional proteins for a system are notoriously difficult to construct. They are like passwords that must be matched to the right login.

Evolving a functional protein in one context to become functional in another context is not so easy. This is akin to taking a functional password for one account and presuming we could evolve it in steps to become a functional password for another account. Thankfully this doesn’t happen, otherwise thieves could be evolving their bank account passwords to be able to compromise your bank account passwords!

In general, transitionals from one functional protein to another are not selectively favored so as to coax evolution toward a new functional target. If each attempt to evolve a new protein is met with “pass or fail” rather than “you’re getting closer or farther,” the search is effectively as blind as a random search. The evolutionary search for new functional proteins fails for the same reasons thieves cannot evolve their functional passwords into your functional passwords.

The fact that Dave Thomas declined my offer indicates deep down he understands the fallacious claims of Darwinism and that Dawkins Weasel is a misleading picture of how natural selection in the wild really works when trying to solve problems like protein evolution. He knew he couldn’t take his passwords and evolve them into mine.

Despite this, we hear evolutionists proudly proclaim “evolution doesn’t evolve proteins from scratch, it evolves them from existing ones.” This claim may look promising on the surface, but let me pose this rhetorical question to the readers. You have a functioning password that works for your account; it may even share extreme similarities to other passwords that people have for their accounts. Does that fact give you a better chance of solving their passwords over blind luck? No. But that is effectively what evolutionary biologists are saying when they say “evolution evolves one protein from another.” So if Darwinian evolution will not evolve proteins, what will? Surprise, there is a new mechanism of evolution — POOF….

But these considerations do not hinder Dawkins from advertising Weasel as the way evolution works. In contrast, as reported at UD, real evolution destroys complexity over time. The consistent finding across reported real-time or near-real-time lab and field observations is that most adaptive evolution is loss of function, not acquisition of function — Behe’s rule. In fact real evolution is worse than blind search: it can’t even hold on to the complexity that already exists, much less create it. The Blind Watchmaker would dispose of lunches even if they were free.

The No Free Lunch theorems are the formalization showing that Darwinian search is no better than blind search for problems like password solving. No Free Lunch would assert that if Dave Thomas’ genetic algorithm solved my password, he was likely privy to specialized information which a random search algorithm didn’t have. By way of analogy, in the case of Dawkins Weasel, if we view the phrase “METHINKS IT IS LIKE A WEASEL” as the target password, then Dawkins pretty much front-loaded the desired password to begin with. But Dawkins and Thomas will have no such luck if they don’t have the desired password up front.

But I didn’t give Dave the target answer, hence there was no free lunch ($100 worth) for Dave Thomas.
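A minimal Weasel-style sketch (my reconstruction, not Dawkins’s actual code) shows why Weasel succeeds: the target phrase is front-loaded into the program, and the fitness function hands back exactly the “you’re getting closer” feedback my challenge withheld:

```python
import random

# A minimal Weasel-style sketch: the target is written into the program
# up front, so the "password" is front-loaded from the start.
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate: str) -> int:
    # Letter-by-letter proximity feedback against the front-loaded target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def weasel(pop_size: int = 100, rate: float = 0.05) -> int:
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Keep the parent in the pool so fitness never regresses.
        offspring = [parent] + [
            "".join(random.choice(CHARS) if random.random() < rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        parent = max(offspring, key=fitness)
        generations += 1
    return generations

print(weasel())  # typically converges within a few hundred generations
```

Strip out the proximity feedback, leaving only pass or fail, and the same loop wanders forever.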


53 Responses to Dawkins Weasel vs. Blind Search — simplified illustration of No Free Lunch theorems

  1. NOTE #1:
    There has been some debate over whether an evolutionary system (be it bacteria or robots) can create CSI that wasn’t pre-existing. In the case of the password challenge above, it is clear that an algorithm or robot cannot reduce the uncertainty about the password, no matter what strategy is used. That uncertainty is pretty much immutable, and the consequences of the No Free Lunch (NFL) theorems are blatantly in evidence. No algorithm, Darwinian or otherwise, will get a free lunch in solving the password without sneaking in some specialized knowledge. The password challenge is a closed system, and clearly there can be no CSI increase in such closed systems. But what about open systems?

    There has been recent debate at UD over whether there can be information increase in scenarios outside of blind search, like, say, a robot studying an environment and then constructing Rube Goldberg machines.

    I support the NFL theorem’s conclusions with respect to blind search (a closed system), but I’m not quite so enthusiastic to apply NFL theorems outside of blind search (open systems). What do I mean?

    I’ve given illustrations where I think CSI can arguably be increased in an open system by AI, namely robots building Rube Goldberg machines. Whereas for a closed system, like the password problem, it is clear CSI cannot be increased by any AI system (a Darwinian algorithm is an instance of an AI system). I don’t think the debate will have resolution because, unlike closed systems, the measures of information for open systems are quite ambiguous. I tried to illustrate the information measurement problem in: The Paradox in Calculating CSI Numbers for 2000 coins. The conflicting answers in that discussion illustrate the difficulty in applying NFL to open systems. I know several of my UD colleagues will disagree with me on this issue, but the paradoxes that arise in measuring information in physical artifacts leave it unresolved.

    Even though we might say biology is in an open system, it does not mean evolution will not be challenged by the problem NFL poses for closed systems. The protein evolution problem can be made analogous to a closed system password problem, and hence Darwinian evolution is precluded as being able to evolve sufficiently complex proteins. I’m quite enthusiastic to apply NFL in that context.

    Also, I pointed out, even supposing CSI can increase in an open system, there is no empirical evidence real selection in the wild will do so. See: How Darwinists confuse Extravagant with Essential.

  2. NOTE #2
    Evolutionists will argue that evolution doesn’t work with explicit targets like Weasel. What No Free Lunch illustrates is that knowledge of the structure of the search space reduces uncertainty about the final answer; hence even if the target is not explicitly stated, there is no ultimate uncertainty about the final answer in certain cases. The algorithm effectively only used what was known; it did not somehow create knowledge out of a vacuum (doing so would be making free lunches).

    What is an illustration of having knowledge of the target without explicitly stating it? For example, if I add the integers 1 to 1000:

    1 + 2 + … + 1000 = 500,500

    I could write a Weasel-like program that will converge on the answer of “500,500” (in the way Dawkins mutated letters), or I could write a more clever Weasel-like program that uses Gauss’ formula such that nowhere will the phrase “500,500” be found explicitly in the algorithm, but still the algorithm will converge on the right answer.

    I wrote just such an algorithm, but unfortunately, the original source code is lost. All we have left on the net are traces of the original debate about what I wrote: Dave Thomas says Cordova’s Algorithm is Remarkable. The point, however, is that nowhere in my algorithm did I explicitly use the phrase “500,500”; my algorithm implicitly described the target, but I was able to do so because I had a priori knowledge of the search space that effectively reduced the uncertainty over the final answer to zero.
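    Since the original source code is lost, here is only a sketch of the idea, my reconstruction with illustrative names. The literal answer never appears in the program; the fitness function encodes Gauss’ formula n(n+1)/2, and a mutate-and-select loop converges on the sum anyway:

```python
import random

N = 1000  # summing the integers 1..N

def fitness(x: int) -> int:
    # Gauss's formula n(n+1)/2 -- the target is implicit, never spelled out.
    return -abs(x - N * (N + 1) // 2)

def converge(start: int = 0, steps: int = 1_000_000) -> int:
    x = start
    for _ in range(steps):
        # Mutate by a random step; keep the mutant if it is no worse.
        mutant = x + random.choice([-1000, -100, -10, -1, 1, 10, 100, 1000])
        if fitness(mutant) >= fitness(x):
            x = mutant
        if fitness(x) == 0:
            break
    return x

print(converge())  # converges on the Gauss sum for N = 1000
```

    The search “finds” the answer only because the a priori knowledge baked into the fitness function already reduced the uncertainty to zero.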

  3. NOTE #3

    In the case of my challenge, a 40-character case-sensitive alphanumeric password has 62^40 possible configurations, which yields a Shannon uncertainty (entropy, information) of:

    log2(62^40) = 238 bits

    If I cheated by giving Dave Thomas the password up front, he could easily write a Weasel-like program because he would know the target in advance. But in that case, there would really be no uncertainty, and if there is no uncertainty, his algorithm would have no more insight than the information that was front-loaded. If I gave him the information up front, there would really be only one final answer (the correct answer) for his algorithm, and thus the Shannon uncertainty in that case would be:

    log2(1) = 0 bits
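    The arithmetic above is quick to check:

```python
import math

# Checking the numbers above: 40 case-sensitive alphanumeric characters
# drawn from a 62-symbol alphabet (a-z, A-Z, 0-9).
configurations = 62 ** 40
uncertainty_bits = 40 * math.log2(62)   # log2(62**40)
print(round(uncertainty_bits))          # ~238.17, the 238 bits quoted above

# With the password handed over up front, only one outcome remains:
print(math.log2(1))                     # 0.0 bits
```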

  4. Very interesting post. I need to spend some time to digest it because it touches on things that are dear to my heart such as the limits of artificial intelligence.

  5. SC: As in isolated islands of function in a vast sea of non-functional configs. Where also the available time and resources are such that only a negligibly small fraction of possibilities can be blindly searched through chance and/or necessity. For 500 bits, and solar-system-scale resources [10^57 atoms, ~10^17 s, one search per atom per 10^-14 s, effectively the fastest chemical reaction rate], my back-of-envelope estimate is that the search stands to the space as one straw-sized sample to a cubical haystack 1,000 light years across, about as thick as our galaxy at its thickest. Thus, Chi_500 = i*s – 500, functionally specific bits beyond the threshold. The predictable reaction is stout resistance and refusal to take the issue seriously — this is essentially what we find in NFL. Mix in singleton proteins aplenty, the OOL challenge, missing body-plan evo links, and the lack of observed capacity of mechanisms to do the job, and there is a serious challenge to answer to. Hotly denied of course. KF

  6. CentralScrutinizer:

    WEASEL is a front-loaded program with a guaranteed outcome.
    But to be fair, Dawkins himself acknowledged that WEASEL is not actually an analog of Darwinian evolution but merely an illustration of how small changes might accumulate by whatever means over time in a system…

  7. CentralScrutinizer:

    …towards a certain outcome.

  8. I once offered to donate $100 to Darwinist Dave Thomas’ favorite Darwinist organization if he could write a genetic algorithm to solve a password. I wrote a 40-character password on paper and stored it in a safe place. To get the $100, his genetic algorithm would have to figure out what the password was … Dave Thomas, like Richard Dawkins, advertises the supposed mighty power of genetic algorithms, but when pressed to solve the sort of problems that are relevant to evolution, these algorithms are nowhere to be seen.

    What? How on earth is this relevant to evolution?

  9. How on earth is this relevant to evolution?

    It’s not as such or only peripherally. Sal’s after Winston Ewert’s job or better!

    Ewert didn’t cope too well with Lizzie Liddle

    Sal thinks he can do better!

  10. “How on earth is this relevant to evolution?”

    Evolution Vs. Functional Proteins – Where Did The Information Come From? – Doug Axe – Stephen Meyer – video
    http://www.metacafe.com/watch/4018222/

    Stephen Meyer – Proteins by Design – Doing The Math – video
    http://www.metacafe.com/watch/6332250/

    “Biologist Douglas Axe on Evolution’s (non) Ability to Produce New (Protein) Functions ” – video
    Quote: It turns out once you get above the number six [changes in amino acids] — and even at lower numbers actually — but once you get above the number six you can pretty decisively rule out an evolutionary transition because it would take far more time than there is on planet Earth and larger populations than there are on planet Earth.
    http://intelligentdesign.podom.....5_14-07_00

    Doug Axe PhD. on the Rarity and ‘non-Evolvability’ of Functional Proteins – video (notes in video description)
    http://www.metacafe.com/watch/9243592/

  11. What? How on earth is this relevant to evolution?

    If genetic algorithms can’t solve passwords, why should they be expected to solve complex proteins? It’s the same class of problem, except maybe my password problem was easier!

  12. But evoplutionary pcesses don’t find particular solutions like passwords. they find any solution!

  13. Lets try that again with spell-check:

    But evolutionary processes don’t find particular solutions like passwords. They find any solution!

  14. But evolutionary processes don’t find particular solutions like passwords. They find any solution!

    So evolutionary processes are incapable of finding very specified, singular ‘solutions’?

  15. But evoplutionary pcesses don’t find particular solutions like passwords. they find any solution!

    Nope, they don’t even find those, they throw them away (Behe’s rule).

    Two cases:

    1. protein doesn’t exist, but the rest of the system already exists, but it needs the protein to function

    2. protein exists already, and we build a system to utilize it

    The difficulty of case #1 is easy to see by analogy to the password login.

    Case #2 is actually worse. You form a password first, then you have to build a computer system to utilize it!

    Just because there are an infinite number of solutions to a problem doesn’t mean the probability of solving it is far from zero. There are an infinite number of ways to write solutions to Einstein’s field equations; that doesn’t mean you’ll find them. :-)

  16. But evolutionary processes don’t find particular solutions like passwords. They find any solution!

    So evolutionary processes explain anything they are proposed to explain.

  17. Nope, they don’t even find those, they throw them away (Behe’s rule).

    Pin your colours to that mast if you want Sal. Disappointment awaits. Richard Lenski’s work shows the power of evolutionary processes in an asexual environment. Given eukaryotes and sexual reproduction, the sky’s the limit!

  18. But even when writing password/login pairs, the designer has access to both parts (the login and the password).

    In the case of proteins, the architecture of the necessary component is unknown. For example, a living creature may need insulin. The requirements of what the insulin protein needs to do are clear; actually constructing it from scratch is pretty difficult. Even we humans, with all our technology, cannot resolve the protein folding problems in sufficient detail. It’s combinatorially prohibitive; like passwords, it doesn’t lend itself to simple laws.

    So add that complication to the two cases in my previous comment, and that is why evolutionary algorithms (be they biological or Weasels in Dawkins’ computer) cannot resolve them in geological time.

  19. So evolutionary processes explain anything they are proposed to explain.

    Nope. What an odd question! Evolutionary processes are not searches. Solutions lie in wait and trap unsuspecting populations into new niches. It’s so unfair!

  20. Regarding Sal’s comment #1, it’s unclear to me how AI can increase information at all, since an algorithm is essentially a necessity mechanism — a law — where sufficient conditions produce a reliable, repeatable result given identical input from an initial starting state s0.

    This automaton is fascinating: see Jaquet-Droz The Writer, and Video: The Writer. It produces a handwritten message with ink and quill on paper, and it’s a product of clock-making genius. However it doesn’t actually produce any information, rather it’s programmed with cams and cam followers to transfer preexisting information from one form/medium to another.

    At best it seems that an algorithmic system might incorporate some sort of true randomness, however that would merely introduce arbitrary inputs at prespecified decision points, it doesn’t change the necessary mapping of inputs to outputs, so F(A) → B, regardless of whether A is random or not, and we still have a necessity mechanism — a law contrived from a contingent arrangement of matter. Certainly such an algorithm could be programmed to rewire itself on the fly, so that F(A) → C instead of B at some point in the execution, but that itself would also be the result of a necessity mechanism, or at best a random occurrence, which would limit the new information it could plausibly generate to somewhere around the universal probability bound.

    (I’m attributing law-like properties to algorithms and physical automata because in all cases it would seem that sufficient antecedent conditions produce a reliable consequent.)
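    The point about algorithms as necessity mechanisms can be put in a toy sketch (illustrative names only): fix the antecedents, including any random seed, and the consequent is fixed too.

```python
import random

def F(a: int, seed: int) -> int:
    # Even the "random" part is pinned down by its seed, so the mapping
    # from antecedents (a, seed) to the consequent is law-like.
    rng = random.Random(seed)
    return a * 2 + rng.randint(0, 9)

# Identical inputs from the same starting state give a repeatable result.
print(F(21, seed=0) == F(21, seed=0))  # True
```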

  21. So evolutionary processes are incapable of finding very specified, singular ‘solutions’?

    Exactly, because as I said, evolutionary processes are not searches.

  22. Oops messed up HTML

    So evolutionary processes are incapable of finding very specified, singular ‘solutions’?

    Exactly, because as I said, evolutionary processes are not searches.

  23. Let’s make it a physical “problem”.

    You set the password as the combination to your briefcase, and you have something of value in the briefcase.

    How might evolution “solve the problem”?

    Maybe it will invent a hacksaw. Or maybe it will invent some acid that dissolves the lock on the briefcase.

    Evolution is not a search. Confining it to a fixed formal setup is unrealistic.

  24. as to:

    “But evolutionary processes don’t find particular solutions like passwords. They find any solution!”

    Well ‘any solution’ (not necessarily the solution to the immediate problem evolution NEEDS a solution for at a particular time) would still be a 1 in 10^70 to a 1 in 10^77 chance:

    Proteins Did Not Evolve Even According to the Evolutionist’s Own Calculations but so What, Evolution is a Fact – Cornelius Hunter – July 2011
    Excerpt: For instance, in one case evolutionists concluded that the number of evolutionary experiments required to evolve their protein (actually it was to evolve only part of a protein and only part of its function) is 10^70 (a one with 70 zeros following it). Yet elsewhere evolutionists computed that the maximum number of evolutionary experiments possible is only 10^43. Even here, giving the evolutionists every advantage, evolution falls short by 27 orders of magnitude.
    The theory, even by the evolutionist’s own reckoning, is unworkable. Evolution fails by a degree that is incomparable in science. Scientific theories often go wrong, but not by 27 orders of magnitude. And that is conservative.
    http://darwins-god.blogspot.co.....d-not.html

    Estimating the prevalence of protein sequences adopting functional enzyme folds: Doug Axe:
    Excerpt: The prevalence of low-level function in four such experiments indicates that roughly one in 10^64 signature-consistent sequences forms a working domain. Combined with the estimated prevalence of plausible hydropathic patterns (for any fold) and of relevant folds for particular functions, this implies the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences.
    http://www.ncbi.nlm.nih.gov/pubmed/15321723

    Correcting Four Misconceptions about my 2004 Article in JMB — May 4th, 2011 by Douglas Axe
    http://www.biologicinstitute.o.....article-in

    Show Me: A Challenge for Martin Poenie – Douglas Axe August 16, 2013
    Excerpt: Poenie want to be free to appeal to evolutionary processes for explaining past events without shouldering any responsibility for demonstrating that these processes actually work in the present. That clearly isn’t valid. Unless we want to rewrite the rules of science, we have to assume that what doesn’t work didn’t work.
    It isn’t valid to think that evolution did create new enzymes if it hasn’t been demonstrated that it can create new enzymes. And if Poenie really thinks this has been done, then I’d like to present him with an opportunity to prove it. He says, “Recombination can do all the things that Axe thinks are impossible.” Can it really? Please show me, Martin!
    I’ll send you a strain of E. coli that lacks the bioF gene, and you show me how recombination, or any other natural process operating in that strain, can create a new gene that does the job of bioF within a few billion years.
    http://www.evolutionnews.org/2.....75611.html

    But hey, even if we take the extremely unrealistic low-end probability estimate of evolutionists (1 in a trillion) for finding a specific protein domain, we would still be faced with this completely absurd scenario:

    How Proteins Evolved – Cornelius Hunter – December 2010
    Excerpt: Comparing ATP binding with the incredible feats of hemoglobin, for example, is like comparing a tricycle with a jet airplane. And even the one in 10^12 shot, though it pales in comparison to the odds of constructing a more useful protein machine, is no small barrier. If that is what is required to even achieve simple ATP binding, then evolution would need to be incessantly running unsuccessful trials. The machinery to construct, use and benefit from a potential protein product would have to be in place, while failure after failure results. Evolution would make Thomas Edison appear lazy, running millions of trials after millions of trials before finding even the tiniest of function.
    http://darwins-god.blogspot.co.....olved.html

    But that absurd scenario is not what we find in biological life; instead we find an extreme level of fidelity for protein synthesis:

    The Ribosome: Perfectionist Protein-maker Trashes Errors
    Excerpt: The enzyme machine that translates a cell’s DNA code into the proteins of life is nothing if not an editorial perfectionist…the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products… To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is “shocking” and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis.
    http://www.sciencedaily.com/re.....134529.htm

    And exactly how is the evolution of new life forms supposed to ‘randomly’ occur if it is prevented from ‘randomly’ occurring to the proteins in the first place?

    Indeed, I want to know where this marvel of the ribosome, the only known protein factory in the world, came to be in the first place:

    LIFE: WHAT A CONCEPT!
    Excerpt: The ribosome,,,, it’s the most complicated thing that is present in all organisms.,,, you find that almost the only thing that’s in common across all organisms is the ribosome.,,, So the question is, how did that thing come to be? And if I were to be an intelligent design defender, that’s what I would focus on; how did the ribosome come to be?
    George Church – Harvard Wyss Institute
    http://www.edge.org/documents/.....index.html

    Honors to Researchers Who Probed Atomic Structure of Ribosomes – Robert F. Service
    Excerpt: “The ribosome’s dance, however, is more like a grand ballet, with dozens of ribosomal proteins and subunits pirouetting with every step while other key biomolecules leap in, carrying other dancers needed to complete the act.”
    http://creationsafaris.com/cre.....#20091010a

    Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems – 2012
    David J D’Onofrio1*, David L Abel2* and Donald E Johnson3
    Excerpt: The DNA polynucleotide molecule consists of a linear sequence of nucleotides, each representing a biological placeholder of adenine (A), cytosine (C), thymine (T) and guanine (G). This quaternary system is analogous to the base two binary scheme native to computational systems. As such, the polynucleotide sequence represents the lowest level of coded information expressed as a form of machine code. Since machine code (and/or micro code) is the lowest form of compiled computer programs, it represents the most primitive level of programming language.,,,
    An operational analysis of the ribosome has revealed that this molecular machine with all of its parts follows an order of operations to produce a protein product. This order of operations has been detailed in a step-by-step process that has been observed to be self-executable. The ribosome operation has been proposed to be algorithmic (Ralgorithm) because it has been shown to contain a step-by-step process flow allowing for decision control, iterative branching and halting capability. The R-algorithm contains logical structures of linear sequencing, branch and conditional control. All of these features at a minimum meet the definition of an algorithm and when combined with the data from the mRNA, satisfy the rule that Algorithm = data + control. Remembering that mere constraints cannot serve as bona fide formal controls, we therefore conclude that the ribosome is a physical instantiation of an algorithm.,,,
    The correlation between linguistic properties examined and implemented using Automata theory give us a formalistic tool to study the language and grammar of biological systems in a similar manner to how we study computational cybernetic systems. These examples define a dichotomy in the definition of Prescriptive Information. We therefore suggest that the term Prescriptive Information (PI) be subdivided into two categories: 1) Prescriptive data and 2) Prescribed (executing) algorithm.
    It is interesting to note that the CPU of an electronic computer is an instance of a prescriptive algorithm instantiated into an electronic circuit, whereas the software under execution is read and processed by the CPU to prescribe the program’s desired output. Both hardware and software are prescriptive.
    http://www.tbiomed.com/content.....82-9-8.pdf

  25. Chance,

    Depending on how one calculates the CSI of 2000 coins:

    http://www.uncommondescent.com.....000-coins/

    One might get different answers to the question of AI in an open environment.

    This automaton is fascinating: see Jaquet-Droz The Writer, and Video: The Writer. It produces a handwritten message with ink and quill on paper, and it’s a product of clock-making genius. However it doesn’t actually produce any information, rather it’s programmed with cams and cam followers to transfer preexisting information from one form/medium to another.

    Very similar to Eric Anderson’s question about the copies of War and Peace that inspired the 2000-coin paradox.

    One view would say more copies creates more CSI, another view would say not. You’ll notice there isn’t agreement about how much CSI there is in 2000 coins — that is the point. That is the source of the irresolution on this question…

    I wrote this thread to point out, there is one area, closed systems, where the NFL arguments clearly are in play and are relevant to evolution. I think other areas may not be so clear cut.

  26. Maybe it will invent a hacksaw. Or maybe it will invent some acid that dissolves the lock on the briefcase.

    Evolution is not a search. Confining it to a fixed formal setup is unrealistic.

    The problem is that life implements extravagant solutions (like proteins necessary for multicellular life). If life evolved, there needs to be an explanation for why it chose the extravagant solutions, and why, in the present day, evolution is disposing of extravagance rather quickly (Behe’s rule).

  27. Thanks for the post at 20, Chance,

    http://www.uncommondescent.com.....ent-481017

    I strongly agree with you.

    i.e.

    “Information does not magically materialize”
    William Dembski

  28. Neil,

    Evolution is not a search. Confining it to a fixed formal setup is unrealistic.

    What makes it unrealistic?

    Can you explain to me the sort of things we cannot or should not expect to evolve, given your objection? What is it about highly specified passwords in the situation Sal has described that makes evolution incapable of ‘finding the solution’?

    And would your position therefore be that ‘genetic algorithms’ don’t have much to do with evolution? Those are, after all, searches.

  29. There are an infinite number of ways to build houses of cards, that doesn’t make them highly probable based on random positions, orientations, and initial velocities of cards.

    The protein problem, the password problem, the house of cards problem have comparable statistics of improbability.

    Lenski’s work is an embarrassment to evolutionary claims. ID proponents love his work!

  30. Richard Lenski’s work shows the power of evolutionary processes in an asexual environment. Given eukaryotes and sexual reproduction, the sky’s the limit!

    Lenski’s experiments resulted in one(!) significant change (usage of citric acid as a carbon source) after 31,500 generations of bacteria. And this wasn’t even a “new invention,” since the bacteria already had this ability when no oxygen was present. The change was caused by a simple gene duplication that moved the citT gene to a promoter that’s active in the presence of oxygen, which then allowed the gene to be expressed in this strain.

    So, is this supposed to be the “power” of evolutionary processes?

  31. Alan Fox:

    Evolutionary processes are not searches.

    Wow. This is the first I’ve heard of this. Of course evolutionary processes are searches. They don’t know what they are looking for in particular, but they are certainly searching for anything that survives in an astronomically large search space. Needle in the haystack does not do it justice.

  32. If genetic algorithms can’t solve passwords, why should they be expected to solve complex proteins?

    That’s a pretty great non sequitur.

    If you think this example (an unknown pre-specified target and no fitness landscape) has anything to do with evolutionary biology you’ll need to explain how. And organisms don’t “solve” protein structures.

  33. In my opinion, an AI can increase information by asking simple questions such as “what would happen if I combine A with B and/or C?”

  34. If a chef creates a new secret recipe for a special sauce, does he or she create new information? I think so. I can easily imagine an intelligent robot doing the same thing.

  35. If genetic algorithms can’t solve passwords, why should they be expected to solve complex proteins?

    That’s a pretty great non sequitur.

    No, it’s not, because a functional protein is composed of an alphabet of amino-acid “letters.” Only certain combinations of letters are functional, just like only certain combinations of characters are functional passwords for a given system. Amazing you can’t see the analogy, since you know proteins are described with alphabetic characters.

    If a system needs insulin proteins, bone morphogenic proteins won’t do.

    If you think this example (an unknown pre-specified target and no fitness landscape) has anything to do with evolutionary biology you’ll need to explain how.

    Physics and engineering principles pre-specify what will and will not be functional ahead of time. They demonstrate that the space of solutions is small to begin with, and in the context of complex systems with matching parts, the solution space for a given protein is even smaller, not to mention all the other necessary features that need to be in place, such as regulation.

    And organisms don’t “solve” protein structures.

    Agreed, they either maintain existing proteins or lose them. We don’t observe organisms creating new proteins of any serious degree of complexity above their ancestral forms in the field or lab, do we?

    If the evolutionists didn’t like my single password challenge to Dave Thomas, I could have made a multiple password space of possible solutions. If the suite of passwords is collectively difficult to solve, he still won’t be getting a free lunch.
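    The pass-or-fail point can be made concrete with a toy sketch (the short target string and parameters below are illustrative assumptions, not the original challenge): the same mutation-and-selection loop succeeds quickly when the oracle says “you’re getting closer or farther”, and goes nowhere when it only says “pass or fail”.

```python
import random

random.seed(1)
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "methinks"  # illustrative 8-letter target, not the original password

def mutate(s):
    """Change one random position to a random letter."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def search(fitness, tries=20000):
    """Simple hill climber: keep a mutant if it scores at least as well."""
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    for t in range(tries):
        if current == TARGET:
            return t             # steps taken to hit the target
        child = mutate(current)
        if fitness(child) >= fitness(current):
            current = child
    return None                  # never found within the budget

# "You're getting closer or farther": count of matching positions.
closer = lambda s: sum(a == b for a, b in zip(s, TARGET))
# "Pass or fail": 1 for an exact match, 0 for everything else.
pass_fail = lambda s: int(s == TARGET)

print(search(closer) is not None)     # True: the gradient guides the climb
print(search(pass_fail) is None)      # True: flat landscape, blind search in 26^8
```

    With pass/fail feedback every non-matching string scores identically, so the climber just drifts at random through roughly 2 × 10^11 strings; with positional feedback it converges in a few hundred steps.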

  36. They demonstrate that the space of solutions is small to begin with, and in the context of complex systems with matching parts, the solution space for a given protein is even smaller, not to mention all the other necessary features that need to be in place, such as regulation.

    A very important point. Even if the (alleged) algorithm of NS stumbles upon a protein which the system needs, the new protein will be disruptive to homeostasis, and will subsequently kill the organism, if it is not perfectly regulated. A most important point, which is often ignored by our Darwinian friends.

  37. From NFL applicability:

    NFL applies only to algorithms meeting the following conditions:

    - The algorithm must be a black-box algorithm, i.e. it has no knowledge about the problem it is trying to solve other than the underlying structure of the phase space and the values of the fitness function at the points it has already visited.

    - In principle, there must be a finite number of points in the phase space and a finite number of possible fitness values. In practice, however, continuous variables can be approximated by rounding to discrete values.

    - The algorithm must not visit the same point twice. This can be avoided by having the algorithm keep a record of all the points it has visited so far, with their fitness values, so it can avoid repeated visits to a point. This may not be practical in a real computer program, but most real phase spaces are sufficiently vast that revisits are unlikely to occur often, so we can ignore this issue.

    - The fitness function may remain fixed throughout the execution of the program, or it may vary over time in a manner which is independent of the progress of the algorithm. These two options correspond to Wolpert and Macready’s Theorems 1 and 2 respectively. However, the fitness function may not vary in response to the progress of the algorithm. In other words, the algorithm may not deform the fitness landscape.

    5.3 The No Free Lunch Theorems

    NFL is not applicable to biological evolution, because biological evolution cannot be represented by any algorithm which satisfies the conditions given above. Unlike simpler evolutionary algorithms, where reproductive success is determined by a comparison of the innate fitness of different individuals, reproductive success in nature is determined by all the contingent events occurring in the lives of the individuals. The fitness function cannot take these events into account, because they depend on interactions with the rest of the population and therefore on the characteristics of other organisms which are also changing under the influence of the algorithm. In other words, the fitness function of biological organisms changes over time in response to changes in the population (of the same species and of other species), violating the final condition listed above.
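    For what it’s worth, the averaging claim at the heart of NFL can be checked by brute force on a toy space (this sketch is my own illustration, not from the quoted source): over all 16 possible fitness functions on a 4-point space, any two fixed non-revisiting search orders need the same average number of evaluations to find a maximal value.

```python
from itertools import product

def steps_to_find_one(order, f):
    """Evaluations a fixed-order black-box search spends before it first
    samples the value 1 (worst case 4 if the function is all zeros)."""
    for steps, x in enumerate(order, start=1):
        if f[x] == 1:
            return steps
    return 4

orders = {"ascending": (0, 1, 2, 3), "scrambled": (3, 1, 0, 2)}
for name, order in orders.items():
    # Average over every fitness function f: {0,1,2,3} -> {0,1}.
    total = sum(steps_to_find_one(order, f) for f in product((0, 1), repeat=4))
    print(name, total / 16)   # 1.875 for both: no order beats another on average
```

    Averaging over all functions is symmetric under any permutation of the domain, which is why every non-revisiting order comes out identical.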

  38. Coincidentally, I was just reading about Lenski’s work. It’s interesting in that actual data and observation is involved rather than simply hopeful conjectures, evolving expectations, and emphatic arm waving.

    I was left with the following impressions:

    - The bacteria adapted and optimized for their environment.

    - All the adaptations involved a loss or degradation of function or complexity.

    - No novel structures formed: no light-sensitive spots, no injection mechanisms, no new motive structures.

    - Perhaps the experiment should have involved a greater variety of environmental challenges.

    - Introducing and tracking DNA fragments from dead organisms should also be interesting.

    - Q

  39. I know the objection to the link below, so here it is. Objection overruled! Because: “the fitness function defines the problem to be solved, not the way to solve it, and it therefore makes little sense to talk about the programmer fine-tuning the fitness function in order to solve the problem.”
    Antenna using Genetic algorithm

  40. Of related note:

    Some proteins are now shown to be absolutely irreplaceable in their specific biological/chemical reactions for the first cell:

    Without enzyme, biological reaction essential to life takes 2.3 billion years: UNC study:
    In 1995, Wolfenden reported that without a particular enzyme, a biological transformation he deemed “absolutely essential” in creating the building blocks of DNA and RNA would take 78 million years. “Now we’ve found a reaction that – again, in the absence of an enzyme – is almost 30 times slower than that,” Wolfenden said. “Its half-life – the time it takes for half the substance to be consumed – is 2.3 billion years, about half the age of the Earth. Enzymes can make that reaction happen in milliseconds.”
    http://www.med.unc.edu/www/new.....=Wolfenden

    “Phosphatase speeds up reactions vital for cell signalling by 10^21 times. Allows essential reactions to take place in a hundredth of a second; without it, it would take a trillion years!” Jonathan Sarfati
    http://www.pnas.org/content/100/10/5607.abstract

  41. If you really can’t see that a search for an ultra-specific pre-specified sequence with no fitness function at all is not a very good analog for biological evolution then I really can’t see the point of corresponding.

  42. If you really can’t see that a search for an ultra-specific pre-specified sequence with no fitness function at all is not a very good analog for biological evolution then I really can’t see the point of corresponding.

    This is nonsense on the face of it.

    1. There is a fitness function: either you pass or you don’t.
    2. All sequences are pre-specified, whether or not there is only one or many.

    In biology, any existing DNA sequence is specified by virtue of having been selected by whatever method. If it can be found, it follows that it was in the search space, whether or not anybody knew it beforehand. Your objection is noted and rejected.

  43. If you really can’t see that a search for an ultra-specific pre-specified sequence with no fitness function at all is not a very good analog for biological evolution then I really can’t see the point of corresponding.

    On the contrary, the expected result of “a search for an ultra-specific pre-specified sequence with no fitness function at all” is exactly the result of what we see in the wild today.

    There is no fitness function evolving new proteins in the present day. The model I suggest accords with what is actually observed, not with what evolutionary biologists speculate happened in the past.

    Why the present day doesn’t line up with the claims of evolutionary biology is something evolutionary biologists have to contend with. But sensible theory and real-time or near-real-time data are not cooperating with their claims.

    I’d find it more palatable if evolutionary biologists admitted there was an unknown mechanism that is the primary cause of protein evolution. To keep insisting it is something like selection is to insist on something contrary to theory and empirical evidence.

  44. nullasalus:

    And would your position therefore be that ‘genetic algorithms’ don’t have much to do with evolution?

    I’m agnostic about genetic algorithms, because I have not studied them enough.

    As I see it, evolution is driven by change in the environment. How well a simulation corresponds to evolution would depend on how realistic is its simulation of environmental change.

  45. Sal:

    A few thoughts.

    Your post illustrates very well the impossibility for any random/algorithmic system to create new CSI. In essence, genetic algorithms cannot work when the complex functional information cannot be “deconstructed” into functional transitional forms whose sequence distance is in the range of random variation. This is another way of saying that they cannot work in a “pass or fail” context; they absolutely need a “you’re getting closer or farther” context.

    I have repeatedly stated that no such “you’re getting closer or farther” context exists naturally for truly complex digital information, such as protein sequences or software algorithms. The absolute lack of functional simpler intermediates for any basic protein domain is a clear empirical demonstration of that.

    The argument that evolution has no specific target is an old one, and essentially irrelevant. I have many times repeated that, in an already existing biological context, which is already built on very complex solutions, the concept of “any possible function” is of little help. Only a few very specific biological and biochemical functions will work there, and will give the reproductive advantages necessary for NS to take place. Even summing the probabilities of all possible functions that could have that result, the complexity remains so huge that RV is out of the game.

    Strangely enough, biological evolution does not use hacksaw or acid. Instead, for billions of years the emergence of new functions has been realized by the appearance of new, complex, functional proteins with highly sophisticated biochemical activities that exist nowhere else, and, even more surprisingly, by long protein cascades whose irreducible complexity is beyond any doubt for all reasonable people.

    Finally, I don’t agree with you about the “paradoxes” of assessing CSI in “open systems”. I see no difficulty at all in that. In my personal formulation of CSI (dFSCI), complete with rigorous definitions of dFSCI itself and of how to assess it in some definite system, I believe there is no such difficulty. I would like to write about that, but at the moment I don’t have the time. Maybe I can do that later.

  46. Here of course scordova is perfectly right. EAs can optimize what is specified in the fitness function. If this is unspecified or poorly specified, an EA optimizes nothing or next to nothing. Since natural selection has a poorly specified “fitness function” (survival), it is a bad EA, absolutely incapable of creating ex novo even the least system, let alone the giant functional hierarchies of organisms.

    No EA builds any new organization with a fitness function of simple “survival”. What a survival EA can do is tune a parameter of a pre-existing system to help it survive. A trivial job that has nothing to do with the creation of new CSI systems. Natural selection (understood as a survival EA) in the wild does exactly such a trivial job.

  47. Sal:

    Here I am about dFSCI. I will try to answer your question about CSI in “open systems”.

    In your other post, you write:

    Suppose I have four sets of fair coins and each set of coins contains 500 fair coins. Let us label each set A, B, C, D.

    Suppose further that each set of coins is all heads. We assert then that CSI exists, and each set has 500 bits of CSI. So far so good, but the paradoxes will appear shortly.

    I don’t think that the problem is correctly defined here.

    First of all, I would point out that dFSCI (I will use my restricted definition from now on) is not a property of an isolated object: it can only be assessed for an object in relation to a specifically defined system, which is the system that we believe generated the object in its present form.

    IOWs, a long protein exhibits dFSCI if we consider the natural system of our planet with its natural resources. Defining a system allows us to consider two important variables: the probabilistic resources of the system (IOWs, its natural capacities of implementing random variation); and potential deterministic algorithms that act in the system.

    If we correctly take into account those variables, then the assessment of dFSCI is perfectly rigorous.

    In the case of your coin sets, we must specify the setting. Are the coin sets the result of individual coin tosses, and are we sure that the coins are perfectly fair?

    These questions are especially relevant, because you presented an example of complexity which is not necessarily an example of dFSCI.

    First of all, a series of all heads is not a very good example of “function”, but let’s accept that it can be defined as a function in a very wide sense.

    But the most important point is that the information here, while certainly complex, is highly compressible. Therefore, we must really ask ourselves whether some necessity algorithm in the system could be responsible for the result. Even if we are sure that the coins were individually tossed, and that no kind of reproducible effect controlled the tossing, unfair coins are one simple possible “necessity cause”.

    What I mean is that, when considering a result whose functional complexity is highly compressible, we must be really sure that no necessity algorithm in the system can generate that result: in this case, we must be sure that the coins are perfectly fair, and that the tossing is really random, and not subject to systematic, reproducible effects.

    This also answers your “problem” about one setting being the copy of another. That is a false problem. Again, what we must ask ourselves is: was one set copied from the other by some copying mechanism already present in the system? In that case, no new information has been generated. Not so, instead, if both sets arose independently from random variation, that is, if each of them was generated independently by true random tossing of perfectly fair coins. In that case, the improbability of the whole result multiplies, and a design inference is warranted (for example, in the form of suspecting some kind of foul play).

    Let’s make the example of a protein. Indeed, there is no surprise that a protein is synthesized in a cell which already contains:

    a) The genetic information for it (the protein gene)

    b) The transcription and translation system.

    The existence of billions of molecules of hemoglobin on our planet is certainly not a sign of new dFSCI each time a new molecule is synthesized. We know how each molecule is generated in each case.

    New dFSCI appeared when, for the first time, a molecule of hemoglobin appeared on our planet: beyond the capacities of random variation, without any reasonable algorithm in the system that could create a new functional protein, and without any functional precursor. IOWs, as I have always stated, it is the appearance of new basic protein domains, at multiple, definite times in natural history, that is truly an example of new dFSCI in a system which cannot explain it, and which therefore supports a design inference.

    IOWs, a copy of Hamlet is not new dFSCI (although it implies the complexity of the copying system). But Shakespeare writing Hamlet in the beginning definitely is.

  48. gpuccio,

    So nice to hear from you!

    If I may offer a little personal history about the 2000-coin example: it is obviously analogous to the problem of homochirality in biology.

    Not only are amino acids racemic in all lab OOL experiments; even if an OOL experiment miraculously generated homochiral amino acids, they would not stay homochiral for very long. They would racemize according to various half-lives.

    In the case of survival, homochirality is critical, since proteins will not fold properly if the amino acids are not homochiral. There is probably some great importance also for homochirality in DNA.

    From a probability standpoint, for the 2000 coins all coming up heads, the number of coins is the number of bits of CSI: 2000. It will pass the EF as such.

    Analogously, for a minimal biological organism, a very large number of amino acids must be homochiral. Let’s suppose some absurdly small organism of a million amino acids; the probability of homochirality is on the order of 1 in 2^1,000,000.
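    As a quick sanity check on the scale of that number (assuming, as in the coin analogy, that each residue’s handedness is an independent 50/50 event):

```python
import math

n = 1_000_000            # assumed number of chiral amino-acid residues
# The probability of all-left-handed is 2^-n; express it as a power of ten.
exponent10 = round(n * math.log10(2))
print(exponent10)        # 301030, i.e. the probability is about 10^-301030
```

    That is, 2^-1,000,000 is roughly 10^-301030, a number with over three hundred thousand zeros after the decimal point.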

    The example is close to my heart because that was the beginning of my journey from near agnosticism to strong belief in ID and the Christian faith.

    There are also examples of design in biology that may not be functional, such as the hierarchical organization that Linnaeus and other creationists perceived.

    It might be possible in principle that some simple chemical process can induce homochirality in a pre-biotic soup, but that is only speculation, and attempts to bias the ratio of L and D amino acids result in potentially lethal conditions for life; not to mention, homochirality dissipates over time anyway (some half-lives are on the order of decades). Furthermore, Fox unwittingly demonstrated that polymerization through heat destroys homochirality.

    Like the 2000 coins heads, some might argue a simple mechanism created it, but the homochirality argument was the spark of hope that the Designer left evidence for us to discover that we were designed. He could have chosen to leave us in the dark, but he did not.

    That’s probably why I have focused a little more on these very simple, and somewhat unspectacular examples that will pass the EF, but are not necessarily considered functional. It has some personal significance to me in my journey through ID — it was my starting point.

    Here is how the paradox comes into play. What happens when we have a colony of cells that came from one cell? Now we have more homochiral molecules. Do we count this larger number as being more improbable? I do, but if I do, this raises the problem of CSI (from homochirality) growing and growing because the cells keep multiplying. One resolution to the paradox is to postulate that CSI of this variety can grow in an open system; that is the solution to the paradox that I accept.

    Obviously most IDists at UD (except maybe myself and Mapou) reject the postulate that CSI can grow in an open system via the agency of cells (which we view as AI systems).

    I posted this thread partly to affirm that I support the NFL theorems in the case of blind search. The theorems’ applicability is blatantly obvious for the problem of blind search.

    But it was also a good opportunity to point out that the application of NFL to open systems might require some consideration and caution. I don’t feel I can defend the applicability of the NFL theorems in open systems with the same force with which it can be defended in the case of blind search.

    But in sum, Dave Thomas doesn’t get a free lunch. :-)

  49. Very amusing comment by a Peter Wadeck, below, and all the more so for being entirely plausible. It is currently the last post on the page:

    ‘The problem with evolution is that it is controlled by biologists that are on the low end of the intelligence spectrum in the scientific community. I am sure you have heard Dawkins complain that there is too much mathematics coming into biology. He is complaining because a rigorous science would have to reject his theological theory of evolution.’

    Read more: http://www.ncregister.com/blog.....z2lZulxmHK

  50. The bounder!

  51. scordova #48

    Here is how the paradox comes into play. What happens when we have a colony of cells that came from one cell? Now we have more homochiral molecules. Do we count this larger number as being more improbable? I do, but if I do, this raises the problem of CSI (from homochirality) growing and growing because the cells keep multiplying. One resolution to the paradox is to postulate that CSI of this variety can grow in an open system; that is the solution to the paradox that I accept.

    Here I don’t agree. I don’t understand why you count as new CSI the increase in homochiral molecules due to the reproduction of cells. Cells have the potentiality to reproduce. This reproduction generates new homochiral molecules. No wonder. But no new organization is produced. The potentiality existing in the cells from the beginning is simply actualized. This is not an example that “CSI of this variety can grow in an open system”.
    For this reason, to the question “Do we count this larger number [of homochiral molecules] as being more improbable?” I answer “I don’t”.

    Differently, if from the colony of cells in the environment (considered as an open system) there arises a different kind of organism not present in the potentiality of the primitive cells, then we could say that “new CSI has grown in an open system”. This is what evolutionists hope for (they trust in “open systems”), but it seems to me that you and I agree that such an event is not possible.

  52. Just in time, Cornelius Hunter gives some insight into why evolutionists believed proteins could evolve so easily. Once upon a time, they thought the sequences of amino acids were random!

    If they were random, evolution would not face the problems of No Free Lunch.

    Here is Dr. Hunter’s post:
    http://darwins-god.blogspot.co.....s-and.html

  53. Rajan, by refuting NFL applicability to the problem of population genetics you shoot yourself in the foot. That is called special pleading. If the fitness function changes over time, it is an observation rather than a law by which organisms organize themselves.
    Gravity never changes; the mechanics of flight never change. These are natural laws, i.e. immutable. If the fitness function changes, then the function is not a natural law by definition but merely an ad hoc mechanism to explain the system. Stop funding evolutionary biology classes and build another Hilton Palace.

Leave a Reply