
Evolution and the NFL theorems

Ronald Meester, Department of Mathematics, VU-University Amsterdam. Click here for the paper.

“William Dembski (2002) claimed that the No Free Lunch theorems from optimization theory render Darwinian biological evolution impossible. I argue that the NFL-theorems should be interpreted not in the sense that the models can be used to draw any conclusion about the real biological evolution (and certainly not about any design inference), but in the sense that it allows us to interpret computer simulations of evolutionary processes. I will argue that we learn very little, if anything at all, about biological evolution from simulations. This position is in stark contrast with certain claims in the literature.”

This paper is wonderful! Will it be published? It vindicates what Prof Dembski has been saying all the time whilst sounding like it does not.
 
“This does not imply that I defend ID in any way; I would like to emphasise this from the outset.”
 
I love the main useful quote it is a gem!

“I will argue now that simulations of evolutionary processes only demonstrate good programming skills – not much more. In particular, simulations add very little, if anything at all, to our understanding of “real” evolutionary processes.”

“If one wants to argue that there need not be any design in nature, then it is hardly convincing that one argues by showing how a well-designed algorithm behaves as real life is supposed to do.”


241 Responses to Evolution and the NFL theorems

  1. Dr Dembski:

    There is a name for it: Grudging acknowledgement disguised as disagreement or even claimed refutation.

    Telling.

    Happy New Year when it comes.

    GEM of TKI

  2. We can only simulate that which we fully understand. And seeing that we don’t know which mutations can cause, or have caused, which changes, there is no way we can simulate biological evolution.

  3. It vindicates what Prof Dembski has been saying all the time whilst sounding like it does not.

    Be careful what you ask for. Meester also writes

    Computing probabilities in a model is one thing, but for these computations to have any implication, the models had better be very good and accurate, and it is obvious that the various models do not live up to this requirement. In particular, it is quite meaningless to compute the probability that certain aminoacids combine to produce a particular molecule, if there is no reasonable mathematical model around.

    (emphasis added)

    This would mean that the whole approach of calculating CSI of proteins is flawed too, because those probabilities can’t be calculated either.

    Bob

  4. Meester cites the 2007 paper by O. Häggström. For links see:

    Olle Häggström: Some recent papers

    See particularly the section:
    ———————–
    My debunking of some dishonest use of mathematics in the intelligent design movement:

    Another look is taken at the model assumptions involved in William Dembski’s (2002) use of the NFL theorems from optimization theory to disprove the Darwinian theory of evolution by natural selection, and his argument is shown to lack any relevance whatsoever to evolutionary biology.

    I have two versions of this paper:

    * O. Häggström: Intelligent Design and the NFL Theorems: Debunking Dembski. This is the original manuscript, of September 2005.
    * O. Häggström: Intelligent Design and the NFL Theorems. This is a revised version of March 2006 (with a minor additional revision in June 2006) which has now appeared in Biology and Philosophy 22 (2007), pp 217-230. The most striking feature of this version compared to the original one is the removal of all rhetorics, and a more narrow focus on the mathematics.

    Shortly upon publication of the latter version, a manuscript entitled “Active information in evolutionary search” by William Dembski and Robert Marks (available via Dembski’s homepage) appeared on the web, with a response to my argument. This triggered me to elaborate my point a bit further:

    * O. Häggström: Uniform distribution is a model assumption.
    ———————–
    Dembski & Marks’ paper should also be available via the Evolutionary Informatics Lab (apparently being edited.)

    I recommend keeping discussion of Meester under this blog, and beginning a new blog to discuss Häggström’s three papers.

    “As iron sharpens iron, so one man sharpens another” Proverbs 27:17
    Let the games continue.

  5. Good read. Dr. Dembski, is there an online work where you answer the critique of Haagstrom directly?

  6. Bored young men begin playing around with an inflated pigskin.

    Eventually, solely through random mutations and natural selection, a multi-billion-dollar league appears and the ball becomes a prolate spheroid of leather with a polyurethane bladder.

  7. “This does not imply that I defend ID in any way; I would like to emphasise this from the outset.”

    He should write that on a little wallet card and memorize it in the elevator. I expect he will need to keep saying it over and over.

  8. Atom: Unfortunately, the EvoInfo.org publications page is down right now. I’ve asked Robert Marks to look into that. We have a recently revised response to Haggstrom. Be looking for it there.

  9. In particular, it is quite meaningless to compute the probability that certain aminoacids combine to produce a particular molecule, if there is no reasonable mathematical model around.

    OTOH, if your claim is that amino acids can randomly form into a particular molecule and you fail to provide a reasonable mathematical model you are not practicing science.

    And note the word “randomly”.

  10. We can’t model it, we can’t calculate it, but we know it happened. Because we’re theoretical scientists!

  11. Thanks Dr. Dembski. I’ll be looking for it when it comes back up.

    Atom

  12. After being a long time lurker, both here and at the thumb, I finally have to make a comment. Why? Two reasons:

    1. I’m currently finishing up a Ph.D. in experimental evolution of baculoviruses (anyone ever heard of them here?). My thesis mainly concerns the development and validation of models of viral infection and population genetics – my first publications will be out the coming half year. Nevertheless, I’m interested in the philosophical ramifications of evolutionary biology. And I have a Christian background.

    2. I’m Dutch – as is R. Meester – and have read some of his popular articles in Dutch and heard a few debates on ID in which he has participated. But enough about me.

    What surprises me greatly is that no one has recognized that Ronald Meester is one of the people who started getting ID into the spotlight here in the Netherlands. Granted, he has always taken a somewhat agnostic position with respect to ‘ID proper’, and even more so to any religious/philosophical implications of ID. But he has really stuck his neck out in order to get people – in scientific and lay circles – thinking about ID. And he has taken a lot of flak for his stance, from both camps. To qualify his position in his latest paper as ‘grudging acknowledgement disguised as disagreement or even claimed refutation’ is skewed. If anything, Meester is a friend of the ID movement, even if he is not (or perhaps no longer) a part of it. I am by no means an ID supporter myself, but cut the man some slack. ;-)

  13. Bob O’H at 3.

    No need to worry. It seems that what Meester says is that we don’t know enough to be able to model the probability of various DNA bases or amino acids combining in various ways. To calculate probabilities, we need to know whether there are intrinsic factors or laws that make it more likely that certain combinations will occur.

    From Meester’s own stated position, he cannot possibly claim Dr Dembski is wrong. He can only claim that Dr Dembski may not necessarily be right.

  14. From Meester’s own stated position, he cannot possibly claim Dr Dembski is wrong. He can only claim that Dr Dembski may not necessarily be right.

    That’s an other than not unmeaningless statement.

    Gloppy

  15. Gloppy, that was deep — and incredibly meaningful :-) Thanks a million!

  16. Meester has this precisely backwards:

    Put in yet other words: we cannot expect a search algorithm to be efficient unless we restrict ourselves to functions f that are distinguishable from the “average” f, and I believe that this last formulation is a concise description of the importance of the NFL-theorems.

    It follows from basic results in Kolmogorov complexity that almost all of the functions f are algorithmically random. When f is algorithmically random, almost all search algorithms obtain good solutions rapidly. Intuitively, good solutions are no less common than bad ones, and the disorderly function no more hides the good solutions than it presents them.
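
    That claim is easy to check numerically. Here is a minimal sketch (my own, not from any of the papers under discussion): draw a function uniformly at random over a finite space, standing in for an algorithmically random fitness function, and count how many blind evaluations it takes to land in the top 1% of values.

```python
import random

random.seed(0)

# A "typical" function on a space of size n: values drawn uniformly at
# random, standing in for an algorithmically random fitness function.
n = 10_000
f = [random.random() for _ in range(n)]

# Value separating the top 1% of the search space.
threshold = sorted(f)[int(0.99 * n)]

def evals_to_top_one_percent(trials=1000):
    """Average number of blind evaluations until a top-1% point is hit."""
    total = 0
    for _ in range(trials):
        count = 0
        while True:
            count += 1
            if f[random.randrange(n)] >= threshold:
                break
        total += count
    return total / trials

# The waiting time is geometric with success probability 0.01, so the
# average is about 100 evaluations -- good solutions are not hidden.
avg = evals_to_top_one_percent()
```

    Blind search reaches the top 1% in roughly 100 evaluations on average, no matter which random function was drawn; the disorderly function does not conceal its good points.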

    I have no idea how good a mathematician Meester is in general, but here he has jumped to a conclusion, and he could have avoided it with a better lit review.

  17. Joseph says:

    We can only simulate that which we fully understand.

    There’s a reason simulations are often referred to as simulation models. When I was a kid, I assembled various models of airplanes. I quickly figured out that I was not learning much about how the modeled aircraft were actually manufactured. But I could have placed some of those models in a wind tunnel and learned about their aerodynamics. That is, I could have simulated the flight of a modeled aircraft and gained important insights into its performance without any knowledge whatsoever of many details essential to fabrication and operation of the aircraft.

    If you think that matters change when the simulation model is computational, you have succumbed to some unfortunate mystification of computation.

  18. That is, I could have simulated the flight of a modeled aircraft

    Yes, but what are computer models of evolution attempting to demonstrate? That evolution could have occurred without design. It’s like trying to prove you can’t paint a wall blue by painting a wall blue.

  19. Semiotic 007: “…almost all of the functions f are algorithmically random. When f is algorithmically random, almost all search algorithms obtain good solutions rapidly. Intuitively, good solutions are no less common than bad ones, and the disorderly function no more hides the good solutions than it presents them.”

    The f’s are supposed to be fitness functions (of base-pair configurations in the genome). These are supposedly algorithmically random. A given fitness function could be for visual acuity. Only a tiny part of the genome could be modified to improve or degrade this, and the changes would have to be specific ones at those particular loci, not just any changes. How could almost all search algorithms find these particular configurations rapidly? Intuitively, good solutions are vastly less common than bad ones.

  20. Semiotic 007: “If you think that matters change when the simulation model is computational, you have succumbed to some unfortunate mystification of computation.”

    Use of a physical model to investigate a problem is one thing, a computational model another. Try digitally simulating the flight dynamics of the airplane, but with some errors in the aerodynamic constants for computation of lift, drag, etc. We can only validly simulate what we understand.

  21. Maybe I’m wrong, but it seems to me that Meester’s point in all of this is that what Dembski calls the “displacement problem” is not really a problem at all, and that in nature—the ‘real’ world, not that of computers—efficient fitness functions are (I suppose that it would be more proper to say ‘have been’) found. I don’t buy his argument about the “displacement problem”. IIRC, the “displacement problem” says that any effort to find a proper ‘fitness function’ to aid in a search is itself doomed because of the extensive size of the search space of possible fitness functions. This space is extensive because what is being searched for isn’t sufficiently known.

    So it looks like Meester wants to concede that evolutionary algorithms have nothing to do with evolution because he has shown that Dembski’s “displacement problem” is an illusion and that the ‘real’ world has its own way of solving these problems.

    Again, I don’t buy Meester’s argument. I think Dr. Dembski can point out this error rather immediately.

  22. I have read most of the literature related to the NFL theorems. This paper is the worst I have ever read. I found myself wondering over and over where Meester hopes to publish it. Why in the world did he think he could waltz into a new area, read at most half of the seminal paper from 10 years ago, perhaps all of a paper from last year, and then tell the world what’s what?

    Most people who claim to have read Wolpert and Macready’s 1997 article on “No Free Lunch Theorems for Optimization” never bothered with any sections but the early ones. Meester gives absolutely no sign of knowing that he is reinventing Wolpert and Macready’s notion of “alignment” of algorithm and function. Furthermore, he’s doing a lousy job of it. Wolpert and Macready treat the information geometry of optimization with rigor Meester suggests is not possible. Meester seems to be good at attribution, and he is competent to understand what Wolpert and Macready wrote. I can only conclude he did not read all of the article. In fact, I wonder if he read the article at all, because he has not even gotten straight the first NFL theorem of Wolpert and Macready. His Theorem 1 is implied by Wolpert and Macready’s Theorem 1, but is not logically equivalent to it.

    Meester seems oblivious to the fact that there can be NFL for non-uniform distributions of functions. He seems oblivious to the fact, which came up in online discussion of NFL in 1995 and has appeared in the literature since, that, for any fixed encoding scheme, almost all functions have codewords too large for realization in the observable universe. And as I mentioned above, he is dead wrong about the difficulty of optimizing the typical function. This is not a matter of hand-waving like his, but of proof.

    Meester does write in plain language, however, which no doubt creates the illusion for many of you here that you’ve linked up with something wonderful. There’s not much in the paper that might make you lose bladder control — and that’s a big part of what’s wrong with it. Prof. dr. Meester is clearly a competent mathematician, and if he had bothered to do math, he probably would have caught most of his errors. What you’ve really linked up with is an “expert” who has ventured into seat-of-the-pants pronouncements on something he knows little about. We all make fools of ourselves sometimes, and Meester has been unlucky enough to have you advertise this lapse of his. I hope, for his sake, he does not draw buddies or incompetents for reviewers.

  23. Another way to look at this is that the materialist devises algorithms to demonstrate or simulate aspects of evolution. Dembski uses the NFLT to call foul on these algorithms and Meester agrees. Isn’t it incumbent on the Darwinist to devise a computer simulation that more accurately represents evolutionary processes? Otherwise Dembski’s NFLT refutation of the algorithms and consequently the life processes they claim to represent stands uncontested.

  24. Pav says,

    So it looks like Meester wants to conceded that evolutionary algorithms have nothing to do with evolution because he has shown that Dembki’s “displacement problem” is an illusion and that ‘real’ world has its own way of solving these problems.

    You are substituting evolutionary algorithm for evolutionary simulation. There is a long history of referring to evolutionary algorithms as “biologically inspired.” More recently, they are often referred to as biomimetic. There was a time when researchers spoke of their evolutionary algorithms as solving problems through “simulated evolution,” but they rarely intended for their findings to be of any use to life scientists. Put simply, evolutionary algorithms are employed predominantly by people with engineering goals.

    Systems like ev and Avida support simulation models. What some folks seem not to get is that every model is a simplification of the modeled entity. If it were entirely faithful, it would be a copy. This holds in mathematical modeling as well as in computational models. There was a lot Newton did not understand about mechanics, but I’d say he did well at modeling. Ev and Avida are meant to abstract from biological evolution certain salient features, and to test hypotheses that systems with those features exhibit certain qualitative behaviors observed in nature. (Both systems yield quantities, but the qualitative behavior is what is important. Meester seems not to understand this.)

    As Dr. Dembski frames the displacement problem, the meta-search is for an efficient search algorithm, not for a function. The function is given.

  25. Semiotic 007,

    On pp. 194-195, Dembski addresses a fitness function that is right out of Meester’s ex. 2, where the fitness function is ‘carefully adapted’ to the target.

    Here’s what Dembski says:

    “The collection of all fitness functions has therefore become a new phase space in which we must locate a new target (the new target being a fitness function capable of locating the original target in the original phase space). But this new phase space is far less tractable than the original phase space. … To say that E has generated specified complexity within the original phase space is therefore really to say that E has borrowed specified complexity from a higher-order phase space, namely, the phase space of fitness functions. … We have here a particularly vicious regress. …” (p.195)

    I don’t see anything at all in what Meester says/presents that overcomes this problem. It would appear that only in the mind of Meester has this problem been solved, and nowhere else.

  26. Isn’t it incumbent on the Darwinist to devise a computer simulation that more accurately represents evolutionary processes?

    Which processes? What does it mean to represent them? How do you propose to measure accuracy?

    Evolutionary systems are evidently complex nonlinear systems. We should expect them to be sensitive to initial conditions. What it means to model a complex nonlinear system in any domain is a tricky issue, particularly when there is a stochastic component. Investigators in computational quantum physics are having their problems, as well, even though their point of departure is a set of excellent mathematical models, and even though no one asks them dumb questions like “Why can’t you tell us exactly what the trajectories of all the particles in this system will be?”

    As I said above, what we hope for in evolutionary simulations is to observe, over many runs, qualitative aspects of biological evolution. When there is sensitivity to initial conditions, no initialization of a simulation is “correct,” and the only way to study the simulated entity is to run the simulation a number of times and somehow characterize the collection of observed results. There is never any way to predict over the long term the precise trajectory of the evolutionary system for a new set of initial conditions.

    My very favorite computational study with biological relevance is by David and Gary Fogel. They demonstrated that evolutionarily stable strategies, shown stable through a mathematical argument assuming an infinite population (i.e., for mathematical tractability), are anything but stable with modest-sized populations. In fact, it appears that population proportions often vary chaotically over time. The precise numbers do not matter very much. What’s important is the demonstration that when a simulation model operates under assumptions more realistic than people know how to handle mathematically, “stable” strategies are not stable. That’s a qualitative conclusion.

  27. idnet.com.au – you’ve missed a couple of points:

    1. If Meester is right and we can’t calculate the probabilities (FWIW, I’m not as pessimistic), then the Explanatory Filter can’t be used to detect design in biological systems. If Meester is right, then the EIL’s activities with regards to biological evolution are pointless.

    2. Meester can and does claim Dembski is wrong, on mathematical grounds (he agrees with Häggström’s arguments).

    Meester is saying that Dembski’s original argument is wrong (using Häggström’s critique), but he resurrects the general thrust of the “displacement problem” argument to suggest that because search algorithms in computer simulations are designed, they say nothing about evolution.

    Meester’s argument is rhetorical, rather than mathematical, and I can see a couple of approaches to critiquing it. But I expect people working on evolutionary simulations will do a better job than me, so I’ll wait and see what they say.

    Bob

  28. PaV,

    I can’t recall the context from which you’ve drawn the quote, having spent more time with Dr. Dembski’s worthier Searching Large Spaces: Displacement and the No Free Lunch Regress. My guess is that he is referring to a new set of fitness functions constructed using candidate search algorithms.

    It would appear that only in the mind of Meester has this problem been solved, and nowhere else.

    To restate what I said above, only algorithmically compressible functions “fit into” the physical universe. There’s nothing new in that observation. You simply haven’t heard about it because your information on NFL comes by way of Dr. Dembski, who has not addressed the possibility that fitness landscapes might have predictable features due to compressibility. Yossi Borenstein and Riccardo Poli of the University of Essex have explored this recently.

  29. Matteo:

    We can’t model it, we can’t calculate it, but we know it happened. Because we’re theoretical scientists!

    I had the very same thought when I first read this article. The Lenski et al. study is continually cited as an “explanation” of how evolution produces complex systems, even though not one single example from any biological system is referenced as confirming the study.

    Dawkins wrote in The Blind Watchmaker that evolution is such a “neat theory”. I can see why: we don’t know how it works, we can’t model it, we can’t make any predictions from it, but it explains all of biology! Yep, pretty neat!!

  30. I’m currently finishing up a Ph.D. in experimental evolution of baculoviruses (anyone ever heard of them here?).

    Are they viruses that infect the baculum? :-)

    Bob

  31. semiotic, re #26, I was referring to programs like ev. Ev claims to simulate the evolution of biological information. Dembski and Marks show that Ev uses additional (active) information to accomplish this task. Schneider claims on his blog (http://www-lmmb.ncifcrf.gov/~t.....og-ev.html) that Dembski and Marks misunderstand his algorithm. Meester agrees with Dembski and Marks, but only because ev is working a toy problem. So the questions are:

    1. Is ev a valid simulation of the evolution of biological information?

    2. Do Dembski and Marks successfully refute the claims of this algorithm?

    If the answers to the above questions are yes, then materialists need to try again.

  32. Behe uses Lenski’s research as the type of research that supports ID. I bet Lenski does not like that endorsement.

  33. A man cannot build a house unless he has a house in mind, and a man cannot simulate an evolutionary process unless he has evolutionary processes in mind. To use such simulations to justify Neo-Darwinism, then, is tendentious, since nature has nothing whatsoever in mind; since the divide between intellect and matter is absolute. Now if our favorite Secret Agent Man wants us to believe that this is not the real purpose of such simulations–that they are nothing more than pristine academic experiments in computer engineering–then he has simply proven Meester’s point. They tell us a great deal about the ingenuity of the programmers and little or nothing about nature. But how can we resist the temptation to suspect that he’s wearing another one of his clever disguises?

  34. dgw,

    Model validity is a matter of degree. Newton’s model of mechanics is not categorically invalid because Einstein’s model makes more accurate predictions. And model utility involves more than accuracy of its predictions. Engineers get more use from Newtonian mechanics than they do Einsteinian mechanics.

    I last read an ev paper several years ago, and I cannot say whether Marks and Dembski understand it or not.

    1. Is ev a valid simulation of the evolution of biological information?

    Even if I had read the paper last hour, I would tell you that I am not a life scientist, and that I cannot assess the biological relevance of the simulation. (That term “biological relevance” is a good one to know.)

    2. Do Dembski and Marks successfully refute the claims of this algorithm?

    Their paper relies heavily on data collected with their own software for simulation (in the sense of numerical experimentation). Unfortunately, other parties inspecting the source code found serious defects that make all of the experimental results dubious. I wish Marks and Dembski would withdraw the paper from consideration.

  35. [...] 31, 2007 · No Comments A writer named idnet.com.au over on Uncommon Descent has found what looks like a rather interesting paper by Ronald Meester.  I hope to peruse it soon and will [...]

  36. semiotic

    Models and simulations are normally used to predict how things will behave in the real world. As such they are tested in two ways:

    1) they are set up with initial conditions obtained from the history of the real world and run backward and forward from that time to see if they agree with what was produced in the real history of the world.

    2) they are set up with present conditions in the real world, run forward faster than the real world, and then their predictions are compared to reality.

    Once the model or simulation has demonstrated a capacity for accurately reproducing real world results then and only then does anyone with a lick of common sense begin using them for practical purposes like predicting hurricane paths and taking costly precautions to limit loss of life and property, or committing a microprocessor design to silicon, or anything else that people in the design world affectionately refer to as “bending metal” to describe the costly things that modeling and simulation make less costly through accurate prediction.

    The common thread here is that models and simulations, the real ones that have merit to real scientists and real engineers (true Scotsmen notwithstanding) doing things that have meaning and impact in the real world, are benchmarked against reality. Anything else is woolgathering.

    Can you describe how “simulations” or “models” (using the words very loosely) of biological evolution were benchmarked against the real world in order to assess the validity of the so-called model?

  37. I’ve just finished reading Haggstrom’s June 2006 paper which Meester relies on.

    I would direct your attention to footnote #11. This paper was written before Behe’s new book. As I see it, based on what Haggstrom says in the footnote, and based on the results Behe provides in EoE, it is now, as they say, “Game. Set. Match.”

    I won’t say any more until others have had time to put this all together.

    But for right now, let me just say this: Haggstrom’s argument against Dembski is NOT mathematical; he’s using his notion of ‘reality’ to dismiss Dembski’s arguments. (And, for the most part, it won’t ‘fly’. )

  38. allanius,

    Hypothesis: Property Z of a natural process is caused by conditions X and Y.

    Empirical test: Create conditions X and Y in a computational process, and see if the process exhibits property Z.

    Note: If parameter tweaking is required to obtain Z in the computational process, then that may lead to hypothesis revision.

    There is nothing whatsoever that precludes implementation of a “virtual laboratory” in software and conducting unbiased experiments within it. The researcher has the laboratory in mind when designing the software, not the outcome of the experiments that will be conducted in the laboratory. The situation is not one whit different from that when a chemist designs a physical lab. (In fact, today’s labs often include virtual instrumentation.)
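
    A toy instance of that pattern (my own sketch, purely illustrative): the hypothesis is that in a finite population with no selection (conditions X and Y), random drift alone fixes or eliminates a neutral allele (property Z). The virtual laboratory is a bare Wright-Fisher resampling loop, run repeatedly.

```python
import random

random.seed(3)

def generations_until_fixation(n=100, p0=0.5, max_gens=100_000):
    """Neutral Wright-Fisher drift: return the generation at which a
    neutral allele starting at frequency p0 is fixed or lost."""
    count = int(n * p0)
    for gen in range(max_gens):
        if count == 0 or count == n:
            return gen  # property Z observed: allele fixed or lost
        # Each offspring copies a uniformly random parent (no selection).
        count = sum(1 for _ in range(n) if random.randrange(n) < count)
    return None  # property Z not observed within the budget

# Run the in-silico experiment repeatedly and characterize the results.
runs = [generations_until_fixation() for _ in range(20)]
fixed = [g for g in runs if g is not None]
```

    Over many runs one characterizes the distribution of outcomes, which is exactly the "run it a number of times and somehow characterize the collection of observed results" approach described in #26.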

    By the way, fitness functions are not essential to evolutionary theory. Competition in a bounded arena is. We refer to individuals that survive and reproduce as more fit. That does not force us to include explicit fitness functions in models of evolution. Simulation models of coevolution often have no explicit fitness function. Meester seems unaware of this.

  39. The Publications page at Evolutionary Informatics Lab appears to be working again.

  40. 36 Davescot

    Thank you for expressing my thinking a lot better than I could have myself

  41. Semiotic 007:

    By the way, fitness functions are not essential to evolutionary theory. Competition in a bounded arena is.

    Is this ‘simulation-ese’ for Natural Selection?

  42. DaveScot,

    Your two approaches to evaluation of models of processes exclude a great deal of what happens in practice. What about crop simulations? Simulations in computational fluid dynamics (i.e., with high Reynolds numbers)? Simulations (statistical) of communications networks? You cannot peg initial or present conditions in these cases.

    See my comments on evolutionary systems as complex nonlinear systems, sensitive to initial conditions, in #26. Arbitrarily small differences in initial conditions lead to exponential divergence of system trajectories. You cannot measure any physical system with absolute precision, so evaluation of the fit of a simulation model to a natural evolutionary process in the senses you are accustomed to is impossible.

    Can you describe how “simulations” or “models” (using the words very loosely) of biological evolution were benchmarked against the real world in order to assess the validity of the so-called model?

    See #38. To rephrase what I have emphasized above, property Z is likely to be qualitative, not quantitative. Have you ever seen the simulation of schooling behavior of fish that imbues each fish with just 3 or 4 simple rules? The simulation does not precisely predict what any school actually does, but the features of the simulated school are remarkably like what we see in real schools. This does not mean that living fish actually act according to the rules of the simulated fish, but it shows definitively that very simple fish behavior can account for the complex behavior of schools.
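
    For concreteness, here is a stripped-down sketch in the spirit of that schooling simulation (mine, not the actual published code): each simulated fish steers only by cohesion, separation, and alignment at constant speed, and we measure group polarization (1.0 means every fish heads the same way, values near 0 mean random headings).

```python
import math
import random

random.seed(2)

N, STEPS, RADIUS = 40, 150, 3.0

pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]
ang = [random.uniform(0, 2 * math.pi) for _ in range(N)]
vel = [[math.cos(a), math.sin(a)] for a in ang]  # unit-speed headings

def polarization(vs):
    """Length of the mean heading vector: 1.0 = fully aligned school."""
    sx = sum(v[0] for v in vs)
    sy = sum(v[1] for v in vs)
    return math.hypot(sx, sy) / len(vs)

start = polarization(vel)

for _ in range(STEPS):
    new_vel = []
    for i in range(N):
        nbrs = [j for j in range(N)
                if j != i and math.dist(pos[i], pos[j]) < RADIUS]
        vx, vy = vel[i]
        if nbrs:
            # cohesion: steer gently toward the neighbors' center of mass
            cx = sum(pos[j][0] for j in nbrs) / len(nbrs) - pos[i][0]
            cy = sum(pos[j][1] for j in nbrs) / len(nbrs) - pos[i][1]
            # alignment: turn toward the neighbors' average heading
            ax = sum(vel[j][0] for j in nbrs) / len(nbrs)
            ay = sum(vel[j][1] for j in nbrs) / len(nbrs)
            # separation: veer away from neighbors that are too close
            sx = sum(pos[i][0] - pos[j][0] for j in nbrs
                     if math.dist(pos[i], pos[j]) < 0.5)
            sy = sum(pos[i][1] - pos[j][1] for j in nbrs
                     if math.dist(pos[i], pos[j]) < 0.5)
            vx += 0.05 * cx + 0.5 * ax + 0.2 * sx
            vy += 0.05 * cy + 0.5 * ay + 0.2 * sy
        speed = math.hypot(vx, vy) or 1.0
        new_vel.append([vx / speed, vy / speed])  # constant swimming speed
    vel = new_vel
    for i in range(N):
        pos[i][0] += 0.2 * vel[i][0]
        pos[i][1] += 0.2 * vel[i][1]

end = polarization(vel)
```

    The measured numbers mean little in themselves; the qualitative result is that ordered group motion arises from purely local rules, without any global choreography.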

    Obviously my example is not drawn from evolutionary simulation, but I am trying to bridge the gap by referring to a famous simulation you may well have seen, and for which it is easy to understand the value of capturing qualitative behavior of a complex nonlinear system.

    By the way, there are hurricane simulations that do not predict tracks, but do provide insight into the dynamics of cyclonic storms. I’ve stood in the middle of a forming cyclonic storm, in a virtual reality hut. The meteorology researchers say that the combination of simulation (based on their limited understanding of hurricanes) with visualization has done a lot to advance their understanding.

    I believe the precise “track” of evolving life on earth is much less predictable than that of a hurricane, but I do believe that simulations can be used analogously to improve understanding of evolutionary dynamics. Most evolutionary biologists avoid math and computing like the plague, however, and that, in my opinion, is why there has been relatively little work in biologically-relevant simulation of evolution.

  43. Is this ’simulation-ese’ for Natural Selection?

    “bounded arena” AND evolution

  44. Just a reminder of the severe discrimination against ID, and anyone remotely appearing to support ID or even publish results of experiments or modeling that could be construed to be supportive of ID – especially in the USA.

    Consequently, strongly recommend that anyone starting out in science, or who does not yet have tenure, should prudently use a pseudonym when posting such materials or comments on them, especially at Uncommon Descent or other ID friendly blogs.

  45. Semiotic @34
    “Unfortunately, other parties inspecting the source code found serious defects that make all of the experimental results dubious. I wish Marks and Dembski would withdraw the paper from consideration.”

    I understood that Dembski & Marks had withdrawn that earlier draft. They then substantially rewrote that paper and posted a second draft at Evolutionary Informatics Lab. See:
    “Unacknowledged Information Costs in Evolutionary Computing: A Case Study in the Evolution of Nucleotide Binding Sites.”

    William Dembski, could you please confirm/comment.

  46. Look at this man’s picture! Quick! Someone give him a laxative!

    Gloppy

  47. PaV,

    Do you have a link to the Haggstrom paper? (I’d like to see the footnote you refer to.)

    Thanks

  48. Atom, I got the link from one of DLH’s post #4.
    See:
    Olle Häggström: Some recent papers

    (PaV – I added the link. – DLH)

  49. Off Topic:

    Maybe a new listing could be done for this:

    http://www.futurepundit.com/archives/004492.html

  50. #37 PaV

    I’ve just finished reading Haggstrom’s June 2006 paper which Meester relies on.

    I’ve just finished reading it too.

    I would direct your attention to footnote #11. This paper was written before Behe’s new book. As I see it, based on what Haggstrom says in the footnote, and based on the results Behe provides in EoE, it is now, as they say, “Game. Set. Match.”

    That’s true, but IMHO Haggstrom’s argument is completely invalid. Evolution cannot be considered as a single fitness function but as a very complicated and dynamic set of billions of them. In this context it makes no sense to state that local regularities in the landscape space could be signs that in biology the NFLT doesn’t actually hold. Indeed, they are more and more signs that some teleological process did produce them, as Dembski and Marks have convincingly argued in the paper of theirs that addresses Haggstrom’s argument.

    Moreover, in this sense I observe that both Haggstrom and Meester did NOT cite what David Wolpert (who invented the NFLT) wrote in his December 2005 paper in IEEE Transactions on Evolutionary Computation: “in the typical coevolutionary scenarios encountered in biology, where there is no champion, the NFL theorems still hold.”

    What about this?

  51. What about crop simulations? Simulations in computational fluid dynamics (i.e., with high Reynolds numbers)? Simulations (statistical) of communications networks?

    Are you saying they don’t reflect real world results?

  52. semiotic

    When you say that initial conditions for a model can’t be obtained from the real world what you’re really saying is the model is bogus. It’s not a model or simulation unless there’s something in the real world to compare it against. Look up the words “model” and “simulation” for Pete’s sake. By definition what you’re describing are not models. They’re nothing more than woolgathering. Evolution “researchers” do a lot of woolgathering. So much in fact it’s not easy to determine what else, if anything, they really do.

  53. Another point: it seems that the rebuttal to Dembski et al can be summed up with the claim that it is reasonable to think things such as amino acids randomly combine in the proper sequence to form proteins, or more to the point, DNA coding happened by accident.

    It is not reasonable. It is silly. The counter-arguments to Dembski are resembling the ever shriller reasons a second-grader provides to skeptical classmates as to why his older brother can beat up the entire Gracie family.

    And for what purpose? To show that there ultimately isn’t one?

    It’s a depressing waste of brainpower, not to mention time.

  54. …why his older brother can beat up the entire Gracie family

    Another MMA fan? I say Rickson takes it by armbar. ;)

    Sorry, I’m a huge one and that comment distracted me for a sec. hehe.

  55. Happy New Years to all!! Best Wishes for the coming Year.

    What Haggstrom argues in his June 2006 paper is that what the NFL theorems really mean is that in the distribution over the set |S|^|V| the points alongside any individual point are ‘independent’ of one another. In particular, the function (and when we start talking about evolution, this will become a ‘fitness function’) generated, i.e., f: V → S, takes values that are independent from one another. This means that as one moves away from any particular point, the value of the function at that point will have no correlation to the points next to the original point. And it is for this reason that any kind of ‘linked’ search is no better than a ‘blind’ search. That’s the mathematics; and I think he is to be lauded for this insight. But what comes next is not, strictly speaking, mathematics.

    Haggstrom goes on from this conclusion to state that his observation implies that any ‘fitness’ function found in nature would, essentially, have no landscape, since his conclusion demonstrates that ‘fitness’ would, on average, fall off precipitously. [[We normally see pictures of curves rising up from a plane, reaching some maximum, and then returning to the plane. We’re told that this is what the landscape of fitness functions looks like. Unfortunately, when real experiments are done, giving real results, this isn’t what we see. What we see are fitness functions that fall off so precipitously that even drawing a line straight up out of the plane is not sufficient to characterize them.]] So, Haggstrom goes on to say: “We could, if we wanted to, dismiss Dembski’s application as irrelevant on the grounds that no physical or biological mechanism motivating (7) [which is the equation that Haggstrom derives based on ‘independence’] has been proposed.” This sentence ends with footnote #11. This is how the footnote reads:

    “In the hypothetical scenario that we had strong empirical evidence for the claim that the true fitness landscape looks like a typical specimen from the model (7), then this evidence would in particular (as argued in the next few paragraphs) indicate that an extremely small fraction of genomes at one or a few mutations’ distance from a genome with high fitness would themselves exhibit high fitness. It is hard to envision how the Darwinian algorithm A could possibly work in such a fitness landscape.”

    So what Haggstrom is saying is this: If, indeed, fitness functions in nature can be characterized by a uniform distribution, then the NFL theorems apply. But this would mean that the fitness functions would exhibit ‘landscapes’ wherein “an extremely small fraction of genomes at one or a few mutations’ distance from a genome with high fitness would themselves exhibit high fitness.” And the implication of this is that NS could not function, since ‘blind search’ would never find its intended target given the size of the |S|^|V| spaces created by real-world proteins.
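    This independence claim is easy to probe numerically. In the sketch below (bit-string genomes and all parameters are arbitrary illustrative choices), fitness values are drawn i.i.d. uniform, as in Haggstrom’s model (7), and a one-mutation hill climber is compared with blind sampling:

```python
# One fixed fitness landscape with i.i.d. uniform values (Haggstrom's
# "no correlation between neighbors" setting), searched repeatedly by
# blind sampling and by a one-bit-mutation hill climber. All parameters
# are arbitrary illustrative choices.
import random

random.seed(0)
L = 12                       # genome = L-bit string; 2^L genomes
N = 1 << L
landscape = [random.random() for _ in range(N)]

def blind_search(evals):
    return max(landscape[random.randrange(N)] for _ in range(evals))

def hill_climb(evals):
    x = random.randrange(N)
    for _ in range(evals - 1):
        y = x ^ (1 << random.randrange(L))   # flip one random bit
        if landscape[y] > landscape[x]:      # keep only improvements
            x = y
    return landscape[x]

trials = 500
avg_blind = sum(blind_search(100) for _ in range(trials)) / trials
avg_climb = sum(hill_climb(100) for _ in range(trials)) / trials
print(avg_blind, avg_climb)
```

    On such a landscape the climber gains nothing from locality; in runs like this it tends to do slightly worse than blind search, since it keeps re-evaluating the few neighbors of its current point.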

    Now enter Behe’s “The Edge of Evolution”—specifically, the PfCRT protein of P. falciparum, the malarial parasite. In P. falciparum’s life-and-death struggle with Chloroquine, scientists have learned that this ‘pump’ protein, PfCRT, begins to ‘leak’ due to two amino acid changes at positions 76 and 220. PfCRT is 424 amino acids long. Well, let’s do some math.

    [Haggstrom is probably aware of so-called ‘neutral’ mutations. A fair amount of the length of most proteins can tolerate random changes from one a.a. to another. It’s because of this variability, I suppose, that Haggstrom thinks that Dembski’s treatment of NFL can be dismissed “as irrelevant on the grounds that no physical or biological mechanism motivating (7) has been proposed.”]

    Assuming that 60% of PfCRT is ‘neutral’, that leaves 40% of PfCRT, or 170 a.a.s, that are not. We see that PfCRT, in its titanic struggle with Chloroquine, involving more duplications/replications than probably all mammals from the time that mammals began to exist, can only come up with 2 substitutions to ward off the effects of Chloroquine. That is, 2 out of 170 unchanging a.a.s, or 1 out of 85. We know that each a.a. is coded for by 3 nucleotides. We know—not from Dembski or the ID movement, but from scientists themselves—that the Universal Probability Bound is 10^-150. [Haggstrom, in his paper, uses spaces that range from 10^1,000 to 10^1,000,000,000. But we need not concern ourselves with spaces that size.] This is equivalent to 2^-500, or 4^-250. Thus, in PfCRT, we have a real-world test of its ‘fitness landscape’. What do we find? That, at most, only 2 a.a.s can be substituted. Assuming 60% of the protein is free to mutate, we find that 2 out of 170 stable (therefore, necessary) a.a.s change. That is: 1 in 85. There are four nucleotides. 85 a.a.s represent 255 nucleotides. Thus, in the real world, there is a 1 in 4^255 chance, or 4^-255, of being able to substitute for needed/conserved a.a.s. This is exactly the kind of ‘fitness landscape’ that Haggstrom suggests we would find if the NFL theorems, and Dembski’s analysis of them, were really true in the real world of nature.

    So, Haggstrom has done us a favor. His analysis provides us with a look at what ‘fitness landscapes’ would look like given a uniform distribution. Do some of you remember Dr. Oloffson visiting here and arguing we should use the maximum-likelihood approach, but that this wasn’t possible because we didn’t know what the probability distribution was like? Well, now we can answer that we do know what it is like: it’s a uniform distribution. This confirms Dembski’s theoretical work. And it means that ‘blind chance’, working in the natural order, cannot create the PfCRT protein since the possibility of ‘blind chance’ doing that in the natural order exceeds the UPB. Using the Explanatory Filter, that leaves only intelligent agency as an explanation for such ‘fitness landscapes’.

    Q.E.D. “Game. Set. Match.”
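    For what it’s worth, the powers quoted in the comment above can be checked directly (taking the figures as given; whether this is the right probability model for PfCRT is of course the contested question):

```python
# Checking the powers quoted above: the Universal Probability Bound of
# 10^-150 versus the 4^255 sequence space (85 a.a. x 3 nucleotides,
# 4 bases per position). Figures taken as given in the comment.
import math

upb_bits = 150 * math.log2(10)      # 10^150 expressed in bits
space_bits = 255 * math.log2(4)     # 4^255 expressed in bits

print(upb_bits)    # roughly 498 bits, i.e. 10^150 is close to 2^500
print(space_bits)  # 510 bits, so 4^255 exceeds 2^500 and hence ~10^150
```

    So 4^255 does indeed exceed the bound of roughly 2^500; the arithmetic, at least, is consistent.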

  57. Another MMA fan?

    Yes, I got hooked :-)

    And PaV, Happy New Year to you.

    And Happy New Year to you.

    And Happy New Year to you :-)

    (OK, there is some problem posting this morn.)

  58. Another thing w/regard to Dave’s point about the real world and models.

    When you apply Dembski’s EFs to the real world (i.e. objects of known design), they correlate.

  59. I tried to come up with a leprechaun simulation but the results don’t correspond to reality. I concluded that this is what one would expect as leprechauns are complex nonlinear systems, sensitive to initial conditions.

  60. atom, tribune7
    The whole point of mma was to show that while you can do all the kata you want and show off your black belt in super tiger dragon ninja kung fu, unless you can continually test your skills against live resisting opponents, you can’t say that you have an effective fighting system.

  61. kairos says,

    Moreover, in this sense I observe that both Haggstrom and Meester did NOT cite what David Wolpert (who invented the NFLT) wrote in his December 2005 paper in IEEE Transactions on Evolutionary Computation: “in the typical coevolutionary scenarios encountered in biology, where there is no champion, the NFL theorems still hold.”

    What about this?

    You’re engaged in the logical fallacy known as appeal to authority. Would you care to quote their argument in support of this conclusion? You won’t find it. This is something the special-edition editors and reviewers let pass that they should not have. Last time I searched the web for the paper, I hit upon an early version that was quite different from the published version. I have a hunch that the reviewers called for many changes, and the little proclamation did not rise to threshold. Wolpert had previously said the opposite, and had given a good argument here:

    [...] neo-Darwinian evolution of ecosystems does not involve a set of genomes all searching the same, fixed fitness function, the situation considered by the NFL theorems. Rather it is a co-evolutionary process. Roughly speaking, as each genome changes from one generation to the next, it modifies the surfaces that the other genomes are searching. And recent results indicate that NFL results do not hold in co-evolution.

    Evidently he thought that the coevolutionary “free lunch” results he and Macready were developing would apply to biological evolution. My best guess is that the two lapsed into thinking NFL applied when they determined that their results on coevolution did not. But if you examine the argument, you can see that it is fine without the last sentence. It includes elements of what Behe has said recently, and is similar to what English had said in 1996:

    Do the arguments of [NFL] contradict the evidence of remarkable adaptive mechanisms in biota? The question is meaningful only if one regards evolutionary adaptation as function optimization. Unfortunately, that model has not been validated. It is well known that biota are components of complex, dynamical ecosystems. Adaptive forces can change rapidly and nonlinearly, due in part to the fact that evolutionary adaptation is itself ecological change. In terms of function optimization, evaluation of points changes the fitness function.

    “Everything new is made old again.” And as Park said in GA-List discussion of NFL in 1995, almost all functions are “too big” for physical realization. Thus there were two arguments against the applicability of NFL results to real-world optimization before Wolpert and Macready published their first paper. Wolpert concurred with English later, but mysteriously changed his mind.

  62. you can do all the kata you want and show off your black belt in super tiger dragon ninja kung fu, unless you can continually test your skills against live resisting opponents,

    IOW, models that fail to replicate the real world ought not be taken seriously :-)

  63. PaV

    Your post has been displayed four or five times but, you know, repetita iuvant (repetition helps), and this is certainly true when the right ideas are the opposite of what NDE theory would like …

    Your point is certainly true, and it does invalidate Haggstrom’s argument, but I have also argued that his starting point (7) is not valid.

    Happy new year to everybody

  64. DaveScot says,

    It’s not a model or simulation unless there’s something in the real world to compare it against.

    Are you aware that layouts of manufacturing facilities are commonly chosen on the basis of simulation results? That is, the only layout that is ever realized is one that did not exist when it was simulated.

    Also, modeled entities need not be as concrete as you seem to think. Flight is an abstract physical phenomenon, and there are several models of flight. When Leonardo da Vinci came up with the notion of a helicopter, he had implicitly formed a model of flight that corresponded to no particular physical object. He put the model to empirical test and validated it. He established that the “principles of flight” were not to be obtained merely by looking at birds.

    Evolution is also an abstraction. Avida, for instance, is intended to put “principles of evolution” to test. You may criticize the test, but that is the intent.

    Look up the words “model” and “simulation” for Pete’s sake.

    Done. Now you take a look at:
    model OR modeling “first principles” (758,000 hits)

    “chaotic system” OR “dynamical system” sensitivity “initial conditions” (131,000 hits)
    “chaotic system” OR “dynamical system” qualitative (69,900 hits)

    It is reasonable to use first-principles models to predict the behavior of physical entities that have yet to exist. Any model of a chaotic system will, over the long term, diverge exponentially from the modeled system. But a model may be valuable in capturing qualitative features of the modeled entity. I am not making this stuff up.

  65. Are you saying they don’t reflect real world results?

    No, Dave said that we had to know either initial or present conditions to model, and I gave some examples of important simulations in which it is impossible to know those conditions with precision or certainty.

    Simulation models can take on many forms, and simulation results can be used in many ways — some valid, and some invalid. If I had only my experience in simulation modeling of crops, manufacturing facilities, and human lifting motions to draw upon, my notion of modeling would be relatively limited. There’s no substitute for reading (many) technical papers.

  66. My favorite quotes from the Meester paper:

    If one wants to argue that there need not be any design in nature, then it is hardly convincing that one argues by showing how a well-designed algorithm behaves as real life is supposed to do.

    and

    I do not think it is reasonable to summarise the extremely complex biology (and chemistry, physics . . .) that is associated to the process, into a single search algorithm. There are no realistic models of evolution that render this approach reasonable, life is simply too complicated. Computing probabilities in a model is one thing, but for these computations to have any implication, the models had better be very good and accurate, and it is obvious that the various models do not live up to this requirement.

    The second is really an indictment of Darwinian theory: No one really knows what’s happening in the evolution of life on Earth, Darwinist bluster notwithstanding.
    __________

    “What I cannot create, I do not understand.”
    – Richard Feynman (1988)
    __________

    Semiotic 007: “By the way, fitness functions are not essential to evolutionary theory. Competition in a bounded arena is.”

    In explaining what he meant by the term “natural selection”, Darwin wrote:

    I mean by Nature, only the aggregate action and product of many laws, and by laws the sequence of events as ascertained by us.

    Thus, it’s all just supposed to be the outworking of the laws of nature (including stochastic processes).

    Competition is defined by Merriam-Webster’s as “active demand by two or more organisms or kinds of organisms for some environmental resource in short supply.” So, technically, there is no competition in Darwinian theory. Competition (“active demand”) implies goal-directedness, but in Darwinian theory, there is no goal. Darwin’s use of the term in the Origin can only be considered metaphorical (just as his use of the term “natural selection” was).

    What would be needed to validate Darwinian theory is to abstract from “the sequence of events as ascertained by us” the “many [nonteleological] laws” that yield “endless forms most beautiful and most wonderful.” It’s approaching 150 years since Mr. Darwin wrote his book, and no one has done this yet. Why not? Throwing one’s hands up in the air and saying, “It’s all just too complicated,” is to concede defeat. And to make a model with a built-in purpose-driven competition of some sort, and proclaim that this is a model of Darwinian evolution, is bogus. If the only way to get evolution to happen is with purpose-driven rules, then this implies something about evolution in nature.

  68. Thus there were two arguments against the applicability of NFL results to real-world optimization…

    Oops — meant to say “biological evolution,” not “real-world optimization.”

  69. No one really knows what’s happening in the evolution of life on Earth, Darwinist bluster notwithstanding.
    __________

    “What I cannot create, I do not understand.”
    – Richard Feynman (1988)

    No one knows how to build a bird, but we understand how to fly, and we have learned principles of flight that hold for birds as well as helicopters.

    Competition is defined by Merriam-Webster’s… So, technically, there is no competition in Darwinian theory.

    You may treat OOS like Darwinist scripture, cite chapter and verse, and then assert that any Darwinist believes as Darwin did in 1859 — fine with me. Darwin subscribed to Lamarckian evolution, you know. The last devout Darwinist died a very long time ago.

    Don’t you think it is just a tad silly to pull out a dictionary and parse a 150-year-old scientific text to determine what evolutionary theory really is? That’s the stuff of Biblical exegetics, not life science.

  70. That’s the basic problem. Evolutionary theory is undefined and can’t be tested.

  71. DLH says,

    Just a reminder of the severe discrimination against ID, and anyone remotely appearing to support ID or even publish results of experiments or modeling that could be construed to be supportive of ID – especially in the USA.

    I’ve corresponded with some regents and the provost of Baylor University in support of academic freedom.

    If you review my comments on the paper, you’ll see that I’ve said nothing in the “pro-ID / anti-ID” dimension. My efforts to prove mathematical results on search and to help people understand NFL span a number of years. It genuinely offends me to see someone who knows how to do math waltz in and make utterly bogus claims about NFL without bothering to do math (or even look into the math that’s already been done).

    I realize that Meester gives you some juicy quotes. But the technical content of the paper is atrocious. Are the quotes worth so much to you that you are willing to champion garbage?

    Here is Wolpert and Macready’s seminal No Free Lunch Theorems for Optimization from 1997. Theorem 1 is on the third page, early in Section III. It is easy to see that the theorem states that two sums are equal, while Meester’s theorem (page 4) states that two averages are equal. It may be somewhat harder to see that when Wolpert and Macready treat “alignment” of algorithms and problems in Section IV, “A Geometric Perspective on the NFL Theorems,” they anticipate and formalize what Meester has to say on page 8:

    These remarks sound like very obvious remarks, and in a way they are. Once a search algorithm is more (or less, for that matter) efficient than what you expect from the NFL-theorems, it must be the case that you use a special choice, or a special class, of fitness functions. But at the same time, you must use a careful choice of the search algorithm which must have been tailored around your choice of (the class of) fitness functions.

    However, it was established in 2000 that each search algorithm rapidly obtains a good value for almost all functions: Optimization Is Easy and Learning Is Hard in the Typical Function. That is, one need not “use a careful choice” — algorithm design is generally pointless because almost all functions are algorithmically random. Google Scholar indicates that this paper has been cited at least 16 times in the technical literature.

    Am I making it more clear now that Meester is simply wrong? An arbitrary (undesigned) search algorithm almost always works well in theory. But the theory does not apply to practice, because algorithmically random functions, for which almost all algorithms work well, do not “fit” into the physical universe. To my knowledge, no one has gotten a handle on what theory does apply in practice.

    Perhaps some of you have heard the saying from mathematical logic, “From a false premise, conclude anything.” No matter how much you like Meester’s conclusions, the “reasoning” he uses to reach them is bogus.
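    Incidentally, the “two sums are equal” form of Theorem 1 is small enough to verify by brute force. In this toy sketch (the domain, codomain, and the two algorithms are arbitrary choices made for illustration), a fixed-order search and an adaptive search are run against every one of the 2^3 functions on a 3-point domain; summed over all functions, their histograms of best-value-found are identical:

```python
# Exhaustive check of the NFL "two sums are equal" claim on a tiny
# space: all 2^3 functions f: {0,1,2} -> {0,1}, searched by two
# different non-repeating deterministic algorithms. Aggregated over
# all functions, their performance histograms coincide.
from itertools import product
from collections import Counter

domain = [0, 1, 2]

def alg_fixed(f):
    """Evaluate points in the fixed order 0, 1, 2."""
    return [f[x] for x in (0, 1, 2)]

def alg_adaptive(f):
    """Start at 2; choose the next point based on the value seen."""
    trace = [f[2]]
    nxt = 0 if trace[0] == 1 else 1          # adapt to the first value
    trace.append(f[nxt])
    last = ({0, 1, 2} - {2, nxt}).pop()      # visit the remaining point
    trace.append(f[last])
    return trace

for m in (1, 2, 3):
    hist_a = Counter()
    hist_b = Counter()
    for values in product([0, 1], repeat=3):
        f = dict(zip(domain, values))
        hist_a[max(alg_fixed(f)[:m])] += 1   # best value after m evals
        hist_b[max(alg_adaptive(f)[:m])] += 1
    assert hist_a == hist_b                  # NFL: identical over all f
print("NFL holds on this toy space")
```

    Of course, this says nothing about which functions arise in practice, which is exactly where the interesting questions lie.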

  72. “What I cannot create, I do not understand.”
    – Richard Feynman (1988)

    Semiotic 007: No one knows how to build a bird, but we understand how to fly, and we have learned principles of flight that hold for birds as well as helicopters.

    With computational fluid dynamics, one can model aerodynamics in practically as much detail as one wishes, and it actually provides results that match reality. Show us the equivalent for Darwinian evolution.

    There’s no fundamental reason that a robotic bird couldn’t be built. Unlike with Darwinian evolution, theoretical understanding isn’t lacking, just time, money and effort.

    Semiotic 007: You may treat OOS like Darwinist scripture, cite chapter and verse, and then assert that any Darwinist believes as Darwin did in 1859 — fine with me. Darwin subscribed to Lamarckian evolution, you know. The last devout Darwinist died a very long time ago. Don’t you think it is just a tad silly to pull out a dictionary and parse a 150-year-old scientific text to determine what evolutionary theory really is? That’s the stuff of Biblical exegetics, not life science.

    The quote in question regards the heart of Darwinian theory, not some small detail that Darwin got wrong from his ignorance or imagination. Instead of dancing, how about actually presenting an argument showing how Darwin’s statement that “I mean by Nature, only the aggregate action and product of many laws, and by laws the sequence of events as ascertained by us” is wrong. Tell us what nonteleological corrections or additions need to be made to get Darwinian evolution to work.

  73. Are you saying they don’t reflect real world results?. . . .No, Dave said that we had to know either initial or present conditions to model, and I gave some examples of important simulations

    I think if evo simulations reflected real world results they would be readily accepted.

  74. j @ 67 –

    I do not think it is reasonable to summarise the extremely complex biology (and chemistry, physics . . .) that is associated to the process, into a single search algorithm. There are no realistic models of evolution that render this approach reasonable, life is simply too complicated. Computing probabilities in a model is one thing, but for these computations to have any implication, the models had better be very good and accurate, and it is obvious that the various models do not live up to this requirement.

    The second is really an indictment of Darwinian theory: No one really knows what’s happening in the evolution of life on Earth, Darwinist bluster notwithstanding.

    As I pointed out above, this is not a good argument to make. Remember that the calculation of CSI needs a model for the probability of “finding” the specified pattern. So, if Meester is right, we can’t build an acceptable model, so we can’t calculate CSI correctly.

    A corollary of Meester’s argument is that the work being done by the EIL is irrelevant to biology. I’m not sure this is the best blog to be making that argument on. :-)

    Bob

  75. j

    re; a robotic bird

    We can build things that fly faster, farther, & higher than a bird but if we were REALLY good we could build an aircraft that repairs itself, replicates itself, flies itself, and consumes nothing but water and sunflower seeds in the process… :)

  76. Semiotic (73): When you say that

    It is easy to see that the theorem states that two sums are equal, while Meester’s theorem (page 4) states that two averages are equal.

    do you mean that Meester’s formulation differs from Wolpert and Macready’s?

  77. Bob

    No one really knows what’s happening in the evolution of life on Earth, Darwinist bluster notwithstanding.

    Sure we do. We know a massive amount of evolution happened in the prehistoric past and we know that, comparatively, virtually nothing has happened during recorded history. As far as we know creative evolution is no longer happening. That’s the real problem with Darwinian evolution – it hasn’t been observed. Now it could be that it’s just too slow to observe but it could also be that it doesn’t happen at all and evolution was driven by something other than chance & necessity. Either way it’s unconfirmed – purely hypothetical.

  78. davescot wrote “As far as we know creative evolution is no longer happening. ”

    yes it is…by intelligent designers

  79. semiotic

    That is, the only layout that is ever realized is one that did not exist when it was simulated.

    That’s because the model has been benchmarked against reality. Once a model is verified against reality, it can be used to predict outcomes from a hypothetical initial state. Isn’t it great how models work like that?

    Once we know that a model successfully predicts the flight characteristics of an aircraft that exists, we can initialize it with aircraft designs that don’t yet exist and see how they perform before we commit to bending metal. If we validate the design using the model, then build it, and it performs in reality as the model predicted, we gain even more confidence in our model. Modeling a factory floor is the same thing – we trust that the hypothetical factory in our model will work the same way in the real world.

    We also build roads this way. We put in our signals and lanes and speed limits and intersections and so forth, then put anticipated simulated traffic on it and see how it flows. We can do this because we have tested our model against reality in the past and are confident it accurately models reality. If we hadn’t tested it, we might discover the hard way that it is flawed – perhaps it mistakenly presumes automobiles can make 90 degree turns at 60mph as long as the light is green, or that they can accelerate/decelerate instantly, or that driver reaction time is instantaneous. All sorts of mistakes can be lurking in untested models.

    Models of biological evolution are so unlike this they really aren’t models at all in any practical way.

  80. In theory, theory and reality are the same.

    In reality, they’re not.

    Gloppy

  81. j says,

    With computational fluid dynamics, one can model aerodynamics in practically as much detail as one wishes, and it actually provides results that match reality. Show us the equivalent for Darwinian evolution.

    It happens that I have sat with researchers who set up a National Science Foundation center for study of computational fluid dynamics, discussing computational requirements. You are quite wrong about “practically as much detail as one wishes.” With high Reynolds numbers (complex nonlinear dynamics), you never can get enough detail. The guys I talked with did not feel a Connection Machine was powerful enough for their work.

    Furthermore, you are playing it fast and loose with the notion of “matching reality.” In the case of, say, chaotic turbulence at the trailing edge of an airplane wing, the simulation never replicates measurements taken on an actual wing. That is, you cannot enter measured initial conditions of turbulent flow into a CFD simulator, set the simulator running, and then use the simulation to predict subsequent measurements. I believe I am saying for the third time in this thread that the trajectory of the simulated system will diverge exponentially from that of the actual system. There is most definitely nothing in CFD analogous to an answer to the question, “What is the probability that the flagellum evolves?”

    CFD simulations do not accurately predict nonlinear fluid flow in fine detail for more than a short time. To put things simply, they can predict what the flow will be like, but not precisely what it will be. This is a limitation intrinsic to modeling the “evolution” of nonlinear dynamical systems. Biological evolutionary systems are also nonlinear dynamical systems, and you should not expect predictions of specific evolutionary pathways to come out of simulation models. This is not a shortcoming of evolutionary theory.
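
    The exponential divergence described here can be seen in any chaotic system. A minimal Python sketch, using the logistic map as a toy stand-in for turbulent flow (this is an illustrative analogy, not actual CFD code):

```python
# A minimal sketch of sensitive dependence on initial conditions.
# The logistic map at r = 4 is a standard toy chaotic system; it
# stands in for turbulent flow only by analogy.
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate x -> r*x*(1-x), returning every visited point."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion:
a = logistic_trajectory(0.300000000, 50)
b = logistic_trajectory(0.300000001, 50)

# The gap starts microscopic and grows roughly exponentially until
# the two trajectories are completely uncorrelated.
initial_gap = abs(a[1] - b[1])
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

    This is exactly why entering measured initial conditions and predicting subsequent fine-grained measurements fails: the simulated and actual trajectories part company no matter how accurate the model is.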

    That said, work on biologically-relevant simulations of evolution has barely begun. Although such simulations go back fifty years, relatively few life scientists are interested in simulation. Most of the engineering-types who are interested in simulation of evolution do not know enough about evolutionary biology to come up with simulations the life scientists find interesting. The engineers and computer scientists tend to think they know a lot more about biological evolution than they actually do, and the life scientists are rarely interested in collaboration.

  82. “If one wants to argue that there need not be any design in nature, then it is hardly convincing that one argues by showing how a well-designed algorithm behaves as real life is supposed to do.”

    I just want to second that I think this is my favorite quote as well. I know little of the math involved, and admit to as much – but I personally think the more we learn about evolution, the more plausible it is to say ‘One can certainly argue there’s design in here’.

  83. But why don’t lips sweat? Show me an evolution model.

    Gloppy

  84. Instead of dancing, how about actually presenting an argument showing how Darwin’s statement that “I mean by Nature, only the aggregate action and product of many laws, and by laws the sequence of events as ascertained by us” is wrong.

    The dance was to spare you. Your quote of Darwin is utterly irrelevant to my remark that “competition in a bounded arena” is a more primitive concept than fitness. I am genuinely trying to understand your misunderstanding, and my best guess is that you simply do not understand the phrase. Here is an outstanding paper that not only ties “competition in a bounded arena” to neo-Darwinism, but also gets us (pretty please) on-topic: Notes on the Simulation of Evolution. The author, Wirt Atmar, earned a dual Ph.D. in electrical engineering and biology, and very few people are as qualified to comment as he.

  85. Gloppy,

    But why don’t lips sweat? Show me an evolution model.

    I suppose the Designer didn’t want all kisses to be wet.

  86. Bob O’H says,

    So, if Meester is right, we can’t build an acceptable model, so we can’t calculate CSI correctly.

    I always supposed that Dr. Dembski intended to use an analytic model, and not a simulation model, to obtain an upper bound on the probability of evolution of the bacterial flagellum. But I think Meester’s argument would apply at least as well to analytic models. One thing that can be said for computation of CSI is that only an upper bound, not a precise estimate, of probability is required, and a loose bound might suffice.

    A corollary of Meester’s argument is that the work being done by the EIL is irrelevant to biology. I’m not sure this is the best blog to be making that argument on.

    Dr. Dembski understands better than Meester that modeling is a matter of abstraction. I find it bizarre that Meester suggests that simulation of evolution will not be of value unless it accounts for the process at the level of chemistry. That’s like saying we can’t simulate an apple falling from a tree unless we take quantum gravity into account.

  87. perlopp,

    do you mean that Meester’s formulation differ from Wolpert and Macready’s?

    I know the thread’s gotten long. I pointed out in 22 that Meester’s “Theorem 1 is implied by Wolpert and Macready’s Theorem 1, but is not logically equivalent to it.” That is, the equality of the sums implies the equality of the averages, but not the converse.

    If your understanding of the NFL theorems is limited to the averages, you have little way of understanding important results that came along later. Meester does not understand.

  88. Semiotic 007: “Biological evolutionary systems are…nonlinear dynamical systems, and you should not expect predictions of specific evolutionary pathways to come out of simulation models. This is not a shortcoming of evolutionary theory.”

    I do understand the limitations of modelling nonlinear systems. I’m not looking for a model that demonstrates the evolution of any particular complex functional structure. The evolution of any kind of complex functional structure would do (using a nonteleological evolutionary process).

    Thanks for the link to the paper. I’ll read it and let you know what I think.

  89. semiotic 007

    You’re engaged in the logical fallacy known as appeal to authority.

    Are you sure?

    Would you care to quote their argument in support of this conclusion? You won’t find it.

    Are you sure? Really this is not the case. That citation is in a paper dedicated entirely to showing how the NFLT DO NOT hold in some coevolution cases, precisely when there’s a champion. At the end of this paper Wolpert (who was DIRECTLY involved in the ID controversy in the 90s) does explicitly state that “in the typical coevolutionary scenarios encountered in biology, where there is no champion, the NFL theorems still hold.”
    Sorry for you, but this appears to be the argument you were looking for. Come on. Haven’t you anything better?

    This is something the special-edition editors and reviewers let pass that they should not have. Last time I searched the web for the paper, I hit upon an early version that was quite different from the published version.

    But was it different concerning this fact? If yes, please give a reference; if not, please don’t use an argument that doesn’t matter.

    I have a hunch that the reviewers called for many changes, and the little proclamation did not rise to threshold.

    This can be your idea but it’s not correct. Indeed, what you have called a “little proclamation” is something present both in the Abstract and in the body of the paper, which is a clear sign that it’s a major statement. This is even more true given what you wrote before about Wolpert’s ideas.

    Wolpert had previously said the opposite, and had given a good argument here:
    […] neo-Darwinian evolution …
    And recent results indicate that NFL results do not hold in co-evolution.

    Indeed this WAS his OLD idea, several years before the IEEE Trans paper of Dec. 2005.

    Evidently he thought that the coevolutionary “free lunch” results he and Macready were developing would apply to biological evolution. My best guess is that the two lapsed into thinking NFL applied when they determined that their results on coevolution did not. But if you examine the argument, you can see that it is fine without the last sentence.

    That the argument was fine is your idea. The new statement by Wolpert suggests a very different scenario; initially he thought that the argument could be fine BECAUSE in coevolution things could have been different and free lunches could occur. But AFTER having found that this is true only in some cases, DIFFERENT from biological ones, he was forced to admit that he was wrong and the NFLT still hold.

    It includes elements of what Behe has said recently, and is similar to what English had said in 1996:
    … And as Park said in GA-List discussion of NFL in 1995, almost all functions are “too big” for physical realization. Thus there were two arguments against the applicability of NFL results to real-world optimization before Wolpert and Macready published their first paper. Wolpert concurred with English later, but mysteriously changed his mind.

    No, it’s simpler than that: he changed his mind and said so in the (new) paper. Both arguments (coevolution and dynamic fitness functions) are not sufficient to invalidate the NFLT, and Wolpert simply recognized this fact by writing:

    “in the typical coevolutionary scenarios encountered in biology, where there is no champion, the NFL theorems still hold.”

  90. kairos:

    “Your hint is certainly true and this does invalidate the Hagg. argument but I have argued that his starting point (7) is not valid.”

    (7) seems to be no more than an interpretation of what Dembski argues in NFL about blind searches and uniform distributions. What do you see wrong with it?

    The point I’ve picked up on in Haggstrom’s paper is where he says: “…Dembski’s NFL argument would still pose an interesting challenge to evolutionary biology provided that empirical evidence shows that the true fitness landscape is similar to what one would expect to see under assumption (7).”

    Where Haggstrom goes wrong is in his either/or take on mutation rates. His logic seems to be that the mutation rate, being what it is, renders (7) inconsistent with our knowledge of biology, since (7) would require the death of any organism experiencing any kind of simple nucleotide mutation at all. He seems unaware that protein space is composed of two kinds of subspaces, depending on the importance of the protein’s function at any particular a.a. position along its length; i.e., some portions of the protein length are susceptible to ‘neutral’ mutations, and others are not. The calculation I made was to show that if only 40% of the protein length is constrained by function, then, in the case of the malarial protein PfCRT, we have empirical evidence that the “Darwinian algorithm A” (which means ‘reproduction-mutation-selection’) in action permits, at most, the change of two a.a.s despite a huge number of ‘reproductions’ of the organism. The probability of an a.a. substitution for the PfCRT works out to be less than 1 in 10^150, science’s own UPB. This, then, implies that ‘fitness landscapes’ in nature really do fall off the edge of the table. Thus, per Haggstrom, Dembski’s NFL argument poses an “interesting challenge to evolutionary biology.”

    Sometime back, someone at Panda’s Thumb was arguing against Dembski using data from protein studies that showed permissible substitution rates at important functional points of the protein sequence to be of the order–this is strictly from a very flawed memory–10^-84. It was argued that this was below Dembski’s UPB of 10^-150. This, nevertheless, is again a very steep ‘fitness function’. Imagine two proteins being needed for a particular cellular function. Behe tells us that generally protein clusters of 6-10 proteins are needed for the performance of any cellular function. The probability of this cellular function coming about by “blind chance”, using the 10^-84, would then minimally be 10^-504. Thus the ‘fitness function’ for this real-life cellular function would fall off almost exactly along the lines that Haggstrom suggests is biologically unreasonable. His biology is wrong. But his math, together with what we now know about the malarial parasite and from protein studies validate the use of uniform distributions, thus validating Dembski’s NFL argument.

    As I said: “Game. Set. Match.”

  91. In the case of, say, chaotic turbulence at the trailing edge of an airplane wing, the simulation never replicates measurements taken on an actual wing. That is, you cannot enter measured initial conditions of turbulent flow into a CFD simulator, set the simulator running, and then use the simulation to predict subsequent measurements. I believe I am saying for the third time in this thread that the trajectory of the simulated system will diverge exponentially from that of the actual system.

    So why do they use these simulations?

  92. tribune7: “So why do they use these simulations?”

    I expect that, although the simulation is not modelled exactly time-wise, in terms of the exact location of individual eddies and features, the general number and behaviour of these small-scale features as modelled averages out to produce very similar macroscopic values of lift and drag. Sort of like it’s impossible to model every rain drop in a storm, but you can show that the barrel fills up anyway.
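
    The barrel analogy can be made concrete in a few lines: individual drop positions are unpredictable, yet the count landing in the barrel is stable from storm to storm. A toy Monte Carlo sketch in Python (all quantities here are invented for illustration):

```python
import random

# Toy illustration: microscopically different storms (different seeds)
# fill the barrel to nearly the same level. Areas and drop counts are
# arbitrary choices.
def barrel_hits(n_drops, seed):
    """Scatter drops uniformly over a 10 x 10 yard; count those that
    land in the 1 x 1 barrel opening at the centre."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_drops):
        x, y = rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)
        if 4.5 <= x <= 5.5 and 4.5 <= y <= 5.5:
            hits += 1
    return hits

# The barrel covers 1% of the area, so each storm should land about
# 1,000 of 100,000 drops in it, whatever the microscopic history.
storm1 = barrel_hits(100_000, seed=1)
storm2 = barrel_hits(100_000, seed=2)
```

    No drop's trajectory is predicted, but the macroscopic quantity is, which is the sense in which CFD predicts lift and drag without replicating each eddy.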

  93. Semiotic(87),

    The average is just the sum divided by a constant. The theorem states that two probabilities are equal, computed with the law of total probability when the function f is chosen uniformly. Thus, Meester’s formulation is equivalent to W & M.
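
    The uniform-case equality under discussion can be checked by brute force on a toy search space. A Python sketch (the space X, value set Y, and the search orders are arbitrary illustrative choices):

```python
from itertools import product, permutations

# Brute-force check of the uniform-case NFL equality on a toy space:
# summed over ALL fitness functions f: X -> Y, every non-repeating
# search order observes exactly the same multiset of value sequences,
# so no order outperforms another on average.
X = (0, 1, 2)     # three candidate solutions
Y = (0, 1)        # two fitness levels

def observed_sequences(order):
    """Multiset of value sequences a fixed visiting order sees across
    all |Y|^|X| possible fitness functions."""
    seqs = []
    for values in product(Y, repeat=len(X)):   # one fitness function
        f = dict(zip(X, values))
        seqs.append(tuple(f[x] for x in order))
    return sorted(seqs)

# Every exhaustive, non-repeating order yields an identical multiset
# of observations: the NFL equality for this toy case.
baseline = observed_sequences((0, 1, 2))
all_equal = all(observed_sequences(o) == baseline
                for o in permutations(X))
```

    Since the full distribution over value sequences is identical for every order, so is any performance measure computed from it, whether summed or averaged.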

  94. SChessman — Sort of like it’s impossible to model every rain drop in a storm, but you can show that the barrel fills up anyway.

    That’s a good analogy. I suspect that since there are engineers involved, the simulations provide useful data and, on some level, are corroborated by the real world.

    The only alternative I can think of is that these simulations are experimental, and have not been applied to real world decisions.

    But Semiotic has the first-hand experience so I’m curious as to his view.

    I’d be interested in seeing Gil Dodgen chime in.

  95. #90 PaV

    (7) seems to be no more than an interpretation of what Dembski argues in NFL about blind searches and uniform distributions. What do you see wrong with it.

    I mean that condition (7) is much too restrictive as a characterization of what the NFLT apply to. While certainly this condition would guarantee that the NFLT hold, the NFLT can also hold for fitness functions whose landscapes have some moderate local regularity. After all, this is what allows microevolution.

    In other words, your argument is correct, but I simply argued that the criticism of the use of the NFLT in biology fails at an earlier point.

  96. kairos(95),

    It makes no sense to say that NFLT holds “for fitness functions”; what you need to specify is the probability distribution according to which the fitness function is chosen. If this distribution is uniform, then NFLT holds. For other distributions, NFLT may still hold but there must be conditions; it does not hold in general.

  97. PaV(90),

    You misunderstand Haggstrom’s argument. The uniform distribution is over the set of fitness functions.

  98. #96

    I haven’t said that the NFLT hold for fitness functions, but that the NFLT can (though I would have used “may”) also hold for fitness functions chosen according to other distributions. So I haven’t claimed that the NFLT should hold in general, but only that the uniformity condition is much too restrictive. In fact, I meant just what you have said: “NFLT may still hold”.

  99. kairos(98),

    Not being an expert on NFLT, I am not aware of any other cases than the uniform distribution. Maybe they exist, maybe not, but in order to claim “much too restrictive” you should be able to at least come up with some other example.

  100. kairos,

    Pointing out that an authority who made an assertion REALLY, REALLY, REALLY is an authority and REALLY, REALLY, REALLY did make the assertion does not legitimize an appeal to authority:

    Are you sure? Really this is not the case. That citation is in a paper dedicated entirely to showing how the NFLT DO NOT hold in some coevolution cases, precisely when there’s a champion. At the end of this paper Wolpert (who was DIRECTLY involved in the ID controversy in the 90’s) does explicitly state that “in the typical coevolutionary scenarios encountered in biology, where there is no champion, the NFL theorems still hold.”

    The theorems still hold? Wolpert and Macready do not provide or cite any argument. The only arguments I have found are that the NFL theorems do not apply to biological evolution.

    The original NFL theorems assume uniform distributions on functions. If one ignores the objections of Wolpert and English (quoted in 68) and models biological evolution in terms of fitness functions on genotypes, then almost all theoretically-possible fitness functions “contain more information” than the observable universe registers (at most 10^90 bits, according to a paper Dr. Dembski likes to cite). Thus the probability of almost all theoretically-possible fitness functions is zero, contradicting the assumption of a uniform distribution.

    The upshot is that you have not produced an argument for application of the NFL theorems to biological evolution, and I have just sketched a simple argument against.
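
    The counting point can be made concrete. A Python sketch, with a toy genome length L = 200 (far shorter than any real genome) and fitness collapsed to two levels; both numbers are illustrative assumptions, and the 10^90 figure is simply the bound quoted in the comment above:

```python
import math

# Toy version of the counting argument: genomes of length L over a
# 4-letter alphabet, fitness collapsed to K distinguishable levels.
L = 200                       # toy genome length (real genomes: 10^6+)
K = 2                         # toy number of fitness levels
genotypes = 4 ** L            # |V|, the number of genotypes (exact int)

# Singling out one function f: V -> S from all |S|^|V| of them under a
# uniform distribution requires log2(|S|^|V|) = |V| * log2(|S|) bits:
bits_needed = genotypes * round(math.log2(K))

universe_bits = 10 ** 90      # information bound quoted in the thread
# Even for this toy case, bits_needed (about 2.6e120) dwarfs the bound.
```

    Almost all of those functions therefore cannot be physically registered, which is the sense in which the uniform-distribution assumption is said to be contradicted.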

  101. Semiotic(100),

    Section 8 in this paper by Haggstrom might be of interest to you.

  102. perlopp,

    Not being an expert on NFLT, I am not aware of any other cases than the uniform distribution.

    Wolpert and Macready mention in their coevolutionary “free lunch” article that there is NFL if and only if p(f) = p(f o j) for all functions f and for all permutations j of the domain of functions. This applies to a static, not time-varying, probability distribution function p, so the applicability to biological evolution is dubious from the outset.

    Fitness functions are hypothetical constructs, so there is no way to ascertain that any particular fitness function f applies to any organism (i.e., that p(f) > 0). But for a wide range of fitness functions f a modeler plausibly invokes (implicitly declaring p(f) > 0), almost all composite functions f o j cannot be realized in the observable universe (i.e., p(f o j) = 0).

    The generalized NFL theorem does not rescue the notion that there is “no free lunch” when biological evolution is modeled as an optimization process.
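
    The condition quoted above (NFL iff p(f) = p(f o j) for every permutation j of the domain) can be checked mechanically on a toy space. A Python sketch with an invented domain and two invented distributions, a uniform one that satisfies closure and a point mass that violates it:

```python
from itertools import product, permutations

# Toy check of the permutation-closure condition for NFL. The domain
# X, codomain Y, and both distributions are invented examples; a
# function f is represented by the tuple (f(0), f(1), f(2)).
X, Y = (0, 1, 2), (0, 1)
all_keys = list(product(Y, repeat=len(X)))

def closed_under_permutations(p):
    """True iff p(f) == p(f o j) for every f and every permutation j,
    where (f o j)(x) = f(j(x))."""
    for key in all_keys:
        for j in permutations(range(len(X))):
            composed = tuple(key[j[i]] for i in range(len(X)))
            if abs(p[key] - p[composed]) > 1e-12:
                return False
    return True

# The uniform distribution trivially satisfies the condition...
uniform = {key: 1.0 / len(all_keys) for key in all_keys}
# ...while all mass on one asymmetric function violates it: permuting
# the domain of that function yields functions with zero probability.
point_mass = {key: 0.0 for key in all_keys}
point_mass[(1, 0, 0)] = 1.0
```

    The biological analogue of the failing case is exactly the one discussed above: a distribution concentrated on physically realizable fitness functions is not closed under arbitrary domain permutations.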

  103. Semiotic(102),

    Thanks! Not sure I quite get it though; are we assuming that the domain of f consists of sequences? For example, if f: S x S -> R, are you saying that we require f and g to have the same probability if they have the property that f(s,t) = g(t,s) for all s,t?

    What does this permutation invariance imply for the sequence of function values f(v1), f(v2),…? Under the uniform distribution Haggstrom points out that the sequence is i.i.d (which also leads him to point out that this particular NFL theorem is more or less trivial).

    Sorry for being so inquisitive…maybe you should just give a link to the paper… :)

  104. Semiotic,

    Please see my post 93 in case you missed it. I really think Meester’s formulation is equivalent to W&M.

  105. Semiotic (last one!),

    Haggstrom’s description of Dembski’s argument is that either (a) the fitness function is chosen uniformly, or (b) we must infer design. Haggstrom then points to nature to refute both, and also points out the absurdity in giving the uniform distribution such a privileged status. His argument sounds pretty strong to me. What do you think?

  106. perlopp,

    Section 8 in this paper by Haggstrom might be of interest to you.

    Synopsis of footnote 19: “If you allow me to depart from past practice, and to make individual fitness a function of not only the individual genotype, but the genotypes of all coexisting organisms, then I can produce an NFL theorem. But I don’t really want to show you how, and I don’t think you should bother to find out how.”

    To depart from the NFL framework of Wolpert and Macready (i.e., making individual fitness a function of the population rather than the individual), derive a new result, and then call it an NFL theorem would be just a tad impressionistic, don’t you think?

    Haggstrom’s remark in the footnote has no bearing on the applicability of the existing NFL framework to biological evolution. Also, Wolpert and Macready’s Theorem 2 assumes, loosely speaking, that fitness landscapes in succeeding time steps are uncorrelated. Even if fitness is a hypothetical construct, it is abundantly clear that the only times in which fitness landscapes are not highly correlated are when tsunamis hit, volcanoes erupt, meteors strike, forests burn, etc.

  107. semiotic

    Pointing out that an authority who made an assertion REALLY, REALLY, REALLY is an authority and REALLY, REALLY, REALLY did make the assertion does not legitimize an appeal to authority:

    Appeals to authority are generally legitimate when the authority is an acknowledged expert in a relevant field and it is not claimed the expert is infallible. You should read a little more and write a little less.

  108. Semiotic(106),

    I have no opinion; just wanted to inform you!

  109. perlopp,

    What is summed in Theorem 1 is probabilities of obtaining a certain sequence of observed values. A measure of performance of a search algorithm is a function of the sequence of values it obtains. If you define performance as a constant function of value sequences, then the mean performance of all search algorithms is identical for all probability distributions on functions. But this in no way implies that the equality in Theorem 1 holds. Put simply, a performance value conveys less information about a search algorithm’s behavior than does the sequence of values it observes as it executes.

  110. I have no information. I just want to give my opinion.

    Gloppy

  111. I have no want, just opined to inform.

  112. Semiotic 007, others,

    From the Wolpert/Macready paper (1997), I think these paragraphs are relevant to the current discussion:

    Since it is certainly true that any class of problems faced by a practitioner will not have a flat prior, what are the practical implications of the NFL theorems when viewed as a statement concerning an algorithm’s performance for nonfixed f? This question is taken up in greater detail in Section IV but we offer a few comments here.

    First, if the practitioner has knowledge of problem characteristics but does not incorporate them into the optimization algorithm, then P(f) is effectively uniform. (Recall that P(f) can be viewed as a statement concerning the practitioner’s choice of optimization algorithms.) In such a case, the NFL theorems establish that there are no formal assurances that the algorithm chosen will be at all effective. Second, while most classes of problems will certainly have some structure which, if known, might be exploitable, the simple existence of that structure does not justify choice of a particular algorithm; that structure must be known and reflected directly in the choice of algorithm to serve as such a justification. In other words, the simple existence of structure per se, absent a specification of that structure, cannot provide a basis for preferring one algorithm over another. Formally, this is established by the existence of NFL-type theorems in which rather than average over specific cost functions f, one averages over specific “kinds of structure,” i.e., theorems in which one averages P(d_m^y | m, a) over distributions P(f). That such theorems hold when one averages over all P(f) means that the indistinguishability of algorithms associated with uniform P(f) is not some pathological, outlier case. Rather, uniform P(f) is a “typical” distribution as far as indistinguishability of algorithms is concerned. The simple fact that the P(f) at hand is nonuniform cannot serve to determine one’s choice of optimization algorithm. Finally, it is important to emphasize that even if one is considering the case where f is not fixed, performing the associated average according to a uniform P(f) is not essential for NFL to hold. NFL can also be demonstrated for a range of nonuniform priors.

  113. DaveScot,

    Appeals to authority are generally legitimate when the authority is an acknowledged expert in a relevant field and it is not claimed the expert is infallible.

    Appealing to an authority who has reversed himself without explanation, and who declares on his own authority a contradiction of what other researchers argue, is not legitimate.

    You should read a little more and write a little less.

    Have you read #22?

    I have read most of the literature related to the NFL theorems

    While some of you here pronounce on an amazing range of topics in which you have neither academic training nor research experience, I do not. This thread relates to the only topic I know a great deal about. No one else with my credentials in NFL is going to field questions here. I have not been attacking ID. Folks should be taking advantage of an opportunity to learn about NFL, not fending off the “evilutionist.”

  114. Atom(111),

    We learned earlier from Semiotic(102) precisely what that range of nonuniform priors is. Is it applicable to evolution? No, the permutation invariance that Semi mentions can hardly hold; swapping nucleotides around does not leave fitness unchanged (if I understand correctly, still hoping for Semi to respond to (103)).

  115. semiotic

    This thread relates to the only topic I know a great deal about. No one else with my credentials in NFL is going to field questions here. I have not been attacking ID. Folks should be taking advantage of an opportunity to learn about NFL, not fending off the “evilutionist.”

    After chastising someone else about appealing to authority you appeal to yourself as an authority.

    Amazing.

  116. kairos #95:

    After all this is what allow microevolution.

    Can anyone definitively say exactly how microevolution works genetically? If you start talking about alleles, just remember that the term ‘allele’ arose before anyone knew about DNA.

    You’re arguing here about the characteristics of NFL theorems based on biology; which is what Haggstrom does. Maybe that’s not the best way to do math.

    perlopp #97:

    You misunderstand Haggstrom’s argument. The uniform distribution is over the set of fitness functions.

    First of all, it isn’t a uniform distribution over the fitness function for two reasons: (1) Haggstrom doesn’t mention fitness functions, he simply introduces a function, f, which, of course MAY be a fitness function; (2) the distribution is over the set |S|^|V|, which is a combination of both |V| and |S|.

    Second, you say I ‘misunderstand’ Haggstrom’s argument. I suppose this means that you do understand it. Please do us the kindness of presenting Haggstrom’s actual argument.

  117. Semiotic 007: “my best guess is that you simply do not understand the phrase [“competition in a bounded arena”]. Here is an outstanding paper that…ties [the phrase] to neo-Darwinism… The author, Wirt Atmar, earned a dual Ph.D. in electrical engineering and biology, and very few people are as qualified to comment as he.”

    The paper includes some nice insights. However, in many places, it’s the typical Darwinist evolutionary interpretation, without justification of the interpretation. Because of this, statements in the paper regarding “Darwinian evolution” are suspect, if not plainly erroneous.

    Reading the paper doesn’t answer what you meant by “competition in a bounded arena.” The term “competition” — the word I say can only be a metaphor in Darwinian usage — isn’t defined. (What a “bounded arena” is I accept as self-evident — although the term “arena” has connotations that aren’t legitimate, either.)

    Bottom line: The paper doesn’t address what I wrote. It’s essentially 125 kilobytes of yet more dancing.
    __________

    A quote from the paper:

    Simulated evolutionary optimization algorithms [note: earlier in the paper Atmar declares "Darwinian evolution, as a process, is an optimization algorithm"] are normally implemented in the following manner:

    Step 3: The quality of behavioral error is assessed for all members of the population, parent and progeny. One of two conditions is generally implemented: (i) the best N are retained to reproduce in the next generation, or (ii) N of the best are probabilistically retained. In either manner, the population remains size-constrained and ultimately the competitive exclusion of the least appropriate (“least fit”) is assured.

    [T]hese few steps [including Step 3] are generally characteristic of all simulated evolutionary procedures.

    “Step 3” is the crux of the matter. It would be a simulation of Darwinian (nonteleological) evolution only if the “behavioral error” is assessed using a criterion that has been evolved nonteleologically (without any kind of built-in goal). Otherwise, it’s teleological evolution — by intelligent design.

    Another quote, same idea:

    Genetic algorithms are basically a proper simulation of Darwinian evolution. A population of trials is mutated and the best N are retained at each generation.

    This is simply wrong. Again, this would be true only if the “best” are determined by an adaptive landscape that has been evolved nonteleologically. This is not the case for any known useful genetic algorithm.

  118. Semiotic 007 #107:

    If one ignores the objections of Wolpert and English (quoted in 68) and models biological evolution in terms of fitness functions on genotypes, then almost all theoretically-possible fitness functions “contain more information” than the observable universe registers (at most 10^90 bits, according to a paper Dr. Dembski likes to cite). Thus the probability of almost all theoretically-possible fitness functions is zero, contradicting the assumption of a uniform distribution.

    Two questions: (1) If the probability of almost all theoretically-possible fitness functions is zero, isn’t this a problem for Darwinism, and not ID? (2) Instead of a uniform distribution, the fitness function, as you’re looking at it, would still be amenable to the Dirac Delta function, which means the fitness function exists, but that it is incredibly thin. Again, isn’t this a problem for Darwinism, not ID?

  119. PaV(116),

    (2) the distribution is over the set |S|^|V|, which is a combination of both |V| and |S|.

    First of all, |S|^|V| is not a set, it is a number. The set in question is the set of functions from V to S and there are |S|^|V| such functions. Uniform distribution simply means that we consider all of them to be equally likely, which Haggstrom argues is not reasonable in a biological context. As he writes, the uniformity assumption implies that changing a single nucleotide is just as bad or good as rearranging the entire genome. Think of yourself: you have a certain genotype and a certain associated fitness (whatever that means). Now change a single nucleotide in some unimportant locus. Under the uniform distribution, you have as much belief in the function that changes your fitness to 0 as in the one that does not change your fitness at all. That’s not biologically reasonable. Moreover, there is no reason why that insight would automatically imply design.

    As I understand it, that sums up Haggstrom’s argument.
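    The counting and uniformity points above can be made concrete at toy scale. This is only a sketch with a hypothetical tiny V and S (real genotype spaces are astronomically larger):

```python
from itertools import product

# Toy search space V (genotypes) and value set S (fitness levels).
V = ["g0", "g1", "g2"]   # |V| = 3
S = [0, 1]               # |S| = 2

# The set of fitness functions f: V -> S has |S|**|V| members.
functions = [dict(zip(V, values)) for values in product(S, repeat=len(V))]
assert len(functions) == len(S) ** len(V)   # 2**3 = 8

# The uniform distribution gives each function probability 1/|S|**|V|,
# so a function differing from f in one value is exactly as likely as one
# that rewrites f from scratch, which is the assumption being discussed.
p_uniform = 1 / len(functions)
print(len(functions), p_uniform)   # 8 0.125
```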

  120. Semiotic(113), DaveScot(115),

    I hope Semi will stay around for a while because I am learning from his expertise and hope to learn some more. I have no idea who Semi is or what his alleged credentials are, but I find his posts educational, in particular when it comes to nonuniformity and NFL. Please guys, don’t get into a quibble but let us keep learning.

  121.

    To all,

    David Wolpert and I have been acquainted for a number of years. Last summer I contacted him by email and asked him to explain why he had reversed himself on the matter of whether the NFL theorems apply to biological evolution. His response was that the statements in “William Dembski’s treatment of the No Free Lunch theorems is written in jello” (quoted in #61) and “Coevolutionary Free Lunches” were both correct. He added a supercilious “What don’t you understand?” and said no more. I saw no point in responding.

    Does anyone here find it obvious that both statements are true?

  122. Semiotic 007:

    What do you make of Meester’s final irony:

    “If one wants to argue that there need not be any design in nature, then it is hardly convincing that one argues by showing how a well-designed algorithm behaves as real life is supposed to do.”

    That statement seems reasonable to me. Apparently, you disagree with it. Could you explain why? Can you tie what you are saying in to the big picture?

  123. #99 perlopp

    Not being an expert on NFLT, I am not aware of any other cases than the uniform distribution. Maybe they exist, maybe not, but in order to claim much too restrictive you should be able to at least come up with some other example

    Please consider the following case, in which the fitness function f:V->S is chosen according to a heavily non-uniform distribution.
    Let us consider the subset M of all the f’s having the following characteristics:

    a) for almost all the |S|^|V| points we have that f(x) is very low or 0;
    b) there are only a maximum of v clusters (adjacent or linked points) in the solution space in which f(x)>>0, where each cluster is constituted by a max. of w neighbor points, and v*w

  124. #100

    Pointing out that an authority who made an assertion REALLY, REALLY, REALLY is an authority and REALLY, REALLY, REALLY did make the assertion does not legitimize an appeal to authority:

    Other people have already answered that. I particularly like the answer by Dave. :-)

    The original NFL theorems assume uniform distributions on functions. …
    functions is zero, contradicting the assumption of a uniform distribution.

    That is where your mistake lies.

  125. #99 perlopp

    Sorry, message 123 wasn’t complete because of a “less than” sign. I repost it.

    Not being an expert on NFLT, I am not aware of any other cases than the uniform distribution. Maybe they exist, maybe not, but in order to claim much too restrictive you should be able to at least come up with some other example

    Please consider the following case, in which the fitness function f:V->S is chosen according to a heavily non-uniform distribution.
    Let us consider the subset M of all the f’s having the following characteristics:

    a) for almost all the |S|^|V| points we have that f(x) is very low or 0;
    b) there are only a maximum of v clusters (adjacent or linked points) in the solution space in which f(x)>>0, where each cluster is constituted by a max. of w neighbor points, and |S|^|V|>>v*w (in a 2-D solution space this means to have a huge flat fitness landscape with only a few high and sharp pinnacles).

    Now let us suppose that f is not chosen according to a uniform distribution among |S|^|V| possibilities, but is chosen (in whichever way, uniform or non-uniform) only from the set of f’s belonging to M. This means that only f’s belonging to M will be chosen, while all the other f’s have P(f)=0; a very non-uniform distribution. But in this case it is apparent that the NFLT still holds. Q.E.D.
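    At toy scale, the subset M described above can be sketched as follows. The numbers here (N, the cluster locations, v and w) are illustrative stand-ins, hugely shrunk from the magnitudes in the comment:

```python
import random

random.seed(0)

N = 1000             # toy solution space: points 0..N-1 (the "huge flat landscape", shrunk)
PEAKS = [200, 700]   # v = 2 cluster locations
WIDTH = 3            # w = 3 neighboring points per cluster

def draw_f_from_M():
    """Draw a fitness function from M: very low (here 0) almost everywhere,
    with only a few narrow, sharp pinnacles where f(x) >> 0. Functions
    outside M get probability 0, so the distribution over all possible
    functions is heavily non-uniform."""
    f = [0.0] * N
    for center in PEAKS:
        for x in range(center, center + WIDTH):
            f[x] = random.uniform(10.0, 100.0)
    return f

f = draw_f_from_M()
high = sum(1 for y in f if y > 0)
print(high, "of", N)   # v*w = 6 of 1000 points lie off the flat landscape
```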

  126. perlopp #119

    I understand his argument in exactly the same way. But I think he’s wrong.

    In my post, #55, I lay out the argument for why he’s wrong. I hope you’ve read it.

    How about a little history here. When gel electrophoresis first came onto the scene, mainly the 60′s I believe, it afforded the first chance for scientists to extract proteins rather easily. So scientists began comparing proteins from differing individual organisms. The sequence of a.a.s was then a glimpse of the corresponding sections of DNA. What did scientists expect? They expected homogeneity. Why did they expect homogeneity? Because Darwinian theory, viz., selection, dictated that across a species the DNA should remain the same or the creature would die. Well, it turned out that proteins from organisms from the same species varied considerably at many a.a. locations. It was because of this ‘surprise’ discovery that the Neutral Theory of evolution developed, effectively dropping the whole notion of selection. Bottom line, don’t use Darwinism to argue for or against anything scientific.

    Because of neutral mutations, fitness functions have to incorporate the reality that the majority of nucleotides along the stretches of coding DNA can change into whatever they like. What does this generate in ‘fitness space’? Clustering.

    As I say, Haggstrom’s math looks fine. But he shouldn’t argue against the use of NFLT in evolution based on bad biology.

  127. kairos #125

    Isn’t your argument basically a mathematical version of Dembski’s “displacement problem” in that to find the target–the fittest location, you have to know which “f” to use; i.e., f belonging to M?

  128. semiotic

    Does anyone here find it obvious that both statements are true?

    I find it obvious that both could be true but not obvious that they both are indeed true – they could be wrong, but they are not contradictory. One is an unflattering critique of Dembski’s application of NFL to biology and the other is a general position statement on NFL and biology. It appears Wolpert thinks Dembski didn’t do a good job applying NFL to biology (the main objection is lack of quantification and precision, hence “jello”), but that doesn’t mean Wolpert rejects NFL as applicable in biology. Indeed, Wolpert holds that it is quite applicable to biology. More as an aside, he says there are free lunches in coevolution, but that no (or at least too few) coevolutionary situations arise in biology to make coevolution and NFL relevant to it, and he then reiterates that the original (not coevolutionary) NFL theorems are indeed relevant in biology.

    What don’t you understand?

  129. kairos(125),

    You don’t have NFL in your example as searching adjacent points beats blind search.

  130. #127 PaV

    Isn’t your argument basically a mathematical version of Dembski’s “displacement problem” in that to find the target–the fittest location, you have to know which “f” to use; i.e., f belonging to M?

    That is the obvious consequence. But IMHO what’s really important is that the NFLT isn’t constrained to the uniform distribution but is applicable more widely, so that Haggstrom’s criticism fails before it even starts.

    Moreover, consider that the subset M I have defined is directly applicable to all the typical situations in biology. So all of Haggstrom’s anti-NFLT arguments based on genome clusters would in any case be insignificant.

  131. PaV(126),

    Let me try again; here’s the bare bones of Haggstrom’s argument:

    Dembski says that we only have two choices:

    (a) uniform distribution, which is when NFLT applies
    (b) design

    Haggstrom argues against this dichotomy by pointing out that the uniform distribution is unreasonable and that there is nothing mysterious about nonuniformity. It doesn’t take much biology to make that point, I think.

    Now, you seem to claim that NFLT does apply in biology, yet you talk about “clustering” which sounds like you do not believe premise (a). I’m trying to understand your point but it’s not easy.

  132. kairos(130),

    But your example does not work. Besides, Semiotic gave a reference to necessary and sufficient conditions for NFL earlier.

  133. #129 perlopp

    You don’t have NFL in your example as searching adjacent points beats blind search.

    No, this isn’t the case. To have any advantage in searching adjacent points, it is necessary that at least one of the points involved in the search lie outside the flat landscape. But the probability that this actually occurs is driven to 0 by the condition |S|^|V|>>v*w; in fact it’s a >> involving thousands of orders of magnitude between the cardinalities of the two sets.

  134. kairos(133),

    The NFLT says that the probability of any particular sample is the same regardless of algorithm. In your example, compare blind search to adjacent-point search with the first point chosen randomly in each case. Now consider a sample of two consecutive high fitness values (“out of the flat landscape”). With blind search you must find such a value twice. By independence, the probability is p^2 for some very small p. With adjacent-point search, you need to find it once, which has probability p, but the conditional probability in the next step is much higher than p; thus beating blind search.
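    The blind-versus-adjacent comparison above can be illustrated with a small simulation. This is a sketch under toy assumptions (a made-up one-dimensional clustered landscape, not a model of biology):

```python
import random

random.seed(1)

N = 10_000                         # toy search space: points 0..N-1
CLUSTER = set(range(5000, 5050))   # one contiguous cluster of high fitness

def fitness(x):
    return 1.0 if x in CLUSTER else 0.0

def blind_search(steps):
    # Independent uniform draws; each step hits the cluster with prob |CLUSTER|/N.
    return sum(fitness(random.randrange(N)) for _ in range(steps))

def adjacent_search(steps):
    # Sample blindly until a high-fitness point is found, then step to neighbors.
    x = random.randrange(N)
    hits = 0
    for _ in range(steps):
        if fitness(x) > 0:
            hits += 1
            x = (x + random.choice([-1, 1])) % N   # exploit the clustering
        else:
            x = random.randrange(N)
    return hits

trials = 200
blind = sum(blind_search(100) for _ in range(trials)) / trials
adjacent = sum(adjacent_search(100) for _ in range(trials)) / trials
print(blind, adjacent)   # adjacent-point search averages far more hits
```

On a landscape this clustered, the neighbor-following searcher beats blind sampling, which is why the clustering question bears directly on whether the NFLT's premise holds.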

  135. perlopp (130);

    “Now, you seem to claim that NFLT does apply in biology, yet you talk about “clustering” which sounds like you do not believe premise (a). I’m trying to understand your point but it’s not easy.”

    Haggstrom’s premise is that if one or two nucleotides of a genome change, then the fitness for that genome falls to zero. If the fitness falls to zero with such a small amount of change, then, given the known mutation rate for genomes, nothing should exist. Ergo, NFL theorems cannot apply to the biological realm.

    Here’s where he is hugely wrong. Most of the coding portions of the genome readily accept change without loss of fitness. We call that neutral substitution, or neutral mutation. If a protein is coded by a sequence 1,000 bases long, then 600 of them could become whatever they want and life goes on. Does this prove Haggstrom to be correct? No. In post #55, I do two things. First, I point out that we already have an example, courtesy of Behe, which has gone through more cycles of reproduction over the last 50-60 years than all mammalian species have gone through since they began 65 million years ago; i.e., the malarial parasite. Ergo, its genome has gone through a ‘blind search’ of epic proportions. The whole while, it has been in a titanic struggle with chloroquine. It has built up a resistance. How has it done so? Did it eliminate a gene? Did it manufacture a gene? In this tremendous ‘blind search’ for a solution to chloroquine, what did it come up with? The answer: two a.a. substitutions. Second, I do the math and show that, as a rough calculation, the two substitutions represent a fitness function of around 1 in 10^150.

    When you put this all together, this means that the genome is capable of two things: (1) it is capable of ‘clustering’ to a great degree around ‘neutral sites’ of the genome, AND, (2) it has fitness functions for certain parts of the genome that fall off at almost an infinite angle.

    What does this mean? It means that the genome can endure ‘neutral’ mutations quite easily, but that it cannot endure mutations in more critical parts of the genome.

    Thus the genome CAN endure known mutation rates while also fulfilling the conditions that NFLT , per Haggstrom, require. And, we have evidence of that in the case of P. falciparum, the malarial parasite.

  136. It is so frustrating for me to read an article like this and see that people are actually enlightened by the author’s obvious points.

    The computer simulations are, lest we forget, “computer simulations” – nothing less, nothing more. Now, it was always laughable to me that people tried to use computers to justify one of the most complex processes known. We don’t even know all that much about the genetic code as it currently stands, so why then do so many people claim to be sold on the theory of evolution via NS/RV based on a simple computer simulation?

    It is the same EXACT principle at work in the Global Warming hysteria crowd. This is a principle that I call “The Wanna Believe Principle.” People wanna believe that global warming is happening, and they wanna believe that everything evolved via universal common ancestry along the Darwinian pathway, so what do they do? They go out and prove it.

    The only problem is that problems of this magnitude are not easily provable. Massive amounts of variables and historical data must be explained and taken into account if and when we are to program a simulation of long history. But even if all of the mathematicians and philosophers get together and are convinced that they have a good enough set of data and a good enough way to arrange that data “systematically” to prove their already preconceived bias, the question arises: by what means can the natural history of the world be transferred to a computer simulation?

    The main reason that computers are not a transferable means of analogy from real history to simulation is that computers are logical devices. Computers are “communicable” and therefore have built into them the ability to channel or facilitate the change necessary to get complexity and variation. They are based on the limbic system of the brain or the universal language of mathematics. Computers run on a system of 1s and 0s. Anyone can add 1+1-1+2 and get 3. 1 to 3 – novel SC! Hurray! But unfortunately, as Gödel taught us, mathematics is not a true description, let alone an explanation, of the objective universe – it is incapable of proving itself. Chance is not like a computer. To put this more logically: chance can be “artificially simulated” by a computer, but chance cannot “artificially simulate” a computer. And not only that, but chance cannot simulate a computer! Which is what is required to explain the origin of the experiment to begin with. Therefore it is fair to assume that the experiment is very helpful to the ID movement in that it shows that the natural, unguided, chance-based universe of Darwin is incompatible with the Designed universe of theology and science fiction. It takes a very clever designer to design chance, and an even cleverer one to make it do anything.

    Now, I have said before that NFL is the best book that I have ever read – and it was a bit of chance, ironically, that led me to read it in the first place, as I ordered it from Barnes and Noble without looking into all of the advanced logic and mathematics involved. Nonetheless, I plowed through it and was absolutely amazed. NFL is to me THE heart of the ID movement and its theoretical research project. As it has been said that nothing makes sense outside of the theory of evolution, I would like to change that statement slightly to “Nothing makes sense outside of the theorems of NFL.”

    Dembski, Mad Props to you!

  137. PaV(135),

    1. Do you believe in premise (a), that fitness functions are best described as having a uniform distribution? After all, that is the assumption in the NFL Theorem.

    2. The malaria parasite has not done blind search. Blind search means that a new sequence is chosen randomly; thus, an offspring would have a genome that is completely unrelated to the parent’s.

  138. PaV(135),

    I re-read your post and think I’ve found the source of your misunderstanding. You write

    Haggstrom’s premise is that if one or two nucleotides of a genome change, then the fitness for that genome falls to zero.

    That is not his premise; it is what follows from the premise in the NFL Theorem. What Haggstrom actually writes is that the NFL premise means that “changing a single nucleotide is just as bad as putting together a new genome from scratch and completely at random.” Again, he does not say that this is the case but that this is an obvious consequence of assuming a uniform distribution over fitness functions, which is the assumption in the NFL Theorem. Your entire post in fact argues against this NFL assumption, yet you seem to still argue the relevance of NFL.

  139. Also in reference to the above speculative mathematics: I remind you all that the subjects you are trying to apply NFL to are in fact not fully understood. NFL, as it stands right now, is a philosophical argument against evolution without a guiding intelligence. It is in the process of being applied in ID research. I think the theory is still just a theory, and one, I might add, that has not been disproven. But to me it is an absolutely beautiful revelation about intelligence and its inextricable link to naturalistic origins.

  140. #134

    The NFLT says that the probability of any particular sample is the same regardless of algorithm. In your example, compare blind search to adjacent-point search with the first point chosen randomly in each case. Now consider a sample of two consecutive high fitness values (“out of the flat landscape”). With blind search you must find such a value twice. By independence, the probability is p^2 for some very small p. With adjacent-point search, you need to find it once, which has probability p, but the conditional probability in the next step is much higher than p; thus beating blind search.

    There’s an equivocation here. I’m not speaking about the abstract NFLT; you forget that from this point of view no NFL would actually exist in the world, as clearly stated in Wikipedia:

    “NFL is physically possible, but in reality objective functions arise despite the impossibility of their permutations, and thus there is not NFL in the world. The obvious interpretation of “not NFL” is “free lunch,” but this is misleading. NFL is a matter of degree, not an all-or-nothing proposition. If the condition for NFL holds approximately, then all algorithms yield approximately the same results over all objective functions.”

    It’s just this matter of degree that we are addressing, and in this sense your argument, although correct in theory, doesn’t hold in practice. In fact, you have correctly stated that in one case we have p^2 and in the other something like p/k (with k being an integer whose value depends on the number of neighbors). However, the value of p is a depressing |S|^-|V|, something that can easily be less than 10^-1000000. In the real world there’s no difference between p^2 and p/k. IMHO it’s here that Dembski’s argument has been really misunderstood.

  141. kairos(140),

    In the acronym NFLT, “T” stands for “Theorem” and there is no equivocation as it clearly refers to the result by Wolpert & Macready. Your qualifier “abstract” has no meaning in this context. As for the Wikipedia quote, your suggested example is still very far from “approximately.” I have no idea whether Dembski has been misunderstood in the way you claim but unless this discussion is going to be about the NFL Theorem by Wolpert & Macready and its consequences, I have nothing to add.

  142. perlopp #138:

    You say that I go wrong when I say this:

    “Haggstrom’s premise is that if one or two nucleotides of a genome change, then the fitness for that genome falls to zero.”

    Here’s Haggstrom’s footnote #11 (which I’ve taken from my post #55):

    “In the hypothetical scenario that we had strong empirical evidence for the claim that the true fitness landscape looks like a typical specimen from the model (7), then this evidence would in particular (as argued in the next few paragraphs) indicate that an extremely small fraction of genomes at one or a few mutations’ distance from a genome with high fitness would themselves exhibit high fitness. It is hard to envision how the Darwinian algorithm A could possibly work in such a fitness landscape.”

    How is this any different from saying that one or two mutations away from a given high fitness genome, you’ll NOT find high fitness. When he says that, at a distance of only one or two mutations away, you’ll only find an “extremely small fraction” of high fitness genomes, this means that it is nearly zero. How else do you define an “extremely small fraction”? You know, 1/10^150 is an “extremely small fraction”.

    Have you perhaps misunderstood what Haggstrom is saying?

  143. #141

    In the acronym NFLT, “T” stands for “Theorem” and there is no equivocation as it clearly refers to the result by Wolpert & Macready. Your qualifier “abstract” has no meaning in this context. As for the Wikipedia quote, your suggested example is still very far from “approximately.”

    T stands for theorem, but it’s a theorem concerning computational science, not pure maths. This is what is meant by the wiki citation, about which I think you should think more. Concerning the last sentence, I don’t see how you can reasonably say that my example “is still very far from approximately” after I have shown that, from a computational point of view (which is what actually matters in function optimization), |S|^(-2|V|) and (|S|^-|V|)/k both denote unreachable points.

    I have no idea whether Dembski has been misunderstood in the way you claim but unless this discussion is going to be about the NFL Theorem by Wolpert & Macready and its consequences, I have nothing to add.

    That’s your choice, but by saying “and its consequences” you have committed yourself to discussing the consequences of the NFLT where they are significant for the problem we are discussing, i.e. in the real world. And in the real world your argument doesn’t hold.

  144. kairos(143),

    1. The NFLT is a proven true mathematical statement about the equality of two quantities. Your distinction between “pure math” and “computational science” is irrelevant.

    2. Your suggested example uses a distribution over fitness functions that cannot be said to be approximately uniform (far from it). If you still believe that it is approximately uniform, please present a good argument.

    3. Yes, the NFLT as stated by W&M and its consequences. Nothing else. That’s what is being discussed here, as started with the link to Meester’s paper and his references to Haggstrom.

    4. Happy New Year!

  145. PaV(142),

    Again, that is not Haggstrom’s premise. That is the premise of the NFL Theorem.

  146. PaV(142),

    I’d like to redirect you to my post 137 as you still have not replied. I think your answer to 1. is material to how our discussion will proceed. As for 2., there is no question, just a comment that you may have misunderstood the concept of blind search.

  147. kairos(143),

    When you say

    And in the real world your argument doesn’t hold

    what argument are you referring to?

  148. #144

    1. NFLT is a proven true mathematical statement about the equality of two quantities. Your distincion between “pure math” and “computational science” is irrelevant.

    Oops. Irrelevant? :-)

    2. Your suggested example uses a distribution over fitness functions that cannot be said to be approximately uniform (far from it). If you still believe that it is approximately uniform, please present a good argument.

    Uniform? And where did I say so? I said just the opposite for the example, and argued that the NFLT still holds (in the real world, obviously).

    3. Yes, the NFLT as stated by W&M and its consequences. Nothing else. That’s what is being discussed here, as started with the link to Meester’s paper and his references to Haggstrom.

    Sorry, but the title of the thread is “Evolution and the NFL theorems”; and Evolution did happen in the real world, not in a world of Platonic ideas.

    4. Happy New Year!

    Happy (future) |S|^|V| year; in the abstract world obviously :-)

  149. kairos(148),

    1. Yep, irrelevant in the context. A theorem is a theorem. To say that it is “concerning” this, that, or the other bears no relevance to its validity.

    2. Your Wikipedia quote states that
    If the condition for NFL holds approximately, then all algorithms yield approximately the same results over all objective functions.

    With this I agree. However, in your example the condition for NFL does not hold anywhere near approximately (to which you now seem to agree).

    3. Apology accepted!

    4. Indeed. :)

  150. #149

    “1… A theorem is a theorem. To say that it is “concerning” this, that, or the other bears no relevance to its validity.”

    No. :-)

    “2….NFL does not hold anywhere near approximately (to which you now seem to agree).”

    No. :-)

  151. kairos(150),

    From Wikipedia: While chronos is quantitative, kairos has a qualitative nature.

  152. All,

    Here is my understanding of the argument so far.

    Haggstrom, and those on here following his example (Semi 007, perlopp), seem to argue that: 1) the uniform distribution of P(f) is ESSENTIAL to the NFL theorems. (Without uniformity of P(f), the theorems do not hold.) 2) Biological fitness functions pertaining to any given organism are not distributed uniformly. Therefore 3) the NFL theorems do not apply to biology.

    If this is a straw-man, please forgive me, and present an equally simple presentation of your argument. (If that is the case, then please disregard what follows.)

    Now, it seems that Haggstrom and the others are only focusing on one aspect of the NFL theorems: one algorithm over the set of all optimization problems (F), distributed by P(f). There is also a second way in which they hold, namely, one problem over the set of all algorithms (see Wolpert & Macready 1997 and referenced paper [5] from that paper). I’d like to see this second view addressed as well.

    From my initial reading of (1997), it does not seem that the uniformity assumption is all that critical. The authors themselves say in the paper that the uniform P(f) is only given by way of example, but that it is not a pathological (unusual) case. I have only read the paper once and haven’t worked through the mathematics yet, so I’m going from my initial (albeit recent) reading.

    In the 1997 paper, the authors also repeatedly claim that problem-specific information must be included and utilized in the search, or else the resulting performance is the same as random search. This, they say, is how algorithms can do better than random search (by utilizing knowledge of the search space), which backs up Dembski/Marks’ work on active information as well as Meester’s claims.

    As I said, I still have a few papers to read and work through the relevant maths, so until then, I’ll keep my comments light.

    I love the discussion, however. Perhaps Dr. Dembski can take a few minutes to chime in as to why the uniformity issue is not a fatal flaw to biological application? (He never seems to comment on the really interesting threads, but I’m sure everyone would benefit from his take.)

    Atom

  153. perlopp #137:

    As to 1: That’s how the NFL theorem is defined. As you say, it’s one of the assumptions. So, no, I don’t have a problem with the function used in the NFL having a uniform distribution. (Here I understand a uniform distribution to be a probability distribution over N elements, each of equal probability, with the probabilities summing to 1 over the discrete interval in question.)

    2. I’m not talking about the malarial parasite doing a ‘blind search’ over the entire genome space of all organisms. Rather, I’m confining myself to the structure of its surrounding ‘fitness space’. This space is searched in a blind, that is, random fashion. Now whether that search has been conducted long enough to find a satisfactory target is irrelevant. What is relevant is that the search so conducted by the malarial parasite is more thorough than the vast majority of animal species have ever conducted. The result of that search was that it could increase its fitness relative to chloroquine only through two a.a.s of one protein being changed. Thus the ‘fitness function’ of that genome, at that particular point within all of genome space, is, per my calculation, about 1 in 10^155. It’s almost zero, but not zero. However, if you went three a.a.s away from the malarial parasite’s genome, the ‘fitness function’ there would be, indeed, zero. Ascertaining this, given what Haggstrom says in his footnote #11, validates Dembski’s use of the NFLT. This means that one CANNOT traverse genome space via ‘blind search’. Darwinism is thus refuted. Design must be inferred per the Explanatory Filter. “Game. Set. Match.”

  154. PaV(153),

    1. The question is if you think the uniformity assumption applies to the biological problems we are discussing here.

  155. Atom(152),

    A nice summary that I think adequately describes the arguments by Haggstrom and that I agree with. There is no claim that uniformity is ESSENTIAL, but it is the assumption of the theorem so for now that’s what we have (I’ll check out Semiotic’s reference when I find the time). But regardless of what the most general assumptions are, they cannot cover situations with the type of clustering you see in our biological applications; obviously nearest-neighbor type searches there beat blind search.

    When you read W&M, the theorem is not stated explicitly assuming a uniform distribution but the result regarding equality of the two sums can be interpreted in that way and Haggstrom’s probabilistic proof of NFLT is very illustrative. I don’t know if Dr Dembski has realized this connection before.

    As for the “second view,” W&M point out that the theorem in that setting is “less sweeping” which is certainly true if you look at it. If you want to apply it, you’d have to be very careful with details. This version also has a uniformity assumption, this time over the set of algorithms. Again, I think the application to biology is flawed as we can’t choose algorithm randomly but are restricted to the Darwinian process of mutation etc.

    I think Haggstrom makes it perfectly clear that the uniformity assumption is fatally flawed but I’ll be glad to hear counterarguments.

  156. Thanks perlopp. I only have one more comment for right now.

    You wrote:

    But regardless of what the most general assumptions are, they cannot cover situations with the type of clustering you see in our biological applications; obviously nearest-neighbor type searches there beat blind search

    I agree that clustering occurs in the search space of genomes (as Haggstrom’s point on neutral mutations shows) and that local neighborhood hill-climbing algorithms are the correct choice for this structure. But why the nice correspondence of algorithm choice to search space structure?

    Since if we choose a given algorithm randomly for this search space, on average we’re just as likely to choose a poorly suited algorithm (less efficient than random search) as a well suited one (more efficient). If we know nothing about this clustering, then we’re likely to achieve roughly random-search efficiency, since we do not take advantage of this extra information. It is like trying to locate a word in a dictionary without knowing that the words are ordered alphabetically; without that knowledge, you’re just as well off using any algorithm or random blind search. But with that information, you can find any word in log_2(N) tries (rather than, on average, N/2 tries).
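    The dictionary point can be sketched directly: binary search is the algorithm that exploits the ordering information (the word list here is a toy example):

```python
def binary_search(words, target):
    """Locate target in a sorted list, exploiting the ordering information."""
    lo, hi = 0, len(words) - 1
    tries = 0
    while lo <= hi:
        tries += 1
        mid = (lo + hi) // 2
        if words[mid] == target:
            return mid, tries
        elif words[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, tries

words = ["ape", "bat", "cat", "dog", "eel", "fox", "gnu", "hen"]  # sorted
idx, tries = binary_search(words, "fox")
assert idx == 5 and tries <= 3   # at most log_2(8) = 3 tries for a hit, vs N/2 = 4 on average
```

Without the knowledge that the list is sorted, halving the interval confers no advantage; with it, the search collapses from linear to logarithmic, which is exactly the "extra information" being discussed.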

    This is the point I make about needing extra information to narrow your choice of search algorithms for a given space. From my reading, Wolpert/Macready make this same point. It is also the point that Meester makes in why programmers using that information cannot be said to be models of dumb sheer luck, and how Marks/Dembski begin to quantify their Active Information.

    Your thoughts are welcomed.

  157. PS by “nice correspondence”, I mean lucky, fortuitous, and (obviously) well-suited

  158. without that knowledge, you’re just as well off using any algorithm or random blind search. But with that information, you can find any word in log_2(N) tries (rather than N/2, on average, tries.)

    Clarification: I do not mean that all algorithms will work equally well on this search space (I already hinted that binary search works incredibly well); rather, I mean that if you choose an algorithm randomly, you are likely, on average, to achieve close to random-search performance by your choice.

  159.

    Dembski (2002) had nothing to say about NFL results other than those in W&M (1997). (Dembski made the same mistake that Meester did — he did not survey the NFL literature. He gives evidence in recent papers of knowing a great deal more about NFL than he did in 2002.) Everyone should keep in mind that Haggstrom is criticizing Dembski’s (2002) argument. Some of you are introducing your own ideas about NFL and arguing as though Dembski availed himself of them in 2002. Haggstrom is criticizing what Dembski said in 2002, not what he might say now, and not what you have to say now.

    If you model biological evolution in terms of a time-varying function from genotypes to fitness values, then Wolpert and Macready’s (1997) Theorem 2 is possibly applicable, but not Theorem 1, which applies to static functions. (Does anybody here want to say that what constitutes fitness does not change as the environment changes?) Theorem 2 effectively assumes that the fitness function at time t “predicts nothing” about the fitness function at time t+1. That is, the fitness “landscape” is entirely “made over” at each time step. If the fitness landscape in your model of biological evolution generally does not change catastrophically (in the sense of catastrophism) from one time to the next, then Theorem 2 does not apply.

    You might choose to model evolution over an interval in which adaptive pressures change little — I’m not sure of the biological relevance, but I’ll allow the possibility. To apply Wolpert and Macready’s (1997) Theorem 1, you would have to say that any fitness function is as likely as any other, and this is to say that the fitness function is almost certainly, though not certainly, algorithmically random (or very nearly so). That is, you would have to accept implicitly that the fitness landscape is almost certainly as jagged and incoherent (nearly everywhere) as you can imagine. Everyone modeling biological evolution believes that there is usually some sort of coherence in the fitness landscape — people speak of plateaus, slopes, ridges, etc. If you expect to see features like those, then you do not believe that the assumptions of Theorem 1 are satisfied.

    The necessary and sufficient condition for NFL did not appear until 2004. But now that we have it, ad hoc arguments like kairos’ in 125 are inappropriate. If you want to establish that there is no free lunch, then you must prove that p(f) = p(f o j) for all fitness functions f and for all permutations j of the domain of fitness functions (space of genotypes). This was established by two different proofs in three papers that passed independent peer reviews. In other words, there is no wiggle room.
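
    The p(f) = p(f o j) condition can be checked by brute force on a toy problem. A minimal sketch (a domain of 4 points, binary fitness values, uniform p(f); the two fixed probe orders are illustrative stand-ins for deterministic search algorithms):

```python
from itertools import product
from collections import Counter

# Domain of 4 points, binary fitness values: 2^4 = 16 possible functions.
functions = list(product([0, 1], repeat=4))  # f[x] is the fitness of point x

def performance(f, order, steps=2):
    # "Performance" = best fitness value seen in the first `steps` probes.
    return max(f[x] for x in order[:steps])

order_a = [0, 1, 2, 3]   # one fixed search order
order_b = [3, 1, 0, 2]   # a different fixed search order

# Under the uniform distribution on functions, both orders produce exactly
# the same histogram of performance values -- the NFL result in miniature.
hist_a = Counter(performance(f, order_a) for f in functions)
hist_b = Counter(performance(f, order_b) for f in functions)
print(hist_a == hist_b)  # True
```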

    My comments are being held for moderation, so forgive me for overloading this one. A major reason much of what I say about NFL seems alien is that Wolpert and Macready, the authorities, never cited anyone else’s work on NFL until 2005 — and then it was at the behest of a reviewer. People often explore the literature by chaining backwards through the reference lists of papers. Since 2005, references to NFL publications other than Wolpert and Macready’s have increased dramatically in number. Some important NFL results from years ago are only now gaining attention.

  160. PaV(154),

    A. What does the following mean?

    the ‘fitness function’ of that genome, at that particular point within all of genome space, per my calculation, is 1 to the 10^-155.

    What is your fitness function? Why is it low when it ought to be high? The two a.a. changes you mention are beneficial, thus increasing fitness, whatever it may be.

    B. Design must be inferred per the Explanatory Filter

    ?????

    C. “Game. Set. Match.”

    Doubt it. More like “Advantage Haggstrom”

  161. Atom(156),

    But why the nice correspondence of algorithm choice to search space structure?

    Way outside my field of expertise, but it is the way it works, isn’t it? There are no “choices” of algorithms, targets, or fitness functions. The malaria parasite that PaV talks about acquired resistance by mutation and reproduction. What other choices did it have? As far as I know, nobody claims that Intelligent Design was needed for resistance to develop. Rather than asking “why”, one may ask “why not”?

    Again, you must be careful in applying the “second view” of NFL as there are additional assumptions regarding algorithms and samples. Not as clean-cut as “first view” NFL.

  162.

    I’d like to add that you definitely should expect coherence (structure, regularity) in the fitness landscapes of models of biological evolution, because algorithmically-random fitness functions cannot arise in physical reality. Those functions “contain” more information than the observable universe can “hold.” That is, only for unrealistically small genomes do algorithmically-random fitness functions require less than 10^90 bits (Seth Lloyd’s bound on the information storage capacity of the observable universe, not taking gravitational degrees of freedom into account) to encode.

    Let’s say we have a uniform distribution on fitness functions that are “simple” enough (low enough in Kolmogorov complexity) for physical realization. Assume that the necessary and sufficient condition for NFL is not satisfied. As pointed out above, “not NFL” does not mean “free lunch in optimization.” Whether some algorithm is generally superior in optimization performance to another under the distribution I described is, to my knowledge, an open question.
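
    The bound is easy to check (a sketch, assuming one bit of fitness per genome and Lloyd’s 10^90-bit figure quoted above):

```python
# An algorithmically-random binary fitness function over all genomes of
# length L needs about 4**L bits to encode (one independent bit per genome).
# Find the smallest L at which that exceeds Lloyd's 10^90-bit bound.
BOUND_BITS = 10**90

L = 1
while 4**L < BOUND_BITS:
    L += 1
print(L)  # 150: genomes of only ~150 nucleotides already break the bound
```

So “unrealistically small” is an understatement: real genomes are millions of nucleotides long, yet the bound fails already at about 150.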

  163.

    Atom, perlopp, and others,

    You have to be careful when you talk about the “search space structure” and the “fitness landscape.” In Wolpert and Macready’s NFL framework, there is no topology associated with the search space — not even an adjacency relation. A search algorithm may associate topology with the search space. (We humans must do just that to speak of the fitness landscape.) This is a form of prior information (perhaps misinformation) about the search problem.

    The “fitness landscape” is not really an NFL concept, and I mentioned it above only because people here seem to find it intuitive. A search algorithm is not given information on the structure of the search space as an input. Any such information must be embedded in the search algorithm itself. It amounts to prior belief about “what kind” of functions will be input.

  164. (For the record:) A correction:

    Above (117) I wrote,

    the term “competition” — the word I say can only be a metaphor in Darwinian usage — isn’t defined [in the Atmar paper].

    Actually, I suppose he can be considered to have defined it implicitly — in one of the example quotes I gave, no less:

    [T]he best…are retained to reproduce in the next generation…and ultimately the competitive exclusion of the least appropriate (”least fit”) is assured.

    Thus, according to Atmar, “competition” is equivalent to “being successively retained”, i.e., undergoing the process of ‘natural selection’. So, again:

    “[T]echnically, there is no competition in Darwinian theory… [Its use] can only be considered metaphorical…

    (Thanks, Semiotic 007.)
    __________

    metaphor – a figure of speech in which a word or phrase literally denoting one kind of object or idea is used in place of another to suggest a likeness or analogy between them (Merriam-Webster)

  165.

    Atom (152):

    In the 1997 paper, the authors also repeatedly claim that problem specific information must be included and utilized in the search, or else the resulting performance is the same as random search. This, they say, is how algorithms can do better than random search (by utilizing knowledge of the search space) which backs up Dembski/Marks’ work on active information as well as Meester’s claims.

    But in 2000 we learned that almost all functions are algorithmically random, and that random search almost always locates a good value rapidly: Optimization Is Easy and Learning Is Hard in the Typical Function. There is no way to obtain information about a given algorithmically-random function without searching the function. And what kind of “design” procedure depends upon performing the task for which an algorithm is being designed? Algorithm design is futile in the face of randomness. But random search is effective when the function is algorithmically random. The first result along these lines (developed without the notion of algorithmic randomness) appeared in 1996. Wolpert and Macready knew about it, but never mentioned it, perhaps because it challenged their exposition of the NFL theorems.
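
    The “random search locates a good value rapidly” claim is just the tail of a geometric distribution. A sketch with illustrative numbers (q is the fraction of the domain counted as “good”):

```python
# Probability that uniform random sampling hits one of the best q fraction
# of the domain at least once within k probes: 1 - (1 - q)**k.
def hit_probability(q, k):
    return 1 - (1 - q) ** k

# Even for a selective target (top 0.1% of values), a few thousand blind
# probes almost surely succeed -- independently of the domain's size.
p = hit_probability(q=0.001, k=5000)
print(round(p, 4))  # 0.9933
```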

    The 1997 NFL results of Wolpert and Macready are of theoretical importance, but they don’t characterize search in the physical world, where functions are usually algorithmically-compressible in the extreme (far from algorithmically random). I don’t think anyone knows whether some algorithm is generally superior to others in optimization of highly-compressible functions. If it should turn out that random search is not the “average” search, as it is in theory, then Marks and Dembski’s definitions of intrinsic, extrinsic, and active information in terms of random search perhaps will not make the sense they do now. My current research suggests that you might want to save your stone tablets.

  166.

    Please note my comments at 162 and following that spent some time “awaiting moderation.”

    j @ 164: I’m pleased to connect on the matter of competition. Sorry to miss 117. It’s hard to keep up with this thread. Certainly Atmar takes the neo-Darwinian paradigm as given. I did not intend for the article to serve as justification of the paradigm, but to provide a more sophisticated view of simulation of evolution. The unfortunate aspect of the example is that it has an engineering, rather than biological, slant.

    The thing I hoped Atmar would make clear is that we can put abstract evolutionary principles to test without simulating biological organisms. If memory serves, he indeed emphasizes the principles, as opposed to the implementation details.

  167.

    j (117):

    It would be a simulation of Darwinian (nonteleological) evolution only if the “behavioral error” is assessed using a criterion that has been evolved nonteleologically (without any kind of built-in goal).

    A fitness function merely says how good individuals are. The function may be obtained by measurements of physical individuals, and it should be clear in such a case that the measurements are in no way determined by human purpose. When a human encodes a function and supplies it as input to an implementation of an evolutionary algorithm, the teleos of the human does not infect the function.

    I have been on the merry-go-round with this issue before, so allow me to take a radically different tack:

    I recently used two methods to randomly generate many highly compressible functions. Random search consistently outperformed simple enumeration of candidate solutions, and a (1+1)-EA [evolutionary algorithm] consistently outperformed random search.

    The only teleology here is that I tried to make the distribution of functions uniform on a set of highly-compressible functions. I can’t prove that I succeeded, but there is no doubt about my goal. Success in function optimization requires nothing like “teleological” functions.
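
    The experiment described isn’t reproduced here, but a minimal sketch in the same spirit, using OneMax (the count of ones, a highly compressible fitness function) as a stand-in, shows a (1+1)-EA beating random search on a structured function:

```python
import random

N, BUDGET = 50, 2000           # bitstring length and probe budget (illustrative)
rng = random.Random(0)

def onemax(bits):
    # Highly compressible fitness function: just count the ones.
    return sum(bits)

def random_search():
    # Best value among BUDGET independent uniform samples.
    return max(onemax([rng.randint(0, 1) for _ in range(N)])
               for _ in range(BUDGET))

def one_plus_one_ea():
    # (1+1)-EA: flip each bit with probability 1/N, keep the child if no worse.
    x = [rng.randint(0, 1) for _ in range(N)]
    for _ in range(BUDGET):
        y = [b ^ (rng.random() < 1 / N) for b in x]
        if onemax(y) >= onemax(x):
            x = y
    return onemax(x)

rs, ea = random_search(), one_plus_one_ea()
print(rs, ea)  # the EA reliably reaches a higher value on this structured function
```

The EA exploits the coherence of the landscape (good strings have good neighbors); on an algorithmically-random function that advantage disappears.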

  168. #151 perlopp

    “From Wikipedia: While chronos is quantitative, kairos has a qualitative nature”

    Good joke; but it’s inappropriate here. It’s me who noticed that your argument about p^2 and p/k lacks any quantitative meaning in the real world ;-)

  169. #159 Semiotic

    “The necessary and sufficient condition for NFL did not appear until 2004. But now that we have it, ad hoc arguments like kairos’ in 125 are inappropriate. If you want to establish that there is no free lunch, then you must prove that p(f) = p(f o j) for all fitness functions f and for all permutations j of the domain of fitness functions (space of genotypes).”

    But you know that in this sense no NFL should be present in the real world (I and T did specify in their paper that no situation for NFL would arise in the real world). The problem is: provided that in every real search we cannot do more than, say, 10^80 iterations, what’s the sense in discarding my example on the basis of differences in P’s that could never be effective in the real world?

  170. I am curious how the recent CTD-code research relates to this discussion.

    Including the latest research, there are three independent codes: DNA, histone, and CTD. How many more patterns will be found that are not currently in evolutionary models? And how do these models account for such non-trivial repeats? Does this favor a telic or non-telic process? Finally, what does it say in any relation to NFL?

    Kairos,

    Your note about computational processes, I think, must be acknowledged. NFL is just one aspect in an ID world. Error checking cannot be accounted for by blind evolutionary processes.

    Life appears with computational design characteristics of logical and/ors that conserve core processes but allow variation at the edges. And I always thought of compression algorithms as a sign of intelligent input.

    From Science daily…

    But what was the development that permitted this advance in gene usage? The RNA polymerase II has developed a structure composed of repeats of a 7 amino-acid sequence. In humans this structure – termed “carboxyterminal domain” or CTD – is composed of 52 such repeats. It is placed exactly at the position where RNA emerges from RNA polymerase II. In less complex organisms the CTD is much shorter: a worm has 36 repeats, and yeast as few as 26, but many single-cell organisms and bacteria have never developed an obvious CTD structure.

    reference link:
    http://www.sciencedaily.com/re.....094106.htm

    It appears the “mechanisms” of life are arrived at by conditional logic and precise placements through a core of patterned “meta-codes” that interact and/or overlay each other depending on developmental sequences, functional purpose, and conditional error processing.

    Can these be considered layers of Meta-codes? If it is a code, then life can be unlocked.

    Original papers…
    TRANSCRIPTION:
    Seven Ups the Code

    Jeffry L. Corden
    http://www.sciencemag.org/cgi/...../5857/1735

    hattip: creationsafaris.com

    Metacodes of Life, sounds like a very good project, or book title.

    Newton believed he would find order in the Cosmos and the solar system. Now, ID must focus on order within.

    I personally think a greater focus of ID research can be in unraveling similar structures like Histone and CTD. I think this is just the beginning: the very tip of knowledge unfolding on what will be found to be layers of significant grand orders, not “illusions.”

  171.

    kairos,

    I and T did specify in their paper that no situation for NFL would arise in the real world

    Thanks very much for telling me that. They devote just a few sentences to the observation, and I had no recollection of it.

    At the end of A No-Free-Lunch Theorem for Non-Uniform Distributions of Target Functions, Igel and Toussaint write,

    The probability that a randomly chosen distribution over the set of objective functions fulfills the preconditions of theorem 5 has measure zero. This means that in this general and realistic scenario the probability that the conditions for a NFL-result hold vanishes. [...] It turns out that in this generalized scenario, the necessary conditions for NFL-results can not be expected to be fulfilled.

    This is all they say on the matter. It’s a clever and concise argument against “NFL in the world,” but I’m not sure of the logic of supposing that the probability distribution on functions is itself random. I’d be interested to hear what Dr. Dembski makes of this double stochasticity.

    Even if you buy the reasoning of Igel and Toussaint, their conclusion is not really the end of the story. Despite the all-or-nothing sound of “no free lunch,” NFL is a matter of degree. There are distributions on functions for which algorithms differ at most slightly in their distributions on performance values. English works this out in excruciating mathematical detail in another of the papers of 2004 that prove the necessary and sufficient condition, No More Lunch: Analysis of Sequential Search. The mass of equations makes me want to vomit, but figures 1, 3, and 4 are the only graphical depiction of generalized NFL I have seen. (They illustrate something Marks and Dembski emphasize, namely that a search algorithm “shuffles” the function.) English emphasizes that most functions have exorbitant Kolmogorov complexity — an observation complementary to that of Igel and Toussaint.

    If p(f) is the probability that a human will refer to function f, I can give a persuasive argument (not really a proof, because no one knows much about p) that the distribution p is not even approximately one for which there is NFL. The gist is that there are common f for which almost all f o j cannot arise in practice. The situation is messier with fitness functions “in” biological evolution, because they are hypothetical constructs.

  172.

    kairos (169):

    The problem is: provided that in every real search we cannot do more than, say, 10^80 iterations, what’s the sense in discarding my example on the basis of differences in P’s that could never be effective in the real world?

    Sorry, but I don’t understand what you’re asking.

  173.

    Michaels7 (170):

    Finally, what does it say in any relation to NFL?

    Nothing. What are your impressions of Meester’s paper?

    NFL is just one aspect in an ID world.

    The field of evolutionary informatics, which Marks and Dembski are committed to exploring, has its origin in NFL. Dr. Dembski has given a lot of attention to simulation of evolution. Thus the topics of this thread are worthwhile.

  174.

    Oops.

    If p(f) is the probability that a human will refer to function f, …

    I should have said explicitly describe instead of refer to. I can refer to functions with no physical realization, but if I explicitly describe a function, the description is a physical realization. To make the notion of explicit description concrete, think in terms of writing a program that computes the function.

  175. Here is the critical passage from Haggstrom’s paper:

    Thus—and since fitness landscapes produced by (7) are extremely unlikely to exhibit any significant amount of clustering—we can safely rule out (7) in favor of fitness landscapes exhibiting clustering, thereby demonstrating the irrelevance of Dembski’s NFL-based approach. Note in this context that it is to a large extent clustering properties that make local search algorithms (such as A) work better than, say, blind search: the gain of moving from an x, an element of V, to a neighbor y with a higher value of f is not so much this high value itself, as the prospect that this will lead the way to regions in V with higher values of f. (p.13)

    If we’re going to argue, then let’s argue about this.

    His critical remark is that a genome one mutation away from a genome of high fitness will, on average, have fitness zero; hence, no “clustering.”

    In the paragraph right above the one the quoted remark is taken from, Haggstrom says that no one would believe in a ‘fitness landscape’ wherein a genome one step, or one mutation, away from a high-fitness genome would have a fitness of zero, since this would mean, e.g., that a human genome that experienced one mutation anywhere along its length would not be capable of producing life. Yet that is what the ‘independent’ distribution of the NFLT seems to imply. Thus, realistic fitness functions can have nothing to do with a uniform distribution and the NFLT.

    I think Haggstrom is wrong, and for the following reason:

    Let’s start with the space of all possible genomes. That space has cardinality 4^3,000,000,000 per Haggstrom. Let’s round that down, conservatively, to 10^1,000,000,000 elements. This is roughly the space of all sequences of 3,000,000,000 nucleotides.

    In Nature, there are how many species? A trillion. Ten trillion. Let’s take a hundred trillion, for argument’s sake. And let’s say that each one of these “high fitness” (= “life”) genomes has a trillion similar permutations that are also “high fitness” (= “life”). We know that all of these genomes exist in the genome space above. Haggstrom would tell us, however, that none of these genomes are next to one another in genome space; therefore it’s not a realistic comparison. Well, I propose to make it realistic.

    How shall we do that? By “clustering” each of the trillion viable genomic permutations around each of the 100 trillion living genomes. If we mentally try to visualize what’s going on, we can look down on a sea of two-dimensional space. At each location, that is, each point, of this two-dimensional space we find a permutation of a 3,000,000,000-long genome. As we look down onto this 2D space, these 100 trillion “high fitness” genomes, along with each of their trillion “high fitness” permutations, are randomly dispersed on this plane. What we’re going to do is to “pull together” all of these trillions of “high fitness” permutations to form clusters. (After all, they’re ‘independent’ of one another.) We end up with 100 trillion clusters, each consisting of one trillion permutations. We could have, admittedly, “clustered” all 10^26 (100 trillion x one trillion) together. But, if we were to do a blind search for just that one cluster, it would be much harder to find than having 100 trillion “clusters” (of a trillion permutations each) throughout the space of all possible genomes.

    Now in this configuration of genome space we have “clustering”; in fact, we have it to a staggering degree: viz., one trillion viable permutations per genome. So, if the human genome were to experience a mutation anywhere along its length, the likelihood of it not being viable would be 1 in a trillion.

    So, again, we have the space of all possible genomes within which are to be found, randomly (again, giving the best possibility of being found by search), 100 trillion “clusters” of a trillion permutations each. Once we’ve pulled all these permutations together and formed the 100 trillion “clusters”, then the space, G, of all possible genomes outside the clusters is smaller by roughly 10^26 genomes. But 10^26 is a vanishingly small fraction of G, leaving G essentially unaffected in size.

    Now, what we have left is a uniform distribution of size 10^1,000,000,000 among which are to be found generously realistic “clusters” of genomes for every living being imaginable. The odds of hitting the target, that is, any one of the 100 trillion “clusters” of genome permutations, through blind search are 10^26/10^1,000,000,000 = 1 in 10^999,999,974.

    You can’t argue that the “clustering” I propose has in any significant way changed the uniform distribution of G, the space of all possible genomes. Nature must navigate this space using, per Haggstrom, Darwin’s algorithm A (reproduction-mutation-selection) to find its way through this uniform distribution. But since it is a uniform distribution, we know that it’s no better than ‘blind search’, and we know that G is too vast for blind search to work.
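
    Checking the arithmetic above (log10 throughout; all quantities are the illustrative figures from this comment: 100 trillion clusters of a trillion viable permutations each, in a space of 10^1,000,000,000 genomes):

```python
# Work in log10 exponents; the raw integers are far too large to materialize.
log_G        = 1_000_000_000          # |G| ~ 10^1,000,000,000 genomes
log_clusters = 14                     # 100 trillion clusters
log_per      = 12                     # a trillion viable permutations each
log_viable   = log_clusters + log_per # 10^26 viable genomes in total

# Probability that one blind probe hits a viable genome:
log_p = log_viable - log_G
print(log_p)  # -999999974: the clustering leaves blind search hopeless
```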

    This is where the Explanatory Filter, that Dembski describes, would tell us that since randomness cannot explain the “discovery” of living genomes, then design is involved.

  176. PaV(171),

    You confuse uniform distribution over the genome space with uniform distribution over the space of fitness functions. It is the latter that is relevant to the NFL Theorem.

    Double fault. Game Haggstrom. New balls.

  177.

    Pav (175):

    You can’t argue that the “clustering” I propose has in any significant way changed the uniform distribution of G, the space of all possible genomes.

    NFL Theorems 1 and 2 of Wolpert and Macready (1997) implicitly assume uniform distributions on spaces of all possible functions (static and time-varying, respectively), not a uniform distribution on the search space (here, the space of all possible genomes). My best guess as to the source of your confusion is that Dr. Dembski often predicates that the search target is distributed uniformly on the search space. That’s not equivalent to the assumption of a uniform distribution on functions with the search space (genomes) as their domain. Does that make sense to you?

    I did in fact argue in 159, in which I blur the “trees” in an attempt to make the “forest” visible, that if function f is drawn uniformly from the set of all functions from genomes to fitness values, then f is almost certainly algorithmically random. If you imposed a topology on the space of genomes prior to drawing f, then the fitness landscape is almost certainly disorderly in the extreme. If you can find order in f, then the orderliness you have observed generally can be exploited in an algorithm to compress f. But then f is not algorithmically random (by definition).

    As I said before, when you say that you expect to see regularities in randomly drawn functions, you are saying the distribution is not uniform. If you define or transform the topology of the space of genomes to obtain regularity after drawing f, then you are, loosely speaking, adding information.
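
    The compressibility point can be illustrated with an off-the-shelf compressor standing in, loosely, for Kolmogorov complexity (the two byte arrays are invented for illustration):

```python
import zlib, random

rng = random.Random(1)

# A "landscape" with structure: values rise smoothly with position.
smooth = bytes(x // 16 for x in range(4096))

# An incoherent landscape: independent uniform values, like a function
# drawn uniformly at random.
jagged = bytes(rng.randrange(256) for _ in range(4096))

# The orderly function compresses far below its raw size; the
# algorithmically-random one does not compress at all.
cs, cj = len(zlib.compress(smooth)), len(zlib.compress(jagged))
print(cs, cj)  # cs is a tiny fraction of 4096; cj is slightly above it
```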

  178. Sorry, I meant PaV(175), not (171). Anyway, my comment regarding PaV’s confusion was corroborated by Semiotic(177).

  179. perlopp #176:

    “You confuse uniform distribution over the genome space with uniform distribution over the space of fitness functions.”

    The conditions I impose on S render the set S ‘essentially’ a uniform distribution with the value of all the f(x)’s being zero. The NFLT apply.

  180. Semiotic #177:

    “NFL Theoerms 1 and 2 of Wolpert and Macready (1997) implicitly assume uniform distributions on spaces of all possible functions (static and time-varying, respectively), not a uniform distribution on the search space (here, the space of all possible genomes).”

    The confusion comes a little bit from Haggstrom. He uses the sets V and S in two different ways. The first way he uses them is to generate the set of all genomes from V, the set of nucleotides, and S, some subset of R; in the case he uses, the length is 3 billion. Later on, he uses V as the set of all genomes so generated and applies a fitness function to generate S, the set of all fitnesses of human genomes. In my post at #175, G should not be the space of all genomes, but S, the space of all genome fitnesses given the function f. But with that correction, I believe the argument still stands.

    I have to correct my correction in #179. With the clustering I impose, G (or, more properly, V) essentially remains a uniform distribution over genome space, and, as I state in #178, once the fitness function is applied, S remains essentially a uniform distribution with the fitness value being zero over the entire set.

    As I stated before, the Dirac delta function can be applied. Simply gather all of the “clusters” of the 100 trillion genomes that have “high fitness”, i.e., at or extremely close to 1, into one point. Instead of taking the distribution over an interval, take it over that one point. In the real world, genomes that function can be thought of as “spectral lines”, to borrow an analogy from physics. Just as spectral lines just are, so too, fit genomes just are.

  182. PaV(179,180),

    Haggstrom consistently uses the notation V for the domain and S for the range of a function. The set of functions from V to S is denoted S^V, and it is over this set that the uniform distribution is assumed in the NFLT. You are constantly confusing different sets with each other, and also the sets themselves with their cardinalities. Haggstrom’s paper is actually very pedagogical if you take the time to read and understand it (even if you disagree with his conclusions).

  183. PaV,

    I’ll try to explain how it works. There is a domain V consisting of all possible DNA sequences; thus V is what you call the genome space. A fitness function is a function from V to some other space S, f:V–>S. As Semiotic has pointed out, “fitness” is a hypothetical construct, but we can understand intuitively what it means. To make it easy, let us take S={0,1} (only two points), where 0 means low fitness (not likely to survive) and 1 means high fitness (likely to survive).

    A fitness function now maps each sequence in V to a point in S. If the number of sequences is N, there are 2^N different such functions. The uniform distribution that is the premise of the NFLT means that we consider all these functions to be equally likely. For example, there is one function that maps all sequences to the point 1. This function says that all sequences have high fitness. I think we can agree that this is not a reasonable choice of fitness function, but the premise in the NFLT says that we consider it just as likely as any other function. In fact, your entire discussion implies that you consider fitness functions of a certain “diraquesque” type to be much more likely than others. Thus, you are arguing for a very nonuniform distribution over the set of fitness functions (the set S^V), and the premise of the NFLT is not satisfied.

    The more general premise of permutation invariance is not a rescue as it essentially implies that fitness is unaffected by any rearrangement of the genome.
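
    The absence of clustering under uniformity can be seen numerically. A sketch on a small bitstring domain (sizes are illustrative): draw one fitness function uniformly from S^V and check whether high fitness at a point predicts high fitness at its Hamming neighbors:

```python
import random
from itertools import product

rng = random.Random(2)
n = 10
V = list(product([0, 1], repeat=n))   # 2^10 = 1024 "genomes"

# Draw one fitness function uniformly at random from S^V with S = {0, 1}:
f = {v: rng.randint(0, 1) for v in V}

def neighbors(v):
    # All Hamming-distance-1 neighbors of v.
    return [v[:i] + (1 - v[i],) + v[i+1:] for i in range(n)]

# Fraction of neighbors of high-fitness genomes that are themselves
# high-fitness. Under uniformity this is just ~1/2: knowing f(x) tells
# you nothing about f at x's neighbors -- no clustering.
high = [v for v in V if f[v] == 1]
frac = sum(f[w] for v in high for w in neighbors(v)) / (len(high) * n)
print(round(frac, 2))  # close to 0.5
```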

  184. All:

    Been busy elsewhere.

    I think the best answer to the dilemmas and issues in this thread is to pause, take a walk, then come back and think about the issues in light of basic common sense.

    Pardon a few semi-random thoughts, to stimulate such:

    1 –> First, indeed it is always logically possible that we can have nicely structured phase spaces and/or algorithms that are tuned to them. Given the complexity of the phase spaces in question of this nice type and the functionally specified, complex — AKA “active” — information in the associated algorithms, what is the most reasonable explanation of such if we see it happening but don’t know through direct observation the sources of the space and the algorithms?

    2 –> We are in the end dealing with the real world. The one in which Kenyon’s biochemical predestination thesis on protein sequences went down in flames when the actual typical patterns of proteins were examined. [AND, BTW, what would be most likely responsible for a world in which in effect the laws of chemistry had "life" written into them?]

    3 –> On Bob O’H's [in 3]: the whole approach of calculating CSI of proteins is flawed too, because those probabilities can’t be calculated either. I observe that first, the information-carrying capacity of 20-state chained digital elements is not a mystery to calculate: 20^N for an N-length chain (the underlying logic of this information capacity calculation is obvious). And, Bradley in his recent remarks on Cytochrome C summarises that in fact there is some failure to take up the full space, i.e. the amino acid residues are empirically not equiprobable, just as the letters of English are not equiprobable:

    Cytochrome c (protein) — chain of 110 amino acids of 20 types

    If each amino acid has pi = .05, then average information “i” per amino acid is given by log2 (20) = 4.32

    The total Shannon information is given by I = N * i = 110 * 4.32 = 475, with total number of unique sequences “W0” that are possible is W0 = 2^I = 2^475 = 10^143

    Amino acids in cytochrome c are not equiprobable (pi ≠ 0.05) as assumed above.

    If one takes the actual probabilities of occurrence of the amino acids in cytochrome c, one may calculate the average information per residue (or link in our 110 link polymer chain) to be 4.139 using i = – Σ pi log2 pi [TKI NB: which is related of course to the Boltzmann expression for S]

    Total Shannon information is given by I = N * i = 4.139 x 110 = 455.

    The total number of unique sequences “W0” that are possible for the set of amino acids in cytochrome c is given by W0 = 2^455 = 1.85 x 10^137

    . . . . Some amino acid residues (sites along chain) allow several different amino acids to be used interchangeably in cytochrome-c without loss of function, reducing i from 4.19 to 2.82 and I (i x 110) from 475 to 310 (Yockey)

    M = 2^310 = 2.1 x 10^93 = W1

    Wo / W1 = 1.85 x 10^137 / 2.1 x 10^93 = 8.8 x 10^43

    Recalculating for a 39 amino acid racemic prebiotic soup [as Glycine is achiral] he then deduces (appar., following Yockey):

    W1 is calculated to be 4.26 x 10^62

    Wo/W1 = 1.85 x 10^137 / 4.26 x 10^62 = 4.35 x 10^74

    ICSI = log2 (4.35 x 10^74) = 248 bits

    He then compares results from two experimental studies:

    Two recent experimental studies on other proteins have found the same incredibly low probabilities for accidental formation of a functional protein that Yockey found

    1 in 10^75 (Strait and Dewey, 1996) and

    1 in 10^65 (Bowie, Reidhaar-Olson, Lim and Sauer, 1990).
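
    The uniform-alphabet arithmetic quoted above can be re-derived in a few lines (a sketch in Python; the 2.82 bits-per-residue figure is Yockey's functional estimate as quoted, not something computed here):

```python
import math

# Re-deriving the uniform-alphabet Shannon figures quoted above
# for a 110-residue chain over a 20-letter amino-acid alphabet.
N = 110
i_uniform = math.log2(20)        # bits per residue, 20 equiprobable types
I_uniform = N * i_uniform        # total Shannon information capacity
W0 = 2 ** I_uniform              # number of possible sequences

i_functional = 2.82              # Yockey's quoted per-site estimate
W1 = 2 ** (N * i_functional)     # "functional" sequences, close to 2^310

print(round(i_uniform, 2))       # 4.32
print(round(I_uniform))          # 475
print(f"{W0:.1e}")               # 1.3e+143  (the text's ~10^143)
print(f"{W1:.1e}")               # 2.4e+93   (the text's ~2.1e93 uses 2^310 exactly)
```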

    4 –> That looks to me suspiciously like a classic a priori probability assignment based on observed patterns [incrementing from the well known Laplace [etc] principle of indifference [cf Robertson's use of it in developing informational approaches to statistical thermodynamics, e.g. app 1 my always linked], that one assigns uniform likelihood where there is no reason to infer to the contrary; of course one handicaps in light of such observations], i.e. the key element in calculating information and informational entropy. [BTW, is anyone here prepared to argue that English text is most likely explained as chance + necessity on grounds that the distributions of the letters are non-uniform and that for instance there are key clusters of likelihood, i.e. redundancy such as "qu" etc?]

    5 –> PaV [175] has therefore put the evident bottomline very well:

    If we mentally try to visualize what’s going on, we can look down on a sea of two-dimensional space. At each location, that is, each point [I would say cell; this is a discrete space!], of this two-dimensional space we find a permutation of a 3,000,000,000 long genome. As we look down onto this 2D space, these 100 trillion “high fitness” genomes, along with each of their trillion “high fitness” permutations, are randomly dispersed on this plane. What we’re going to do is to “pull together” all of these trillion “high fitness” permutations to form a cluster. (After all, they’re ‘independent’ of one another) We end up with 100 trillion clusters, each consisting of one trillion permutations. We could have, admittedly, “clustered” all 10^25 (100 trillion x one trillion) together. But, if we were to do a blind search for just that one cluster, it would be much harder to find than having 100 trillion “clusters” (of a trillion permutations) throughout the space of all possible genomes.

    Now in this configuration of genome space we have “clustering”; in fact, we have it to a staggering degree: viz., one trillion viable permutations per genome. So, [per model just proposed] if the human genome were to experience a mutation anywhere along its length, the likelihood of it not being viable would be 1 in a trillion.

    So, again, we have the space of all possible genomes within which are to be found, randomly (again, giving the best possibility of being found by search), 100 trillion “clusters” of a trillion permutations. Once we’ve pulled all these permutations together and formed 100 trillion “clusters” of a trillion permutations each, then the space, G, of all possible genomes is smaller by roughly 10^25 genomes. But 10^25 is an utterly negligible fraction of G, leaving G essentially unaffected in size.

    Now, what we have left is a uniform distribution of size 10^1,000,000,000 among which are to be found generously realistic “clusters” of genomes for every living being imaginable. The odds of hitting the target, that is, any one of the 100 trillion “clusters” of genome permutations, through blind search are 10^25/10^1,000,000,000, i.e., 1 in 10^999,999,975.

    You can’t argue that the “clustering” I propose has in any significant way changed the uniform distribution of G, the space of all possible genomes. Nature must navigate this way using, per Haggstrom, Darwin’s algorithm A (reproduction-mutation-selection) to find its way through this uniform distribution. But since it is a uniform distribution, we know that it’s no better than ‘blind search’, and we know that G is too vast for blind search to work.

    This is where the Explanatory Filter, that Dembski describes, would tell us that since randomness cannot explain the “discovery” of living genomes, then design is involved.
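
    Taking the quoted round numbers at face value (the 10^25 target size and the 10^1,000,000,000 space size are the comment's own figures), the blind-search odds work out in base-10 logs as:

```python
# Blind-search odds from the quoted cluster model, done in base-10 logs
# to avoid astronomically large integers.
log10_target = 25                # 100 trillion clusters x 1 trillion permutations
log10_space = 1_000_000_000      # the comment's |G| for 3-billion-base genomes

print(log10_target - log10_space)    # -999999975, i.e. 1 in 10^999,999,975
```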

    And, BTW, IMHO [NB, Mark, 12], that bottomline looks a lot like the point I made way back at the top of the thread. Sorry if it is painful, but it does seem to me well-warranted, given the summary just given vs statements from the authors in question like Haggstrom’s:

    “My debunking of some dishonest use of mathematics in the intelligent design movement . . .”

    If the sorts of summarised estimates above are “dishonest,” kindly show me why and how.

    GEM of TKI

  185. Semiotic 007 (166): “The thing I hoped Atmar would make clear is that we can put abstract evolutionary principles to test without simulating biological organisms. If memory serves, he indeed emphasizes the principles, as opposed to the implementation details.

    Are you confused? This directly contradicts your stated intention at (84). You provided a link to it to try to teach me what “competition in a bounded arena” means:

    “my best guess is that you simply do not understand the phrase [‘competition in a bounded arena’]. Here is an outstanding paper that…ties [the phrase] to neo-Darwinism…

    I then showed that the paper actually supports my statement that competition doesn’t really exist in Darwinian theory, not your statement that competition “is essential to evolutionary theory.”

    (Besides, I don’t deny that important conclusions of principle can be drawn from abstract evolutionary simulations, so there would be no reason for you to try to make this clear to me. — I simply know that entirely baseless conclusions are often drawn by Darwinists. E.g., Avida, etc.)

    Semiotic 007 (167): “A fitness function merely says how good individuals are. The function may be obtained by measurements of physical individuals, and it should be clear in such a case that the measurements are in no way determined by human purpose.

    It is clear that characteristics of (most) physical organisms aren’t determined by human purpose, but it’s an unfounded assumption that they aren’t determined by any purpose whatsoever. Whether physical evolution is teleological or nonteleological is an unresolved and contentious issue. (Didn’t you know that?!)

    Semiotic 007 (167): “I recently used two methods to randomly generate many highly compressible functions. Random search consistently outperformed simple enumeration of candidate solutions, and a (1+1)-EA [evolutionary algorithm] consistently outperformed random search.

    It’s impossible to judge the importance of this statement without seeing a concrete example. Could you please provide an example of equivalent or similar work, so that we can determine for ourselves the extent of the teleology and nonteleology in your method? Have you audited your work using the Marks/Dembski metric of active information?

  186. #175 PaV

    If, as it seems, your explanation is quite similar to my example, I agree with your claim.

  187. #171 Semiotic

    Even if you buy the reasoning of Igel and Toussaint, their conclusion is not really the end of the story. Despite the all-or-nothing sound of “no free lunch,” NFL is a matter of degree.

    This is just my point. I. and T. proved an iff condition for the NFLT under the non-uniform assumption and argued correctly that this condition should not hold in the real world. But the real world isn’t only a matter of not having all possible function permutations. It’s also a matter of strict bounds on the probabilities of what can really happen in the world. Any event with P equal to, say, 10^-1,000 is not, in the real world, statistically different from an event with P equal to, say, 10^-1,000,000: neither will really occur, and their effective P is 0.
    It’s in this sense that non-uniform distributions, too, can imply the NFLT in the real world.

    English emphasizes that most functions have exorbitant Kolmogorov complexity — an observation complementary to that of Igel and Toussaint.

    That’s correct.

    If p(f) is the probability that a human will refer to function f, I can give a persuasive argument (not really a proof, because no one knows much about p) that the distribution p is not even approximately one for which there is NFL. The gist is that there are common f for which almost all f o j cannot arise in practice.

    But have you taken into account the uniformity that is produced by the real P lower bound (for example let’s take the value proposed by Dembski 10^-150)?

    The situation is messier with fitness functions “in” biological evolution, because they are hypothetical constructs.

    That’s true; but, as correctly stated by PaV, the data provided by Behe in EoE give enough confidence that successful searches of this kind are beyond the reach of RM+NS in biology

  188. kairosfocus(184),

    5 –> PaV [175] has therefore put the evident bottomline very well

    Except that PaV has completely misunderstood the premises of the NFLT. As you agree with him, you may not have gotten it right either. See my post (183) for an explanation of the setup for NFLT. Semiotic(177) also addressed PaV’s confusion.

  189. Just a little follow-up to my remark about the Dirac Delta function (and as a way of visualizing minute “fitness” landscapes):

    Properly speaking, the Delta function is not a function; it’s a convenience that allows scientists to sum the probabilities that come up in QM in such a way as to incorporate them into integrals.

    The essence of the idea is much like I’ve stated. A uniform distribution is an equal probability distribution over an interval. The area (integral) underneath the interval has to add up to 1. With the Dirac Delta function, however, one “integrates”, i.e., takes the “area,” not underneath an interval, but rather underneath a point. Putting it that way, you can see that, properly speaking, this, geometrically, can’t be done. But, nature begs to differ. (As with “spectral lines” and much of QM) So, to get a grasp of the Delta function, picture yourself taking a ‘point’ and pulling it upwards towards positive infinity. “Stretching” the ‘point’ in this way causes the point to shrink its width in proportion to the length it’s being stretched to. The total area must still equal 1 however. So, remembering we’re dealing with a probability density at a single point, if, for example, the probability at that single point is 1/100, we, then, must “pull” this single point upward to the value of 100. The area, as you can see, now equals one.
    ____________________________

    Now, here’s the connection to “fitness” landscapes.

    I mentioned in an earlier post, that examining the characteristics of a certain protein in the sequence space of its functional unit, scientists found the fitness space of the protein to be something on the order of 10^-84, meaning that the protein could tolerate only 1 in 10^84 possible nucleotide exchanges, or permutations, in the critical area.

    Well, when this kind of “fitness landscape” is drawn, it is commonly shown as some kind of combined set of ‘hills’ that falls off into the plane out of which they arise. This picture you commonly see, is nothing but a fiction. It’s not meant to be a fiction; it’s just that it isn’t being thought through carefully enough. Let me show what I mean.

    Let’s say you have a 19” monitor you’re looking at this on. Imagine an upward-directed curve starting out from the bottom left of your monitor, rising to a peak which is way above the center of the screen, but that then falls off towards the far right corner, leaving a gap at the top of the screen. (We don’t know how ‘high’ this curve rises, but we know it’s much more than our screen can handle.) When the “fitness space” of a protein is 10^-84, that would mean that if what you see on the screen represents this “fitness space”, then proportionately, you would be larger than the universe by huge orders of magnitude. Here’s my calculation for the diameter of the “visible” universe:

    {[15 x 10^9 years (age of universe)] x [365 days/yr x 24 hrs/day x 60 min/hr x 60 sec/min x 300,000,000 meters/sec (the speed of light)] x 2 (radius to diameter conversion)} = 2.838 x 10^26 meters

    You’re two meters tall; the screen is a half meter. So, roughly speaking, you would have to be about 7 x 10^57 times bigger than the “visible” universe to be able to see the function across the screen the way I described it. Conclusion: only God can see this fitness function in this fashion. To draw the fitness function for such a protein, that is, as “falling off” in a curve-like way, is simply a fiction (or to play God). It’s a straight line, of thickness 1/10^84 of whatever scale you’re using (i.e., effectively invisible), ‘stretching out’ above whatever line you’ve drawn as a baseline by 10^84 times the scale you’re using (effectively infinite). You might as well say that it is a line of infinitely small breadth, and infinitely long length; the closest thing to it would be something along the lines of the “spectral lines” of atoms.
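
    The scale argument can be rechecked in a few lines (a sketch keeping the comment's round figures of a 15-billion-year age, a 0.5 m screen, a 2 m viewer, and a 10^-84 functional fraction, with the standard c = 3 x 10^8 m/s):

```python
# Back-of-envelope check of the screen/universe scale argument above.
SECONDS_PER_YEAR = 365 * 24 * 3600
AGE_YEARS = 15e9                         # the comment's age-of-universe figure
C = 3e8                                  # speed of light in m/s

diameter = 2 * AGE_YEARS * SECONDS_PER_YEAR * C
print(f"{diameter:.3e}")                 # 2.838e+26 metres

# Scale the drawing so the 1e-84 functional slice spans the 0.5 m screen,
# and scale the 2 m viewer by the same factor (viewer = 4x screen).
screen = 0.5 / 1e-84                     # scaled screen width: 5e83 m
viewer = 4 * screen                      # scaled viewer height: 2e84 m
print(f"{viewer / diameter:.1e}")        # 7.0e+57 times the visible universe
```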

  190. j@185 (and all, as always):

    I resisted getting into a debate of an off-topic point. But you insisted, and I responded by linking to an on-topic paper that stood to be of interest to everyone reading the thread. Only the ellipsis in your quote of my comment (#84, you might have mentioned) makes things seem otherwise. Here I’ve emphasized some text you omitted:

    Here is an outstanding paper that not only ties “competition in a bounded arena” to neo-Darwinism, but also gets us (pretty please) on-topic: Notes on the Simulation of Evolution. The author, Wirt Atmar, earned a dual Ph.D. in electrical engineering and biology, and very few people are as qualified to comment as he.

    I say now to everyone, explicitly, that the confluence of statistics, thermodynamics, optimization, learning, and biology in the paper is something very rare. As regards this thread, what is most important is that Atmar addresses simulation of evolution in the abstract. The downside is that the paper is 15 years old, and that it has an engineering slant in places.

  191. Perlopp, 188:

    A] Re:

    PaV has completely misunderstood the premises of the NFLT. As you agree with him, you may not have gotten it right either . . .

    Neat dismissal-by-drive-by-labelling attempt.

    Now, kindly deal with the real-world issue that starts from [a] getting to a plausible prebiotic soup thence to [b] the observed functionally specified complex information of life systems, and onward, [c] body-plan level biodiversity. [You will note that I studiously avoid abstract discussions of NFLT etc, cf. my always linked for a, b, c in reverse order.]

    In particular, to anchor all of the above to scientific issues:

    1 –> the AVERAGE [i.e. typical/ reasonably expected] performance of unintelligently directed or structured “searches” across v. large configuration spaces. [These are the ones relevant to evolutionary materialist scenarios for OOL and body-plan level biodiversity.]

    2 –> The required co-adaptation of config space and search algorithm to consistently out-perform the average, i.e the issue of active information/ functionally specified complex information as the condition of confining a search to the neighbourhood of archipelagos of success, and hill-climbing to optimisation etc.

    3 –> Application of same to claimed prebiotic environments that credibly gets us to a feasible prebiotic soup, thence by chance + necessity only to the observed FSCI of life systems at cellular level without exhaustion of probabilistic resources of the cosmos, and without metaphysical artifices such as quasi-infinite arrays of sub-cosmi.

    4 –> The commonplace observation that in all cases where we do directly know the causal story, FSCI as just outlined is the product of agency, e.g. the posts in this thread. [NB: these happen to be high-contingency digital data strings that are functionally specified and complex, where perturbation beyond rather narrow bounds gets to non-functional nonsense as a rule.]

    5 –> Now, put the same insights from above into analysing the likely effect of the attempted construction of algorithms and the associated design of codes by chance processes on the gamut of the observed cosmos. [In case you don't understand my point, cf my always linked, App 1 sect 6, here.]

    Now on NFLT as a context of mathematical debates . . .

    B] NFLT and the origins of biologically relevant CSI:

    1] PL, 183: The uniform distribution that is the premise of NFLT means that we consider all these functions to be equally likely.

    Actually, the context of this is best understood from say Harry Robertson’s Statistical Thermophysics, as I cite in my always linked, from his analysis on the deep link between statistical thermodynamics and information theory concerns:

    . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability should be seen as, in part, an index of ignorance] . . . .

    [deriving informational entropy . . . ]

    S({pi}) = – C [SUM over i] pi*ln pi . . . .

    [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . .[pp.3 - 6]

    S, called the information entropy, . . . correspond[s] to the thermodynamic entropy, with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context [p. 7] . . . .

    Jayne’s [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [p. 36.]

    [Robertson, Statistical Thermophysics, Prentice Hall, 1993.]

    --> Such considerations, of course, are highly relevant to the move from pre-biotic physics and chemistry to life-systems by chance and necessity only.

    --> They also underscore the relevance of the Laplace [etc] criterion that if one has no information to specify another distribution of probabilities of states, a uniform distribution is indicated. On this, we can see the great success of stat thermodynamics.

    --> When we do have reason to assign a different distribution — i.e. we have more information about the system, then we assign divergent probabilities.

    --> Indeed, on Cytochrome C as I cited in 184, Bradley does just that, following Yockey et al. [And, PL, that is part of why I cited this case in answering Bob O'H.]
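
    The Laplace-indifference point in the bullets above can be checked numerically (a sketch on four outcomes, with made-up alternative distributions): the uniform assignment maximizes Shannon's S = – Σ pi ln pi.

```python
import math

def entropy(p):
    """Shannon entropy -sum(p_i ln p_i), natural log as in Robertson's S."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

uniform = [0.25] * 4
# Any non-uniform assignment carries more information, hence less entropy.
others = [[0.4, 0.3, 0.2, 0.1], [0.7, 0.1, 0.1, 0.1], [1.0, 0.0, 0.0, 0.0]]

print(round(entropy(uniform), 4))                          # 1.3863 (= ln 4)
print(all(entropy(q) < entropy(uniform) for q in others))  # True
```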

    2] A Wiki FYI:

    In computing, there are circumstances in which the outputs of all procedures solving a particular type of problem are statistically identical [GEM note -- i.e on avg no-body does better than anybody else -- no free lunch] . A colorful way of describing such a circumstance, introduced by David H. Wolpert and William G. Macready in connection with the problems of search[1] and optimization,[2] is to say that there is no free lunch. Cullen Schaffer had previously established that there is no free lunch in machine learning problems of a particular sort, but used different terminology.[3]

    To pursue the “no free lunch” metaphor, if procedures are restaurants [they prepare meals on order to pre-defined procedures, reliably, at competitive costs and charge to make a profit] and problems are menu items [i.e the problem to be solved by the procedure is as just specified] , then the restaurants have menus that are identical except in one regard — the association of prices with items is shuffled from one restaurant to the next [here there is an averaging process . . .] . For an omnivore who decides what to eat only after sitting at the table, the average cost of lunch does not depend on the choice of restaurant [i.e he can apply intelligence to pick a good meal at a good price in a specific situation] . But a vegan who goes to lunch regularly with a carnivore who seeks economy pays a high average cost for lunch [the vegan pays for both lunches in effect!] . To methodically reduce the average cost of lunch, one must use advance knowledge of a) what one will order and b) what the order will cost at various restaurants. That is, reduction of average computational cost (e.g., time) in problem-solving hinges on using prior information to match procedures to problems.[2][3]

    In formal terms, there is no free lunch when the probability distribution on problem instances is such that all problem solvers have identically distributed results. In the case of search, a problem instance is an objective function, and a result is a sequence of values obtained in evaluation of candidate solutions in the domain of the function. For typical interpretations of results, search is an optimization process. There is no free lunch in search if and only if the distribution on objective functions is invariant under permutation of the space of candidate solutions.[4][5]
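
    The permutation-invariance condition just quoted can be seen concretely at toy scale (a sketch in Python, not Wolpert and Macready's proof): over the permutation-closed set of all functions from a 4-point domain to {0, 1, 2}, any two fixed non-repeating search orders need exactly the same average number of evaluations to find the maximum.

```python
from itertools import product

DOMAIN_SIZE = 4
VALUES = (0, 1, 2)

def evals_to_max(f, order):
    """1-based count of evaluations until the maximum of f is found."""
    best = max(f)
    for k, x in enumerate(order, start=1):
        if f[x] == best:
            return k

def average_cost(order):
    # All 3^4 = 81 functions from the domain to VALUES.
    fs = list(product(VALUES, repeat=DOMAIN_SIZE))
    return sum(evals_to_max(f, order) for f in fs) / len(fs)

# Two different fixed search orders: identical average cost.
print(average_cost((0, 1, 2, 3)) == average_cost((3, 1, 0, 2)))   # True
```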

    Now, of course, Wiki is not exactly an ID-friendly source, so I cited it to show that the basic ideas are as WD did put them. [I am a great fan of getting the basics right then looking at the elaborations.]

    In the context of OOL, Evo Mat advocates need to get to life on the gamut of the observed cosmos by chance + necessity only, without running into the sort of maximal improbabilities WD pointed to and which PaV admirably sums up.

    Then, they need to get to the body-plan level biodiversity and required increments of 100′s of millions to thousands of millions of DNA base pairs, without similarly exhausting probabilistic resources.

    That requires in effect finding a short-cut: [a] life is written into the underlying chemistry of pre-biotic environments and [b] RV + NS mechanisms for creating body-plan level functional bio-information without agency exist.

    Just to start with on these, kindly let us know PL, where the empirical evidence for inferring to such based on observations exists.

    3] Dirac Deltas . . .

    PaV, actually, they had to rewrite the definition of a function to accommodate the Dirac Delta! (It was too useful to be dismissed.)

    PL, PaV is pointing out and underscoring that when we look at archipelagos of functionality on the grand scale of the config space, they are so finely located that they act as though they are weighted points.

    Random searches starting at arbitrary points are overwhelmingly likely to miss them.

    Agents, of course, use active intelligence to narrow down to the neighbourhood of such islands and chains of islands, and so can access them with far greater likelihood of success.

    Thus, the routine ability of agents to get to FSCI such as the posts in this thread.

    GEM of TKI

  192. PaV(189),

    A uniform distribution is an equal probability distribution over an interval.

    That is one special case. However, in NFLT the uniform distribution is not over an interval but over a function space. You might want to consider DaveScot’s previous advice to “read a little more and write a little less.” I recommend posts 182 and 183.

  193. kairosfocus(191),

    PaV misunderstands & you agree with PaV => you misunderstand. Applied formal logic.

    Anyway, I recommend my post 183 that explains that the uniform distribution is over the function space and nothing else. The confusion among many here may stem from the fact that we are not using probabilities to describe something stochastic but rather to describe our degree of belief in the different functions (a so-called Bayesian perspective which may in itself be questioned, in which case the entire discussion is pointless). Your discussion about thermodynamics may be brilliant for all I know but it is nevertheless irrelevant for the NFLT. As Semiotic has pointed out, the most general possible premise for NFLT is permutation invariance which essentially means that a very fit individual remains very fit after any rearrangement of its genome. If you believe that applies in biology, you can apply the NFLT; otherwise, not.

  194. kairos (187):

    But have you taken into account the uniformity that is produced by the real P lower bound (for example let’s take the value proposed by Dembski 10^-150)?

    If you obtain your “real” distribution P by thresholding the given p (setting small probabilities to zero) and normalizing (i.e., to make the sum of probabilities over all functions equal to 1), then P diverges even further from the “nearest” distribution for which there is NFL than p does. Put (too) simply, to get NFL, you have to make the small probabilities bigger and the big probabilities smaller. You have suggested just the opposite. Figure 4 in the English (2004) paper might make this more intuitive.
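
    The thresholding point can be illustrated numerically (a sketch with made-up probabilities, using distance to the uniform distribution as a stand-in for distance to the nearest NFL distribution): zeroing small probabilities and renormalizing makes the big ones bigger, moving the distribution farther from uniform.

```python
def tv_to_uniform(p):
    """Total variation distance from p to the uniform distribution."""
    u = 1 / len(p)
    return 0.5 * sum(abs(pi - u) for pi in p)

p = [0.5, 0.3, 0.15, 0.05]                    # toy distribution over 4 "functions"
cut = [pi if pi >= 0.1 else 0.0 for pi in p]  # threshold the small probability
P = [pi / sum(cut) for pi in cut]             # renormalize to sum to 1

print(round(tv_to_uniform(p), 3))    # 0.3
print(round(tv_to_uniform(P), 3))    # 0.342 -- farther from uniform, not closer
```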

    This leads to another reason why Dembski (2002) should not have indicated that the NFL theorems of Wolpert and Macready (1997) apply to biological evolution. For any realistic model, there are more than 10^150 functions from genomes to fitness values, and thus p(f) is less than 10^-150 (the universal probability bound Dembski used in 2002) for all fitness functions f. That is, all fitness functions are effectively impossible when you posit a uniform distribution and apply the universal probability bound.

  195. PaV, kairos, and others:

    I remind you that the criticism is of the argument Dr. Dembski made in 2002, which referred only to the NFL theorems Wolpert and Macready published in 1997. Other proven NFL results and your own ideas about NFL are irrelevant. Dembski’s (2002) argument is Dembski’s (2002) argument is Dembski’s (2002) argument.

    To argue now that Dr. Dembski would have been right if he had said something he did not is… odd.

  196. Semiotic(194),

    I disagree with your second paragraph. I’m no follower of Dembski but here I have to defend him: He never states that any event with probability less than 10^-150 is impossible; only those that also satisfy some kind of “specification.” Obviously, if you choose uniformly from a set with more than 10^150 elements, you will get an outcome that has less than 10^-150 probability but you must get some outcome. If you shuffle two decks of cards with distinguishable backsides, there are more than 10^150 permutations but nobody, including Dembski, would argue that they are all impossible.
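
    The two-deck example is easy to check (a sketch in Python): 104 distinguishable cards admit 104! orderings, well beyond the 10^150 bound, yet every shuffle realizes one of them.

```python
import math

# Orderings of two shuffled decks with distinguishable backsides (104 cards).
n = math.factorial(104)
print(n > 10 ** 150)     # True: more permutations than the universal bound
print(len(str(n)) - 1)   # 166, i.e. 104! is on the order of 10^166
```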

    I think the confusion in this thread is that too few of us understand that the uniform (or more general permutation invariant) distribution is over a certain function space and not the genome space. I’ve tried to explain it and so have you. Hopefully it helps somebody.

  197. PL, re 193:

    Especially . . .

    Your discussion about thermo-dynamics may be brilliant for all I know but it is nevertheless irrelevant for the NFLT . . .

    First, in the real world, the vital issues are strongly linked to thermodynamics and thence — from the statistical perspective — to configuration spaces, populations of possible energy and mass distributions and thence relative statistical weights of related observable macrostates.

    So, until the issues on such are credibly and forthrightly addressed, most of the above thread is so much empty metaphysical speculation. In short, I have relegated the NFLT issues to in effect at most a footnote.

    For, whether or not Dr Dembski is right on his key NFLT contentions [and, FYI he is a competent Mathematician in his own right and published his claim in a peer-reviewed monograph, so the reasonable presumption is that he knows what he is talking about until and unless it can be cogently shown otherwise], the stat thermodynamics and its probabilistic resources challenge to the alleged spontaneous generation of biofunctional information is the 800 lb gorilla that is tapping you on the shoulder.

    For, the issue is not what is LOGICALLY (or even in the barest sense PHYSICALLY) possible — which includes a tornado in a junkyard assembling a 747 by an incredibly lucky clustering of parts and forces — but what is reasonable relative to the available probabilistic resources of the observed cosmos.

    Here is famed OOL researcher Robert Shapiro in Sci Am recently on how this sort of reasoning comes across to him, on the RNA world type hypothesis [though he evidently does not see how this also equally speaks to his own metabolism first OOL models!!!]:

    The RNA nucleotides are familiar to chemists because of their abundance in life and their resulting commercial availability. In a form of molecular vitalism, some scientists have presumed that nature has an innate tendency to produce life’s building blocks preferentially, rather than the hordes of other molecules that can also be derived from the rules of organic chemistry. This idea drew inspiration from . . . Stanley Miller. He applied a spark discharge to a mixture of simple gases that were then [GEM: not now!] thought to represent the atmosphere of the early Earth . . . . Two amino acids of the set of 20 used to construct proteins were formed in significant quantities, with others from that set present in small amounts . . . more than 80 different amino acids . . . have been identified as components of the Murchison meteorite, which fell in Australia in 1969 . . . By extrapolation of these results, some writers have presumed that all of life’s building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case . . . .

    The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.

    See my point?

    That’s why I have raised this issue.

    And, that is PaV‘s underlying point, too; including on the Dirac Delta Function-like nature of even very generously large estimates of archipelagos of biofunctional states under evo mat assumed OOL conditions or body-plan level biodiversification by chance + necessity only conditions.

    Now, instead of addressing this material — and always linked, BTW — context, you went on to a further dismissal and to do that, made a — IMHCO — ludicrous, ill-founded judgement on my understanding of logic and its applicability to the real world.

    If you want to seriously evaluate my understanding of the logic of physically related mathematical considerations, the above linked on the thermodynamics issues is a good place to start. Thence you may wish to look here for more on the wider issues of reasoning and believing across worldviews options.

    Failing such, I make a fair comment: your arguments — for sadly good reason — would then come across as contempt-driven dismissive rhetoric typical of Darwinista debaters and their fellow travellers. Recall, kindly, that Dawkins is the one who has insistently championed the alleged quadrilemma that those who reject evo mat are ignorant or stupid or insane or wicked. (Fallen and fallible I freely acknowledge. But on this, to help turn down the voltage on this overheated cultural issue, kindly show me to be wrong on the merits of the wider issue before using rhetoric that, frankly, comes across as in effect dismissing me as too dumb to figure out mathematical logic and its relevance to the real world.)

    FYFI, I have designed and built systems that are based on the bridging of mathematical logic to the real world of information-bearing and using systems. So, much of my view on the utter implausibility of the Darwinista “lucky noise” thesis — and now a “successful search algorithms by chance + necessity only” thesis — stems from that experience. Not mere speculation or in- the- teeth- of- abundant- “facts” religious commitment; another convenient and commonly met with rationalist rhetorical barb.

    I repeat, in the context of our empirically anchored knowledge, only agents produce the organised complexity that manifests itself as FSCI, especially artificial languages/codes and complex algorithms that successfully use these codes to execute real-world functions.

    In effect Evo Mat thought boils down to saying that our observations are wrong and that the underlying statistical thermodynamics principles that tie to those observations are also wrong. Then, it too often dismisses the counter-challenge to SHOW that this is so, by begging the methodological — and frankly metaphysical — questions.

    That simply raises my suspicions that it is the evo mat thinkers who don’t understand — and/or refuse to face — the real issues, spewing up whatever cloud of rhetorical ink is convenient on any given topic to aid in escaping.

    That is a consistent pattern here at UD, and elsewhere.

    So, until the real-world challenges above are cogently addressed, the above on NFLT comes across — in the context of the wider cultural debate — as at most making a mountain out of a molehill.

    At worst, it is Dawkins-style Village atheist rhetoric based on a one-sided presentation of an issue.

    So, kindly understand my comments above as calling for balance and context. (And pointing out how, IMHCO, on the evidence we have again seen grudging acknowledgement on the merits disguised amidst the rhetoric of dismissal and even contempt.)

    GEM of TKI

  198. H’mm:

    Having let off my head of steam, maybe we can go back to using Wiki as a hostile witness — more on this below — i.e. one testifying against his will and/or making inadvertent but telling admissions:

    Some computational problems are solved by searching for good solutions in a space of candidate solutions. A description of how to repeatedly select candidate solutions for evaluation is called a search algorithm. On a particular problem, different search algorithms may obtain different results, but over all problems, they are indistinguishable. It follows that if an algorithm achieves superior results on some problems, it must pay with inferiority on other problems. In this sense there is no free lunch in search.[1] Usually search is interpreted as optimization, and this leads to the observation that there is no free lunch in optimization.[2] . . . . The “no free lunch” results indicate that matching algorithms to problems gives higher average performance than does applying a fixed algorithm to all . . . .

    Now, let’s think about the underlying in-principle issues, in plain English. [Math symbols can and should be read as sentences, a point I always insisted on with my students: that they write out the sentences with appropriate connecting words, in effect so they could be scanned to see if they make sense, starting with "the RHS of an = sign restates what is on the LHS . . ." That is, I strongly dissent from the mindless use of the commonplace balance-scales model, as I am going to look at dimensional equivalence of quantities on LHS and RHS, etc.]

    Okay, in steps, indenting for clarity in showing my step-by-step overall structure of reasoning:

    1 –> Prof Wiki: Some computational problems are solved by searching for good solutions in a space of candidate solutions. Check. We have a configuration space and we have a chance + necessity only family of algorithms that are searching away, with multiple instantiations that are in effect a population of candidates for being the fittest to survive.

    2 –> A description of how to repeatedly select candidate solutions for evaluation is called a search algorithm. Check, i.e., e.g. competitiveness among random variants within the ecosystem that selects “the fittest on average.” (In the prebiotic world, there are difficulties, but it is generally presumed by OOL researchers that some sort of chemically driven, statistical RV + NS on chemical functionality, but not involving actual life-system-style biological reproduction, is at work.)

    3 –> On a particular problem, different search algorithms may obtain different results, but over all problems, they are indistinguishable. It follows that if an algorithm achieves superior results on some problems, it must pay with inferiority on other problems. In other words, search algors will work very well only on problems they happen to be tuned for, so on average their performance will even out.

    4 –> In this sense there is no free lunch in search.[1] Usually search is interpreted as optimization, and this leads to the observation that there is no free lunch in optimization.[2] In short, we cannot rely on any given algor to solve everything. One has to pick — or in this case, shape — horses for courses. [How to do that, in our observation, takes intelligence . . .]

    5 –> The “no free lunch” results indicate that matching algorithms to problems gives higher average performance than does applying a fixed algorithm to all . . . . H’mm: such as the underlying algorithmic architecture of RV + NS in its various prebiotic and biotic forms?

    6 –> Further to this, we know that we are dealing with organised complexity showing itself as FSCI, and that on good “exhaustion of probabilistic resources” reasons, AND observation, agency is the best explanation of successful systems that manifest FSCI. [I use this subset of CSI as it is the relevant one, cf. discussion here in my always linked, app 3.]

    7 –> That is, we have excellent reason to infer that RV + NS is a peculiarly ill-matched strategy for solving the OOL and body-plan level biodiversity problems.

    8 –> So we have very good reason to infer that if the OOL and body-plan level biodiversity problems were solved [as obviously they were!], they were solved by other algorithms, tracing to agent action. For, across causally relevant accounts, the main options are chance and/or necessity and/or agency. The Evo Mat contention is that agency reduces at OOL and body-plan biodiversity [let's abbreviate: BPLBD] levels; but this is now running head-on into the origin of FSCI hurdle.

    9 –> That is, we are back at the design inference as the best, empirically anchored explanation of OOL and BPLBD. And in so doing we saw that key principles- of- statistical thermodynamics considerations are relevant . . .

    But, Prof Wiki protests . . .

    A false inference, common in folklore and prominent in arguments for intelligent design,[7] is that algorithms do not perform well unless they are customized to suit problems. To the contrary, every algorithm obtains good solutions rapidly for almost all problems.[8] Unfortunately, almost all problems are physically impossible,[5] and whether any algorithm is generally effective in the real world is unknown.

    Talk about talking out of both sides of your mouth!

    10 –> First, in your direct- observation- based- EXPERIENCE, prof Wiki, have you ever seen a physically implemented algorithm that did not originate in an agent and was not customised by him to achieve success, usually after a fair bit of troubleshooting and debugging? [Onlookers, observe the studious silence in the just above excerpt on this point . . .]

    11 –> every algorithm obtains good solutions rapidly for almost all problems.[8] Unfortunately, almost all problems are physically impossible . . . BUT WE’S IS BE DEALING WITH PROBLEMS THAT WERE PHYSICALLY SOLVED, YOUR HONOR, MR WIKI, SIR! (Silence, again . . .)

    12 –> And, honourable Professor, Sir, we have good statistical thermodynamics reasons to infer that RV + NS-architecture algors, however many instantiations and variants were possible across the gamut of our observed universe, cannot reasonably solve the first problem in the cascade: OOL.

    13 –> For, most honorable professor, sir, probabilistic resources are rapidly exhausted (as shown in above posts by PaV and myself just to name two; and as is of course notoriously Dr Dembski’s longstanding contention).

    14 –> whether any algorithm is generally effective in the real world is unknown. So, is this not a grudging acknowledgement of a point disguised as a dismissal? Indeed, arguably it is worse: we have about 60 years of experience with algorithm-based digital computers. There are no known generally effective real-world algorithms. So, Prof Wiki is issuing an IOU here not backed up by cash in his account!

    15 –> And does it not also imply that real-world considerations are relevant to the issue of the utility or otherwise of the algorithmic architecture — the meta-algorithm if you will — proposed as the king of all real-world problem solvers: RV + NS in one form or another?

    Or, as Dembski and Marks put it in their Info Costs of NFL paper that responds to Haggstrom:

    Abstract—The No Free Lunch Theorem (NFLT) is a great leveler of search algorithms, showing that on average no search outperforms any other. Yet in practice searches do outperform others. In consequence, some have questioned the significance of the No Free Lunch Theorem (NFLT) to the performance of search algorithms. To properly assess the significance of the NFLT for search, one must understand the precise sources of information that affect search performance. Our purpose in this paper is to elucidate the NFLT by introducing an elementary theoretical framework for identifying and measuring the information utilized in search. The theory we develop [GEM: so, they are EXTENDING and APPLYING the NFL framework . . . in light of elaborating key concepts in it] shows that the NFLT can be used to measure, in bits, the fundamental difficulty of search, known as “endogenous information.” This quantity in turn enables us to measure, in bits, the effects of prescribed implicit knowledge for assisting a search, known as “active information.” Such knowledge often concerns search space structure as well as proximity to and identity of target. Active information can be explicitly teleological or can result implicitly from knowledge of the search space structure. The evolutionary simulations Avida and ev are shown to contain large amounts of active information.

    Professor Wiki, I want my money back for this course!

    (Oops, it’s “free” — guess I shouldn’t expect to get the full straight dope easy for free . . . ?

    After all:

    there’s no free lunch!)

    GEM of TKI

  199. PS: Kairos, we gotta get together sometime!

    [Onlookers, Kairos and Kairosfocus are two very different persons. I am a J'can by birth and ancestry whose family has lived in several Caribbean territories, a physicist in my education base, an educator and strategic- management- in- the- context - of sustainability- of- development- in- a world- of- accelerating- and- transformational- change thinker in my work and service activities, and an Evangelical Christian [cf. this invited public lecture] living out here in Montserrat, where my wife hails from. I have come to use kairos as a key concept in light of its appearance in Paul’s Mars Hill address, circa 50 AD in Athens, as recorded in Acts 17:24 – 27. K, what about you?]

  200. kairosfocus(197),

    Whence comes all your anger? I only tried to help you understand the premise of the NFLT. Thermodynamics may impose the uniform distribution on various configuration spaces, but that is not relevant to NFLT. In NFLT, the uniform distribution is over a function space. I have already explained it, so I will not repeat myself. I have no interest in debating for debate’s sake; I only tried to explain a point that most here seem to have misunderstood.

  201. perlopp #183:

    I’ll try to explain how it works. There is a domain V consisting of all possible DNA sequences, thus V is what you call the genome space. A fitness function is a function from V to some other space S, f:V–>S. As Semiotic has pointed out, “fitness” is a hypothetical construct but we can understand intuitively what it means. To make it easy, let us take S={0,1} (only two points) where 0 means low fitness, not likely to survive and 1 means high fitness, likely to survive.
    A fitness function now maps each sequence in V to a point in S. If the number of sequences is N, there are 2^N different such functions.

    Two points here: (1) you use N as the number of sequences. Why didn’t you use |S|^|V|? Because then you would have ended up with |S|^|S|^|V|, which involves the same symbol for two sets with completely different cardinalities; which makes my point about Haggstrom’s confusion exactly. (2) To say that 2^N functions are involved is a confusing way of stating what I see as happening. What I see is this: you have ONE function, f, and this function f maps every one of the N elements of V onto S. And I don’t see 2^N elements in S, but N. After all, we’re talking about a function that is able to assess (hypothetically) the fitness of a given genomic combination. It’s going to be either 0 or 1, giving a sum of N 0’s and 1’s; how can you come up with both values for the same genome using the same function? Here’s the problem as I see it: When you say, “If the number of sequences is N, there are 2^N different such functions”, this is way off the mark. What you mean is that there are 2^N possible results: i.e., phase space of S has 2^N different possible configurations with each of 2^N functions giving a different result. This, it seems to me, is the more sensible, less confusing, way of speaking about it.

    The uniform distribution that is the premise of NFLT means that we consider all these functions to be equally likely.

    I disagree with this statement. The uniform distribution means that the “weighting” of each of the N elements is the same.


    For example, there is one function that maps all sequences to the point 1. This function says that all sequences have high fitness. I think we can agree that this is not a reasonable choice of fitness function but the premise in NFLT says that we consider it just as likely as any other function.

    Doesn’t this serve to identify that there is a problem with what you’re presenting? What do I mean? The NFLT says that no algorithm will do any better than a blind search. If all the elements of S are 1, the search would be over immediately; or, to put it another way: what would you be searching for if they’re all 1’s?

    In fact, your entire discussion implies that you consider fitness functions of a certain “diraquesque” type are much more likely than others.

    No, that’s not what I’m arguing. Each of the |S|^|V| elements has probability of 1/|S|^|V|. What I argue is that if you “cluster” a trillion of these probabilities, the probability (or improbability) of finding the cluster is hardly different at all from finding a single element. Instead of the probability of any element, that is, 1/ 10^3,000,000,000, the probability of the “cluster” would be 10^12/10^3,000,000,000, or 1/10^2,999,999,988. Both these improbabilities are Vast. The NFLT, or to put it another way, the ability to find these “clusters” using an algorithm that works better than a blind search, remains essentially unaffected by “clustering”. So you have two things at once: NFLT applying, and a “clustered” genome space that permits all kinds of mutations without an extreme loss of function.

    Thus, you are arguing for a very nonuniform distribution over the set of fitness functions (the set S^V) and the premise in NFLT is not satisfied.

    I think I’ve already answered this in an implicit way. Explicitly, we already know from nature the genomes that have fitness. We take these 100 trillion genomes, along with one trillion permutations of this genome that are known to be fit because of the neutral theory, and we do an infinite search, using infinite resources (because, after all, we’re playing God) to locate each and every one of them in S^V. Each of these, roughly, 10^25, genomes are then sorted into a 100 trillion “clusters”, with each of the trillion permutations of the genome of a particular species being “clustered” with it. All the other genomes are meaningless, with fitness value of ‘zero’. So they’re completely interchangeable. So, we now ‘randomly’ “cluster” all the remaining genomes into “clusters” of one trillion. Each of the genomes has probability 1/|S|^|V|. So each of these “clusters” will now have an equiprobability of [1/ |S|^|V|] x 10^12 instead of the 1/|S|^|V|. In our case, that means (10^12) x 10^-3,000,000,000 or 10^-2,999,999,988. All equiprobable. It’s a uniform distribution in the domain of f. And NFLT tells us ANY algorithm, including the Darwinian algorithm A (reproduction-mutation-selection), has no better chance of finding ANY of these “clusters” than a blind search does. And, of course, no one but God has such resources.
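
The exponent arithmetic here can be sketched on a log scale (a hedged illustration only; the figures 10^3,000,000,000 and 10^12 are taken from the comment above, and the numbers are handled as base-10 exponents since the raw values far exceed floating-point range):

```python
# Work with base-10 exponents; the raw numbers overflow any float.
log10_space = 3_000_000_000      # |S|^|V|: ~10^3,000,000,000 candidate genomes
log10_cluster_size = 12          # a "cluster" of ~10^12 neutral variants

log10_p_single = -log10_space                       # one genome
log10_p_cluster = log10_cluster_size - log10_space  # the whole cluster at once

# Clustering a trillion genomes shifts the exponent by only 12
# out of 3,000,000,000 -- both probabilities remain Vastly small.
assert log10_p_cluster - log10_p_single == 12
assert log10_p_cluster < -150  # still far below the 10^-150 UPB
```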

  202. PaV(201),

    (1) you use N as the number of sequences. Why didn’t you use |S|^|V|?

    Because that is not the number of sequences. N denotes the number of sequences. You can also use the cardinality notation |V|, I just thought N looked nicer. At any rate, this is the number of elements in the domain V. The number |S|^|V| you mention is the number of functions from V to S. In my example, |S|=2, |V|=N, |S|^|V|=2^N.
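
A brute-force enumeration makes the count concrete. A minimal sketch, using a made-up three-sequence toy space (the sequences themselves are arbitrary):

```python
import itertools

V = ["AAA", "AAC", "AAG"]   # a toy genome space, so N = |V| = 3
S = [0, 1]                  # fitness values: 0 = low, 1 = high

# A fitness function assigns one element of S to each element of V.
# Enumerating every possible assignment yields |S|^|V| = 2^3 = 8 functions.
fitness_functions = [dict(zip(V, values))
                     for values in itertools.product(S, repeat=len(V))]

assert len(fitness_functions) == len(S) ** len(V)  # 8
```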

    Each of the genomes has probability 1/|S|^|V|

    Same consistent misunderstanding. It’s not a probability distribution over the genome space that is relevant to NFLT but a probability distribution over the space of functions from the genome space to the fitness space.

    PaV, I’d like to end our discussion here as this is getting quite pointless.

  203.

    PaV (201):

    The uniform distribution that is the premise of NFLT means that we consider all these functions to be equally likely.

    I disagree with this statement. The uniform distribution means that the “weighting” of each of the N elements is the same.

    But you’ve just agreed, using terminology you find more intuitive. Probability distribution functions on discrete sample spaces are sometimes referred to as probability mass functions (in distinction to probability density functions on continuous sample spaces). The total mass of 1 is apportioned to (distributed among) the elements of the sample space. When the mass is distributed uniformly over the sample space, all elements have equal “weighting,” which is to say that all elements are “equally likely.”

  204. #193 kairosfocus

    [Onlookers, Kairos and Kairosfocus are two very different persons. I am a J’can by birth and ancestry whose family has lived in several Caribbean territories, a physicist in my education base, an educator and strategic- management- in- the- context - of sustainability- of- development- in- a world- of- accelerating- and- transformational- change thinker in my work and service activities, and an Evangelical Christian [cf. this invited public lecture] living out here in Montserrat, where my wife hails from.

    Hi kairosfocus. I instead am European, from a place where the dispute on ID has become a little warmer in the last two years but it’s very far from being a hot argument. Instead my interest in the argument of teleology in the world is quite old and it’s reinforced by my specific expertise in electronics and computer science.

    I have come to use kairos as a key concept in light of its appearance in Paul’s Mars Hill address, circa 50 AD in Athens, as recorded in Acts 17:24 – 27. K, what about you?]

    In my case I was inspired by 2 Cor 6:2:

    “legei gar KAIRW dektw ephkousa sou kai en hmera swthrias ebohqhsa soi idou nun KAIROS euprosdektoV idou nun hmera swthriaV”

  205. perlopp #192:

    PaV: “A uniform distribution is an equal probability distribution over an interval.”

    perlopp: That is one special case.

    You then suggest to me that I follow DaveScot’s advice to “read more, and to write less.”

    Two things: (1) some of your posts don’t show up immediately, so when I respond there is literally nothing to ‘read’; (2) Here’s a link to what Wikipedia has to say about a uniform distribution. It’s a ‘definition’, in fact. Please tell me where it says anything about sets in NFL theorem: http://en.wikipedia.org/wiki/U.....rd_uniform

  206.

    PaV (201):

    The NFLT says that no algorithm will do any better than a blind search.

    Neither of Theorems 1 and 2 of Wolpert and Macready (1997) says that. The theorems imply that every algorithm has average performance over all functions (equally weighted) identical to that of random search. That all algorithms perform identically on particular constant functions is no more a contradiction than that some algorithms perform differently on non-constant functions.

    Random search is the average search, as Wolpert and Macready (1997) point out, and this is how Marks and Dembski treat it in their analytic framework. In fact, random search is equivalent to randomly (uniformly) selecting a deterministic search algorithm and then applying it. Thus for a typical fixed function, about half of all deterministic algorithms “do better” than random search is expected to do, and about half do worse. This is why, if you arbitrarily select an algorithm to apply to a given function, the active information is generally as likely to be negative as positive.

    It seems to me that some IDists have heard so long that random search is inefficacious that they are having a hard time grasping that it is average. To put things more simply, if you design a search algorithm to solve a problem you misunderstand, then random search will likely outperform your designed algorithm. Furthermore, almost all problems (functions, actually) are disorderly in the extreme, and thus are in no ordinary sense understandable. The probability of uniformly drawing a function for which design is meaningful is very small. That is, the choice of search algorithm is almost always arbitrary. A single execution of random search is equivalent to an arbitrary choice of algorithm. Furthermore, random search is almost always efficacious — when the function is algorithmically random, almost all search algorithms are efficacious.
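
The averaging claim is easy to check exhaustively at toy scale. A minimal sketch (not from Wolpert and Macready’s paper; the 3-point domain, binary codomain, and “queries until the maximum is first seen” performance measure are all made up for illustration):

```python
import itertools

X = [0, 1, 2]   # a 3-point domain
Y = [0, 1]      # a 2-point codomain
# All |Y|^|X| = 8 functions, each represented by its tuple of values.
all_functions = list(itertools.product(Y, repeat=len(X)))

def queries_until_max(order, f):
    """Queries a fixed-order (non-repeating) search needs to first see max(f)."""
    target = max(f)
    for k, x in enumerate(order, start=1):
        if f[x] == target:
            return k
    return len(order)

order_a = [0, 1, 2]   # one deterministic "algorithm": a fixed query order
order_b = [2, 0, 1]   # a different one

avg_a = sum(queries_until_max(order_a, f) for f in all_functions) / len(all_functions)
avg_b = sum(queries_until_max(order_b, f) for f in all_functions) / len(all_functions)

# Averaged uniformly over ALL functions, the two orders are indistinguishable.
assert avg_a == avg_b == 1.5
```

On any single function the two orders can differ sharply; only the uniform average over the whole function space is forced to coincide.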

  207. #194 Semiotic 007

    If you obtain your “real” distribution P by thresholding the given p (setting small probabilities to zero) and normalizing (i.e., to make the sum of probabilities over all functions equal to 1), then P diverges even further from the “nearest” distribution for which there is NFL than p does.

    It’s not so, for the basic condition Sum p = 1 and the thresholding to 0 are two separate and non-overlapping operations.
    a) The first operation involves the rough probabilities associated to each x belonging to the solution space; here we can suitably use the theoretical p’s associated to each x of any given fitness function f, where x belongs to V.
    b) The second operation involves the use of the p’s to suitably compare the search possibilities of different algorithms, in particular random vs hillclimbing ones. Here and only here it’s necessary to correctly characterize what happens in the real world. And in the real world, in a solution space of cardinality, say, 10^1,000,000, the probability of “hitting” a subset with cardinality, say, 10^100 can be put at 0, because UPB = 10^-150 >> 10^-999,900.

    Put (too) simply, to get NFL, you have to make the small probabilities bigger and the big probabilities smaller. You have suggested just the opposite. Figure 4 in the English (2004) paper might make this more intuitive.

    NO, I haven’t suggested so.

    This leads to another reason why Dembski (2002) should not have indicated that the NFL theorems of Wolpert and Macready (1997) apply to biological evolution. For any realistic model, there are more than 10^150 functions from genomes to fitness values, and thus p(f) is less than 10^-150 (the universal probability bound Dembski used in 2002) for all fitness functions f. That is, all fitness functions are effectively impossible when you posit a uniform distribution and apply the universal probability bound.

    Perlopp has already explained the difference, a difference that is also present in my argument.

  208. semiotic 007 #195:

    To argue now that Dr. Dembski would have been right if he had said something he did not is… odd.

    Did Haggstrom claim in 2002 that NFLT don’t apply to biology, and did Dembski respond to this argument in 2002 when writing NFL? Of course not. Isn’t it odd to expect Dembski to have addressed an argument before it was ever made? We’re simply arguing here for what is rather obvious from an intuitive standpoint. This isn’t an argument that Dembski couldn’t make himself, it’s simply an argument he wouldn’t bother taking the time to make, unless for some reason he needed to.

  209. perlopp #196:
    I disagree with your second paragraph. I’m no follower of Dembski but here I have to defend him: He never states that any event with probability less than 10^-150 is impossible; only those that also satisfy some kind of “specification.”
    The Gettysburg Address has 272 words. At four letters/word, that is 1088 letters. The probability of this coming about by chance is 1 in 28^1088; this is well below that of the UPB of 10^-150. But, I assure you, it exists. What is “impossible” is not that the Gettysburg Address exists, but that it came about simply by chance. But you see, what’s impossible for blind chance, is easily possible for intelligent agents. BTW, since intelligent agency is involved, yes, indeed, the Gettysburg Address is a “specification”.
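
A quick check of the exponent arithmetic, taking the comment’s 28-symbol alphabet and four-letters-per-word estimate at face value:

```python
import math

chars = 272 * 4                        # the comment's estimate: 1088 characters
log10_configs = chars * math.log10(28) # 28-symbol alphabet, as in the comment

# ~10^1574 possible strings, so 28^-1088 is far below the 10^-150 UPB.
assert 1574 < log10_configs < 1575
assert log10_configs > 150
```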

  210.

    PaV (205):

    Please tell me where it says anything about sets in NFL theorem

    The domain of all functions in Wolpert and Macready (1997) is finite set X (in script), and the codomain is finite set Y (also in script). The set of all functions from X to Y is Y^X, and |Y^X| = |Y|^|X|. That is, the set of all functions from X to Y is finite, and a probability distribution over that set is discrete, not continuous. You have pointed us to an irrelevant Wikipedia article.

    While we’re on the matter of discrete versus continuous, let’s note also that functions with domain X are discrete, and thus your repeated invocation of the continuous Dirac delta function is inappropriate. You might want to read about the Kronecker delta.
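
For reference, the two deltas differ exactly as described (standard definitions, supplied here as an aside):

```latex
% Kronecker delta: defined on discrete indices i, j
\delta_{ij} =
  \begin{cases}
    1, & i = j \\
    0, & i \neq j
  \end{cases}

% Dirac delta: a distribution on the real line, not an ordinary function
\delta(x) = 0 \quad \text{for } x \neq 0,
\qquad
\int_{-\infty}^{\infty} \delta(x)\, dx = 1
```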

  211. All,

    It was said earlier:

    This leads to another reason why Dembski (2002) should not have indicated that the NFL theorems of Wolpert and Macready (1997) apply to biological evolution.

    Now, the more I have thought about this and read both the primary literature and the Dembski/Marks publications, the more I think that the above misrepresents their point.

    Correct me if I’m wrong.

    Everyone is aware that there is a given set of search problems within genome space, which is a subset of all possible search problems; there are also many “target” regions which specify functionality islands in this space and there are biologically relevant fitness functions, which, again, are a subset of all possible fitness functions. Now, I don’t think Dembski, Marks or anyone else is arguing that all possible fitness functions are instantiated or even that it has conclusively been shown that the match of random-walk evolutionary search to the subset of biological search problems results in performance on the same level of efficiency as random blind search; to argue such I think is misleading or dishonest. From my reading of Dembski, he is NOT saying that over the biologically *relevant* search problems and fitness functions an NFL situation occurs in which we should expect Darwinistic search to perform as random blind search would. He is, however, saying that NFL applies to the general problem of finding the right search algorithm for your particular search problem. The more correct information you have about the problem at hand, the better you are able to find a suitable algorithm that outperforms random blind search.

    So if biological search does indeed outperform random blind search on average over the various fitness landscapes then we can ask what are the chances we found this match of algorithm to search space structure by chance? The answer is quantifiable and this is the direction the “Active Information” framework approaches the question from.

  212. Semiotic 007 (190): “I resisted getting into a debate of an off-topic point. But you insisted, and I responded by linking to an on-topic paper that stood to be of interest to everyone reading the thread…

    The pointing out of the inappropriateness of teleology in computer modelling/simulation of Darwinian evolution is on topic. Criticism of your belief that competition (which is inherently teleological) is essential to Darwinian evolution is therefore entirely germane to the topic.

    Your mention of getting on topic occurred after other, true diversions (comparison of quotation of Darwin to Biblical exegesis, denigration of Darwin’s beliefs regarding his own theory, (unnecessary) explication of the (obvious) practical limitations of CFD, etc.)…

    Semiotic 007 (190): “…Only ellipsis in your quote of my comment (#84, you might have mentioned) makes things seem otherwise. Here I’ve emphasized some text you omitted…

    At (185), I did provide the comment number [84] for easy reference:

    This directly contradicts your stated intention at (84).

  213. Semiotic 007,

    The Wikipedia article describes a uniform distribution. If you want to talk about a discrete uniform distribution, then why not call it a discrete uniform distribution? Either way, discrete or continuous, the idea is rather obvious, isn’t it: equiprobability?

    I understand the Kronecker delta function quite well, thank you. I’ve studied some tensor calculus.

  214.

    kairos (207):

    To say that there is NFL for P is to say that P(f) = P(f o j) for all functions f and for all permutations j of the domain of functions. Suppose that p(f o j) is below threshold and that p(f) is above. If you set P(f o j) = 0 and P(f) = p(f), then

    P(f) – P(f o j) > p(f) – p(f o j).

    Loosely speaking, you’ve moved further from the equality necessary for NFL, not closer. And this does not involve normalization. But doesn’t an improper distribution bother you? The sum of P(f) over all f is less than 1 unless you normalize in some fashion.
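    The condition above (NFL for P iff P(f) = P(f∘j) for every function f and every permutation j of the domain) can be checked by brute force on a toy function set. The sketch below is my own illustration; the helper names are invented.

```python
from itertools import permutations, product

# Toy instance: domain X = {0, 1}, codomain Y = {0, 1}.
# A function f is represented by its value tuple (f(0), f(1)).
X = (0, 1)
Y = (0, 1)
functions = list(product(Y, repeat=len(X)))

def compose_with_perm(f, j):
    """Return f o j, where j is a permutation of the domain indices."""
    return tuple(f[j[x]] for x in range(len(f)))

def has_nfl(P):
    """True iff P(f) == P(f o j) for all functions f and permutations j."""
    return all(P[f] == P[compose_with_perm(f, j)]
               for f in functions
               for j in permutations(range(len(X))))

uniform = {f: 1 / len(functions) for f in functions}
skewed = dict(uniform)
skewed[(0, 1)], skewed[(1, 0)] = 0.4, 0.1  # breaks permutation symmetry

print(has_nfl(uniform))  # True: the uniform distribution is permutation-invariant
print(has_nfl(skewed))   # False: (0, 1) and its permutation get different mass
```

    Zeroing out some P(f∘j) while keeping P(f) positive, as in the example discussed above, is exactly the kind of asymmetry that makes the check fail.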

  215.

    PaV,

    From 205:

    PaV: “A uniform distribution is an equal probability distribution over an interval.”

    perlopp: That is one special case.

    You responded by linking to the article on that special case. From 215:

    The Wikipedia article describes a uniform distribution.

    There’s a reason you get a disambiguation page when you search for “uniform distribution.” There are separate articles for the discrete and continuous cases, hinting that what perlopp says might be true. I find it very hard to believe you thought of the title “Uniform distribution (continuous).” Did you not reach the article by way of the disambiguation page?

    If you want to talk about a discrete uniform distribution, then why not call it a discrete uniform distribution.

    Because the intended audience of an article in an engineering journal does not need to be told both that the set of functions is finite and that a probability distribution on that set is discrete. And the intended audience certainly does not need to be reminded that a probability distribution function maps a set to real numbers.

    I understand the Kronecker delta function quite well, thank you.

    But the trick is in knowing when to apply it, and not the Dirac delta. I have worked with graduate students who scored 800 on the math GRE, back before the standardization changed and a perfect score was rare, and who could neither formulate nor evaluate novel mathematical arguments to save their lives.

  216. Perlopp:

    Re 200: Kindly look at your choice of language again!

    I think you will see that I have pointed out that the decisive issue is not the strawman-burning debates over NFLT terms and conditions — on which BTW, it seems IMHCO that WD does much better than his critics allow (as is tiresomely usual on ID-related matters) — but the realities of the statistical thermodynamics principles anchored challenges as yet unanswered by the evo mat advocates on OOL and BPLBD. (and on cosmological fine-tuning too, cf always linked). In steps:

    1 –> Kindly note for instance my cite from Harry Robertson above.

    2 –> For, there, he aptly points out the informational significance of probabilistic distributions, and applies that to infer the link between the informational issues and the thermal/energetic ones.

    3 –> In that context, issues of vast — far beyond merely astronomical — configuration spaces and isolated islands of functionality within them become decisive.

    4 –> Which is what PaV pointed out, and which is what I abstracted as decisive.

    Unless and until evo mat advocates can cogently address this and show empirically that FSCI can and does credibly arise from chance and necessity on the gamut of our cosmos and/or that there is good empirical reason to infer to a quasi-infinite multiverse, they are guilty of resort to empty, ad hoc metaphysical speculation to rhetorically prop up a factually seriously challenged worldview. One they pretend is “scientific.”

    (I hardly need to reiterate that we know that FSCI is routinely produced by intelligent agents.)

    In that already stated and discussed and linked context, I think I have reason to be less than amused to see dismissive nonsense like claims that I do not understand basic mathematical [or general] logic, whether on NFLT or otherwise.

    As touching NFLT, you will see that I have discussed the Wiki summary, as it helps make my own point clear: there is a plain link to the statistical thermodynamics principles issues lurking in the background so soon as one addresses inference to design related issues.

    I guess a further excerpt from Wiki on “your” point may help you see where I am coming from:

    The original no free lunch (NFL) theorems assume that all objective functions are equally likely to be input to search algorithms.[2] It has since been established that there is NFL if and only if every objective function is as likely as each of its permutations.[4][5] (Loosely speaking, a permutation is obtained by shuffling the values associated with candidates. Technically, a permutation of a function is its composition with a permutation of its domain.) NFL is physically possible, but in reality objective functions arise despite the impossibility of their permutations, and thus there is not NFL in the world.[11]

    The obvious interpretation of “not NFL” is “free lunch,” but this is misleading. NFL is a matter of degree, not an all-or-nothing proposition. If the condition for NFL holds approximately, then all algorithms yield approximately the same results over all objective functions.[5] Note also that “not NFL” implies only that algorithms are inequivalent overall by some measure of performance. For a performance measure of interest, algorithms may remain equivalent, or nearly so.[5]

    The reason that almost all objective functions are physically impossible is that they are incompressible, and do not “fit” in the world.[8] Incompressibility equates to an extreme of irregularity and unpredictability. All levels of goodness are equally represented among candidate solutions, and good solutions are scattered all about the space of candidates. A search algorithm will rarely evaluate more than a small fraction of the candidates before locating a very good solution.[8]

    Thus, we may take it in steps again:

    5 –> That last statement is particularly illuminating of the force of my point: “A search algorithm will rarely evaluate more than a small fraction of the candidates before locating a very good solution.”

    6 –> But, is that relevant to this case, where we are looking at UPB and config-space anchored probabilistic resource exhaustion on the gamut of the whole observed cosmos? [And BTW Dawkins' "foam" of "billions" of sub-cosmi in his recent debate with Lennox to rhetorically get around the probabilistic resource exhaustion issue evades the fact that we would have to be looking at a quasi-infinite array -- and one without a shred of empirical evidence, i.e it is a metaphysical ad hoc claim, not a scientific one!]

    7 –> In slightly more detail: when we go beyond UPB [more than 500 - 1,000 bits worth of information storage capacity to hold the relevant information in the systems of interest], and are on stat thermo-d principles dealing with credibly isolated islands of functionality within the resulting vast config spaces, we have no good reason to infer that on the gamut of the observed universe any RV + NS-architecture chance + necessity based search will have even the slenderest ghost of a chance of coming near just one functional solution, much less the cluster of functional ones required to account for either OOL or BPLBD!

    That is what I discussed in details through the always linked microjets and nanobots thought experiment here.

    In short, we come right back to WD’s main point; namely, that what is going on in the relevant situations [including "Methinks," Avida and Ev] is that search algorithms under the general rubric of evolutionary computing become better than “average” because active information is added in the design process of the algorithm. And such active information comes from intelligent agents, i.e. the NFL in applied context is pointing to the importance of agency in solving problems of finding isolated islands of functionality in vast configuration spaces.

    (Which comes full circle to the remarks in OP and in my comment no 1 on grudging acknowledgement in the guise of claimed refutations; which we can augment with the point that sometimes the concession of the key issue is not acknowledged but directly implied or entailed on bringing to bear relevant factors.)

    I think you will therefore appreciate my bottom-line: Cho man, do betta dan dat!

    GEM of TKI

  217.

    Atom (211):

    So if biological search does indeed outperform random blind search on average over the various fitness landscapes then we can ask what are the chances we found this match of algorithm to search space structure by chance? The answer is quantifiable and this is the direction the “Active Information” framework approaches the question from.

    Yes, Marks and Dembski have recently worked with information measured on instances, not distributions. (Incidentally, I interpreted their work as an attempt to go straight, and took some flak from colleagues.) Their approach has seemed reasonable to me, though I’ve never felt sure what to make of it. Now I can see that if NFL does not hold, then their analytic framework has a problem.

    The problem is that if, say, a (1+1)-EA is generally superior to random search for a uniform distribution on the set of all functions f in Y^X with sufficiently low Kolmogorov complexity for physical realization, then Marks, Dembski, and others may fall into misinterpretation of positive active information for the EA on a particular instance. The expectation of the EA’s active information over all low-complexity functions would be positive as a consequence of the finitude of the observable universe. It would not be due to design.

    Marks and Dembski should hope the ideas shaken loose by this discussion don’t help me complete a proof I’ve been struggling with since the summer. I wouldn’t say offhand that the anticipated theorem (supported by extensive numerical experiments) would demolish their framework, but some repair would be necessary.
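    As a toy illustration of the claim that a (1+1)-EA can beat random search on a compressible (low-complexity) objective, here is my own sketch on the standard OneMax function; nothing below is Semiotic 007's anticipated theorem, and all names are my own textbook choices.

```python
import random

def onemax(bits):
    """A highly compressible objective: count the 1-bits."""
    return sum(bits)

def random_search(n, evals, rng):
    """Best OneMax value seen over `evals` uniform random bit strings."""
    return max(onemax([rng.randint(0, 1) for _ in range(n)])
               for _ in range(evals))

def one_plus_one_ea(n, evals, rng):
    """(1+1)-EA: keep one parent, flip each bit with probability 1/n,
    accept the child if it is at least as fit as the parent."""
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = onemax(parent)
    for _ in range(evals - 1):
        child = [b ^ (rng.random() < 1 / n) for b in parent]
        f = onemax(child)
        if f >= fitness:
            parent, fitness = child, f
    return fitness

rng = random.Random(0)
n, evals = 50, 2000
print("random search:", random_search(n, evals, rng))
print("(1+1)-EA:     ", one_plus_one_ea(n, evals, rng))
```

    Under a permutation-invariant distribution over objectives, by contrast, the two methods would average out the same; the gap above exists precisely because OneMax is orderly rather than algorithmically random.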

  218. Kairos:

    Excellent! Indeed, it is the right time . . . if we are paying attention.

    Atom:

    Good summary, as per usual.

    (BTW, how is the ever-lovely Mrs Atom? My LKF is fighting a dose of the usual “London” flu due to the annual wave of UK based visitors for the Christmas festival here.)

    I add only that we should note that on the gamut of the observed cosmos, blind search based on random walks starting at arbitrary initial points in genomic config space will hopelessly fail on average and all but absolutely. [That is we are looking at a soft, probabilistic resources exhaustion impossibility -- as is typical of stat thermodynamics, e.g the stat thermo-d form of 2nd Law of Thermodynamics, not a logical-physical hard impossibility.]

    More broadly, the only “searches” that are empirically known to succeed in finding functionally specified hugely isolated domains within such large config spaces [cf PaV's excellent image] are intelligently directed ones based on domain knowledge.

    Thus, we have no good reason to imagine that searches constructed on the RV + NS architecture will work on the gamut of our observed cosmos.

    And, of course in principle every instantiation of a chance-based molecular chaining in the plausible prebiotic soups is an instantiation of a search algorithm in the family.

    Are all such equiprobable?

    Ans: no; in fact one major point of the classic TMLO study in the earlier chapters is that it is not easy to get to a plausible prebiotic soup at all, and that in such a soup the preferential reactions lead AWAY from chaining of life-relevant macromolecules. The resulting chemical equilibria on the relevant macromolecules for getting to life as we know it are such that it is simply utterly unlikely for such molecules to form individually on the scale of a planet full of prebiotic soup of extremely generous concentrations in the relevant precursors.

    Much less, in clusters that just happen to be so spatially fitted together that life functionality can emerge by chance acting on the known laws of physics and chemistry.

    And of course the speculative quasi-infinite foam of subcosmi is ad hoc metaphysics to try to rhetorically blunt the force of the empirically anchored evidence, not serious science. So, we see a conundrum on OOL for evo mat thought.

    On BPLBD, we see that the various RV mechanisms boil down to needing to generate even more FSCI — e.g. 100 mn + bases to get to a plausible first arthropod as Meyer pointed out in that PBSW paper, and within the gamut of the earth, not the cosmos as a whole. At least on NDT — and panspermia on major animal and plant groups would be an even more interesting admission that intelligent agency was involved in origin of species than anything we have seen to date.

    That is, the way in which in the real world the relevant claimed RV + NS search algorithms credibly operate is to be inferior to the average if anything, i.e. even more hopeless than raw random search! (Note my stress on getting to grips with how abstract theorems and concepts anchor down to the real world!)

    So, we are right back to the core points made long since by WD, and even TBO.

    GEM of TKI

  219.

    PaV (208):

    This isn’t an argument that Dembski couldn’t make himself, it’s simply an argument he wouldn’t bother taking the time to make, unless for some reason he needed to.

    Come on, now. Everyone makes mistakes. Dr. Dembski did not survey the NFL literature when working on No Free Lunch. If he had, he would not have emphasized that a search algorithm should be expected to perform poorly unless matched to the instance (function). That random search performs well under the uniform distribution was established in 1996, the year before Wolpert and Macready’s first NFL article appeared. A fair number of researchers have known since 2000 that optimization is easy in the typical (algorithmically random) function.

    The 1996 argument is based on high-school math. The 2000 argument is based on advanced math that Dr. Dembski knows well. Dr. Dembski was capable of making the arguments himself. But the arguments are most definitely not ones “he wouldn’t bother taking the time to make, unless for some reason he needed to.”
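    For reference, the “high-school math” point can be sketched as follows (my own illustration): under a uniform distribution over functions, each newly evaluated point’s value is an independent uniform draw, so any algorithm that never revisits points lands in the top ε-fraction of values within k evaluations with probability 1 − (1 − ε)^k.

```python
def prob_hit_top_fraction(eps, k):
    """Probability that k independent uniform draws include at least
    one value in the top eps-fraction: 1 - (1 - eps)**k."""
    return 1 - (1 - eps) ** k

# Reaching the top 1% of values is ~99% likely within 459 evaluations,
# regardless of the size of the search space.
print(round(prob_hit_top_fraction(0.01, 459), 2))  # 0.99
```

    This is the sense in which random search "performs well" under the uniform distribution: good values are scattered everywhere, so some are found quickly.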

    No one’s asking you to eat crow. Marks and Dembski have put new stuff out there. It’s interesting, and not at all gelatinous.

  220. KF:

    Yes, agreed that if we begin at a random point we have no hope of finding the first cluster of functionality, thus PaV and your points are indeed very valid concerns.

    As for Mrs. Atom, she is doing quite well, as gorgeous as ever. I hope your wife feels better soon!

  221.

    Atom (222):

    Yes, agreed that if we begin at a random point we have no hope of finding the first cluster of functionality, thus PaV and your points are indeed very valid concerns.

    If you say that fit genotypes in some sense cluster in the genomic spaces (no one has specified topology) of various species, then you are saying that the fitness functions were almost certainly not drawn uniformly. Almost all functions from genotypes to fitness values are disorderly in the extreme.

  222. #216 Semiotic

    To say that there is NFL for P is to say that P(f) = P(f o j) for all functions f and for all permutations j of the domain of functions. Suppose that p(f o j) is below threshold and that p(f) is above. If you set P(f o j) = 0 and P(f) = p(f), then

    P(f) – P(f o j) > p(f) – p(f o j).

    But this doesn’t occur in the second (real world) operation. It’s only here that 0 is assigned to all p’s.

    But doesn’t an improper distribution bother you? The sum of P(f) over all f is less than 1 unless you normalize in some fashion.

    This is not the case if the normalization for sum p = 1 is performed before. This kind of distinction between precise (theoretically true) and approximate (valid in the real world) is quite common in engineering. For a good example you can look at the way the density functions for electrons and holes n(x) and p(x) are treated within the depletion region of a pn junction.

  223.

    kairos,

    You are constructing P from given p, a proper distribution. The sum over all f of p(f) is 1. Thus it makes no sense to normalize first.

  224. Atom:

    I see, indeed, on Mrs Atom!

    My Little Kairosfocus [so he has called himself on seeing my online activities!] stayed home yesterday from School, fighting the flu; but by the afternoon was having fun with a remote-control car and lenses, eager to figure them out. [You should have seen his astonishment on observing a Fresnel lens and its flat, corrugated appearance -- how does this one work, it is not like the other ones? Anybody got a good simple 9 yo level explanation on that one?]

    [He has been fascinated with how pinholes and lenses form real images and has been contrasting the brightness and wondering why. Of course, all of this is relevant: so much for the slanderous notion that believing in God is a "Science-stopper" -- science is rooted in our in-built intense desire to understand the mysteries of our world, and to use the results to do something interesting or advantageous. Guess who put that there, and put us in a world set up for exploration . . .? For further reference, kindly read the General Scholium to Newton's Principia, e.g. the excerpt here. Of course, Newton's first major investigation was on "Opticks," and inter alia led him to invent what we know as the Newtonian reflector telescope, as he despaired of solving the aberrations and dispersion of light problems that characterised refracting telescopes. BTW, ever noticed how N. is hardly ever brought up as an exemplar of science these days? And yet, he is indisputably the greatest of all scientists, ever.]

    Now, on more direct points:

    1] agreed that if we begin [to search for islands of functionality in a config space for the genome, considered as a physical-chemical system to emerge from a "plausible" pre-biotic soup -- cf PaV at 175 and again at 189 etc, as well as my always linked] at a random point we have no hope of finding the first cluster of functionality

    Not just that we start from an arbitrary initial point, but that we are using algorithms that are based on RV + NS to get to [1] OOL [genome ~ 300 - 500 k], thence, [2] BPLBD [genome ~ 100 mn], and onward, [3] organisms with reasonably reliable mental functions and using genomes of order 3*10^9. All, to happen in a cosmos of scope ~ 10^80 atoms and say 15 BY.

    In short, Evo Mat advocates — in our day; back in C19, they thought life was about as sophisticated as a bowl of jello-like “protoplasm” so simple RV + NS mechanisms seemed plausible — need an algorithm based on chance + necessity only, that is dynamically and probabilistically capable of doing that in that sort of scope.

    To date, after 150 years of trying and delivering various promissory notes and just-so stories in the name of “science” — watched the astonishingly hollow and weak performance of Dawkins with Lennox on the weekend . . . — the Evo Mat school of thought has failed to deliver. And, once one sees the config space scale and search issues, one easily sees why, on grounds long since forming the base for the highly successful discipline in science commonly known as statistical thermodynamics!

    Contrast, the empirically known, commonly observed, ability of intelligent agents to use knowledge of the possibilities of configurations and the underlying discoverable framework of lawlike natural regularities, to construct systems exhibiting organised complexity reflecting FSCI as a signature of their work. That is, we DO know a dynamically and probabilistically competent source for the genome.

    Just, it does not sit well with the worldview preferences and agendas of the school that happens to dominate in science institutions in our time. So, that school keeps on issuing just-so stories and promissory notes, which on the implications of stat thermo-D [and the real-world applications of NFL] keep falling flat.

    So it is time to collect on the IOUs, and declare intellectual bankruptcy.

    2] SEMIOTIC, re 213: first, a Complaint

    Semiotic, there is a reason why after having had to deal with plagues of spam and harrassment that I reserve my name from general discussions on blogs. (I have given adequate information for those who legitimately need to contact me, as say several commentators at UD have.)

    I ask that you kindly respect this, and address issues on the merits instead of indulging in puerile personalities.

    And kindly note that “fools for arguments use wagers.”

    Not to mention, that I have a moral objection to gambling in any form, one that I have made a public record of — through co-hosting a live, call in programme here in my land of residence when casino gambling was put on the agenda by powerful forces in the community as a “solution” to our post-volcano economic woes [by some of the same ones who ignored the warnings when something could have been done in advance to reduce our vulnerability . . .] — and have paid a price for so doing.

    If you refuse to refrain from personalities, I will make my complaint to the authorities here at UD loud and clear.

    Understood?

    Now, on substance . . .

    3] Let X = {0, 1}^64. Also let F be the set of all functions from X^5 to X. . . . . If algorithms to search functions in F are written as binary Turing machine descriptions (strings over {0, 1}) for a fixed universal Turing machine U, and the S-th cell of the tape from which U reads descriptions is set to 2 immediately prior to the operation of U, then there is no probability distribution on F for which all algorithms have identically- distributed sequences of observed values when presented to U. . . . . I made the preceding proposition somewhat informal. I claim that a similar statement (to be included in a written agreement) will be proven a theorem in a journal in mathematics, science, or engineering within the coming three years, and that its negation will not

    Has it ever dawned on you that chance + necessity acting by themselves on a space-time, matter-energy only cosmos are on excellent basic thermodynamics grounds dynamically and/or probabilistically incompetent on the gamut of our observed cosmos to synthesise a code system and algorithms such as you have just characterised — e.g. it is well beyond the UPB limit of 500 – 1,000 bits of information- storage capacity, once we begin to unpack what phrases like “Turing Machines” [i.e. general purpose computing device] mean, etc?

    So, here is my counter-proposal, Semiotic: Kindly simply provide a plain explanation relative to evo mat premises that shows that the contentions I have made in my always linked, App 1, and/or that PaV has pointed out in 175 as I have excerpted, are not cogent to the issues of [a] real-world OOL, [b] origin of BPLBD, and [c] origin of an embodied organism capable of reliable reasoning that is not plagued by the dilemmas implicit in say Crick’s infamous statement:

    The Astonishing Hypothesis is that “You,” your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.

    Free Will is, in many ways, a somewhat old-fashioned subject. Most people take it for granted, since they feel that usually they are free to act as they please. While lawyers and theologians may have to confront it, philosophers, by and large, have ceased to take much interest in the topic. And it is almost never referred to by psychologists and neuroscientists. A few physicists and other scientists who worry about quantum indeterminacy sometimes wonder whether the uncertainty principle lies at the bottom of Free Will.

    … Free Will is located in or near the anterior cingulate sulcus. … Other areas in the front of the brain may also be involved. What is needed is more experiments on animals, … [The Astonishing Hypothesis: The Scientific Search for the Soul, Charles Scribner's Sons, New York, NY, 1993, pp. 3, 265, 268.]

    As to wagers, no money needs be on the table, just demonstrated ability to think clearly and address issues cogently on the merits across comparative difficulties; instead of on personalities rooted in red herrings leading out to oil-soaked strawmen burned to cloud and poison the atmosphere with noxious smoke.

    Or, has it ever dawned on you that there is a reason why I remain utterly unimpressed with the debates on NFL etc that you have put up?

    And, why I have therefore quite deliberately chosen to respond at the basic 101 level that a reference in succession to the paragraphs of the Wiki article on NFL will permit? [The showing off of mathematical virtuosity by using techniques and concepts that good old Prof Harald Neiderriter -- in his favourite orange and green Dashiki and duly brown sandals, toting that old brown leather briefcase with the alarm clock set on the desk promptly at 5 minutes past the hour -- taught us on UWI Mona Campus back in M100 in the 1970's, is not the only relevant consideration, in short!]

    Speaking of Wiki . . .

    4] NFL article: “A search algorithm will rarely evaluate more than a small fraction of the candidates before locating a very good solution.”

    As I noted in 218, point 5, this is key.

    a –> For, therein lieth the issue of probabilistic resource exhaustion and the resort to a quasi-infinite quantum foam of sub-cosmi to try to evade its force through a materialistic form of the anthropic principle.

    b –> In short, the point has actually long since been conceded, and in the peer-reviewed lit too:

    c –> Namely, there is not a good reason to believe that on the gamut of the observed cosmos, RV + NS and extensions thereof under evo mat models of origins, can credibly locate bio-functional forms and to cluster them into living cells and organisms of widely divergent body plans [the functionality targets forming the relevant set of objective functions to be searched out by search algorithms resting on RV + NS], in the relevant configuration space defined by the organic chemistry and associated thermodynamics of plausible pre-biotic environments.

    d –> But, such an extension of the scope of search to incorporate an unobserved [and probably inherently unobservable] quasi-infinite scale is a resort to speculative, empirically un-anchored metaphysics, not science.

    e –> Thus, we are now not in the province of scientific methods, but most strictly in that of philosophy, and so the comparative difficulties approach across live option worldviews is the relevant one.

    Further to this, we then have no excuse to use words like “science” to censor out due consideration of ALL live option alternatives and issues in the relevant phil, however broadly we may define these. (That is, for instance, we should broadly/generically identify and examine materialistic, theistic, and pantheistic views as the main options, in education and in popular or semi-popular discussion of such phil topics and issues.

    A modicum of basic phil would then allow us to discuss with basic understanding what our options of thought are; and what challenges they each face — all worldviews bristle with difficulties. So, we can then make informed and balanced, not manipulated choices.)

    5] Kairos, 223: distinction between precise (theoretically true) and approximate (valid in the real world) is quite common in engineering. For a good example you can look at the way the density functions for electrons and holes n(x) and p(x) are treated within the depletion region of a pn junction.

    Precisely!

    GEM of TKI

  225. #220 Kairosfocus:

    Excellent! Indeed, it is the right time . . . if we are paying attention.

    And for ID certainly the time will be more and more favourable.

    #224 Kairosfocus:

    Kairos, 223: “distinction between precise (theoretically true) and approximate (valid in the real world) is quite common in engineering. For a good example you can look at the way the density functions for electrons and holes n(x) and p(x) are treated within the depletion region of a pn junction.”

    Precisely!

    I don’t know who semiotic 007 is, but I suppose:

    a. He/She is a Maths Prof in some University working on abstract Maths for optimization problems (a field I have some idea about). This could explain his/her reaction to arguments about approximation for problems in the real world.

    b. Independently of his/her competence in a specific field, what he/she said about you doesn’t show a great personality. I am sorry for him/her.

  226. Kairos:

    Thanks.

    And thanks for the Mt 5 reminder to pray for those who act like that.

    May God help him/her.

    GEM of TKI

    PS: Semiotic, maybe it is a bit forward of me to suggest, but forgive me if I say perhaps you would benefit from reading this session of my intro to phil course.

  227. Semiotic 007,

    I think we’re talking ‘apples and oranges’ here. I was talking explicitly about, or, I should say, I had in mind, Haggstrom’s argument when I made the statement you quote.

    I’m in no position to agree, or disagree, with your assessment of Dembski’s NFL argument. But my impression is that you might be overstating things when you say that random searches can easily reach optimization, for this is the gist of the dilemma: Dembski says that in using some kind of fitness function that we’re dealing with a “displacement problem’. Information is needed, and it is being provided, essentially, by some other kind of ‘search’, one larger than the first. But, since intelligent agents are involved in the search for a fitness function, a certain amount of improbability can be overcome, and is. Is that what you’re referring to? I don’t know. But, I think that the statement you made concerning Dembski should be applied to you. Why haven’t you made the argument that Dembski was wrong, pointed it out in a clear way, and then allowed him to respond?

    As to the Dirac function and Kronecker delta function, I didn’t get 800 on the GRE, but it’s rather obvious that if you’re doing calculus you use the one, and when you’re doing linear algebra, you use the other. My use of the Dirac delta function had application since I was dealing with ‘fitness landscapes’, and not with the ‘fitness functions’ AI is fond of. We, here at UD, often ‘see’ fitness functions; and my point was that they’re really a fiction when it comes to protein configuration space.

    Finally, since we’re talking about the Kronecker delta function, why, if there’s a more apt way to do it, do you show, as does Haggstrom, the function f: V→S when the real interest is in the entire set of such functions? Shouldn’t the proper notation have the subscript i, e.g., underneath the f? It seems to me that’s what Einstein notation would indicate.

  228. Semiotic 007,

    I think we’re talking ‘apples and oranges’ here. I was talking explicitly , or, I should say, I had in mind, Haggstrom’s argument when I made the statement you quote.

    I’m in no position to agree, or disagree, with your assessment of Dembski’s NFL argument. But my impression is that you might be overstating things when you say that random searches can easily reach optimization, for this is the gist of the dilemma: Dembski says that in using some kind of fitness function we’re dealing with a “displacement problem”. Information is needed, and it is being provided, essentially, by some other kind of ‘search’, one larger than the first. But, since intelligent agents are involved in the search for a fitness function, a certain amount of improbability can be overcome, and is. Is that what you’re referring to? I don’t know. But I think that the statement you made concerning Dembski should be applied to you. Why haven’t you made the argument that Dembski was wrong, pointed it out in a clear way, and then allowed him to respond?

    As to the Dirac function and Kronecker delta function, I didn’t get 800 on the GRE, but it’s rather obvious that if you’re doing calculus you use the one, and when you’re doing linear algebra, you use the other. My use of the Dirac delta function had application since I was dealing with ‘fitness landscapes’, and not with the ‘fitness functions’ AI is fond of. We, here at UD, often ‘see’ fitness functions; and my point was that they’re really a fiction when it comes to protein configuration space.

    Finally, since we’re talking about the Kronecker delta function, why, if there’s a more apt way to do it, do you show, as does Haggstrom, the function f: V->S when the real interest is in the entire set of such functions? Shouldn’t the proper notation have the subscript i, e.g., underneath the f ? It seems to me that’s what Einstein notation would indicate.

  229.

    PaV,

    First, let me apologize for allowing to spill over to you my annoyance with someone charging into Gish gallops and hurling bricks at the research area near and dear to me, knowing that my responses are “awaiting moderation.”

    Why haven’t you made the argument that Dembski was wrong, pointed it out in a clear way, and then allowed him to respond?

    I don’t think the NFL stuff is central to ID. In a forthcoming publication on ID, I focus on the arguments from irreducible complexity and specified complexity. I have little to say about Dembski’s No Free Lunch, which I believe is written in jello. In all honesty, I decided that to treat it as representative of Dembski’s thought, when I had read his later writings, would have been to set up a straw man. In other words, people here are defending ideas that I declined, out of fairness, to pin on Dembski.

    It’s interesting that you bring up displacement, because “Searching Large Spaces” was the first work of Dembski’s that I respected — which is not to say that I agreed with his analysis. Wolpert and Macready (1997) indeed seem not to have thought carefully about how a practitioner would match an algorithm to a “problem.” I think Dembski’s observation that there was an implicit search for an effective search algorithm was acute. It is entirely appropriate to ask how one gains information about which algorithm to apply. My objections to Dembski’s analysis are complex, and I will not go into them here. In any case, the “Practical Free Lunch” theorem in 232 effectively says that the space of search algorithms is much, much smaller in practice than in theory, and this means that Dembski’s displacement analysis is in terms of a model that does not fit physical reality. Accounting for the information practitioners use to select effective search algorithms is no less interesting a problem, however.

  230.

    PaV (230):

    Finally, since we’re talking about the Kronecker delta function, why, if there’s a more apt way to do it, do you show, as does Haggstrom, the function f: V->S when the real interest is in the entire set of such functions? Shouldn’t the proper notation have the subscript i, e.g., underneath the f ?

    Could you point me somewhere in particular in the paper? I’m not following you.

    The NFL literature refers to “needle in a haystack” (NIAH) functions instead of Kronecker delta functions. The needle is a good value, and the hay is a range of bad values (often a single value). Sometimes there are multiple needles, but never many. The sense of “good” and “bad” depends upon whether the objective is minimization or maximization. The good and bad values are not necessarily 0 and 1. Perhaps it makes sense now when I say that analysis of optimization calls for something a bit more general than the Kronecker delta.
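    To make the NIAH idea concrete, here is a small Python sketch (the names and the toy domain are illustrative, not from the NFL literature):

```python
def make_niah(needle, good=1.0, bad=0.0):
    """A 'needle in a haystack' objective: one good value, uniform 'hay' elsewhere."""
    def f(x):
        return good if x == needle else bad
    return f

# Toy domain of 16 points with the needle at 11 (numbers are illustrative).
f = make_niah(needle=11)
values = [f(x) for x in range(16)]
# Exactly one point scores 'good'; the hay is flat, so it offers no gradient
# for any search strategy to exploit.
assert values.count(1.0) == 1
```

    Because the hay is indistinguishable, on such functions no search can do better on average than uniform random sampling, which is the point of using them in NFL discussions.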

  231. MODERATORS:

    Official complaint against the anonymous poster at UD known as Semiotic 007.

    1] As you may have observed, yesterday in the 30th Dec 07 thread on NFL theorems, I complained to the above identified commenter at UD, that he had improperly published my real, full name [I have used my initials previously, in defence of myself from spam and harassment].

    2] Note, like other UD commenters have, he could easily have used the responsible approach and simply emailed me at contact emails maintained at my reference web site. He did not, and has not. [NB: I have found that using initials publicly and keeping my direct contact in a separate reference site is effective in keeping spam within reasonable levels, at least currently. Hopefully, the spambots will not get significantly more effective, at least for now.]

    3] Now, too, since it is well-known that anonymity is often used by those open to consider ID or ID proponents in defence of themselves from being Sternberged or Gonzalezed respectively, this “outing” attempt must be considered a serious offence in intent if not effect.

    4] Further to this, overnight I find a follow-up post addressed to me by handle, in which Semiotic 007 is demanding a signed statement from me.

    5] Nowhere is there the faintest trace of regrets or apology for action which is plainly improper, lending further support to the conclusion that it is intentional and calculated to do actual harm, in wanton and willful disregard for duties of reasonable care. Namely, it is a TORT.

    6] Worse, he now “demands” that I submit to him — from the manner of behaviour, likely a male — a notarised signature that he declares he intends to post on the Internet, i.e. an open invitation to identity theft. (And that in a context where I had already expressed concerns about Internet security.)

    [So is the nonsense of posting a bet of US$25,000 on a minor matter, on which I have repeatedly stated that I hold to be irrelevant and have also stated with circumstantial details that I have a serious objection to anything that even smells of gambling.]

    7] MODERATORS: I therefore ask that you bear this in mind in dealing with Semiotic 007, and request that you take appropriate action in defence of the privacy of your commenters. For those of us who take Matt 5 – 7 seriously, let us pray for this man that he will repent and seek the blessed transformation of life that flows from that. In the meanwhile in the interests of Justice on the principles of Rom 13:1 – 10, the Moderators at UD as those in a position of governance here at UD, have a duty to protect us from harm stemming from improper, irresponsible or ill-willed behaviour.

    8] On the substantive matter, onlookers can see for themselves that I hold the debates over the ideal-world mathematical nuances to be largely irrelevant to the real world of model reliability and validity. I do so as one experienced in real-world electronics and related systems [similar to what Kairos raised], as well as in the even more messy world of management models and applications:

    1 –> For, as noted in the long since public presentation accessible through my reference site here, ALL models [save prototypes and the like] are false, strictly. (Observe how S was easily able to identify my name but did not take time to look seriously at what I have to say on the matter of the validity of models.)

    2 –> But, we may inspect the subtleties that lurk in the logic of implication . . .

    3 –> Namely, P => Q asserts only the truth of the IMPLICATION, a certain logical connexion in which we have that NOT-[P and NOT-Q], so that IF P holds, then Q holds, and P cannot hold unless Q holds.

    4 –> But equally, as my all-time favourite Math prof, the “famous” Harald Niederreiter of Austria, was so fond of teaching: Ex falso quodlibet. From what is false, we may freely infer what is true [or in some cases false!]

    5 –> Thus we come to what Kairos underscored about model validation, and what PaV used effectively in the above. Namely: the strictly false may be the reliably useful [as a model], once there is proper empirical testing and validation. For instance, electronic amplifiers are commonly modelled as clusters of passive components [R -- including of course radiation resistance, C, L, M] and ideal generators [voltage or current sources], with signal grounds that take advantage of the high-frequency shorting-out effect of capacitors; and such models [when suitably sophisticated] hold up to the very limits of circuit theory, where one has to introduce wave theory.

    6 –> Indeed, by applying transmission-line lumped-parameter approximations, even then circuit theory insights can be extended into the zone of wave effects where components and traces are at least ~ 0.1 wavelength long, noting that typically in such media we are dealing with EM wave speeds of about 0.6 c. And of course in the near vicinity of many antennas, 90 – 95% of c is a relevant factor in adjusting antenna element length for best effect. Such experience-derived, judgemental rules of thumb of course allow us to extend the reliability of models further, and are part of the tricks of the trade of practitioners that you pay for when you directly or indirectly hire their expertise. And similar points extend to process control or servo systems too, etc., even to computer architecture.

    7 –> Indeed, post Quantum and post relativity, that is what Newtonian dynamics is. Extending still further, models, theories and explanations by inference to best explanation, in general, are all of the same basic character. That is strictly, a trusted scientific theory is RELIABLE, not proved true.

    8 –> Indeed, ever since Godel the same is known to extend to Mathematics, for there is no guarantee that sufficiently rich mathematical domains are free of contradictions. And if they are free of contradictions, there are true claims that are unreachable relative to their axioms. Even scientists and mathematicians must live by faith, in short.

    So, ONLOOKERS:

    In the real world context, I look at WD’s work as pointing out that the relevant case of interest is the biofunctionality of perturbation-sensitive systems with digital storage capacity orders of magnitude beyond the “stretched” UPB, 500 – 1,000 bits. Note how I use a bit of judgement here, to take in cases where there are clusters of biofunctionality, as PaV pointed out: a lot of clusters will get taken in within 10^300 cells in a config space.

    I then apply thermodynamics-based thinking, as I document in the above and in the always linked, appendix 1.

    I see in that light, that the picture PaV has painted is very apt, and that is why I used it above. Namely — and disputes over minutiae on the NFLT are irrelevant to this — RV + NS based searches that start from arbitrary initial cells in the config space, as likely to be obtained in reasonable or even very generous prebiotic chemistry- and physics-driven circumstances, are maximally unlikely to succeed in accessing even the biofunctional macromolecules sufficiently to get to any local hill-climbing that one may conceive of.

    Worse, we know that biomolecules are operative in precisely organised clusters, under algorithmic control. As the microjet assembly thought expt identifies, that is three further stages of serious expansion of the relevant config space.

    But by then the matter is in effect academic — we long since know that the only alternative to agency is a quasi-infinite cosmos as a whole with some sort of quantum bubble foam, in which there are thousands or millions or far more orders of magnitude of sub-universes with physics and prebiotic soups scattered at due random. And BTW, as Robin Collins asks ever so astutely, where did the universe-making machine come from to do all of this convenient stirring of parameters and soups with highly convenient ingredients?

    All, duly unobserved.

    So, on inference to best, empirically anchored, explanation across comparative difficulties on factual adequacy, coherence and explanatory elegance, I have long since seen that the evidence points to an Intelligent Agent with MORAL certainty.

    So, when I see WD and Marks arguing that the issue is that in praxis we see active interference inserted into such hypothetical searches to reach functionality that rises significantly above zero, that is obvious. Indeed, it raises a direct echo of the empirical findings noted by TBO in TMLO 25 years or so ago, namely that absent undue experimenter intervention, experiments on OOL go nowhere significant; the Math just shows me a side-light on why, and the thermodynamics too.

    When they then use the NFLT-type model and extend it to provide a metric on the active information supplied, that seems very reasonable indeed. Then, when I see this quantitative analysis easily take apart Dawkins’ Weasel, and the more sophisticated Avida and Ev, I see why there is an intent to now use a red herring leading out to an oil-soaked strawman and ignite it, to distract attention, cloud the atmosphere and poison it.

    Semiotic 007 now extends this to me, by seeking to “out” me and evidently to thus expose me to being Gonzalezed and/or at least subjected to spamming and possible identity theft were I so foolish as to provide him with a notarised signature.

    Finally:

    Semiotic, I was not born yesterday and you show yourself utterly unworthy of respect or trust. (That does not remove the duty to pray for you under Matt 5 – 7, and so: “may God grant you the grace of penitence and reform.”)

    GEM of TKI

  232. Semiotic 007:

    On page 9, Haggstrom writes: “The basic NFL theorem involves an average over all possible functions f.”

    If it is an average, then we should write something like Sigma, i = 1 to N, of f(sub i)/N; but this, then, implies that we should use f(sub i) rather than a simple f, it would seem. If you’re going to call all your functions f, there should be a way of distinguishing one f from another, right? That said, however, since the cardinality of the set of functions f can be so huge, I suppose you simply drop the (sub i), since you can’t practically iterate over that large a number of elements. So, it seems, the lack of a (sub i) is an indicator of the futility of searching for such an f(sub i), and a harbinger of the NFL. Nonetheless, it takes a little getting used to.
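    The “average over all possible functions f” can be made concrete for a toy domain. A minimal Python sketch (my own illustrative names, not Haggstrom’s notation): with |V| = 3 and |S| = 2 there are only 2^3 = 8 functions, and any two fixed search orders perform identically when averaged over all of them.

```python
from itertools import product

# All functions f: V -> S for a tiny V = {0,1,2}, S = {0,1}.
V, S = range(3), (0, 1)
all_fs = list(product(S, repeat=len(V)))  # each tuple fs IS a function: f(x) = fs[x]

def best_after_k(fs, order, k):
    """Best objective value seen after evaluating the first k points of a fixed visit order."""
    return max(fs[x] for x in order[:k])

# Two different deterministic search orders, averaged over EVERY possible f.
order_a, order_b = [0, 1, 2], [2, 0, 1]
avg_a = sum(best_after_k(fs, order_a, 2) for fs in all_fs) / len(all_fs)
avg_b = sum(best_after_k(fs, order_b, 2) for fs in all_fs) / len(all_fs)
assert avg_a == avg_b  # identical on average: no free lunch over the whole set of f
```

    Enumerating the f(sub i) explicitly like this is exactly what becomes impossible at realistic cardinalities, which is the commenter’s point about dropping the subscript.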

  233. semiotic007 is no longer a member and the offending comments were removed.

  234. Dave

    Thanks for the attention.

    I appreciate your removal of the unnecessary reference to me by personal name.

    A real pity that Semiotic had to resort to personalities and attempted outing; there could have been a useful discussion.

    I wish he could have simply apologised and allowed the discussion to move on from there, with a due balance between the issues of mathematical niceties and the real-world considerations of modelling and validation — thence, of what we may call: useful reliability.

    GEM of TKI

    I watched this conversation unfold and it seemed to me that there was a disconnect, since Semiotic seemed focused on the problems of software engineering and everyone else on biological reality. Now Semiotic made an “interesting” claim that no one jumped on (unfortunately, it appears it was deleted since it was part of an offending comment). He briefly mentioned how the (presumably) software-based programs that generate information would exceed the UPB (duh), then he claimed that an algorithm furthered by natural processes should be expected to perform better (or something to that effect). No justification was given, but I found that assertion to be more interesting than anything else being discussed.

  236. Hey. Get over it. It’s a fact. Computers simulating evolution create information just like iPods create music.

    Gloppy

  237. Patrick:

    I too am sorry to see the conversation end as it did. I wish it had not — this despite having had to complain of tort, and before that having had to point out, through the hostile witness, Wiki, that there was more to the story than we were being given by the ones tightly focussed on whether NFLT strictly holds in relevant real-world contexts.

    To put a similar case, at a very crude level, pi is not strictly speaking equal to 22/7, but that is often “good enough for government work.”

    Similarly, NFLT probably does not hold strictly in the sort of situation we were facing, but it is probably true that no “blind” algorithm will do significantly better than an arbitrary “pick a config at random” approach in finding the FUNCTIONALLY SPECIFIED DNA configs of life [much less the other components and organisation of a cell], starting from any plausible or generous pre-biotic soup. That is why I said in 184 above that PaV put his finger on it in his comment in 175:

    If we mentally try to visualize what’s going on, we can look down on a sea of two-dimensional space. At each location, that is, each point [I would say cell - this is a discrete space!], of this two-dimensional space we find a permutation of a 3,000,000,000-long genome. As we look down onto this 2D space, these 100 trillion “high fitness” genomes, along with each of their trillion “high fitness” permutations, are randomly dispersed on this plane. What we’re going to do is to “pull together” all of these trillion “high fitness” permutations to form a cluster. (After all, they’re ‘independent’ of one another.) We end up with 100 trillion clusters, each consisting of one trillion permutations. We could have, admittedly, “clustered” all 10^25 (100 trillion x one trillion) together. But, if we were to do a blind search for just that one cluster, it would be much harder to find than having 100 trillion “clusters” (of a trillion permutations) throughout the space of all possible genomes.

    Now in this configuration of genome space we have “clustering”; in fact, we have it to a staggering degree: viz., one trillion viable permutations per genome. So, [per model just proposed] if the human genome were to experience a mutation anywhere along its length, the likelihood of it not being viable would be 1 in a trillion.

    So, again, we have the space of all possible genomes within which are to be found, randomly (again, giving the best possibility of being found by search), 100 trillion “clusters” of a trillion permutations. Once we’ve pulled all these permutations together and formed 100 trillion “clusters” of a trillion permutations each, then the space, G, of all possible genomes is smaller by roughly 10^25 genomes. But 10^25 is a vanishingly small fraction of G, leaving G essentially unaffected in size.

    Now, what we have left is a uniform distribution of size 10^1,000,000,000, among which are to be found generously realistic “clusters” of genomes for every living being imaginable. The odds of hitting the target, that is, any one of the 100 trillion “clusters” of genome permutations, through blind search is 10^25/10^1,000,000,000 = 1 in 10^999,999,975.

    You can’t argue that the “clustering” I propose has in any significant way changed the uniform distribution of G, the space of all possible genomes. Nature must navigate this space using, per Haggstrom, Darwin’s algorithm A (reproduction-mutation-selection) to find its way through this uniform distribution. But since it is a uniform distribution, we know that it’s no better than ‘blind search’, and we know that G is too vast for blind search to work.
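    Numbers of this size can only be handled in logarithms. Using the figures of the model quoted above (roughly 10^25 target genomes in a space of roughly 10^1,000,000,000), the blind-search odds work out as a simple exponent subtraction; a sketch:

```python
# All quantities in base-10 logarithms: the numbers themselves are far too
# large to represent directly.
log_targets = 25              # ~10^25 viable genome permutations (per the quoted model)
log_space = 1_000_000_000     # ~10^1,000,000,000 possible genomes (per the quoted model)

log_odds = log_targets - log_space  # log10 of P(a uniform random draw hits a target)
assert log_odds == -999_999_975
# i.e. P(hit) = 10^-999,999,975: uniform blind search is hopeless at this scale.
```

    The figures themselves are the commenter’s model assumptions, not measured quantities; the sketch only shows the arithmetic.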

    This is where the Explanatory Filter, that Dembski describes, would tell us that since randomness cannot explain the “discovery” of living genomes, then design is involved.

    However, generating “information” in one sense is ever so easy: flip a coin 1,000 times and you have a sequence that is unique to one part in 2^1,000. That is, it is complex in the sense of being very highly contingent — you would be ever so unlikely to match that particular string of coins again on the gamut of the observable universe, over its lifespan.

    But, to specify the string of coins, we would have to basically list it out.

    But, now, suppose I were to tell you that the string of coins specifies the first 250 digits of pi in binary coded decimal, ignoring the decimal point: 3141592653 . . . and on for 250 digits. That is, in 8421 BCD, with dashes to show the digits: 0011 – 0001 – 0100 – 0001 – 0101 – 1001 . . .

    Now, the string is not only unique, but also functionally specified, as just described; i.e., plug it into the area calculation for the surface of a sphere and it will give the right answer. That functionality can be simply and briefly described [and replicated through a series for pi, at will].
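    The 8421 BCD encoding above is mechanical and easy to check. A minimal Python sketch (the names are mine; plain hyphens stand in for the dashes shown above):

```python
PI_DIGITS = "3141592653"  # first 10 decimal digits of pi, decimal point ignored

def to_bcd(digits):
    """Encode a decimal digit string in 8421 BCD, 4 bits per digit."""
    return " - ".join(format(int(d), "04b") for d in digits)

bcd = to_bcd(PI_DIGITS[:6])
# Reproduces the nibbles listed in the comment: 3=0011, 1=0001, 4=0100, ...
assert bcd == "0011 - 0001 - 0100 - 0001 - 0101 - 1001"
```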

    That is, we see here functionally specified, complex information. We can even specify a cluster of functional near-equivalents, e.g will give pi to within .0001% or whatever is useful. BTW, such a specification will of course preserve a certain part of the pi-string very tightly indeed, and will allow the rest to vary as it wills. For the rest is much less important to the function.

    We could even extend this: we can allow hill-climbing to pi-250 if the first hit is close enough to count to a required precision. But that would not help an arbitrary coin toss get near enough to count in the sea of all possible configs. And, if we rigged the coins so that the first toss will with high probability be within the target zone, that too will be because we have intelligently intervened to shift the distribution of the random variable sufficiently far away from “uniform” that we can now say we have fed in an increment of active information.
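    The “target zone” idea, that hill-climbing only helps once a first hit already lands close enough, can be sketched with a greedy single-bit-flip climb on Hamming distance. This is an illustrative toy (all names mine, and a 64-bit string stands in for the pi-250 string):

```python
import random

random.seed(0)
TARGET = [random.randint(0, 1) for _ in range(64)]  # stand-in for the pi-250 bit string

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def hill_climb(start, target, threshold):
    """Greedy single-bit-flip climb, but only if `start` is already inside the
    'target zone' (within `threshold` bits of the target)."""
    if hamming(start, target) > threshold:
        return None  # a blind start almost never lands this close
    s = start[:]
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            t = s[:]
            t[i] ^= 1
            if hamming(t, target) < hamming(s, target):
                s, improved = t, True
    return s

# A random 64-bit start sits ~32 bits away on average: outside a 10-bit zone.
cold = [random.randint(0, 1) for _ in range(64)]
# A start seeded near the target climbs home easily.
warm = TARGET[:]
warm[3] ^= 1
warm[17] ^= 1
assert hill_climb(warm, TARGET, threshold=10) == TARGET
```

    The design point matches the comment: the climb itself is trivial; all the work lies in getting a first sample inside the zone, which a uniform draw essentially never supplies.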

    And that is what WD and Marks did in their recent work on NFLT and evolutionary computing — they quantified how much functional information has been fed into Dawkins’ “Methinks”, Avida and Ev.

    It turns out that if you are able to do significantly better than random selection across the whole config space, for a sufficiently rich space to be relevant to say OOL or OOBPLBD, you have committed an act of intelligent design.

    That is exactly the sort of thing that TBO pointed out in TMLO — the first technical level ID work — twenty-five years ago when they came up with a metric for investigator interference with the chemistry in pre-biotic scenarios; and again the point is that if you are above the threshold of success, you are outside the credible framework of what unaided blind nature in plausible pre-biotic scenarios will do.

    Somebody is trying to tell us something, if we are only listening . . .

    GEM of TKI

  238. #235 Patrick

    Now Semiotic made an “interesting” claim that no one jumped on (unfortunately, it appears it was deleted since it was part of an offending comment). He briefly mentioned how the (presumably) software-based programs that generate information would exceed the UPB (duh) then he claimed that an algorithm furthered by natural processes should be expected to perform better (or something to that effect).

    Unfortunately I’ve not found this point. Could you please restate roughly what the argument is?
    I know that in the past some critics did claim that code generation (having in mind gene duplication, obviously) would be an easy way to increase CSI. Was this S.’s argument? In that case this would simply show a very typical misunderstanding of what the CSI concept really means.

    PS for Kairosfocus. Please excuse my ignorance, but what does GEM of TKI stand for?

  239. Hi Kairos [and Patrick]:

    First, any thoughts on using pi-250 as a useful model complete with hill-climbing?

    [BTW, the fact that the coin can produce binary codes that BCD does not use both brings in non-functional points that are bitwise very close to functional ones, and brings in a non-uniformity in the bit patterns, i.e. not all of the set from 0000 to 1111 is used. That means that the 1's and 0's will not express the same amount of information!]

    Thanks

    GEM of TKI

    PS: Kairos, GEM are my initials, the meaning of which is easily enough accessed through the always linked (and links to you above that Semiotic unfortunately decided to abuse); indeed, in hand-drawn stylised form it is a form of my initials-style signature. TKI is the short form of my consultancy personality and organisation — I am involved in a loose regional network. The Kairos Initiative.

  240. #239 Kairosfocus

    First, any thoughts on using pi-250 as a useful model complete with hill-climbing?

    A possible criticism could point to the choice of the precision according to which the first hit could start the hill-climbing. Somebody could say: OK, we cannot add to the algorithm a formula for pi, but here we are in the mathematical world and we aren’t constrained by measurement precision (as is the case in real-world examples); so, we have a fitness function that does tell us how “good” the hit is. Why not start from whichever point and use hill-climbing without any constraint on the precision? So the example is OK if we state explicitly that that precision is due to the use of “real world” fitness functions, for example a direct measurement on a circle.
    With this addition it’s an interesting example that is somewhat similar to what I meant (for real-world examples). I would suggest that the example could be made less bound to our specific mathematical notation by using directly the binary representation of pi instead of the BCD one. This would mean searching a solution space with S = {0,1}. It’s also a good example because, as you have already stated, the computation of pi can be expressed in a very short way (i.e. with very high specificity), for example by providing the code for computing the Gregory-Leibniz series: Pi = 4*(1 – 1/3 + 1/5 – 1/7 + …).
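    The “very short way” point is easy to see: the series really does fit in a couple of lines of code, although it converges very slowly. A minimal sketch:

```python
def gregory_leibniz(n_terms):
    """Approximate pi with the Gregory-Leibniz series: pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

approx = gregory_leibniz(100_000)
# The error shrinks only like ~1/n_terms, so 100,000 terms give about 5 digits.
assert abs(approx - 3.141592653589793) < 1e-4
```

    The brevity of this generator, versus the length of the 250-digit string it specifies, is exactly the high specificity being pointed to.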

    GEM are my initials, the … TKI is the short form of my consultancy personality and organisation — I am involved in a loose regional network. The Kairos Initiative.

    I beg your pardon; I didn’t understand.

  241. Unfortunately I’ve not found this point. Could you please restate roughly what the argument is?

    Unfortunately, the deleted comment is not in Google cache either… The impression I got was that S. believed that, by its very nature, an algorithm carried out by naturally occurring processes should perform better than a software-based program. I find this assertion odd since, to my mind, the constraints of nature are either too wide or too narrow, and not at all balanced like a well-designed GA. It should be a rather rare event that an environment provides such a balance and the variation provides functionally positive mutations. So on balance I would expect nature to perform worse than even a poorly designed GA. I was hoping to ask for his justification for this assertion.

Leave a Reply