
Dawkins’ WEASEL: Proximity Search With or Without Locking?

On pp. 47-48 of THE BLIND WATCHMAKER, Richard Dawkins gives two runs of his WEASEL program (note that there were typos in both initial seeds — one had 27 characters, the other 29, whereas both should have 28; I’ve corrected that). Here are the two runs using the Courier typeface, which assigns equal width to each character (spaces are represented by asterisks):


WDL*MNLT*DTJBKWIRZREZLMQCO*P
WDLTMNLT*DTJBSWIRZREZLMQCO*P
MDLDMNLS*ITJISWHRZREZ*MECS*P
MELDINLS*IT*ISWPRKE*Z*WECSEL
METHINGS*IT*ISWLIKE*B*WECSEL
METHINKS*IT*IS*LIKE*I*WEASEL
METHINKS*IT*IS*LIKE*A*WEASEL

Y*YVMQKZPFJXWVHGLAWFVCHQXYPY
Y*YVMQKSPFTXWSHLIKEFV*HQYSPY
YETHINKSPITXISHLIKEFA*WQYSEY
METHINKS*IT*ISSLIKE*A*WEFSEY
METHINKS*IT*ISBLIKE*A*WEASES
METHINKS*IT*ISJLIKE*A*WEASEO
METHINKS*IT*IS*LIKE*A*WEASEP
METHINKS*IT*IS*LIKE*A*WEASEL

These runs are incomplete. The first, according to Dawkins, required 43 iterations to converge, the second 64 (Dawkins omitted the other iterates to save space).

As you can see, by using the Courier font, one can read up from the target sequence METHINKS*IT*IS*LIKE*A*WEASEL, as it were column by column, over each letter of the target sequence. From this it’s clear that once the right letter in the target sequence is latched on to, it locks on and never changes. In other words, in these examples of Dawkins’ WEASEL program as given in his book THE BLIND WATCHMAKER, it never happens (as far as we can tell) that some intermediate sequence achieves the corresponding letter in the target sequence, then loses it, and in the end regains it.

Thus, since Dawkins does not make explicit in THE BLIND WATCHMAKER just how his algorithm works, it is natural to conclude that it is a proximity search with locking (i.e., it locks on characters in the target sequence and never lets go).

Interestingly, when Dawkins did his 1987 BBC Horizons takeoff on his book, he ran the program in front of the film camera:

www.youtube.com/watch?v=5sUQIpFajsg (go to 6:15)

There you see that his WEASEL program does a proximity search without locking (letters in the target sequence appear, disappear, and then reappear).

That leads one to wonder whether the WEASEL program, as Dawkins had programmed and described it in his book, is the same as in the BBC Horizons documentary.

In any case, our chief programmer at the Evolutionary Informatics Lab (www.evoinfo.org) is expanding our WEASEL WARE software to model both these possibilities. Stay tuned.
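For readers who want to experiment in the meantime, here is a minimal sketch in Python of a WEASEL-style proximity search with an optional locking flag. Everything specific below is an assumption: Dawkins published neither his code nor his parameters, so the mutation rate, population size, and selection rule are merely plausible guesses.

```python
import random

TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"  # '*' stands in for the space

def score(s):
    """Number of positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(parent, rate, lock):
    """Copy the parent, changing each character with probability `rate`.
    With lock=True, characters already matching the target are frozen."""
    return "".join(
        ch if (lock and ch == tgt) or random.random() >= rate
        else random.choice(ALPHABET)
        for ch, tgt in zip(parent, TARGET)
    )

def weasel(pop_size=50, rate=0.05, lock=False):
    """Return the number of generations needed to reach the target."""
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        generations += 1
        # best-of-population ("proximity") selection
        parent = max((mutate(parent, rate, lock) for _ in range(pop_size)),
                     key=score)
    return generations
```

With lock=True a correct letter can never revert, matching the look of the printed runs; with lock=False letters can appear, disappear, and reappear, as in the BBC footage.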


355 Responses to Dawkins’ WEASEL: Proximity Search With or Without Locking?

  1. Is it possible that the code is the same, but other parameters, such as the population size and mutation rate, were changed between the book and the TV show? Right at the end of the video clip showing the simulation, you can see that the generation count was 2485. That is a very different result from the fewer than 100 generations in the runs summarised in the book.

  2. William Dembski:

    That leads one to wonder whether the WEASEL program, as Dawkins had programmed and described it in his book, is the same as in the BBC Horizons documentary.

    They both work, so what does it matter?

  3. That leads one to wonder whether the WEASEL program, as Dawkins had programmed and described it in his book, is the same as in the BBC Horizons documentary.

    Why not just ask him?

  4. Stay tuned? You bet…

  5. Dr. Dembski:

    Thus, since Dawkins does not make explicit in THE BLIND WATCHMAKER just how his algorithm works, it is natural to conclude that it is a proximity search with locking (i.e., it locks on characters in the target sequence and never lets go).

    Actually, Dawkins described his algorithm very clearly. What he didn’t tell us is the parameters he used, namely mutation rate and population size.

    When I coded Dawkins’ algorithm, I chose a mutation rate of 5% and a population size of 50, just because the values struck me as reasonable (although 5% would be quite high in a biological context). It turned out that these values rendered the reversion of correct letters highly improbable. The math to bear this out would be ugly but doable.

    Why, then, is it natural to conclude that Dawkins implemented latching, didn’t mention it in his description of the algorithm in TBW, and then removed it before the 1987 video? Doesn’t it seem more likely that Dawkins’ parameters were such that the reversion of correct letters was highly improbable, and that he used a different set of parameters for the video?
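R0b’s point is easy to probe numerically. The sketch below (Python; the 5% mutation rate and population of 50 are R0b’s stated choices, not anything from Dawkins) runs a no-latching search for a few generations, then measures how often the winning child actually loses a letter the parent already had right. If R0b is correct, the measured fraction should be low even though nothing in the code prevents reversion.

```python
import random

TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
RATE, POP = 0.05, 50   # R0b's assumed parameters, not Dawkins'

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(parent):
    # every position is free to mutate -- no latching anywhere
    return "".join(random.choice(ALPHABET) if random.random() < RATE else c
                   for c in parent)

def reversion_fraction(trials=300):
    """Fraction of selection steps in which the winning child loses a
    letter the parent already had correct."""
    reversions = 0
    for _ in range(trials):
        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(10):   # let some correct letters accumulate
            parent = max((mutate(parent) for _ in range(POP)), key=score)
        correct = [i for i, c in enumerate(parent) if c == TARGET[i]]
        winner = max((mutate(parent) for _ in range(POP)), key=score)
        if any(winner[i] != TARGET[i] for i in correct):
            reversions += 1
    return reversions / trials
```

Reversions that do occur this rarely would be invisible in the seven printed iterates of each run, so the book’s excerpts cannot distinguish the two designs.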

  6. Gentlemen: If Dawkins is tuning the parameters differently for the program as described in the book and for it as exhibited in the BBC documentary, isn’t he in effect using a different program?

  7. Actually, on looking at the video, I don’t think that Dawkins necessarily changed his parameters. It appears that the screen in the video is cycling through the whole population, not just showing the winner. In that case, reversion of correct letters is occurring in the sense that correct letters get mutated, but not necessarily in the sense that selected winners contain reverted letters.

    In summary, there is no evidence that Dawkins used a latching mechanism in his 1986 algorithm, and the 1987 video constitutes evidence that he did not.

  8. I am still looking for the book “The Blind Watchmaker”- I have it on order through the local library, so until I read it again I can’t say what Dawkins did in the book.

    That said, I have offered my opinion on why the “locking” is only apparent-

    That is in each generation the only “survivor” is the one that is closest to the target.

    Before the next generation the parent is the closest.

    Now, given that the mutation rate is 4% (about one letter change per offspring per generation), there is also a 96% chance there won’t be any change.

    So once a match is found and deemed “closest to the target” then all surviving offspring should be at the minimum equal to the parent and at best some degree closer to the target.

    Therefore, to see if any letters are “locked” one then has to look at ALL of the REJECTED offspring.

    That is, if one cannot get hold of the ORIGINAL code that Dawkins used in BW.

  9. I disagree with R0b on whether Dawkins changed parameters. The number of generations is the evidence he did. I think the video parameters are a smaller pop and perhaps higher mutation. I think the result is a more “videogenic” simulation, since you see lots of visual change on the screen.

  10. Another way of saying it is that the letters are “locked into place” by the laws of probability.

    That is, given some “correct” mutation rate and the “correct” number of tries per generation to choose from, the odds would favor at least one offspring per generation being equal to the parent- i.e. no change.

    The other 99 are competing against that one to get “displayed”, and then becoming the “parent” of the next generation.

    Picture “The Price is Right” wheel with every letter in the alphabet (and a space)- except that once a parent becomes established 96 out of 100 spaces are then that letter and the other 4 are any letter but that one- wildcards.

    Then spin away- 100 spins per wheel, with 28? wheels trying to get “METHINKS…”

    And each time a new parent is chosen the wheels change to match it.

  11. Wesley Elsberry has just posted the following at his own forum:

    I already corresponded with Dawkins back in 2000. There was no locking of characters in any implementation he did, nor was there any description of locking in anything he said.

  12. If Dawkins is tuning the parameters differently for the program as described in the book and for it as exhibited in the BBC documentary, isn’t he in effect using a different program?

    Re-running the same program with different data or settings is equivalent to using a different program? I don’t see that.

  13. Dr. Dembski asks a very interesting follow-on question. Since Dawkins isn’t posting comments here yet, can we get Atom’s opinion about the same question re Weaselware, or Dr. Dembski himself re MESA? All these parameter-driven programs can give vastly different results depending on how the parameters are set.

  14. I don’t care if he used a different program or not; if he monkeyed with the parameters to achieve a desired result, then he is demonstrating design, not chance and/or necessity.

    I also find it interesting that he admits in the video to aiming for a specified target even though evolution doesn’t do this, but then just glosses over this distinction as if it isn’t all that important. I’ve never understood why this ‘weasel’ program tells us anything about how evolution is supposed to work.

  15. DonaldM, all sorts of things could be changed between versions of a program run: the mutation rate, the population size (number of mutated versions generated from each best fit), the speed of display. These wouldn’t make the program work significantly differently, though they might change the time it took to reach the target, the number of generations, and the way the running program looked on film.

    I don’t know as much about programming as many here, but I’m pretty sure this is right.

  16. DonaldM:

    I don’t care if he used a different program or not; if he monkeyed with the parameters to achieve a desired result, then he is demonstrating design, not chance and/or necessity.

    I was the one who suggested that he changed his parameters, but I retracted that suggestion in [6]. On looking at the video, I see no evidence that he changed his parameters.

  17. R0b, Look at the generation number. There has to be a param change, probably lowering the population size significantly. That would lead to the fallbacks in good letters and a faster update of the screen, which I think was the reason for changing the params – better visuals!

  18. Pendulum, I don’t think that “tries” refers to generations — I think it refers to instances of the sequence, ie “organisms”. In the video, it succeeded in 2485 tries. If the population was 50, that would be 50 generations (50 * 50 = 2500 — apparently it doesn’t finish out the current generation when it finds the target). That’s in line with the number of generations reported in TBW.

  19. R0b, good point. I think some people have misunderstood what happens in the program:

    Each “generation” contains a number of “tries” (these can be adjusted to see how it works with changing variables). The best “try” becomes the parent for the next set of “tries” (each of which is free to mutate at any letter, correct or not). One could change the number of letters that mutated and the number of “tries” per generation without changing anything foundational about the program.

    I don’t know why people don’t seem to understand this. I don’t know beans about programming, and I understand it.

    It would be interesting to change the target while the program is running. What happens then? What if

    METHINKS IT IS LIKE A WEASEL

    became

    THINK YOU IT IS LIKE A WEASEL

    and then

    I THINK YOU LOOK LIKE A BEAGLE

    in the middle of a run, but with no other changes?

    I bet there would be no latching anywhere because the program never had or needed latching.
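The mid-run target switch proposed above is straightforward to try. Here is a sketch in Python; the population size, mutation rate, and length-adjustment rule are all my own guesses, not anything from Dawkins.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def evolve(parent, target, pop=100, rate=0.05, max_gen=20000):
    """Best-of-population proximity search; no letter is ever latched."""
    def score(s):
        return sum(a == b for a, b in zip(s, target))
    gen = 0
    while parent != target and gen < max_gen:
        gen += 1
        parent = max(
            ("".join(random.choice(ALPHABET) if random.random() < rate else c
                     for c in parent)
             for _ in range(pop)),
            key=score)
    return parent, gen

def fit_length(s, n):
    """Pad with random letters, or truncate, to match a new target's length."""
    return (s + "".join(random.choice(ALPHABET) for _ in range(n)))[:n]

start = "".join(random.choice(ALPHABET) for _ in range(28))
s, g1 = evolve(start, "METHINKS IT IS LIKE A WEASEL")
# switch the target mid-run: progress on shared letters simply carries over
s, g2 = evolve(fit_length(s, 30), "I THINK YOU LOOK LIKE A BEAGLE")
```

Because no letter is ever locked, progress on letters shared between the old and new targets carries over, and the rest re-evolve; individual letters appear, disappear, and reappear along the way.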

  20. I’m wondering what the Weasel Program is supposed to evidence. I haven’t read TBW, so I’m really not sure what purpose the simulation has. Is it supposed to show that by some process of variation, a targeted “phrase” or “combination” can be reached eventually? He mentions in the video that some reward will be given if the safe’s code is partially determined, by some money dribbling out.

    To me, it seems that in both instances, the safe and the phrase, we already know the purpose we’re trying to achieve: either money or a “correct” phrase. Isn’t this information that the “search” wouldn’t have? If there is no definite end or purpose already known, the “reward” wouldn’t exist; it would be like trying to find the safe’s combination for the purpose of swimming or doing math homework. It could be any “reward”; money would be just as meaningless as both.

    It seems to me that the simulation begs the question of what it is that’s trying to be achieved, and what it is that is being “rewarded” for “correct” bits of the “puzzle”. With a puzzle you have a picture, an objective reference point that determines what it “should” look like; with this, you have nothing of the sort. And safes don’t reward a little if you get one part of the combination right. The Weasel Program already knows that there was an author’s phrase that is being approximated to.

    But I can’t see the analogy of an author in nature that the search is approximating to unless we have ID, and intent and purpose is driving it to that definite end. Please, I’m trying to understand the purpose of this program, and how it is supposed to be evidential at all to evolution. If evolution doesn’t have you in mind, then it doesn’t have “Shakespeare’s phrase” in mind either (metaphorically speaking), and would have no reason to keep some combinations and not others.

  21. When I coded Dawkins’ algorithm…

    In other words, a set of requirements/business rules was articulated by one intelligent entity and implemented by another. The programmer selected design parameters based on what he thought was reasonable, and used an integrated set of known-to-have-been-designed infrastructure (programming language, operating system, hardware…) to code to those requirements. There were probably bugs in the early iterations (as there always are), but he knew the “correct letters” (something evolution does not and cannot know, according to the mainstream theory) and was able to troubleshoot until the desired state was obtained.

    Question: given the fact that none of the makers of this particular watch can justifiably be called blind, what conclusion are we expected to draw again?

  22. Clive, Dawkins points out the same disanalogy that you do, namely that WEASEL has a target while biological evolution does not (as far as science can tell).

    The point of WEASEL is to illustrate the contrast between cumulative selection and random sampling. Of course, WEASEL doesn’t demonstrate that the conditions necessary for cumulative selection exist in biology, and the point of irreducible complexity is to show that these conditions don’t exist in some biological cases.

  23. Can someone help me understand this:

    What exactly does this model purport to simulate? What exactly (specifically) does the METHINKS*IT*IS*LIKE*A*WEASEL represent?

    If it represents a “completed” evolution of an organism, wouldn’t each successive generation have to have some natural advantage over the previous one? An advantage so great that the previous generation’s genetic makeup eventually dies out? If not, why does that generation keep going and not the others? Without these steps the model simulates design, not randomness. But just how big does an advantage have to be for it to be to the detriment of the rest of the gene pool? Isn’t that what each step represents?

    This is also something I haven’t quite understood: does the model account for the variability of survival that has nothing to do with this natural advantage? In other words, the chance the organism will die before it is able to even take advantage of the mutation?

    As you can tell, I am not a scientist, but I am having trouble getting my head around these things.

  24. “Cumulative sampling” (to a statistical researcher, that is an odd term) is no more effective at coordinating functionality within separate organizations using meta information than “random sampling” is.

    Yet, this is exactly what is called for.

  25. Clive Hayden:

    What if the goal were “any valid English sentence [of length n]”? So of all the individuals on a given iteration, it would keep the closest phrase to any valid sentence. That would seem much easier to attain, actually, than “me thinks it is a weasel”. It would also be general, not specific, but would result in very complex meaningful sentences.

    And would it be comparable to “any viable biological organism”?

  26. R0b,

    “The point of WEASEL is to illustrate the contrast between cumulative selection and random sampling. Of course, WEASEL doesn’t demonstrate that the conditions necessary for cumulative selection exist in biology…”

    I’m really trying to understand what the Weasel analogy is, then, if it is not really about finding the phrase. I may have a conceptual block here, but I’m still not getting it. I appreciate you taking the time to explain it to me.
    What is the analogy trying to show?
    What would showing it prove?

  27. JT,

    “What if the goal were “any valid English sentence [of length n]”? So of all the individuals on a given iteration, it would keep the closest phrase to any valid sentence.”

    I was discussing evolution with Kris on another thread, and he said that there was no “goal” to evolution. Maybe this is my conceptual difficulty with the analogy. Apparently, evolution doesn’t have anything in “mind”, not even a “sentence”. What is the shortest English sentence supposed to show? That evolution can do such a thing by keeping parts of the sentence until all of the sentence is reached? Doesn’t that mean that evolution is familiar with “sentences”? That seems to me like saying that a person born blind mixes paints until he gets to magenta. If he didn’t already know that color, how could he attain it? I appreciate your help in making sense of this illustration for me.

  28. Clive Hayden [25]:

    Apparently, evolution doesn’t have anything in “mind”, not even a “sentence”.

    Well, apparently it has in mind “any viable biological organism”. And it has in mind “increasingly viable biological organisms”. Increasing viability would imply increasing information in an organism. Adaptedness would mean the wherewithal to handle more and more situations in the environment, which would imply more and more complex organisms.

    So in the context of “any valid English sentence”, presumably there should be a goal of longer and longer valid English sentences.

    I was discussing evolution with Kris on another thread, and he said that there was no “goal” to evolution.

    I believe it would be contended that the solutions are in the landscape and not the selection process, but this distinction may not be relevant.

    What is the shortest English sentence supposed to show? That evolution can do such a thing by keeping parts of the sentence until all of the sentence is reached?

    Well, in the Zachriel algorithm (which is actually a more refined and thus more relevant version of the weasel program) only valid English phrases are kept as intermediary forms. Increasing fitness is measured as increasing length of a valid English phrase (or sentence).

  29. JT,

    What I’m still not understanding is the analogy between when something has been, if you will, attained, or captured, such as biological viability, and how that’s analogous to sentences. Do you mean to say that just by virtue of living and reproducing, mutating and evolving, the English sentence is being crafted? One evolutionary step at a time? Wouldn’t that mean that there were some fixed property that the biological viability was approximating to in its “iterations”? What would this be? Is the analogy saying that “biological viability” (which I presume means just living, and maybe reproducing) is akin to a valid English sentence in some way? How would the analogy follow? How would biological changes amount to anything “readable” over the course of the changes?–even in their own “language”? We already know when an English sentence is being approximated to; how do we have this equivalent from biological changes–how are the changes amounting to something that we already know and can read and pass judgment on their proximity to it?

  30. Clive @25: Doesn’t that mean that evolution is familiar with “sentences”?

    Or at least English grammar. The analogy in biology would be “evolution is aware of protein chemistry”.

    Switching from magenta to yellow: bees and dandelions don’t know what we call yellow; they are just agreeing on a signal that one can make and one can look for. The dandelion blindly paints itself a color, and the bee blindly wires its brain to look for a color. If they agree, they both survive and pass down their respective recipes. Over much time, they evolve a fit as close as lock and key.

  31. R0b @18, my bad. I didn’t see the “tries” wrapped on the next line. I now agree that it probably is showing every member of the population on the screen.

  32. Clive:

    “What is the analogy trying to show?
    What would showing it prove?”

    To use Dawkins’ other analogy, it shows that you can find your way up the natural staircase at the rear of Mount Improbable rather than clearing the sheer front face in a single bound. It doesn’t address the standard ID arguments that 1) there is no such smooth path requiring only manageably short leaps or 2) if there is, then that’s an improbable enough contrivance to indicate some kind of design in itself. But then, in fairness, it isn’t meant to. It’s not a sufficient case for “RM + NS”, but it is a necessary early step in one.

    (But all this is IIRC as I don’t have the book to hand.)

  33. Clive Hayden:

    Do you mean to say that just by virtue of living and reproducing, mutating and evolving, the English sentence is being crafted? One evolutionary step at a time? Wouldn’t that mean that there were some fixed property that the biological viability was approximating to in its “iterations”?

    Fixed? No.

    If the goal were “any legal C program” for example, that would be a dynamic goal, not fixed. And a context-free parser (something that could identify a C Program) is not in principle a complex program.

    (BTW, If your indirect point is that there is teleology inherent in the process, I would agree with you, but it is just extremely difficult to get people to see that if they are not so inclined – just an observation.)

    How would biological changes amount to anything “readable” over the course of the changes?–even in their own “language”? We already know when an English sentence is being approximated to, how do we have this equivalent from biological changes–how are the changed amounting to something that we already know and can read and pass judgment on their proximity to it?

    I think you would have to agree that there would be a potential objective standard for gauging comparative fitness of organisms in an environment. Suppose you compare two otherwise identical organisms and one has better vision due to specific refinements in its eyes. However those refinements came about, it would be a true fact that the organism possessing them was more fit.

    The Zachriel mutagenator program can generate a third of the English dictionary in a few hours. I don’t even know how it took that long, given that in my experience it generated what looked like a hundred words in a few seconds. But all the intermediary words are legal words themselves. And then longer and longer valid words are formed strictly through random mutations. For just a seven-character word this incremental method is several thousand times faster than blind chance (because of regularities in the English language landscape). And incredibly, the longer the word is, the more improvement there is over blind chance. It very quickly becomes 200,000 times faster for, say, a 10-character word than a 26^10 blind search.

    So now imagine you have a bunch of individual words floating around, and those that can hook up grammatically start doing that to form phrases. So then the challenge is to form phrases and sentences by combining words randomly, but that’s no different than combining letters to form words. And then as phrases reach a certain length a period becomes an option. And then certain sentences that make sense together hook up.

    And how would we get say a children’s book “Bob’s New Hat” out of this, for example? Well presumably there would be a lot of copies of Bob and Hat floating around and maybe it becomes sort of like Mad Lib and we get a whimsical children’s book out of it.

    I got sidetracked here, but I will post this as is.
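The incremental word-growing idea can be illustrated with a toy sketch. This is only a reconstruction from JT’s description (Zachriel’s actual code is not given in this thread), and the word list is a tiny stand-in for a real dictionary; the rule is the one described: a random change is kept only if the result is itself a valid word.

```python
import random

# Tiny stand-in dictionary; a real run would load an actual word list.
WORDS = {"a", "an", "and", "hand", "handy"}
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def grow(word, tries=20000):
    """Insert random letters one at a time, keeping a change only if the
    result is itself a valid word (intermediate functionality)."""
    for _ in range(tries):
        pos = random.randrange(len(word) + 1)
        cand = word[:pos] + random.choice(ALPHABET) + word[pos:]
        if cand in WORDS:
            word = cand
    return word
```

Starting from “a”, every kept intermediate (“an”, “and”, “hand”) is a functional word in its own right, and the search typically reaches “handy” within a few hundred insertions, far fewer than the roughly 26^5 draws that blind assembly of a five-letter word would need.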

  34. What I’m still not understanding is the analogy between when something has been, if you will, attained, or captured, such as biological viability, and how that’s analogous to sentences.

    What would this be? Is the analogy saying that “biological viability” (which I presume means just living, and maybe reproducing) is akin to a valid English sentence in some way? How would the analogy follow?

    Well, language seems a very apt model for biological organisms. Languages are extremely complex and have themselves coincidentally evolved in a very haphazard, incremental fashion. No one’s saying that it is a perfect analogy, but what would be a better one?

    But if we define fitness as an increase in biological functional complexity, the relevance of an increase in meaningful sentence or word length should be apparent.

  35. Word and sentence juggling as “support” for the plausibility of biological evolution is confused and fatally flawed. There are multiple problems.

    To take one such problem as an illustration, Darwinian evolution does not evaluate by anticipating what will be useful someday. Selection is ruthlessly oriented to the present competition. However, if we imagine a world of functional sentences, the idea that isolated words can be preserved because they are part of English immediately breaks the analogy with biology.

    In English, we might mutate “of” to “off”, knowing that both are part of the “landscape” of English — i.e. subsets of the already existing fully functional language, even though by themselves they are not yet functioning as sentences. But that is not the position evolution is in. Biology has no already compiled “dictionary” to know what incomplete sentence parts might be useful to a fully functional future sentence.

    If you want to seriously attempt to perform some analogous thought experiment or computer simulation, you have to get rid of the idea that the non-functional parts (e.g. words in English) are predefined, knowable, recognizable — and therefore preservable.

    The real function of such exercises has been to bolster hope that would normally be dashed against hard reality. Behe’s The Edge of Evolution takes a serious step toward uncovering how hard that reality really is.

  36. Clive [20], I’m not sure how seriously you’re posing this question. But all languages, including English, do evolve, one step at a time. All languages have developed by means strikingly akin to biological evolution: They mutate and acquire new function beyond the control of any individual or group. While sentences are designed, languages (with the exception of oddities like Esperanto) are wholly natural.

    Here is the first poem in English:

    Nu we sculon herigean / heofonrices weard,
    meotodes meahte / ond his modgeþanc,
    weorc wuldorfæder, / swa he wundra gehwæs,
    ece drihten, / or onstealde.
    He ærest sceop / eorðan bearnum
    heofon to hrofe, / halig scyppend;
    þa middangeard / moncynnes weard,
    ece drihten, / æfter teode
    firum foldan, / frea ælmihtig.

    (The slashes represent half-line marks in Old English poetry.)

    Stuff like this evolved into modern English. How? Reproduction, variation, selection. Rinse, repeat.

    When? Over time, one step at a time. Middle English — after the Norman Conquest — is a “transitional form” between Old English and modern English. The “body plan” of modern English stabilized with the introduction of print and the normalization of spelling that was codified — but not caused — by the introduction of dictionaries.

  37. p.s. To complete the analogy, imagine someone trying to explain the existence of proteins unique to the bacterial flagellum by saying that those proteins are part of the landscape of bacterial flagellum proteins. Once mutation finds one of them, it can preserve it until the rest come along in similar fashion. Eventually a complete functioning flagellum can be constructed (after some trial and error combining the parts).

    This would be obvious nonsense. No one would seriously propose this. Yet when we substitute words and sentences, the audience can be taken in by the illusion.

    Such illusions work in part because, as users of English (or other languages), we take the language for granted. In its entirety, we are prepared to treat it as real and as natural as the physical landscape around us. We forget that without language translators (e.g. us) meaningful symbolic language could not exist.

  38. I don’t see the analogy between biological complexity and sentence complexity. The only similarity is that they’re complex. But it alludes to an implication that biology is approximating to something already in existence, such as a meaningful statement, already known. No amount of biological changes, taken together, approximates to any guide or lexicon of meaning, equivalent to language. If a wing or a feather is produced, that is not similar to using parentheses or a semicolon, or any other literary device–for a collection of biological changes cannot be “read” and understood.

    I think there are all sorts of problems with the analogy, and teleology is inherent in it. Sheer complexity doesn’t equate to meaning, so there is no equivalence between increases in biological systems and meaning known prior to reading a sentence.

  39. The target search string is contained in the code. Therefore, no search is required.

    That’s the end of that. Why would a computer programmer search for something that he included in his code?

    I’ll tell you the answer: So he can dupe people into thinking that dumb, random searches can produce interesting, innovative results. Which they can’t.

  40. Gil, if it’s so easily dismissed, why are people blathering on about partitioned search and latching and the like?

  41. ericB

    Darwinian evolution does not evaluate by anticipating what will be useful someday. Selection is ruthlessly oriented to the present competition.

    So is the Zachriel scheme.

    If you randomly find a two-letter word and from that randomly find a three-letter word and from that randomly find a four-letter word and eventually find a 10-letter word (and accomplish all this 200,000 times faster than could be accomplished via random generation of intact 10-letter combinations), it’s because there’s regularity in the fitness landscape. When you preserved that two-letter word it wasn’t with a mind to finding the 10-letter word you eventually hit upon by chance. But randomly selecting that two-letter word (and then the three-letter word and so on) did focus you in on a 10-letter word, though it wasn’t by your own design.

    I.D. cannot require that the fitness landscape for evolution be random. If it is random, that has to be demonstrated. However, the fact that human languages are structured to facilitate this type of random search (and also considering that human languages themselves evolved via a haphazard process) seems strongly suggestive as to the potential nature of biology.

    Clive Hayden:

    I don’t see the analogy between biological complexity and sentence complexity. The only similarity is that they’re complex. But it alludes to an implication that biology is approximating to something already in existence, such as a meaningful statement, already known.

    The statement “I see” has meaning on its own. It doesn’t have meaning only on the basis of some other sentence it could potentially be a part of but that does not yet exist. But combining it with other words gives the phrase increasing specificity (i.e. complexity) and increases its fitness for a specific function: “I see dead people”.

    biological changes cannot be “read” and understood.

    It seems they are read and understood by reality itself.

    Consider an eye, but not the means by which it may or may not have emerged. Are you saying that reality cannot be allowed to determine how specific attributes of an eye’s physical configuration dictate how the eye functions, and how this function bestows on the eye its highly selective advantage?

    The issue isn’t whether the Zachriel program is a perfect model of nature. Clearly, however, it’s a vast improvement on the Weasel program in considering the issue of intermediate functionality, among many other things. Why would the Dawkins program be continually attacked for a deficiency its author acknowledged from the beginning, while at the same time the Zachriel program is simply ignored?

  42. Gil:

    Why would a computer programmer search for something that he included in his code?

    To illustrate cumulative selection. That’s a no-brainer for anyone who has read the relevant section of TBW.

    I’ll tell you the answer: So he can dupe people into thinking that dumb, random searches can produce interesting, innovative results. Which they can’t.

    Apparently it’s okay on this board to divine nefarious motives without offering a speck of evidence.

    And I don’t know what you mean by “random searches”, but genetic algorithms most certainly produce interesting and innovative results. Dawkins’s algorithm, of course, produces no useful results, as the target is defined in the domain of the objective function. Useful genetic algorithms, on the other hand, find elements in the domain that meet desired criteria in the codomain.
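    To make that distinction concrete, here is a minimal sketch of a selection-based search whose criterion lives entirely in the codomain (the fitness function, parameter values, and helper names are all invented for this illustration, not taken from any real GA library). It maximizes x * sin(x) on [0, 10]; no element of the domain is written into the code, yet selection plus mutation homes in on the optimum:

```python
import math
import random

def fitness(x):
    # The criterion is a property of the output (codomain);
    # no target value of x is prespecified anywhere.
    return x * math.sin(x)

def evolve(generations=200, pop_size=50, sigma=0.3, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half of the population,
        # then refill it with mutated (Gaussian-perturbed) copies,
        # clamped to the interval [0, 10].
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = [min(10.0, max(0.0, p + rng.gauss(0.0, sigma)))
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

    A run should land near x = 7.98, the global maximum on the interval, even though that number appears nowhere in the source.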

  43. [41]:

    I can certainly see the potential for teleological arguments concerning the probability of getting a fitness landscape so structured. In fact, any such argument would reduce to the observation that if f(x) = y, then f(x) cannot be more probable than y.

  44. David Kellogg,

    If languages existed independent of intelligent agents, I could agree with you. People are not just a vehicle, like a host, and language a parasite that is along for the ride. Without people using the language, and changing it, there would be no “change” of language. You may as well say that art and medicine have “evolved.” But we know that all of these things are really perpetually created and designed.

  45. H’mm:

    Seems I am late to the party.

    A few observations, but first, a clip (courtesy Wiki) on what Dawkins said he was trying to do, and did:

    [Citation from BW ch 3 by Dawkins, on Weasel:] We again use our computer monkey [NB: to select a specific target functional case from 28 27-state elements, identified to 1 in ~1.2 * 10^40, well below the credible challenge to get FSCI for OOL and body plan level biodiversity, which is what Hoyle -- who Dawkins was trying to rebut -- had raised], but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before … it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL. . . . . What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection [but just such a single step to gain functionality is actually what is necessary for natural selection to compare on differential functionality]: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed.

    1 –> Let us highlight key admission no 1: The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.

    2 –> Admission no 2: This [the time the PC would take to select at random something on the order of 1 in 10^40] is more than a million million million times as long as the universe has so far existed.

    3 –> So, to escape the implications of the long odds against arriving at functionality by chance [and remember, these odds are far, far, far better than those for observed bio-functional information . . . ], Dawkins illustrates a purportedly BLIND watchmaker by an intelligently designed, targeted search that rewards detected proximity to a target through non-functional, nonsense phrases.

    3 –> So, Weasel illustrates not the power of a BLIND watchmaker, but the capacity of an intelligent designer to find a target by a partially random search process. “Improved” versions could doubtless go all the way to GAs that define a functionality parameter and seek to get it to peak.

    4 –> But in neither case would there be any proper dismissal of the key insight: Weasel-style programs instantiate intelligent design in action, not the power of some alleged BLIND watchmaker.

    5 –> Therefore, let this major point be settled in our minds before we turn to a secondary issue: the observed lockstep of Weasel circa 1986, where letters in the output latch, and we see that 200+ out of 300+ letters in the program’s output show an evident pattern of o/p latching with no counter-instances. (Note, it is the absence of counter-instances that led to the inference, far and wide, that the program in question is a letter-by-letter search that rewards “success” on a letter-by-letter basis, which is also a very plausible reading of Mr Dawkins’ words. For, the minimal step forward is obviously one letter’s step forward.)

    [ . . . ]

  46. 6 –> What best accounts for it? Candidate no 1 is partitioned search with explicit latching [my model T2 from the previous thread]. No 2 is an implicit quasi-latching [my model T3]. However, it is to be noted that Apollos showed by code example that explicitly latched programs can show quasi-latching if that is written in.

    7 –> So, only code will be explicitly decisive, if demonstrative proof is required; which obviously will not be forthcoming. But we can proceed on inference to best explanation. On first look, the most credible explanation is that Weasel circa 1986 latched, but was improved by 1987 to not latch.

    8 –> On charity, we will however take Mr Dawkins at his reported word that he did not ever EXPLICITLY latch Weasel. So, the particular balance of parameters used implicitly latched in the 1986 cases, and quasi-latched with fairly frequent flickbacks in the 1987 video. (The suggestion above of parameter tweaking for visual advantage is an interesting suggested context.)

    9 –> On the number of generations to target, we have explicit statements in the 1986 cases that the program took 40+ and 60+. By sight, especially from the rate at which flickbacks reverted in the topline display of what appears to be the current generation [the bottom line that runs too fast to follow by eye is obviously the current mutant], we are running at a generation per ~ 1/10 second or so. So, we probably have a much higher number of generations in 1987, which runs a lot longer than 40 - 100 generations would at one per ~ 1/10 sec. [A generation per 1/10 second is also consistent with the report that the Pascal version ran in 11 seconds, which would run out in ~ 100 generations at that rate.]

    ______________

    CONCLUSION: Weasel does not address the Hoyle challenge on getting the increments of functionality required to be on the shores of Dr Moreau’s Island Improbable, from which hill-climbing algorithms can in principle climb, step by step, up the easy-slope side to peak performance at the crest of Mt Improbable. Instead, it rewards closeness to target of non-functional configs of nonsense phrases. Thus, it begs the question Hoyle raised and acts as a distractive strawman on the issue of FSCI/CSI; that is, it is one of an unfortunately long line of fundamentally misleading icons of evolution. Now, on charity per Dawkins’ reported statement, we can accept that the Weasel program does not explicitly latch, but instead creates output latching as seen in the 1986 printoff samples by parameter balancing etc., i.e. quasi-latching with a significant incidence of flickbacks in at least some cases. [Algor model T3, not T2, from 346 - 7 in the previous thread.]

    All said and done, that leaves the bottom-line issue clear: Weasel does not answer the FSCI challenge, as it rewards non-function on proximity to target. And that we can both see and understand from Mr Dawkins’ direct published statements.

    Weasel — sadly — in the end only manages to be a distractive, only rhetorically persuasive irrelevancy. On substance, we still have only one credible explanation for CSI/FSCI: design.

    GEM of TKI

  47. There isn’t any such thing as “cumulative selection” in nature.

    Cumulative selection requires a target.

    In nature the “target” does not (yet) exist.

    And as I said earlier the “locking” of the letters is a product of probability.

    That is, given a small enough mutation rate and a large enough sampling size (number of “offspring” per generation), letters will appear to be locked in place when viewing only the chosen one, that which is closest to the target.

    That is just the way it is.

    Of course that would change if Dawkins’ original program had the parent copied unaltered into each generation, alongside the others that can be altered.
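    The claim is easy to check empirically. Below is a hedged sketch of a Weasel with no letter-protection logic whatever (this is not Dawkins’ unpublished code; the alphabet, the 4% mutation rate, and the brood of 100 are guesses for illustration). With these parameters the best-of-generation score essentially never falls, because some offspring is almost always an unchanged copy of its parent, so correct letters look locked:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Proximity to target: count of matching characters
    # (no functionality test of any kind).
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate, rng):
    # Copy with a per-letter chance of random error. No letter,
    # correct or not, is ever explicitly protected.
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

def weasel(rate=0.04, pop=100, seed=1, max_gen=2000):
    rng = random.Random(seed)
    best = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    history = [best]
    while best != TARGET and len(history) <= max_gen:
        # Best-of-generation selection on proximity alone.
        best = max((mutate(best, rate, rng) for _ in range(pop)), key=score)
        history.append(best)
    return history

history = weasel()
scores = [score(s) for s in history]
print(len(history) - 1, "generations;",
      "monotone" if all(b >= a for a, b in zip(scores, scores[1:]))
      else "reverted")
```

    On typical runs this converges in well under 100 generations with the best-of-generation score never decreasing; shrink the brood or raise the rate and flickbacks begin to appear, which is the quasi-latching discussed above.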

  48. Challenge to evolutionists: Can you salvage Dawkins’ WEASEL words (or similar attempts) from the War Games fallacy?

    At a dramatic point in the movie War Games, a computer endeavors to crack a password code. After working some time at blinding speed, it manages to solve one character of the password. Some time later, it has solved another character, and so on.

    It does not take much reflection to realize this is nonsense. If the password is unknown, it cannot possibly determine it one character at a time, preserving and accumulating partial successes until it reaches a working password. A password works as a whole, or it doesn’t work at all. It is all or nothing.

    How are these attempts by Dawkins or others (e.g. Zachriel) at letter and word manipulation not equally nonsensical as illustrations for biological evolution?

    For instance, in the second example series of the opening post, the sequence “LIKE” appears very early and is preserved, even before it is functioning as a word, let alone a word in a functional sentence.

    The same type of question would apply to “the Zachriel algorithm (which is actually a more refined and thus more relevant version of the weasel program)”, according to JT’s description.

    What is the biologically equivalent justification for preferentially preserving partial progress toward function even before function is reached (e.g. before we reach a functional sentence)?

  49.

    I have noticed that in the WEASEL runs printed above, there is no instance of the letter U. Nowhere, not in almost 450 characters and spaces.

    From that observation, I propose that the WEASEL program will never change another letter to a U. The program must be designed to avoid Us.

    Silly, you say? A pointless distraction? Dawkins mentioned nothing about it?

    I. Don’t. Care.

    I don’t see a U in these partial prints of runs. Therefore, the program can allow no Us at all.

    But, you say, there are Us in the 1987 video.

    Ahh. 1987. That’s a different matter.

    If a U shows up in 1987, it must be a different program.

  50. David,

    No one cares what an English professor has to say about computer programming.

    No one.

  51.

    Joseph, how is the allegation of locking any more evidence-based than my allegation about the letter U?

  52. David,

    No one cares what an English professor has to say about a computer program unless that program was written by that English professor.

    And if all you can do is act like a little child about it, then why bother?

  53.

    Don’t like the message? Dismiss the messenger.

    Your qualifications are precisely what, Joseph? Please be specific.

  54. The message was derived by the messenger. And it was derived just to prove the messenger can act like a little baby.

    My qualifications:

    30 years of working with computers, writing and editing codes.

    Also, as evidenced by responses to this question, I understand perfectly what is going on with this “weasel” program.

  55. Mr Kellogg:

    before I leave for the day.

    1 –> you will see above that there is some fresh evidence, albeit of course hearsay, that Mr Dawkins states that he has not used explicit locking. Fine, this is not a courtroom: on charity, I have taken that.

    2 –> the import of that is that as Joseph says, the program implicitly latches in the case in view.

    3 –> That no “U” happens to appear is irrelevant to the observed fact that with 200+ out of 300+ letters in play, no correct letter reverts, regardless of want of functionality. (Indeed, the absence of a U is easily accounted for by the rapidity of convergence, by which 2/3 of the available slots are effectively out of play. Wrong letters are being rather rapidly squeezed out by the force of the proximity rule, probably multiplied by a modest per-letter “quasi-functional” mutation rate — 5% [which is utterly huge for realistic mutations in bio-forms!] means 1 – 2 letters typically vary per population member. I use “quasi-functional” as in effect any mutation is allowed into the population to compete; the nearest to the target — regardless of want of overall linguistic function — is promoted, which means that the trend is to converge to the target. The circumstances would bias outcomes sharply towards those that have 1 – 2 functional additions and do not derange the existing ones. These circumstances are utterly unrealistic, of course. Thus, implicit locking [and with suitable shift of parameters, quasi-locking], as I discussed at T3 in my previously linked 346 – 7 in the earlier thread.)

    4 –> Thus, the targeted, proximity-based search implicated by the behaviour and by Mr Dawkins’ direct statements as cited is irrelevant to the capacity of a BLIND watchmaker such as RV + NS. Hoyle’s FSCI challenge still stands unmet.

    5 –> Similarly, as a third level matter, in the 1987 video, the program’s observed behaviour, with fairly frequent flickbacks, is significantly different from the case of the 1986 printoffs, which we have good reason to see as representative of the overall pattern of the program.

    6 –> That suggests, at minimum, that probabilities and populations have been shifted, which is a material difference to the program. Such an adjustment does not obviate the force of the point at 4 just above.

    GEM of TKI
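    The per-letter arithmetic in no 3 above is easy to verify (assuming, as stated there, a 28-letter phrase and a 5% per-letter mutation rate; Dawkins’ actual rate is not on record):

```python
from math import comb

L = 28    # letters in the target phrase
u = 0.05  # assumed per-letter "mutation" rate

def p_changes(k):
    # Binomial probability that exactly k of the 28 letters
    # mutate in one copy.
    return comb(L, k) * u**k * (1 - u)**(L - k)

for k in range(4):
    print(f"P({k} letters change) = {p_changes(k):.3f}")
```

    This gives roughly 24% for no change, 35% for one letter, and 25% for two, i.e. about 84% of copies change two letters or fewer, which is why, with best-of-brood selection on proximity, correct letters are so rarely lost.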

  56.

    Joseph, you certainly understand better than kairosfocus! :-)

    Kairosfocus, I find your writing opaque as always, but I think you may be notpologizing for your silly accusation that the program fixed letters in the first place — which people have been telling you for weeks!

    If that is out of the way, it might be possible to correct your other misunderstandings, but the return on investment is very low, so I’m not going to make that effort.

  57. EricB [48]:

    The same type of question would apply to “the Zachriel algorithm …

    What is the biologically equivalent justification for preferentially preserving partial progress toward function even before function is reached (e.g. before we reach a functional sentence)?

    The Weasel algorithm has no definition of functionality other than one particular sentence.

    The Zachriel scheme is completely different. Here is what it defines as functional:

    1) Any English word.

    2) Any group of English words that can legally appear contiguously.

    (And the longer a legal word or legal group of words is, the more adaptive it is.)

    At no stage of the search is anything preserved that does not meet the above criteria of functionality.
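    As a sketch, the two criteria amount to a fitness test like the following (a toy stand-in: the word list and the set of legal adjacencies here are invented for illustration and are tiny compared with whatever Zachriel’s actual programs use):

```python
# Toy stand-ins for a real dictionary and a real adjacency table.
WORDS = {"I", "SEE", "DEAD", "PEOPLE", "THE", "CAT", "SAT"}
LEGAL_PAIRS = {("I", "SEE"), ("SEE", "DEAD"), ("DEAD", "PEOPLE"),
               ("THE", "CAT"), ("CAT", "SAT")}

def is_functional(phrase):
    # Functional iff every word is a real word and every adjacent
    # pair may legally appear contiguously.
    words = phrase.split()
    if not words or any(w not in WORDS for w in words):
        return False
    return all(pair in LEGAL_PAIRS for pair in zip(words, words[1:]))

def fitness(phrase):
    # Longer legal phrases are more adaptive; anything failing the
    # functionality test scores nothing and is never preserved.
    return len(phrase.split()) if is_functional(phrase) else 0
```

    So “I SEE” scores 2, “I SEE DEAD PEOPLE” scores 4, and “SEE I” (word salad) scores 0 and is discarded — the contrast with Weasel’s single hard-coded sentence.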

    ———————–

    A legal sentence is an example of a legal group of words. Any contiguous subset of words in a legal sentence is also a legal group of words.

    If you accept a sentence as being functional (which evidently you do, judging from your above statements), then you have no rational basis for denying that a smaller valid set of contiguous words is also functional (albeit to a lesser degree).

    If you accept a word as functional, you have no rational basis for denying that a set of words that can legally appear contiguously together in English is also functional (and functional to a greater degree than a single word).

    As far as a “biologically equivalent justification” specifically – I personally would say that the parallel to English is valid to the extent that there are “grammatical” rules governing how biological components can be configured together, and to the extent there is a hierarchical and modular organization of biological components, being built up from reusable subcomponents and the like, and to the extent that there is semantic meaning arising from the arrangement of biological components which determine what larger biological components can be associated together.

    And to reiterate, it would be reality itself that determines if a biological organism is functional. It is reality itself that parses the chemical and physical syntax and semantics of some biological configuration, and determines whether that configuration confers some selective advantage.

    I don’t really feel inclined to debate this a lot longer, though.

    Here are the websites:

    http://www.zachriel.com/mutagenation
    http://www.zachriel.com/phrasenation

    —————————

    Someone made statements previously to the effect that the reason English is amenable to this type of incremental search (i.e. an efficient incremental cumulative search that hits intermediary islands of functionality) is that English is intelligently designed. Well then, if you say biology is intelligently designed, you are forced to admit that it too is amenable to the same type of search.

  58. David Kellogg,

    If you want to get technical the program does lock in letters because of the probability thingy and the target.

    Those two factors, working together, will most certainly produce “survivors” that have at least the match that the parent had, and you will never see any reverses; that is, given a small enough mutation rate and a large enough sampling size.

    Take away the target and see what happens.

    Monkeys pounding away on a keyboard do not have a target- not that they know of anyway.

  59. JT:

    Someone made statements previously to the effect that the reason that English is amenable to this type of incremental search (i.e. an efficient imcremental cumulative search that hits intermediary islands of functionality ) is because English is intelligently designed. Well then if you say biology is intelligently designed, then you are forced to to admit that it also is amenable to the same type of search.

    Wrong. Not all that is intelligently designed has to be amenable in the same way.

    However with ID biology does have a target and the resources to help reach that target.

  60. JT, explain to me the justification for calling the lone word “of” functional, to use one simple illustration. What is its function, apart from other words in a meaningful sentence?

    Obviously, if one claims that it is functional by mere definition, one can define anything at all to be functional.

    One might also “define” a lone protein as functional, despite the fact that its function will be future in the context of other proteins.

    To be selectable, it needs current function, not future function. The use of arbitrary definitions does not salvage an inappropriate and non-functioning analogy to biology.

  61. ericB:

    Can you salvage Dawkins’ WEASEL words (or similar attempts) from the War Games fallacy?

    I believe I addressed this in #32 above. AFAICR, the Weasel assumes the existence of a gradual path from source to target; it’s not meant to prove the existence of one.

  62. Put another way:

    Given

    1- A Target

    2- A small enough mutation rate

    3- A large enough sample size

    The output will never be less than the input if the output is “closest to the target”.

    IOW there doesn’t have to be coded statement that locks the matching letters.

    The locking is a byproduct of the program.
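    A back-of-envelope calculation shows why (the 28-letter length matches the Weasel phrase; the 5% rate and brood of 100 are illustrative guesses, since Dawkins’ parameters were never published). With a small per-letter rate and a large brood, at least one offspring is almost always an unchanged copy of the parent, and the offspring “closest to the target” can therefore never score below its parent:

```python
L = 28     # letters in the target phrase
u = 0.05   # assumed per-letter mutation rate (small)
N = 100    # assumed offspring per generation (large)

# Probability a single offspring is a perfect copy of the parent.
p_unchanged = (1 - u) ** L

# Probability at least one of the N offspring is a perfect copy,
# so the selected "best" cannot fall below the parent's score.
p_some_copy = 1 - (1 - p_unchanged) ** N

print(f"P(one child unchanged)  = {p_unchanged:.3f}")
print(f"P(>= 1 unchanged child) = {p_some_copy:.12f}")
```

    With these numbers a regression of the best-of-brood happens at most about once in 10^12 generations, which is why the printed output looks locked without any locking code.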

  63. ericB [60]:

    Obviously, if one claims that it is functional by mere definition, one can define anything at all to be functional.

    The use of arbitrary definitions does not salvage an inappropriate and non-functioning analogy to biology.

    As best I can tell, any analogy would be inappropriate to you.

    It’s obvious you don’t have any definition in mind yourself as to what would constitute functionality within the context of an English sentence.

    A computer program is a series of words too. Those words are, generally speaking, actual functions. If the target were a C statement as opposed to a sentence, what would be the functions in that? It hardly seems “arbitrary” to identify words and groups of words in a sentence as carrying the functional information of that sentence.

    If you have no concept in mind yourself as to what constitutes functionality in a sentence, then any objection to the weasel program for preserving non-functional states is meaningless, since to you any intermediate state is non-functional.

    JT, explain to me the justification for calling the lone word “of” functional, to use one simple illustration. What is its function, apart from other words in a meaningful sentence?

    What about the word “apple”? Any problem with that term being defined to carry functional information? If you personally want to derive an exacting set of exceptions, for example ruling out single connector words like “of” as legitimate targets by themselves, then fine. Why don’t you just imagine the model with all necessary revisions that would render it sensible in your mind? Or is your potential list of objections actually endless?

  64. Oops- the locking is a consequence of the programming.

    IOW it is an inevitable outcome- relatively speaking input to output- given the proper parameters.

    There isn’t any need for a specific coding sequence to lock in any letters.

  65. JT et al:

    If we’re going to do simulations, let’s at least make them conform to the theory.

    Consider the following string of letters (which happens to be a scrambled English clause—underscores represent spaces to give you a head start if you like puzzles). This string represents some kind of biological proto-structure.

    wsoldin_je_d_swodx

    Now, consider the following “mutations” to the original, A and B.

    A wsocdin_je_d_swodx
    B wsoldin_je_d_srodx

    You, representing natural selection, do not know the target—in fact, you don’t even know what a target is, as you have no goals. Additionally, you cannot predict, cannot reason, and certainly cannot write computer code; you don’t have any lifelines and you can’t buy a vowel. All you can do is evaluate the “fitness” of each variation in the here and now. Your options are: select A, B, both, or neither.

    Which do you choose, and why?

  66. ericB [60]:

    ericB:

    I think I see what you’re getting at now.

    Consider the phrase “Tomorrow, I think I will wash the car.” The sentence fragment “Tomorrow, I think I will” is certainly generally functional, and millions upon millions of sentences can start that way. In fact, while writing “Tomorrow I think I will…” I could have a change of heart and instead of writing “Tomorrow I think I will wash the car” I could write, “Tomorrow I think I will go to the beach”. So, we can imagine an eye being developed incrementally and some component that was being developed could certainly be associated with innumerable other potential organs – not just an eye.

    But your point would be that “Tomorrow I think I will” could not exist on its own – would not be a valid functional target on its own – there would have to be something with it; it could not be a target in isolation. But in what sense is the sentence “Methinks it is a weasel” functional in isolation? Could you write a post that contained only the phrase “Methinks it is a weasel”? Everyone would certainly consider that an unacceptable and incomplete post. But you implied such a sentence would be a complete function.

    And furthermore, even though “Tomorrow, I think I will” ostensibly cannot exist in isolation, the fact is it does. Until I finish the sentence, the phrase exists in isolation for a discrete interval of time. It’s not as if the phrase “Tomorrow, I think I will wash the car.” materializes instantaneously. Nor am I locked into one target merely by beginning a sentence with “Tomorrow, I think I will”.

    As far as individual words, they do exist in isolation as well, in my mind (and I would say my brain – not some platonic dimension).

    Anyway, I have more intuition as to what you were implying now, and have a sense of how I want to answer it, but just don’t have any more time at the moment.

  67. JT:

    So, we can imagine an eye being developed incrementally and some component that was being developed could certainly be associated with innumerable other potential organs – not just an eye.

    Only if it was designed to do so.

    That is an eye being developed incrementally.

  68. JT, when you next have even a spare minute, SteveB has posed a fine question at 65. I would also be interested in your response.

    As a speaker of English, you are falling into the trap of unjustifiably assuming too much, based on your existing fluency in English as a whole. Without realizing the unfair edge this gives you, you are backtracking from the meaningfulness of the whole to a subset. For example, you wrote (with my emphasis added):

    63: It hardly seems “arbitrary” to identify words and groups of words in a sentence as carrying the functional information of that sentence.

    and

    66: Consider the phrase “Tomorrow, I think I will wash the car.” The sentence fragment “Tomorrow, I think I will” is certainly generally functional, and millions upon millions of sentences can start that way.

    and

    66: So, we can imagine an eye being developed incrementally and some component that was being developed could certainly be associated with innumerable other potential organs – not just an eye.

    and

    66: As far as individual words, they do exist in isolation as well, in my mind (and I would say my brain – not some platonic dimension).

    But of course, mindless Darwinian evolution cannot use a mind and it cannot select based on potential organs, just as it would be illegitimate to defend the preservation of a word or phrase in isolation based on the meaning it would have in a sentence (that doesn’t yet exist).

    You cannot legitimately have selection take out a loan against future meaning or future functionality. Yet, without realizing it or thinking about it, that is exactly what you as an English speaker with a mind are doing when you consider fragments to have meaning/function based on potential completed sentences.

    But you implied such a sentence would be a complete function.

    I am allowing, uncritically and for the sake of argument, that a complete and correct English sentence is considered to have function, so that we can examine whether Darwinian sentence building programs show us anything worthwhile — even if we grant that assumption. I am supposing, hypothetically, that a sentence might be considered to be something like a working bacterial flagellum. The question then is clearly this: How does one justify preserving all of the needed words (proteins) while we have not yet reached a working sentence (flagellum)?

    Notice that we cannot assume we have a working flagellum (sentence) and then backtrack to the idea that all the parts have function. That is illegitimate. Going in a forward direction, one must contend with finding a basis for selection when the isolated sentence parts have never yet made any functioning sentence at all.

    If such programs were made realistically comparable to real world requirements for undirected change, they would become just as feeble as Darwinian mechanisms are in practice (cf. Behe’s The Edge of Evolution).

  69. Mr Kellogg:

    Re :your silly accusation that the program fixed letters in the first place

    Mr Kellogg, all that this cited statement reveals is that you are still unwilling to look at the evidence that is staring you in the face, not least at the head of this thread. (I will ignore the ad hominem tone.)

    1 –> Please, first, LOOK. And, COUNT.

    (You don’t have to take my counts, do your own. Having counted, could you let us see your results: (i) how many letters appear after the first line in each run as published? (ii) of these how many are “correct”? and (iii) how many of the correct letters revert to incorrect status? And, (iv) are you able to comprehend the just stated point and questions — if not, why not?)

    2 –> I believe, you will see that in the 1986 program outputs, there are some 300 places in which letters might change.

    3 –> As pointed out, of these 200+ go correct, and once they are correct, NONE reverts in the sample space. A sample space of 300 instances in which we see 200+ of a phenomenon, and none of the phenomenon that would point away, is not “no evidence” or a “silly accusation” or the like. It is in fact well within the law of large numbers, and we have no reason to believe that sampling every tenth line will correlate with the program’s behaviour such that the results will be strongly atypical. [Do you understand what the adverted-to “law of large numbers” speaks about? Do you understand what “typical” in a sampling context is about? Do you understand that scientists will routinely publish their “best” results?]

    4 –> Thus, as I noted from December on, we can see that the output plainly has “latched” letters, and indeed it does so in a context that, Mr Dawkins admits, rewards non-functional increments towards the target. [NB: As I have pointed out since Dec last, the fact of marching towards a target without reference to functionality is enough to show that Weasel is not an example of a BLIND watchmaker -- the thesis it allegedly supports -- but of in fact intelligent, foresighted, goal-oriented DESIGN.]

    5 –> The only issue of consequence is what mechanism gives rise to the observed latching of the output. On that question being raised as an issue in the previous thread, at 346 – 7 I put forth three alternative candidate reverse-engineered algorithms:

    T1 — monkeys typing at random with hit/miss;

    T2 — explicitly letter-latched search with warmer/colder;

    T3 — implicitly letter-latched search with warmer/colder.

    6 –> Of these, T1 is, by Mr Dawkins’ own words, exactly what he is trying to provide an alternative to; this because, as he states, it will not find the target on a PC within the available search resources, most notoriously. [But, observe: the increment in informational functionality to make the Weasel sentence is far smaller than that for the origin of life or of body plan level biodiversity, which are plainly beyond the search capacity of the observed cosmos. This is of course being studiously evaded and ignored.]

    7 –> As it turns out, BOTH T2 and T3 can supply the observed results, the difference being simply whether the latching is explicit or implicit. Indeed, it is also the case that for the different 1987 behaviour, BOTH T2 and T3 can again fulfill the observed behaviour, as Apollos showed by writing code that explicitly latches but occasionally reverts.

    8 –> Thus, strictly, only inspection of credible code would be demonstrative. But, on the recent report above that Mr Dawkins has stated that he did not explicitly latch, and on the principle of charity (accepting testimony unless there is reason to specifically reject it), I am willing to accept T3 as the current best explanation. Never mind that I suspect such an algorithm will probably struggle to produce the sort of “typical” outputs published in the 1986 case, if the 1987 program output is also typical of full runs of the 1986 program.

    9 –> To illustrate how implicit latching could appear: with 28 letters and a 5% per-letter “mutation” rate [the parameters are unrealistically favourable relative to the bio-world, of course . . . ], typically 0, 1 or 2 letters will change from one generation to the next; e.g., there is about a 24% chance that no letters will change for any given member in a given generation. So for smallish to modest populations, the program will normally preserve currently correct letters or will advance a step or two. (With a bigger population and that sort of per-letter mutation rate, with the closest to the target selected for, the program would probably find a case with two or even more further advancing letters in each generation, and would jump to the solution rather quickly, maybe even faster than a deterministic, explicitly letter-latched search that is guaranteed to hit the target in 27 generations. [The 1987 version does not seem to show that sort of rapid convergence; just the opposite.])

    10 –> By shifting parameters a bit to slow down the approach to the target, the program will be more likely to show advances with occasional flickbacks, e.g. if double-letter changes pull one letter back while pushing another forward. But the problem here is that the 1987 flickbacks then typically rather rapidly flick forward again to the correct state. (Hence the flickering, letter-winking effect.)

    11 –> But, strictly on the evidence of the 1986 output, the simplest — most natural — explanation would be explicit latching, as we can reasonably be assured that Mr Dawkins’ published samples are TYPICAL of “good” runs of the program at that time. In short, latching without reversion was the typical “good” output behaviour of Weasel at that time. (All the emphasis on how Weasel does not latch is based on subsequent developments. Indeed, the 1987 video of outputs that fairly frequently flick back on correct letters was, as already noted, on the order of months to a year after the 1986 published data. There are no good grounds to equate the 1986 and the 1987 outputs.)

    12 –> Algorithms implemented for subsequent versions of Weasel are irrelevant to the behaviour of the program as at 1986, as published.
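    The 24% figure in point 9 is just the binomial probability of zero mutations. A quick check of the arithmetic — a sketch assuming only what the point states, a 28-letter string and an independent 5% per-letter mutation chance:

```python
from math import comb

def p_k_changes(k, length=28, rate=0.05):
    """Binomial probability that exactly k of the letters mutate in one copy."""
    return comb(length, k) * rate ** k * (1 - rate) ** (length - k)

# About 24% of copies change no letters at all, and 0, 1 or 2 changes
# together cover roughly 84% of all copies.
for k in range(3):
    print(k, round(p_k_changes(k), 3))
```

    This supports the claim that, for modest populations, most members either preserve the parent exactly or differ in only a letter or two.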

    _____________

    Onlookers, we continue to see the damaging impact of selective hyperskepticism.

    GEM of TKI

  70. Joseph, re 62:

    To get implicit latching without reversions, the population as well as mutation rate actually need to be mutually tuned, so that you get steady advances without letter substitutions [one flicks back while another moves ahead].

    In short, for a given mutation rate, there is an “optimal range” of population size in which the number of mutations per member of a generation is sufficient that advance will happen, but not so large that it will converge over-rapidly relative to the sort of run lengths we see as probably good runs: 40+ and 60+.

    Also, the population has to be small enough and the rate for mutation small enough that we are unlikely to get the sort of double-mutations that would revert one while advancing another. (Observe, in 200 possible places for that, we never see it happen in the 1986 outputs.)

    Explicit latching is probably significantly easier to achieve than such tuning that achieves implicit latching or quasi-latching.

    But, if Mr Dawkins said that he did not latch explicitly, then I conclude that he most likely did some tweaking to get what he then thought was “good” results.

    And, that boils down to implicit letter latching or quasi-latching.

    GEM of TKI

    PS: One apparent feature of the 1987 output is the “winking” of the reverting correct letters in what I take to be the “generational champions” string. That is, they come back on target fairly quickly once they wander off. On average, ADDITIONAL letters go correct only once every 2 – 3 or even 4 generations on runs of 40+ and 60+, which means that often — 1/2 or more of the time — it is the 0-mutations case that wins the contest. At a 5% rate, about 24% of the population will be in that state. (That would be suggestive of such flicked-back letters being specially treated.)
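    Whether tuning of this kind produces latching-like output can be probed empirically. Below is a sketch of a Weasel-style run with no latching code at all; the parameter values (population 100, 5% per-letter rate) are illustrative assumptions, not Dawkins’ actual settings, since his code was never published. It counts how often an already-correct letter in the generational champion reverts:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def run(pop=100, rate=0.05, seed=1, limit=5000):
    """One Weasel-style run with NO explicit latching.

    Returns (generations to reach TARGET, or None if the limit is hit,
    and the number of times a correct champion letter reverted)."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    reversions = 0
    for gen in range(1, limit + 1):
        correct = [a == b for a, b in zip(parent, TARGET)]
        children = ["".join(rng.choice(ALPHABET) if rng.random() < rate else c
                            for c in parent) for _ in range(pop)]
        parent = max(children, key=score)  # best child wins; no letter is locked
        reversions += sum(w and c != t
                          for w, c, t in zip(correct, parent, TARGET))
        if parent == TARGET:
            return gen, reversions
    return None, reversions
```

    With these assumed settings the champion line rarely loses a correct letter, because roughly a quarter of each generation are exact copies of the parent; the printout can therefore look latched even though nothing is locked.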

  71. If it represents a “completed” evolution of an organism, wouldn’t each successive generation have to have some natural advantage over the previous one?

    You would think so, yes.

  72. David Kellogg –But all languages, including English, do evolve, one step at a time.

    That is a very interesting point, and you are correct. But seriously, how could language evolve without intelligence i.e. understanding and agreement that particular sounds depict particular objects?

    And of course all alphabets are created.

  73. Something else to ponder: why does Dawkins’ program stop with METHINKS IT IS LIKE A WEASEL?

  74. Because that was the goal of the program.

    Goal reached program stops.

  75. Here is a pair of house building illustrations of the War Games fallacy that evolutionary sentence building programs repeatedly fall into.

    Suppose one writes a program to simulate the construction of houses or other buildings, perhaps out of discovered/mutated building blocks, rather than the construction of English sentences.

    War Games Fallacy, Version #1

    In this version, the program is designed to eventually produce a specific target house. The program cheats by comparing the work in progress against the complete target house. So if it discovers that a roof piece matches at such and such distance from the ground, it can preferentially preserve it there, floating in midair, even if the supporting structure is not yet in place.

    The Dawkins WEASEL program commits this version of the fallacy.

    War Games Fallacy, Version #2

    In this version, there is no specific target house. Rather, the program has access to a generalized understanding of complete houses. (This is comparable to a program operating not according to a specific English sentence, but rather according to a dictionary of English words or phrases and an understanding of what is required to form a complete valid English sentence.)

    Nevertheless, the program is still cheating by taking advantage of this knowledge of the complete future picture. If it discovers it is appropriate in general that a roof piece should exist at such and such distance from the ground, it can still preferentially preserve it there, floating in midair, even if the supporting structure is not yet in place.

    If a program, such as Zachriel (cf. JT’s comments above) or others like it, preferentially preserves words such as “of” because they are in the dictionary and will eventually become useful in meaningful sentences (even though they are currently useless for meaningful sentence construction), then that program is committing the second version of the same fallacy.

    Notice that the distinction between pursuing a specific predefined target or a generalized predefined target makes no difference to the error. The problem is that selection is allowed to take advantage of knowing future utility.

    It is exactly like a movie in which a computer can supposedly crack a password by cumulatively selecting one character at a time — something that would be impossible to do without some kind of access to the final answer.
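    The movie trick can be made concrete with a minimal sketch (the secret string, alphabet and function name here are made-up illustration values): a cracker that gets per-character feedback does linear work, while one that only learns whether the whole guess matches faces the full combinatorial space.

```python
import string

CHARS = string.ascii_uppercase + string.digits
SECRET = "WEASEL42"  # hypothetical password, for illustration only

def crack_with_per_char_oracle(secret):
    """Cumulative selection: each position is confirmed independently,
    so the worst-case work is len(secret) * len(CHARS) guesses."""
    guesses, found = 0, []
    for target_char in secret:
        for c in CHARS:
            guesses += 1
            if c == target_char:  # the illegitimate per-character peek
                found.append(c)
                break
    return "".join(found), guesses

cracked, n = crack_with_per_char_oracle(SECRET)
# n is at most 8 * 36 = 288 guesses, versus 36**8 (about 2.8e12)
# candidate strings for a blind whole-password search.
```

    The per-character oracle is exactly the “access to the final answer” the comment describes; remove it, and cumulative cracking is impossible.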

  76. Because that was the goal of the program.

    IOW, it was the design of the program to show how things can come about without design.

    IOW, evo–taken to the anti-ID level — is irrefutably irrational.

  77. It is exactly like a movie in which a computer can supposedly crack a password by cumulatively selecting one character at a time — something that would be impossible to do without some kind of access to the final answer.

    There is no reward for guessing 99 percent of a safe’s combination.

  78. tribune7:

    IOW, it was the design of the program to show how things can come about without design.

    The purpose of the program is simply to show that iterative randomness + selection can come to a target faster than randomness alone.

  79. Zachriel’s program has been discussed to a limited extent before:

    http://www.uncommondescent.com.....ment-90063

    http://www.uncommondescent.com.....gen-daily/

    http://www.uncommondescent.com.....ent-145598

    And since we’re discussing English-based examples I’ll copy over my previous thoughts on this subject:

    The Explanatory Filter can take multiple types of inputs (which also makes it susceptible to GIGO and thus falsification). Two are (a) the encoded digital object and (b) hypothetical indirect pathways that lead to said objects. My name “Patrick” is 56 informational bits as an object [each letter is represented by 8 bits]. My name can be generated via an indirect pathway in a GA. An indirect pathway in a word-generating GA is likely composed of steps ranging from 8 to 24 informational bits.

    Let’s say you take this same GA and have it tackle a word like “Pseudopseudohypoparathyroidism” which is 30 letters or 240 informational bits. It can be broken down into functional components like “pseudo” (48 informational bits) and “hypo” (32 informational bits). Start with “thyroid” (56 informational bits). For this example I’m not going to check if these are actual words, but add “ism”, then “para”, and then “hypo”. “hypoparathyroidism” is a functional intermediate in the pathway. The next step is “pseudohypoparathyroidism”, which adds 48 informational bits. Then one more duplication of “pseudo” for the target.

    That may be doable for this GA but what about “Pneumonoultramicroscopicsilicovolcanoconiosis” (360 informational bits) or, better yet since it’s more relevant to Dembski’s work (UPB), the word “Lopado­temakho­selakho­galeo­kranio­leipsano­drim­hypo­trimmato­silphio­karabo­melito­katakekhy­meno­kikhl­epi­kossypho­phatto­perister­alektryon­opto­kephallio­kigklo­peleio­lag?io­siraio­baph?­tragano­pterýg?n” (1464 informational bits). I’m not going to even try and look for functional intermediates.

    And I’d add that none exist, although the entire word consists of functional components. So someone could argue that an indirect pathway could duplicate all of them from other words and somehow assemble them into a coherent whole.
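    The bit counts in the comment above all follow from the stated 8-bits-per-character convention; a trivial check (the helper name is mine, not from any program under discussion):

```python
def info_bits(word):
    """Informational bits at 8 bits per character, per the convention above."""
    return 8 * len(word)

print(info_bits("Patrick"))                                        # 56
print(info_bits("Pseudopseudohypoparathyroidism"))                 # 240
print(info_bits("Pneumonoultramicroscopicsilicovolcanoconiosis"))  # 360
```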

  80. Hoki,

    The purpose of the program is simply to show that iterative randomness + selection can come to a target faster than randomness alone.

    Of course. The problem is natural selection knows nothing of targets; therefore, the whole exercise is invalid. See my question @65—which choice would you make?

  81. Patrick and Clive,

    Zachriel is about the most polite commenter in the blogosphere that I have ever come across. Why is he banned here?

  82. Hoki–The purpose of the program is simply to show that iterative randomness + selection can come to a target faster than randomness alone.

    So evolution requires a target? How would that target be determined?

  83. StephenB:

    Of course. The problem is natural selection knows nothing of targets; therefore, the whole exercise is invalid. See my question @65—which choice would you make?

    Neither, obviously.

    Oh, you want a more exhaustive answer??? OK, you’re too teleological.

    How about this for an example that BETTER represents natural selection: look at the results from the weasel program in the opposite order. I.e., start with the string “METHINKS IT IS LIKE A WEASEL” (the modern “evolved” string we have found) and work your way towards a (to us) totally random string (its distant ancestor), as such:

    1. METHINKS IT IS LIKE A WEASEL
    2. METHINKS IT WS LIKE A WEASEL
    3. METHINKS IT WS WIKE A WEASEL

    x. HKADHFKLUWHEKJLHWEHUREKJEWHR

    Each string represents a sequence of letters that were more fit than the sequence below it under its CURRENT selective pressure. The trick is to realize that “METHINKS IT IS LIKE A WEASEL” was only the target (i.e. more fit) when the sequence was on the second line. The target when on the third line was “METHINKS IT WS LIKE A WEASEL” and the target when on the fourth line was “METHINKS IT WS WIKE A WEASEL”.

    In this way, the original random sequence never strove for a distant “METHINKS IT IS LIKE A WEASEL”.

    Note: this is an ANALOGY, so it will be far from a perfect representation of natural selection (and evolution in general). We could always quibble about whether or not natural selection (or other evolutionary processes) actually could create, e.g., a human from something prokaryotish, but that is missing the point: you created a strawman version of how natural selection supposedly works.

  84. Tribune7 (#82):

    My post in 83 deals with your question. There is no real target as such (at least not one in the distant future). For natural selection there are current selective pressures.

  85. Arthur Smith (#81):

    “Zachriel is about the most polite commenter in the blogosphere that I have ever come across. Why is he banned here?”

    Can I join in asking that Zachriel may post here? (If he wants, obviously). I had a long exchange with him on another blog, and I agree that he is polite, but the most important thing is that he is very intelligent. And there is nothing better than an intelligent adversary.

    Regarding the discussion here about the weasel algorithm, I must admit that I cannot find it really interesting. Basically, I agree with Gil: an algorithm which already knows its target is of no interest for our debate here. I have always found Dawkins’ Weasel a silly issue, however the algorithm may work in the details. And anyway, if we really have to discuss the algorithm, we should know it exactly (I can’t see why Dawkins has not made it publicly available). Frankly, trying to recreate an unknown algorithm which anyway does not demonstrate anything does not seem a very fruitful task (although I must commend Atom for having the patience and goodwill to try that just the same).

    Zachriel’s word games are another thing. Unfortunately, I have never had the time to study them well, but I am sure that they demonstrate something. They probably demonstrate that it is possible to easily explore a semantic space with an algorithm well designed to explore semantic spaces, but I am not really sure of that. But I will give only two comments here about that:

    1) The protein functional space is not a semantic space: function, in proteins, is not linked to a symbolic meaning, but to biochemical activities.

    2) Although Zachriel’s algorithm can explore the space of words and even phrases, it still cannot really generate original language and original meanings. Language and meaning are still the product of conscious beings. Machines can only copy them, remix them, and recognize them passively. In other words, a program can recognize that a phrase has a meaning only if it already “knows” (IOW, it has been input into it) that it has. The program, being a machine, has no concept of meaning. It can only match something with what it already “knows” in its code. Conscious intelligent beings are another thing altogether (although AI enthusiasts very much like to believe differently). So, after all, even Zachriel’s programs are probably in Dawkins’ category (they already know the answer they are searching for), even if they certainly are much smarter.

  86. Hoki–The purpose of the program is simply to show that iterative randomness + selection can come to a target faster than randomness alone.

    IOW, it is sort of like someone taking the time to write a program illustrating that a man will take fewer steps walking 10 feet than he would 10 miles.

    The only thing definitively illustrated by Dawkins’ program is its pointlessness.

  87. tribune7:

    The only thing definitively illustrated by Dawkins’ program is its pointlessness.

    Remember that an old creationist argument was that evolution was equal to a hurricane blowing through a junkyard to produce a 747. Considering how many people fail to understand the difference iterative selection can make to randomness, it is anything but pointless.

    Hoki — Remember that an old creationist argument was that evolution was equal to a hurricane blowing through a junkyard to produce a 747.

    Actually, that’s an atheist argument. At least Sir Fred Hoyle was an atheist at the time he made it :-)

  89. tribune7, good point regarding Hoyle. Even though he never accepted that the intelligence was God, he and Wickramasinghe were pioneers in pointing out the absurdity and the mathematical unreasonableness of undirected evolutionary explanations for biological life.

    It is an inconvenient truth that inference to intelligent agency, even among the pioneers in our time, was not confined to theists.

  90. Hoki:

    Considering how many people fail to understand the difference iterative selection can make to randomness, it is anything but pointless.

    How many more people fail to understand the difference it can make for a program to illegitimately “assist” its selections by looking at future utility?

    You are right to point out that actual selection works according to current pressures. Did you think that anyone doubts this? Or that anyone doubts that this type of selection is real?

    The problem is that programs such as Zachriel (or WEASEL) create an illusion of heightened effectiveness that does not work within the confines of true natural selection.

    But the fact that real natural selection helps (when there are incremental advantages to select) is not questioned.

    A key aspect of Weasel, et al., is that they are population-based searches. If you dial down the population size to 1, Weasel is the same as random search. The benefits of selection only appear when the population is set higher.

    What is surprising about Weasel is not that it reaches the goal, but how fast it reaches the goal, even for very small population sizes.
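    Pendulum’s point can be demonstrated with a throwaway sketch (the parameters and helper name are illustrative assumptions, since Dawkins’ actual code was never published): with a population of one, the “best of one” child always replaces the parent, so the search degenerates into a blind random walk, while even a modest population converges.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def generations_to_target(pop, rate=0.05, seed=7, limit=5000):
    """Generations until a Weasel-style run hits TARGET, or None on timeout."""
    rng = random.Random(seed)
    s = "".join(rng.choice(ALPHABET) for _ in TARGET)
    for gen in range(1, limit + 1):
        children = ["".join(rng.choice(ALPHABET) if rng.random() < rate else c
                            for c in s) for _ in range(pop)]
        # pick the child closest to the target (with pop=1, an unconditional swap)
        s = max(children, key=lambda c: sum(a == b for a, b in zip(c, TARGET)))
        if s == TARGET:
            return gen
    return None

# pop=1 is equivalent to a random walk and does not converge in any
# reasonable time; a population of a few dozen typically converges in
# a few hundred generations.
```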

  92. Pendulum

    Actually, if Weasel were to bump up its mutation rate and population size, within the performance capacity of the PC, it would outrace deterministic search by getting double, triple and so on finds in each generation. (That is because configs that would be way out in the skirts of the population would begin to show up in “random” samples, and by the distance-to-target metric would consistently win and advance.)

    That is, Weasel is utterly unconnected to the real world: a targeted search that rewards non-functional configs based on mere increments in proximity to a target has nothing to do with the alleged designing capacities of chance variation plus natural selection across competing FUNCTIONAL sub-populations.

    GEM of TKI

  93. hoki:

    There is no real target as such (at least not one in the distant future). For natural selection there are current selective pressures.

    No target = no search. Just the “necessity” of survival.

  94. Pendulum: “The benefits of selection only appear when the population is set higher.

    What is surprising about Weasel is not that it reaches the goal, but how fast it reaches the goal, even for very small population sizes.”

    On the contrary, it is not surprising at all once one realizes that WEASEL selection is not based on true selection according to current function.

    In the second run in the original post, notice that even by the second entry in the sample we are given, the sequence “LIKE” is already being preferentially preserved, even though it is not yet functioning as a word.

    The WEASEL selection is cheating by taking advantage of recognizing and selecting according to the function it will eventually have. This is possible only through access to knowledge about the ultimate solution.

    Given that it can peek at the future utility, of course it is able to quickly converge.

  95. Instead of using a dictionary of English words, a program such as Zachriel could have used a dictionary of proteins found in the human body represented as character sequences. However, the fundamental flaw in the approach is the same.

    Suppose it discovered a sequence that matched one of the entries in the dictionary of proteins. Then, since this is one of the “words” in the dictionary, Zachriel would preferentially preserve it.

    However, proteins have function in the human body in the context of cooperation with other proteins. Many of these proteins would be useless if taken in isolation, without the cooperating proteins they are interdependent with.

    By selecting for any word in the dictionary, Zachriel is selecting according to potential value — value as it will be in the context of the future whole, once one has the complete language or the whole body. It is not selecting according to current utility or real advantage conferred with regard to current pressures.

  96. EricB:

    Did you think that anyone doubts this?

    Yes. Did you read SteveB’s comment to which I responded? (My apologies to StephenB, who I originally thought had written that post.)

  97. Lots of ink has been spilled here about the goal/target of the weasel program. Make no mistake about it: the program would not work without a future target specified by Dawkins or his programmers.

    Consider this fact in light of the quotations below (my emphasis). IMO, what appears to be happening here is that goals, targets and the like are expressly not allowed when evolution/NS is discussed at the 30,000-ft level, but then teleology gets smuggled into the bowels of the weasel program – by Dawkins, not by me – when it suits his needs.

    Adopting this view of the world means accepting not only the processes of evolution, but also the view that the living world is constantly evolving, and that evolutionary change occurs without any ‘goals.’ The idea that evolution is not directed towards a final goal state has been more difficult for many people to accept than the process of evolution itself. (Life: The Science of Biology by William K. Purves, David Sadava, Gordon H. Orians, & H. Craig Heller, (6th ed., Sinauer; W.H. Freeman and Co., 2001), pg. 3.)

    The ‘blind’ watchmaker is natural selection. Natural selection is totally blind to the future… Humans are fundamentally not exceptional because we came from the same evolutionary source as every other species. It is natural selection of selfish genes that has given us our bodies and brains … Natural selection is a bewilderingly simple idea. And yet what it explains is the whole of life, the diversity of life, the apparent design of life.(Richard Dawkins quoted in Biology by Neil A. Campbell, Jane B. Reece & Lawrence G. Mitchell (5th ed., Addison Wesley Longman, 1999), pgs. 412-413.)

    Nothing consciously chooses what is selected. Nature is not a conscious agent who chooses what will be selected… There is no long term goal, for nothing is involved that could conceive of a goal.(Evolution: An Introduction by Stephen C. Stearns & Rolf F. Hoekstra, pg. 30 (2nd ed., Oxford University Press, 2005).)

  98. Dawkins developed Weasel 30 years ago to demonstrate the difference between entirely random and cumulative selection. He said himself “it was a bit of a cheat” and, unlike Weasel, evolution is an untargeted process, blind to the future.

    It is a piece of history. Evolutionary theory does not rely on Weasel for support. Pointing out the obvious, that it is not an analogy for natural selection, will not have any impact on the validity of evolutionary theory.

  99. @ Gpuccio

    I agree with your assessment of Zachriel’s intellect. I should have mentioned it.

  100. George L Farquhar

    KariosFocus

    That is, Weasel is utterly unconnected to the real world: a targeted search that rewards non-functional configs based on mere increments in proximity to a target has nothing to do with the alleged designing capacities of chance variation plus natural selection across competing FUNCTIONAL sub-populations.

    Still creating strawmen to destroy?

    Perhaps it would have been better for you, overall, if you had just said way back when, “Why yes, George. The letters do appear to latch, but that’s because of the small sample size I’m looking at and not any explicit latching code.”

    I mean, there is now a thread at Panda’s Thumb talking about how Dr Dembski is finally going to get round to publishing a “correct” version of Weasel on evoinfo.

    And of course we have the upcoming paper to look forward to, which it’s suspected uses the “incorrect” (or latching) version of Weasel in some way.

    I notice you have not made an appearance at the threads discussing the actual math behind Weasel at either PT or ATBC:

    http://www.antievolution.org/c.....=14;t=6034

    http://pandasthumb.org/archive.....-para.html

    I look forward to seeing you at either of those places, Kariosfocus. If you want to discuss the math further, go there. If you want to repeat yourself ad nauseam, stay here. Under discussion is the probability of a correct letter appearing and not changing. If you are a true believer in knowledge, in taking the chance that you might be wrong, you might learn something.

    And on the subject of ad nauseam and FSCI: instead of talking about it and typing tens of thousands of words defending it to people asking the same questions over and over, while you give the same answers over and over, why don’t you do something with it instead? You know, use it?

    Put your money where your mouth is.

    Publish.

  101. The Panda’s Thumb post is interesting. Here’s what Ian Musgrave says:

    I’ve gone back and done a head to head comparison myself between a program with no “locking” (all letters in any given string have a chance to be mutated) and one with “locking” (where the matching letters are preserved against mutation). Trying to implement “locking” à la Dembski proved too hard. You have to keep indices of the letter locations and keep updating them. It is such a pain in the bottom to try and do this that I cannot imagine Dawkins even wanting to try and program a “locking” implementation in GBASIC. Remember, Dawkins’ weasel was a quick and dirty program bashed out in a short time. To implement “locking” I just kept a copy of the parent string unmutated (after all, in the real world not every offspring has mutations in genes of interest).

    His conclusions?

    “Locked” runs finished earlier, on average [sic] but most of the trajectory of the run was determined by mutation supply. As you can see, runs done with locked and unlocked versions fell within the error bars of each other, for runs that set the Offspring number at either 100 or 30.

    He says some other things, but I’ll leave it to others to go there.

    Does Ian Musgrave consider that locking takes place regardless of any specific coding, given a target, a small enough mutation rate and a large enough population size?

    That alone ensures that at least one exact copy of the “parent” will be present in each generation.

    I have already gone over this. So what is so special about Ian?
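    Joseph’s “at least one exact copy” observation is just arithmetic. A sketch using the commonly assumed parameters (a 28-letter string and a 5% per-letter mutation rate; the function name is mine):

```python
def p_unmutated_copy(pop, length=28, rate=0.05):
    """Probability that at least one child in a generation of `pop`
    offspring is an exact, mutation-free copy of the parent."""
    q = (1 - rate) ** length  # one child escapes every mutation: about 0.238
    return 1 - (1 - q) ** pop

# Even 20 offspring make a perfect parental copy nearly certain.
print(round(p_unmutated_copy(20), 4))
```

    Since the closest-to-target child is selected, a guaranteed parental copy means the champion’s score can essentially never fall, which is the quasi-latching effect under discussion.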

  103. Arthur Smith:

    Pointing out the obvious, that it is not an analogy for natural selection, will not have any impact on the validity of evolutionary theory.

    No, but pointing out that “evolution is an untargeted process, blind to the future” does have an impact on its validity.

    And that is why no one can model “evolution”.

    It is an untestable concept: “an untargeted process, blind to the future”.

    It may make for interesting philosophy but it doesn’t belong in a science classroom.

    You rock Arthur! Thank you.

  104. SteveB [65]:

    wsoldin_je_d_swodx
    Now, consider the following “mutations” to the original, A and B.
    A wsocdin_je_d_swodx
    B wsoldin_je_d_srodx
    You, representing natural selection, do not know the target—in fact, you don’t even know what a target is, as you have no goals. Additionally, you cannot predict, cannot reason, and certainly cannot write computer code; you don’t have any lifelines and you can’t buy a vowel. All you can do is evaluate the “fitness” of each variation in the here and now. Your options are: select A, B, both, or neither.
    Which do you choose and why.

    Let’s say the original “wsoldin_je_d_swodx” is an eye.

    A) distorts the lens incrementally, resulting in an incremental increase in image resolution.
    B) results in a corrosive acid being introduced into the eye fluids.

    ————————–

    Patrick [79]:

    That may be doable for this GA but what about “Pneumonoultramicroscopicsilicovolcanoconiosis” (360 informational bits) or, better yet since it’s more relevant to Dembski’s work (UPB), the word “Lopado­temakho­selakho­galeo­kranio­leipsano­drim­hypo­trimmato­silphio­karabo­melito­katakekhy­meno­kikhl­epi­kossypho­phatto­perister­alektryon­opto­kephallio­kigklo­peleio­lag?io­siraio­baph?­tragano­pterýg?n” (1464 informational bits). I’m not going to even try and look for functional intermediates.
    And I’d add that none exist, although the entire word consists of functional components. So someone could argue that an indirect pathway could duplicate all of them from other words and somehow assemble them into a coherent whole.

    Those words above would never be specifically targeted themselves. There are only incremental increases in word length while maintaining legal words. Any conceivable word of the length above would be just as amazing as any other. And actually, given the way words of that length can be directly parsed and understood from their Latin components to discern their meaning, a lot of the words randomly formed from such roots would be acceptable as descriptive of something, whether or not anyone had actually coined them yet (IMO). So when you get to a word of that length, it’s really like a sentence, only with much less strict syntax.

    ———————-

    EricB:

    Let’s start just with English (and not consider biology for the moment): do words like “of” have meaning on their own? Clearly they do:
    “If…” – movie title; “One” – song; “Yes” – group. Think of album titles, advertising slogans and campaigns, and so on. Phrases and sentence fragments are used all the time to convey meaning. Infants and toddlers also use words and sentence fragments to convey meaning and are understood. In normal conversation, people use fragments as well. Poets will slice and dice language. It is really an arbitrary convention that legal sentences have to have a subject and a verb, for example. Groups of words and even single words most assuredly do have meaning in English.

    And complex words and sentences can be built up at random very easily from these smaller components in English – with no planning involved for future use. This results from the fact that smaller words and phrases can be used in millions of different ways, so that utilizing one phrase isn’t locking us in to only one future target. (And just to reiterate, all the individual words and small phrases are useful on their own as well).

    So the question of applicability to nature would be whether it, too, has elementary components that are functional on their own and functional in many other contexts as well. If this implies nature would have to be much like a human language itself, then there’s your teleological argument right there, and so be it.

  105. Let’s start just with English (and not consider biology for the moment): Do words like “of” have meaning on their own? Clearly they do:
    If… – movie title; “One” – song; “Yes” – group.

    (I once heard some old-timer mention the above titles from some bygone era.)

  106.

    JT [105], I’ll add to that list “A” — the title of an important long (800 page) poem by Louis Zukofsky. Also, Zukofsky’s collected critical essays are divided into three sections, entitled “for,” “with,” and “about.”

  107. JT, I’ve never claimed there are no single words that might be justified as having functional value in isolation. The problem is that Zachriel presumes to treat any word it finds in the dictionary as having functional value in isolation.

    Dictionaries are constructed from usage within full blown English taken as a whole. This means that Zachriel is illegitimately basing selection upon a word’s value in the context of the entire English language. That is cheating by looking at future utility, whereas, as Arthur Smith said, “evolution is an untargeted process, blind to the future”.

    You wrote:

    “Let’s start just with English (and not consider biology for the moment)…”

    If you think Zachriel has any relevance for biological evolution, why not consider biology? That is where its error becomes the most obvious.

    I expect you know that proteins that have function in the whole human body may nevertheless be useless when taken in isolation, apart from the support of certain cooperating proteins. It is plain that if Zachriel selected one of these in isolation, based merely on its appearance in a dictionary of functional proteins, this would be cheating as an analogy to biological evolution. It selects based on future function in the eventual whole body.

    Do you disagree with that analysis? If so, where is the biological case mistaken? Yet this is exactly what Zachriel does with English words within the English language.

    The fact that Zachriel uses not just one but a whole dictionary of many available selectable targets does not save it from the plain fact that its relevance to biological evolution is an illusion. The problem is not removed by having more targets to select. One of its core problems, just as with WEASEL, is the fact that it makes no discrimination between present and future utility.

  108. David Kellogg:
    JT [105], I’ll add to that list “A” — the title of an important long (800 page) poem by Louis Zukofsky.

    What about the flavor enhancer, “Accent”?

    That’s not even a character.

  109. Arthur Smith, it seems at least that JT is not yet convinced. I suspect there are others who do not realize that these programmed word games are irrelevant as an analogy to biological evolution.

    But if we sincerely only wanted to show that cumulative selection that is unrelated to biological evolution is more effective than random selection, we wouldn’t need a program for that.

    To find a password of N letters by chance is to choose from 26^N possibilities (or 52^N if case sensitive).

    If instead I can discover them one at a time in isolation and cumulatively select them, even a brute force search need only consider 26*N possibilities (or 52*N if case sensitive) to completely examine all possibilities.

    As N grows, the difference becomes huge, and as you know proteins are quite long. The advantage of non-biological, cumulative selection (where present function is not a concern) is beyond dispute.
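    The counting above can be sketched in a few lines (the alphabet size of 26 and the sample lengths are illustrative assumptions):

```python
# Blind search must draw from every possible N-letter string, while a
# cumulative search that can test and keep letters one at a time need
# only examine alphabet-size * N candidates in the worst case.

def blind_space(n, alphabet=26):
    """Number of candidate strings a blind search draws from."""
    return alphabet ** n

def cumulative_space(n, alphabet=26):
    """Worst-case trials when letters are discovered one at a time."""
    return alphabet * n

for n in (8, 28, 100):
    print(n, blind_space(n), cumulative_space(n))
```

    Even at N = 28 (the WEASEL string length), the blind space is already astronomically larger than the cumulative one.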

    However, that is irrelevant to evolution of proteins, due to the nasty problem of needing present selectable function on the whole protein, combined with the predominance of amino acid sequences that are not functional as proteins.

  110. ericB [107]:
    The problem is that Zachriel presumes to treat any word it finds in the dictionary as having functional value in isolation.

    What are you saying? That if some hypothetical word is difficult to interpret as “functional”, that invalidates the entire illustration?

    Dictionaries are constructed from usage within full blown English taken as a whole.

    I honestly don’t know what you mean. Some people communicate quite adequately with a vocabulary of a couple of thousand words or so (I suspect – maybe less) – children and toddlers with even far less. Also, are you saying there was never a time when the vocabulary of English was far, far smaller than it is today? Did words have any meaning back then? What about the limited vocabulary in the vocalizations of animals? Meaningless as well?

    I expect you know that proteins that have function in the whole human body may nevertheless be useless when taken in isolation, apart from the support of certain cooperating proteins.

    Obviously subcomponents have to be functional enough to survive before being incorporated into larger functions. I would tend to think in early stages of life there wouldn’t have been a lot of competition, so a lot of niches. A lot of neutral genetic info squirreled away in various corners, not hurting anybody and slowly multiplying.

    It is plain that if Zachriel selected one of these in isolation, based merely on its appearance in a dictionary of functional proteins, this would be cheating as an analogy to biological evolution. It selects based on future function in the eventual whole body.

    I don’t know anywhere Zachriel goes to a dictionary of functional proteins and tries to establish a strict parallel with the English language. If you’re just stating that it’s not yet proven that nature is like human language, no one’s said otherwise. But the idea that human language would be more powerful and more expressive and more flexible than nature itself seems hard to believe.

    The fact that Zachriel uses not just one but a whole dictionary of many available selectable targets does not save it from the plain fact that its relevance to biological evolution is an illusion

    So now you’re just asserting that its relevance to biology is an illusion. That’s a type of argument from ignorance as well – “Until conclusively proven, I will assert it’s an illusion.”

    One of its core problems, just as with WEASEL, is the fact that it makes no discrimination between present and future utility.

    This is just categorically false. English words do have utility in isolation – and utility in many sorts of combinations. No one has purported to prove as of yet that nature is identical to human language, however.

    BTW – it occurs to me to ask whether you’ve actually reviewed those websites I mentioned previously [57], since you’re discussing this.

    ericB – I don’t want to be an unequivocal advocate until I’ve had more time to review all this. I personally want to write a program that attempts to reconstruct an entire text using only random incremental additions of words from that text. It seems obvious that should work though.

    You seemed irritated or antagonistic towards these ideas, but to me they were at least interesting if not compelling.

  112. I personally want to write a program that attempts to reconstruct an entire text using only random incremental additions of words from that text. It seems obvious that should work though.

    That would be a complete waste of time- obviously it would work.

  113. Onlookers:

    Re GLF, 100: Still creating strawmen to destroy? Perhaps it would have been better for you, overall, if you had just said way back when: “why yes, George. The letters do appear to latch, but that’s because of the small sample size I’m looking at and not any explicit latching code.”

    Selective hyperskepticism, turnabout accusation form. And evidently he is broadcasting a twisted version of what has happened here at UD to his co-advocates of evo mat. About par for their course, I’d say.

    So, let us correct, for those interested in the truth of the matter:

    1 –> The above is in response to my: “Weasel is utterly unconnected to the real world, where the issue is that targeted search that rewards non-functional configs based on mere increments in proximity to a target, have nothing to do with the alleged designing capacities of chance variation plus natural selection across competing FUNCTIONAL sub populations.”

    2 –> My summary — since Dec last — is an accurate summary of Dawkins’ own description as cited many times on Weasel from BW, ch 3. In short, there is no strawman. At least, on my part. For, targeted search that rewards mere proximity of non-functional configs ducks Hoyle’s challenge of getting TO shores of islands of functionality, and is utterly irrelevant to the vaunted BLIND watchmaker of natural selection. This last is said to select by differential success of functioning life forms. No function, no differential reproduction; no natural selection. WEASEL is not about the blind watchmaker but instantiates the power of intelligent design, here pursuing a rhetorical ultimate purpose. So, we must not let turnabout rhetoric distract us from the plain truth of the matter on the merits, regardless of the outcome on explicitness of observed o/p latching circa 1986.

    3 –> Next, the claimed “small” sample size constitutes 300+ letters that could change [count 'em above], with 200+ being latched on output; never does a correct letter revert in the sample; which is a very low probability event indeed if the Weasel of 1987 is more representative of output circa 1986. On inference to best explanation, the output as published is likely a “good summary” of the Weasel’s performance, and it implies with high probability that the outputs are latched. So, GLF is here distorting the record on the o/p of the 1986 Weasel pgm and the extent of the sample used from Mr Dawkins’ published “good” results. GLF is refusing to accept that a sample size of 300+ in a context of “good runs” being published, is not a small one by the law of large numbers standard.

    4 –> From the previous thread, 346 – 7, it will be evident that I proposed two alternative models for that observed o/p latching I noted on since Dec last: T2 — explicit latching, and T3 — implicit latching. On the 1986 o/p alone explicit latching is the best — most parsimonious — explanation.

    5 –> On the principle of charity on testimony, I have accepted that per recent reports first published here in this thread above, Mr Dawkins did not EXPLICITLY latch. But that simply means that on further best explanation, he implicitly latched. The observed latching is the dominant feature of the o/p circa 1986, and in fact the implicit latching thesis has the problem that the letters that flick back circa 1987 then rapidly revert, hence the “winking” effect. On mere probability, that should not be likely.

    6 –> Further to this, Apollos has shown that explicitly latching code can also revert temporarily. That is, only credible code is decisive, though on the evidence we can draw some fairly suggestive inferences.

    7 –> E.g. as discussed above in this thread, the co-tuning of mutation rates [about 5% will give 1 - 2 letters per member of a generation] with an optimised population size will give effective latching, as 24% of all members will be no-change at that mutation rate with a string length of 28 characters. Then we see that, on observation, the reported runs take 40 – 60+ gens to reach target for “good” runs. This means that > 50% of the time, no-change wins in a generation.

    8 –> Choose a population scaled so that we will get some typicality, perhaps 20 – 30 at base [just off the law of large numbers as a pop size to start from], and small enough so that out-in-the-skirt cases of 1, 2 or 3 correct letters do not dominate outcomes, and you will [a] latch correct letters, and [b] progress to the target in the sort of range as published instead of within 30 gens.

    9 –> So, the evidence is that the primary issue is settled independent of the debate [and I almost never use this in a positive sense; this is no exception] we have had over whether or not Weasel explicitly or implicitly latched the o/p circa 1986. Weasel is simply not the blind watchmaker, period. And, with a sample size of 200 latched letters in 300 chances for letters to change, we are seeing a dominant pattern of o/p latching. On the current balance of evidence [giving heavy weight to the reported testimony from Mr Dawkins] I am willing to go along with implicit latching, though I would like to see an explanation of the winking effect on flickbacks of correct letters circa 1987.
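    The probability arithmetic behind points 7 and 8 can be checked directly. This sketch uses the 5% per-letter mutation rate and a population of 30 as assumed values from the discussion above, not Dawkins’ published parameters:

```python
# With a 5% per-letter mutation rate on a 28-character string, what
# fraction of children are exact copies of the parent, and how likely is
# a modest population to contain at least one exact copy?

L, u, pop = 28, 0.05, 30  # string length, mutation rate, population (assumed)

p_unchanged = (1 - u) ** L                        # one child copies the parent exactly
p_some_unchanged = 1 - (1 - p_unchanged) ** pop   # at least one exact copy in the population

print(f"P(a child is unchanged)          = {p_unchanged:.3f}")
print(f"P(population has an exact copy)  = {p_some_unchanged:.4f}")
```

    Since 0.95^28 is roughly 0.24, about a quarter of children copy the parent exactly; with a few dozen children per generation an exact copy is almost always present, so the best-of-generation essentially never scores worse than the parent and correct letters appear latched on output.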

    ____________

    GLF’s turnabout attempt backed up by refusal to address the brute fact of a sample size of 300+ letters with 200+ showing the secondary phenomenon he has chosen to debate over in order to get away from the implications of his wager argument attempt, fails.

    So, let us note the disruptive pattern of his threadjacking behaviour, and not reward him by following the schools of wriggling red herrings led out to ad hominem-soaked strawmen ignited to cloud and poison the atmosphere required for progressive discussion.

    GEM of TKI

  114. PS: As for antievo and pandas thumb, I refuse to have anything to do with these notoriously disruptive and destructive fever-swamps of evo mat sites; strong cases in point of precisely the kind of destructive selective hyperskepticism and rhetorical manipulation and championing of the morally indefensible that is ripping our civilisation apart. If there is any merit in their arguments, GLF must present them here — without the nasty ad hominems, strawmen and red herrings, not to mention cherry-picked one-sided analyses that major on minor distractive points to duck the main issue. (Indeed, the very Weasel program itself is a case in point: targeted search is precisely a begging of the question Hoyle et al have raised — getting TO shores of functionality, before hill-climbing to optimality can proceed. Rewarding target-proximity of confessedly non-functional strings is exactly what should not have been seen, but was.)

  115. Mr Kellogg, re 102:

    Mr Musgrave is misleading you. (Utterly unsurprising for PT.)

    Look at Apollos’ code from the previous thread and you will see just how short, relatively speaking, an explicitly latched Weasel is. I don’t know if the EIL code is there at the site, as Atom can confirm.

    In fact, Dawkins states that within each generation the closest to target is preserved. That means that the code has to have in it an assessment of proximity, which is going to be based on relative locations of letters to the target in any case. You already have the information you need to latch, just to get to the champion per generation.

    Also, on the math at EIL, latched searches — whether explicit or implicit makes no difference — will, at the median, run to 98 generations. This is consistent with “good” runs of 40 – 60.
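    A median-generations figure of this kind is parameter-dependent and can be estimated by simulation. The sketch below uses an assumed 5% mutation rate and population of 50, which are illustrative values only, so its median need not match the EIL figure:

```python
import random
import statistics

# Explicitly latched weasel: any character that already matches the target
# is frozen and can never mutate again. Parameters here are assumptions
# for illustration, not the settings behind the EIL math.

random.seed(2)
TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
MUT_RATE, POP = 0.05, 50

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate_latched(s):
    # Correct characters are kept; wrong ones mutate with prob MUT_RATE.
    return "".join(c if c == t else
                   (random.choice(ALPHABET) if random.random() < MUT_RATE else c)
                   for c, t in zip(s, TARGET))

def generations_to_target():
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    gens = 0
    while parent != TARGET:
        parent = max((mutate_latched(parent) for _ in range(POP)), key=score)
        gens += 1
    return gens

med = statistics.median(generations_to_target() for _ in range(100))
print("median generations over 100 latched runs:", med)
```

    Varying MUT_RATE and POP shifts the median substantially, which is the point: the 98-generation figure belongs to one particular parameter choice.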

    GEM of TKI

    PS: JT at 105: Do words like “of” have meaning on their own? Clearly they do: Without a much wider context, the glyphs or sounds implicated are meaningless. That wider communicative and indeed social context is massively based on intelligence and intentionality.

  116.
    George L Farquhar

    Kairosfocus

    Mr Dawkins did not EXPLICITLY latch.

    You were wrong. I was right.

    Was that so hard to say?

    Of course, “EXPLICITLY” gives the lie to your “apology”. It’s really an “I was still right”. Yet when I mention that the probability of latching behaviour being seen is a topic of discussion at the links I gave, you don’t want to know! You’d rather proclaim victory without having fought any wars!

    If there is any merit in their arguments, GLF must present them here — without the nasty ad hominems, strawmen and red herrings

    I would note that this thread
    http://www.antievolution.org/c.....=14;t=6034
    Has been created for discussion of Evolutionary Computation and Evolutionary Computation alone. Any ad hominems, strawmen and red herrings will no doubt be sent to the “bathroom wall” where off topic posts get sent.

    Therefore, you have nothing to worry about; your delicate sensibilities will remain unbruised.

    The fact of the matter is that this blog is not the best place to discuss such matters. How can graphs be posted? How can formulas reliably be shown? This format does not lend itself to such a discussion.

    GLF’s turnabout attempt backed up by refusal to address the brute fact of a sample size of 300+ letters with 200+ showing the secondary phenomenon he has chosen to debate over in order to get away from the implications of his wager argument attempt, fails.

    Do you often draw hard and fast conclusions based on 5-10% of a sample size? Conclusions that have caused the creation of multiple threads on multiple blogs about this issue?
    I’m not “getting away” from any implication of my “wager argument”. I asked for a quote from Dawkins where he says latching was used. You still have not been able to provide such.

    Therefore you lose. Is that so hard to understand?

    Kairosfocus

    GLF is refusing to accept that a sample size of 300+ in a context of “good runs” being published, is not a small one by the law of large numbers standard.

    If we agree, for the sake of argument, that the sample size published was 300+, could you tell me what that was from?

    I.E. 300+ samples out of a possible N?

    What is N?

    301?
    3001?
    30001?

    If you are not too scared of engaging with people who know what they are talking about, we can continue this at the ATBC thread linked to above.

    I look forward to seeing you there.

    Why not prove everybody there wrong (after all, this is math and you can prove things in maths! If you are right, show your working!) and you can come back here and crow about it? About how you beat the Darwinists on their own patch.

  117.
    George L Farquhar

    Wesley R. Elsberry

    The graph shown here plots the probability that a population will have at least one candidate that preserves all the correct characters from the parent string. The graph shows population sizes from 1 to 500, mutation probabilities from 0.0 to 1.0, and is done for the case where the number of correct characters in the parent is 27. Once the population is around fifty, increases in population size make very little difference.

    http://www.antievolution.org/c.....ntry140135
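    The quantity that graph plots can be sketched directly. Here 27 correct characters are taken from the quote; the 5% mutation rate in the demo loop is an illustrative assumption, whereas the quoted graph sweeps rates from 0.0 to 1.0:

```python
# Probability that at least one of `pop` mutated children preserves all
# 27 of the parent's correct characters, when each character mutates
# independently with probability u.

def p_preserved(u, pop, correct=27):
    p_one = (1 - u) ** correct        # one child keeps all 27 correct letters
    return 1 - (1 - p_one) ** pop     # at least one child in the population does

# The plateau described in the quote: past a population of about fifty,
# further increases make very little difference.
for pop in (1, 10, 50, 100, 500):
    print(pop, round(p_preserved(0.05, pop), 6))
```

    At a 5% mutation rate the probability is already effectively 1 by a population of fifty, which matches the quoted observation about the plateau.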

    What’s your counter, Kairosfocus? Why not jump onto that thread and tell Wesley why he is wrong? I’ve seen no such graphs here; it’s not possible to reproduce them here. So that conversation cannot happen here.

    I refuse to have anything to do with these notoriously disruptive and destructive fever-swamps of evo mat sites

    Is this the same reason you refuse to enter the “peer review game”?

    What could be disruptive or destructive about you proving somebody wrong with math?

    I’m sure the good folks at ATBC will be nice to you. If you can defend your arguments with data and logic anyway.

  118. Onlookers:

    Re GLF’s latest.

    1] The published 1986 Weasel output latches beyond reasonable dispute [testified to by the attempt to assert that 200 out of 300 letters with no counter instances is a small sample], which is what I remarked on as a telling secondary feature, back in December. (Just look at the above in this thread’s original post, and think through the 200 of 300 letters sample.)

    2] When the issue of explicit vs implicit latching was raised, I proposed T2 and T3 as explicit vs implicit latching, and stated the grounds for preferring the former in the first instance.

    3] On subsequent additional testimony, despite the onward issue of winking I am willing to accept that the best current explanation for the observed o/p latching behaviour is implicit latching, and have again summarised how it is reasonably likely to be done.

    4] GLF knows or should know that once we have any reasonably large sample that is not likely to be structurally unrepresentative, it will most likely reflect the population as a whole [i.e. the valid form of the layman's law of averages: typical behaviour of big enough samples will usually be close to typical behaviour of the whole population]. So once we are in the realm of the law of large numbers, i.e. 20 – 30+, we can be comfortable that a sample will give a good enough look at the population to be significant. Of course, getting bigger and bigger samples increases precision, and 300 is ten times the typical large-enough-numbers threshold.

    5] Indeed, we know that Mr Dawkins put forth the showcased 1986 output because he thought it made a good case for him. He evidently did not realise the significance of the strong latching tendency.

    6] It is therefore noteworthy that within a year or so, the 1987 videotaped runs strongly show frequent winking, not latching. This strongly suggests, per inference to best explanation again, that the parameters were detuned to get away from obvious output latching.

    I have no reason for apologies to be made and so make none.

    Those who have since the 1980s ROUTINELY used Weasel as if it were good evidence for the capacity of the blind watchmaker of natural selection have grossly misled large swathes of the public through yet another misleading icon of evolutionary materialism, and DO have apologies that are due — indeed, long since overdue.

    None have been forthcoming, but distractive side-debates are being posed.

    No prizes for guessing why, onlookers.

    GEM of TKI

  119.

    kf,

    The published 1986 Weasel output latches beyond reasonable dispute [testified to by the attempt to assert that 200 out of 300 letters with no counter instances is a small sample],

    No.

    It is certainly not beyond reasonable dispute: the relevance of the sample size depends on the size of the total population, which you don’t know.

  120. I will be picking up a copy of “The Blind Watchmaker” today.

    However as I have already stated the program does indeed latch -input to output- via probabilities.

    That is, given a target, a small enough mutation rate and a large enough sample size, the output will NEVER be farther away from the target than the parent.

    IOW latching, given the proper conditions, is inevitable.

    So why do David and George ignore that fact?
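    Whether correct letters ever revert without any latching code can be checked with a minimal sketch. The 5% mutation rate and population of 50 below are assumed values for illustration, not Dawkins’ published 1986 settings:

```python
import random

# A weasel-style run with NO latching code of any kind: every character
# of every child is free to mutate, and children are scored purely on
# proximity to the target.

random.seed(1)
TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
MUT_RATE, POP = 0.05, 50  # assumed parameters

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
reversions = 0  # generations where the champion scored worse than its parent
while parent != TARGET:
    child = max((mutate(parent) for _ in range(POP)), key=score)
    if score(child) < score(parent):
        reversions += 1
    parent = child

print("reversions in the champion line:", reversions)
```

    With these parameters, each generation almost always contains an exact copy of the parent, so the champion essentially never scores worse than its parent; correct letters then look latched on output even though nothing in the code locks them.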

  121. Onlookers:

    Re Mr Kellogg on sampling and runs.

    We must observe that the above 1986 Weasel shows a sample of 300, with 200 being in runs [thus a dominant feature of the output . . . and being of so large a number of samples that if there is a reasonable chance of a flick-back it should appear at least once], where never once, after a run appears, do we see reversion away from the run. This contrasts with the 1987 outputs, which show frequent winks away from the correct letter.

    These observed stable characteristics of the processes — whether explicitly or implicitly latched — tell us a lot.

    So, again, we are simply looking at selective hyperskepticism.

    Perhaps, we should put it this way:

    Mr Kellogg, you are in a dice game. Somehow the roll keeps on coming up 6’s 2/3 of the time. After 200+ out of 300+ rolls, are you going to say maybe this is not a loaded die? [If so, can we meet for a little dice game; I could do with a fatter bank account.]

    Onlookers, see the point?

    GEM of TKI

    PS: Joseph, it is not just large enough. If the generations were really big, the upper skirts would show up often enough that we would see very rapid convergence, as 2’s and 3’s etc. of newly correct letters would win the test for best of generation. The fact that good runs are hitting up at 40 – 60 tells us the generations are of a moderate, not very large, size. For, 0 changes is plainly winning half or more of the time per generation.

  122. JT (111, 112):

    “I personally want to write a program that attempts to reconstruct an entire text using only random incremental additions of words from that text. It seems obvious that should work though.”

    Depending on the particulars, if “work” means “finish before you have died” then it is not so obvious that it will work. The universe would suffer heat death before random typing would produce one sonnet of Shakespeare. So the devil is in the details.

    The core issue there, as it is with WEASEL or Zachriel or any other such program, is that a runaway combinatorial explosion of possibilities will bury any hope of success. As the length grows, it quickly becomes unfeasible. So, the key is all about how to escape this combinatorial explosion.

    The evolutionary claim is that this bullet can be dodged by selecting between randomly accessible variations on the basis of functional advantages that give preference for reproduction.

    *** If and when this can be done, that could indeed escape from the doom of the combinatorial explosion, provided that there are no discontinuities in function too large for the random variations to hop across in reasonable time. The greater the discontinuity in function, the less probable a random variation will discover the next island of functionality. ***

    This is why the issue of how one represents function is pivotal. To paraphrase kairosfocus (114), it is illegitimate to reward on the basis of target proximity (i.e. similarity) when the string is not yet functional, i.e. not functioning in its current context.

    JT: You seemed irritated or antagonistic towards these ideas, but to me they were at least interesting if not compelling.

    It is not the ideas that are objectionable. (See my affirmations in this note.) It is the fact that an appearance of success is being generated by programs that are committing illegitimate moves. That is always objectionable. That is what makes it an illusion, i.e. not what it appears to be.

  123. ericB:

    The core issue there, as it is with WEASEL or Zachriel or any other such program, is that a runaway combinatorial explosion of possibilities will bury any hope of success. As the length grows, it quickly becomes unfeasible. So, the key is all about how to escape this combinatorial explosion.

    In [110] I asked you,

    BTW – it occurs to me to ask whether you’ve actually reviewed those websites I mentioned previously [57], since you’re discussing this.

    You didn’t answer, but I’ll assume it’s NO, judging from your remark above. You’re just stating things for effect, with no knowledge or desire to learn anything. In a previous post I explained the following about the Zachriel “mutagenator” algorithm:

    For just a seven character word this incremental method is several thousand times faster than blind chance (because of regularities in the English language landscape). And incredibly, the longer the word is, the more improvement there is over blind chance. It very quickly becomes 200,000 times faster for say a 10 character word than a 26^10 blind search.

    This is why the issue of how one represents function is pivotal. To paraphrase kairosfocus (114), it is illegitimate to reward on the basis of target proximity (i.e. similarity) when the string is not yet functional, i.e. not functioning in its current context.

    How many times do I have to tell you, the Zachriel Algorithms do not reward proximity to a distant target. The mutagenator program starts with 0 letters, finds a word of 1 letter, then randomly finds the next longest word, and then the next, and then the next. Only legal words are kept, and the word is continually lengthened this way. The phrasenator algorithm finds longer and longer phrases using the same method. There is no distant target that the algorithms are approximating. Just finding the next longest word randomly allows you to get increasingly longer words (or phrases) because of the nature of English. Why would you not even bother to go to the websites and spend at least 10-15 minutes carefully reading the material there?
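    A minimal sketch of the incremental scheme described above, with a tiny hard-coded word list standing in for a real dictionary. Both the word list and the grow-by-one-letter mutation step are illustrative assumptions, not Zachriel’s actual code:

```python
import random

# Incremental word-lengthening search: keep only legal words, and try to
# grow the current word by one randomly inserted letter at a time. There
# is no distant target; only the local "is this a word?" check selects.

random.seed(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
WORDS = {"a", "at", "ate", "late", "plate", "plates"}  # stand-in dictionary

def grow(word, tries=1000):
    """Randomly insert one letter; keep the result only if it is a word."""
    for _ in range(tries):
        i = random.randrange(len(word) + 1)
        candidate = word[:i] + random.choice(ALPHABET) + word[i:]
        if candidate in WORDS:
            return candidate
    return None  # no legal one-letter extension found within the budget

word = "a"
while (longer := grow(word)) is not None:
    word = longer

print(word)  # the longest word reached by legal one-letter steps
```

    Each step only consults the dictionary membership of the current candidate, so the search climbs from “a” toward longer words without ever comparing against a distant target string.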

    It is not the ideas that are objectionable. (See my affirmations in this note.) It is the fact that an appearance of success is being generated by programs that are committing illegitimate moves. That is always objectionable. That is what makes it an illusion, i.e. not what it appears to be.

    You said previously that the Zachriel algorithms suffered from “runaway combinatorial explosion”. Now you’re saying they’re cheating and creating the illusion of success. You’re just flatly contradicting yourself and don’t even care.

  124. The core issue there, as it is with WEASEL or Zachriel or any other such program, is that a runaway combinatorial explosion of possibilities will bury any hope of success. As the length grows, it quickly becomes unfeasible. So, the key is all about how to escape this combinatorial explosion.

    The WEASEL algorithm doesn’t suffer from combinatorial explosion either – how did you not know that?

    The evolutionary claim is that this bullet can be dodged by selecting between randomly accessible variations on the basis of functional advantages that give preference for reproduction.

    So what you seem to be saying is that both the weasel algorithm and the Zachriel algorithm suffer from combinatorial explosion (not true for either of them), but that there is an evolutionary claim, with no evidence, that this combinatorial explosion is avoided in nature, whereas it could not be in the algorithms above. You are just completely confused. Neither Zachriel nor Weasel is combinatorially intractable, and Zachriel is an improvement over the Weasel algorithm in that the former does not use proximity to a distant target, but rather only local data. Please try to learn something.

  125. EricB:

    I thought you were saying the weasel algorithm and Zachriel Algorithm suffered from combinatorial explosion (Let anyone else read the first two paragraphs of 122 and see if that was a fair conclusion). Evidently you were not saying that though. Your basic point was that Weasel and Zachriel were both cheating.

  126.

    kf, actually you have two samples from two runs: the first sample has 196 characters, and the second has 224 characters. In that sample, I observe that correct letters in the first run have a total of 88 chances to flick back, and correct letters in the second run have a total of 135 chances.

    For a sample to be representative, it has to be random. These are nonrandom, because they represent the best fit. You are using highly nonrandom data to say what happens generally.

    Further, note that calling these “runs” is incorrect. They are usually 10 generations apart, and so you don’t know what happens from one generation to the next. Moreover, you don’t know what happens with all the mutations that were not chosen to parent the next generation: you merely see the best fit 10 generations down the line. Between generation 10 and 20, a “best fit” could involve one letter becoming incorrect at generations 11, 12, 13, etc., but then reverting back by generation 20. But you don’t see any of this.

    How big is the population? If the population of “tries” in each generation is — just a guess — 50, and the total generations to target in the presented samples is (if I recall correctly) a little more than 60 in each case, then the total population of potentially changing characters would be 88,200 (28 characters x 50 tries x 63 generations). From that population you’ve got at most 88 or 135 characters that have the same correct letter when you see a nonrandom (best fit) sample several (in most cases, 10) generations apart.

    And from this you infer locking?

  127.

    Let’s try a thought experiment. I have an actual weasel. It’s ugly and undistinguished, but in one respect it’s special: it can reproduce asexually. It will produce 50 offspring, some with little differences, that are capable of reproducing in the same way.

    Now, let’s say I want to breed a weasel that has certain targeted features: a soft, glossy coat and short claws. I get my weasel to produce 50 offspring. I pick the one that’s closest to the target and euthanize the other 49. I do this over and over again, over sixty times. Each time, I produce 50 offspring from the best weasel and kill off the 49 I don’t want. By the 63rd generation, I have a pretty nice weasel: its coat is beautifully soft and glossy, and it has tiny, tiny claws.

    Along the way I did something else: every tenth generation, I stuffed the best weasel after he’d produced his offspring. I keep the original and these selections, in order, in glass cases, to show the overall progress of my breeding program.

    One day, my friend kairosfocus comes to my house and observes the cases. He says, “there’s something odd here. From what I can see, the weasel offspring always have softer coats and shorter claws than their parents. In truth, your weasel will never produce offspring with rougher coats and longer claws.”

    What’s wrong with that conclusion?

  128. 128

    In the event of quibbles, let’s say that the two targets are measured by 28 different variables, including (for claws) such things as differences between front and rear claws, sharpness, thickness, weight, hardness, ability to extend, etc., and (for fur) thickness, softness, length, glossiness, consistency on different parts of the body, color uniformity, etc. On all of these measures, the weasel closest to the target measures as good as or better than the weasel of ten generations (that is, 500 weasels) previous. Does this mean that the weasels never produce offspring that are farther from the target?

  129. 129

    kairosfocus, I know you object to ATBC and the Panda’s Thumb, but do you object to a mathematical explanation of the probabilities involved? Wes E. has provided one. Now, you may object to that post based on its use of a certain proper name, but do you object to the math?

  130. David:

    Let’s try a thought experiment… let’s say I want to breed a weasel that has certain targeted features… (my emphasis)

    Let’s try another thought experiment: how about we stop ascribing to natural selection anthropomorphic characteristics that it doesn’t have.

    You’re describing animal husbandry, not blind watchmaker evolution.

  131. 131

    SteveB,

    You’re describing animal husbandry, not blind watchmaker evolution.

    I’m describing something that’s pretty close to the weasel program: that is, a targeted search. In what ways is my example not like the Weasel program? And do those ways suggest that Weasel is locked or, rather, that kairosfocus is wrong?

  132. ericB:

    I apologize for mischaracterizing your post in 122. The first two paragraphs looked very much like you were saying both the Dawkins and Zachriel algorithms suffered from combinatorial explosion, but I did not realize you only meant it in a hypothetical sense. Then you wrote the following:

    *** If and when this can be done, that could indeed escape from the doom of the combinatorial explosion, provided that there are no discontinuities in function too large for the random variations to hop across in reasonable time. The greater the discontinuity in function, the less probable a random variation will discover the next island of functionality. ***

    So evidently you were allowing that the Zachriel algorithm might work.

    Your comment about proximity to distant targets was only hypothetical as well, and I did not pick up on that either. So either your post was ambiguous or I need to improve my reading skills.

    So again, I apologize.

    JT

  133. Mr Kellogg:

    Do you have any knowledge of the law of large numbers?

    In effect, there are many phenomena in the world that have indefinitely many possible observations. So . . .

    Q: How can we infer confidently to patterns in nature (or technical, or social situations), then, if we are expected to have a large fraction of the possible outcomes in hand? (“A large fraction” is plainly impossible and/or inaccessible for finite, fallible investigators and/or decision-makers and/or investors constrained by opportunity costs or access etc. [E.g. the financial statements of firms are very carefully massaged indeed. Sometimes, just a little too carefully.])

    A: Several points — and points that tie directly to the under-appreciated significance of the explanatory filter that a leading ID researcher, namely, a certain Mr Wm A Dembski, has championed (and as has recently been incrementally improved by explicitly focussing on aspects):

    1 –> First, things that are dominated by dynamical forces [i.e. expressible in differential and/or difference equation-based models, e.g. Newtonian translational and/or rotational dynamics, or electrical dynamics, or micro or macro economics, etc . . . ] will as a rule show regularities rather quickly and we will be able to see that the patterns characterised by relevant laws or models are reliable. (Of course there usually is some scatter, which we deal with as noise. Graphical techniques, such as log-log or log-linear or linear plots, time series plots [esp. three-sigma plots], box-whisker plots, etc. are very often used to help in what is now often called exploratory data analysis.)

    2 –> There are other systems that are highly contingent, i.e. outcomes vary significantly under similar initial circumstances; perhaps stochastically, perhaps under direction, perhaps in part by both.

    3 –> Where the law of large numbers fits in is that samples that we have reason to believe are credibly uncorrelated to the system — i.e. are unbiased — soon enough tend to track the pattern of behaviour. For instance:

    A bell chart test case: imagine a large bell-shaped curve on a chart with a sponge backing laid on the ground, sliced into equal-width strips and put back together in order. Drop a dart more or less at random onto it from a height, repeatedly, so that the odds of landing at any particular point are more or less even. The holes will tend to fall at randomly scattered points, more or less evenly distributed. After 1 drop, obviously, this is not going to be representative. After 10, we will probably begin to see a broad pattern of scatter. After 20 - 30, we will usually have a fairly good scatter, but it is unlikely that the gridding will be close enough for us to see many points from the “tails” of the curve. After 300 the density of scattered points will be much finer, and we are likely to pick up more and more points in the tails. Thus, a count of the points strip by strip is a good estimate of the strips’ relative areas. [In fact, if you ever need to estimate a very irregular area, putting it on a rectangular chart of known overall area, doing the dartboard exercise, and then counting points in the area versus in the whole rectangle is an excellent quick-and-dirty solution.]
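The dartboard exercise in the bracketed note is essentially Monte Carlo area estimation, and can be sketched in a few lines of Python. The quarter-circle example and all names here are illustrative, not from the comment:

```python
import random

def dartboard_area(inside, width, height, drops=20_000, seed=1):
    """Estimate the area of an irregular region by dropping darts
    uniformly on a width x height rectangle and counting hits,
    as in the thought experiment above."""
    rng = random.Random(seed)
    hits = sum(
        inside(rng.uniform(0, width), rng.uniform(0, height))
        for _ in range(drops)
    )
    return width * height * hits / drops

# Quarter circle of radius 1 inside the unit square; true area is pi/4.
estimate = dartboard_area(lambda x, y: x * x + y * y <= 1.0, 1.0, 1.0)
print(estimate)
```

With 20,000 drops the estimate lands within a percent or two of pi/4 (about 0.785); with only a handful of drops you get the coarse, tail-missing scatter described above.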

    4 –> We will readily see from such a thought expt., that reasonably uncorrelated sampling [i.e. credibly unbiased sampling] can give a reasonably good picture of a population or a trend, surprisingly quickly.

    5 –> In the case in view, as I earlier described, we have two runs of a process, and we have no reason to think that the patterns in the runs are materially different. Indeed, I took a ratio that showed a fairly stable proportion of latched points to sampled points that could change.

    6 –> I therefore simply pooled, and we can see that there are 300+ points where letters can change, and of these 200+ show “correct” letters, mostly in non-functional configs, marching ever closer to the target. In NO case does a correct letter revert. Indeed, such runs once a letter becomes correct are a dominant characteristic of the output.

    7 –> Your objection now is in effect that the reports are of every tenth generation, not every generation and/or every member of every generation. This is of course the data we have in hand, which we can take to be Mr Dawkins’ publication of what he thought circa 1986 were “good” runs. [NB: The fact that subsequently, from 1987 on, emphasis has been laid on how such Weasel-type outputs do not show such strong runs in every case, tells us that Mr Dawkins plainly misjudged the situation circa 1986.

    [ . . . ]

  134. 8 –> That is, Mr Dawkins made an inadvertent declaration against interest, the strongest — most likely to be true — form of testimony in forensic situations. The fierce and sometimes outright uncivil effort put up by Darwinists ever since, including in recent days here at UD, to take this back or to cloud it, is telling on the force of the admission.]

    9 –> Now, there is no good reason to believe that the Weasel type program, whether it is explicitly or implicitly latched or whether it uses more modern GA approaches, will be such that a sample grid of every tenth or so generation’s champion will be systematically unrepresentative of the overall trend of the program. Indeed, it amply illustrates the steady march to the target that Mr Dawkins so triumphantly published in 1986.

    10 –> Just a little too much so. For, the program circa 1986 also showed a strong tendency to “run” once a letter becomes correct, even though it is not functional, i.e. “nonsense phrases” are being blatantly rewarded for increased proximity to the target; so much so that a sample of 300 from “good” runs of the program shows 200 examples of latching.

    11 –> So, the sampled latching of the output, a secondary feature [and which can be done explicitly or implicitly], highlights the more troubling underlying problem: Weasel is not BLIND watchmaker search.

    12 –> Instead, it is targetted, directed search based in effect on broadcasting the location of the target by rewarding “warmer” search-points, even when they are not yet functional. And that is what I highlighted last December, which GLF quotemined to try to threadjack a previous thread that was heading towards some unpleasant truths on the implications of selective hyperskepticism:

    [Unpredictability thread, slice from no 111:] Weasel sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met. That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.

    [Cf. from 107, to which this post referred:] . . . the problem with the fitness landscape [model] is that it is flooded by a vast sea of non-function, and the islands of function are far separated one from the other. So far in fact . . . that searches on the order of the quantum state capacity of our observed universe are hopelessly inadequate. Once you get to the shores of an island, you can climb away all you want using RV + NS as a hill climber or whatever model suits your fancy.

    But you have to get TO the shores first. THAT is the real, and too often utterly unaddressed or brushed aside, challenge. [NB: And, in fact, that was the challenge Sir Fred Hoyle had posed. A challenge that Weasel, from its outset, has ducked and distracted attention from. Weasel is -- and has always been -- a question-begging strawman fallacy.]

    And, I repeat, that [unmet challenge] starts with both the metabolism-first and the D/RNA-first schools of thought on OOL. As indeed Shapiro and Orgel recently showed . . . .

    As for Weasel, you will note that I hardly bothered to take note of it once it was raised by JK, as it is so trivially irrelevant as a plainly DIRECTED, foresighted, targetted search.

    It instantiates intelligent design, not the power of RV + NS. Even going up to Avida and the like, similar issues come up, as is highlighted under the issue of active information by Dembski and Marks.

    ____________

    In short, right from the beginning, it should have been clear what the key problem with Weasel was. And, the law of large numbers shows just how the Weasel o/p circa 1986 is inadvertently telling on the underlying problem.

    And no amount of after the fact distractive selectively hyperskeptical rhetoric by evo mat advocates will change that fact. If you showcase an example meant to illustrate your point, it is entirely proper for others to analyse the case and show the inadvertent admission against interest that lurks therein.

    GEM of TKI

    PS: And, I repeat: GLF’s attempt to play the “$100 K challenge not taken up” fallacious argument, simply shows the bankruptcy of the evo mat case on this.

  135. On Mr Kellogg, re 129:

    First, on personalities: Mr Elsberry has NOT got permission to use that proper name, and his abuse of it, in the teeth of all but moral certainty of knowing that he should not do so, shows just how out of order, disruptive and uncivil the Anti Evo site is.

    I therefore request of you, Mr Kellogg, on basic duties of care, that you kindly inform Mr Elsberry et al of their misbehaviour and request that he take down the offending use of a personal name. If you do not do so, that would simply show enabling behaviour on your part. Especially, after you tried to ask me to go there to participate in that site’s exchanges.

    As to the rhetorical assertion that 300 rolls of a loaded die are irrelevant to showing an underlying trend [my obvious point, but to the willfully obtuse], the thought example at 133 just above, point 3, will show why Mr Elsberry is grossly wrong. But, too: on the evidence and presumption that he is a properly qualified biologist and/or has a duty to inform himself on relevant and reasonably accessible facts before making critical commentary, he is — sorry to have to be so direct — being deceitful by negligence or by intent.

    If he — as I gather — is a biologist, he MUST have done enough statistics to be familiar with the issue of obtaining sufficiently large but conveniently small samples; i.e. the issue of the law of large numbers. In the case of Weasel circa 1986, the published 300 samples of letter behaviour are quite enough to show a very strong trend indeed to latch.

    Not only so, but there are two reasonable mechanisms that would explain that: T2 — explicit latching, and T3, implicit.

    On the latter, say 5% probability per letter per member of a generation [ = 0 - 2 changes, typical for 28 letters; 24% being at 0] and a sufficiently large but moderate generation size [so that multiple mutations -- way out in the skirts -- do not show up repeatedly and dominate behaviour] with letter by letter reward of targetting will produce just what is observed beyond reasonable dispute: (i) letters latch on going correct, and (ii) “no change” dominates over 50% of generations. Otherwise, (iii) the convergence to the target would be much faster than the reported 40+ and 60+ generations, which we can presume were for “good” runs. As well, (iv) we may note that 40+ and 60+ as “good” [= fast] runs are very consistent with the median number for latched searches, 98 generations. (Note, too: as pointed out from the previous thread, latching may be IMPLICIT, not just explicit.)
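The implicit mechanism described above (T3) can be simulated directly. The sketch below assumes, as in the comment, a 5% per-letter mutation rate and a population of 50 per generation; these parameters are the commenter's working figures, not values published by Dawkins, so the run lengths and reversion counts are only indicative.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # 26 capitals plus space

def weasel_run(mut_rate=0.05, pop_size=50, seed=0):
    """Weasel with per-letter mutation and best-of-population selection,
    but NO explicit latching. Returns (generations, reversions), where a
    reversion is a champion letter going from correct back to incorrect."""
    rng = random.Random(seed)
    champ = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = reversions = 0
    while champ != TARGET:
        generations += 1
        offspring = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                    for c in champ)
            for _ in range(pop_size)
        ]
        champ_next = max(
            offspring,
            key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
        reversions += sum(
            1 for old, new, t in zip(champ, champ_next, TARGET)
            if old == t and new != t)
        champ = champ_next
    return generations, reversions

gens, revs = weasel_run()
print(gens, revs)
```

Selection strongly favors keeping correct letters, so the champion line latches in practice even though nothing in the code locks a letter; whether any reversions appear at all depends on the mutation rate and population size chosen.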

    In short, we have a very good model of what is going on, even on this secondary point.

    On the main issue, it stands clearly demonstrated from Mr Dawkins’ own mouth in BW ch 3 [as cited in his attempted defense by Wiki . . . another inadvertent admission against interest], that Weasel is a question-begging strawman argument that avoids Hoyle’s challenge to get TO shores of realistic functionality before playing with hill-climbing to optimality of function by RV + NS. Citing Wiki from BW ch 3:

    I don’t know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. Let us limit the task facing our monkey somewhat. [Ducks the issue of complexity . . . observed bio-function starts out at FSCI of 600 - 1,000 k bits on DNA alone; config spaces that start at about 10^180,000 cells . . . ] Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence ‘Methinks it is like a weasel’, and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . . [if he has to produce a toy-scale functionality of the phrase above, he is looking at 27^28 ~ 10^40 configs; which, as the EIL page on Weasel shows, effectively leads to endless non-functional wandering around in the letter space . . . i.e. Dawkins credibly KNOWS that the real task is hopeless and, instead of re-thinking, sets it to one side, begs the question posed by Hoyle and sets up a strawman instead]

    We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before … it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. The computer examines the mutant nonsense phrases [non-functional configs are rewarded by passing on to the next manufactured generation of lemons], the ‘progeny’ of the original phrase, and chooses [i.e. design and decision] the one which, however slightly, most resembles the target phrase, [non-functional increments in proximity to target down to one letter are rewarded] METHINKS IT IS LIKE A WEASEL . . . .

    The exact time taken by the computer to reach the target doesn’t matter. [Oh yes it does! A realistic toy search is infeasible. How much more so the real challenge that Hoyle posed.] If you want to know, it completed the whole exercise for me, the first time, while I was out to lunch. It took about half an hour. (Computer enthusiasts may think this unduly slow. The reason is that the program was written in BASIC, a sort of computer baby-talk. When I rewrote it in Pascal, it took 11 seconds.) Computers are a bit faster at this kind of thing than monkeys, but the difference really isn’t significant. What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase [telling word, again!] if it were forced to use the other procedure of single-step selection [i.e. he knows he is ducking the challenge of threshold of complexity to get to initial function -- you have to land on the beach of an island of function before you can climb its hills to optimal performance]: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed. [translation: a realistic case being probabilistically implausible, I have diverted to a simplistic case that seemingly supports my view, instead of revising my thoughts in light of what the probabilities are trying to tell me . . . ]
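The bracketed arithmetic checks out: for a 27-symbol alphabet (26 capitals plus space) and 28 positions, the single-step search space is

```python
# 27 symbols, 28 positions: the space a single-step (purely random)
# search must cover to hit the target phrase.
configs = 27 ** 28
print(f"{configs:.2e}")   # about 1.2e+40, matching the ~10^40 figure above
```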

    In short, Weasel is plainly and self-confessedly a designed, targetted search that avoids the issue of complexity of first function in bio-systems, and so is precisely not a BLIND watchmaker in action.

    All attempts to “justify” Weasel therefore amount to so much kicking up of sand into the eyes of onlookers, and to then leading naively trusting, blinded onlookers away from the truth. A more honest response would be to acknowledge the truth about how misleading an icon of evolution Weasel and its kin are, and to frankly apologise for having misled the public.

    That such is plainly not forthcoming is all too telling on the utter bankruptcy — intellectual and moral — of the current state of the evolutionary materialist paradigm.

    So, onlookers, let us draw our own conclusions for ourselves, and act in the defense of our true interests, before it is too sadly late for our civilisation.

    GEM of TKI

    PS: I have specifically requested that my proper name not be used online, to avoid spam. Thankfully, Mr Elsberry et al are in no position to damage my career by trying an “outing” game. And that is their plain and routine intent, let us not fool ourselves. SHAMELESS!

  136. I have specifically requested that my proper name not be used online, to avoid spam. Thankfully, Mr Elsberry et al are in no position to damage my career by trying an “outing” game. And that is their plain and routine intent, let us not fool ourselves. SHAMELESS!

    Haven’t we been through this before? If you are concerned about your name being on the internet, perhaps you should consider that it is plastered all over your always linked.

  137. kairosfocus @ 135

    In short, Weasel is plainly and self-confessedly a designed, targetted search that avoids the issue of complexity of first function in bio-systems, and so is precisely not a BLIND watchmaker in action.

    How many times does this have to be repeated?

    We have the author’s word that the WEASEL program was written for one purpose and one purpose only: as an illustration of the fact that cumulative selection can reach a target much, much faster than a purely random search.

    Nothing more, nothing less.

    As for the use of your name, you use your own initials in the sig on each of your posts and you yourself have placed your name in the public domain by publishing it on your website. It is absurd to expect others to pretend they don’t know what you have made no attempt to hide.

  138. Seversky:

    And in so writing a few — pardon — weasel words, the major issue over Weasel as yet another misleading icon of evo mat is dodged: the BOOK, The BLIND Watchmaker, is about the idea that we do not need design to get to complex, info-rich bio-functionality. But a chief illustration in Ch 3 is . . . intelligent design at work.

    AND that in a context that Mr Dawkins explicitly confesses that if the realistic functionality threshold (even for a toy of a 28 letter phrase) had been imposed, Weasel could not have worked. So, nonsense — non functional — phrases are rewarded on mere proximity to target.

    Thus, Weasel is precisely a question- begging misleading strawman argument as I have pointed out.

    Weasel is not educational, it is indoctrinational and obfuscatory, begging the central question it suggests that it answers.

    Do you begin to understand why I despise rhetorical games?

    GEM of TKI

    PS: YOU KNOW THAT THERE IS A BIG DIFFERENCE BETWEEN INITIALS AND A NAME. THERE IS UTTERLY NO EXCUSE FOR THE SORT OF ATTEMPTED RETALIATORY OUTING AGAINST WHISTLEBLOWING THAT ANTIEVO AND MR ELSBERRY HAVE INDULGED, PERIOD. (And, as a long-time participant here, you KNOW or should know the reason I have asked that my name not be used: it causes spikes in email spamming. That my name may be accessed by search in a low traffic site of the Internet [for purposes of responsibility over authorship] is no excuse for putting it up in a high traffic site to be accessible without looking for it, and in effect inviting all and sundry to launch spamming attacks or worse. And, of course, there is the obvious issue that evo mat advocates routinely resort to outing and persecution. And, FYI, a personal name is NOT public domain information: neither you nor anyone else has a right to take it and use it as if you were me, regardless of how you come by it, or to subject me to harassment or worse — that is called identity theft, sir, or worse than mere identity theft. So, kindly stop pretending to innocence and stop trying at rhetorical games to “justify” the indefensible. That is enabling behaviour. SHAME ON YOU!)

  139. 139
    George L Farquhar

    the reason I have asked that my name not be used: it causes spikes in email spamming.

    Could you explain exactly how this happens?

    FYI a personal name is NOT public domain information:

    What case law are you citing here?

    neither you nor anyone else has a right to take it and use it as if you were me

    Nobody that I can see is “pretending to be you”.
    In comment 135 you said

    Mr Elsberry is grossly wrong

    You used his name. He used your name. Why is it identity theft when he uses it but OK when you do?

    So, kindly stop pretending to innocence and stop trying at rhetorical games to “justify” the indefensible.

    If you explain how simply using your name is indefensible (when you feel free to use other people’s names!) and how it causes more spam (exactly how, not just a claim that it does), perhaps you’ll get your way.

    Stamping your feet alone will never do so!

  140. Now that I have a copy of “The Blind Watchmaker” and have read the relevant chapters, I can say the following in full confidence:

    The way Dawkins describes cumulative selection and the way he uses it in the “weasel” program, cumulative selection is a ratcheting process.

    And it is ratcheting towards a specified target.

  141. 141

    kairosfocus,

    I’m not going to deal with the proper name issue (which I think is your favorite red herring) until you deal with Mr. Elsberry’s math.

    As for the rest:

    4 –> We will readily see from such a thought expt., that reasonably uncorrelated sampling [i.e. credibly unbiased sampling] can give a reasonably good picture of a population or a trend, surprisingly quickly.

    The runs you have seen are not, and have never been presented as, an unbiased sampling. They have been the best fit from an undetermined number of mutated versions. My example of the asexually reproducing weasel is more like the Weasel program than your appeal to dice-throwing.

  142. 142
    George L Farquhar

    hi,
    There seems to be a bug in the board: I post but nothing appears, and there is no message about moderation either.

  143. And, FYI, a personal name is NOT public domain information: neither you nor anyone else has a right to take it and use it as if you were me, regardless of how you come by it, or to subject me to harassment or worse — that is called identity theft, sir, or worse than mere identity theft.

    I took a peek over there. No one is pretending to be you.

  144. 144
    George L Farquhar

    Joseph

    The way Dawkins describes cumulative selection and the way he uses it in the “weasel” program, cumulative selection is a ratcheting process.

    As you have the book at hand, could you provide a quote that supports this?

  145. 145

    kairosfocus,

    On second thought, I’ll deal with the proper name issue now. There’s a delay in my postings. (I have been in moderation for a testy comment I made to someone else. I hope I’ll get removed from moderation soon.)

    With regard to ATBC: It’s. Not. My. Board. I’m not going to ask Wes to do anything. I have never thought the proper name issue amounted to beans. I did remove that proper name from a forum I controlled, but that was as a favor to the person who asked. I’m certainly not going to ask another person to change the practice on his board when I find the demand silly.

    Further, your justification does not hold water:

    the reason I have asked that my name not be used: it causes spikes in email spamming

    Wow: powerful spammers. When I send an email to a proper name, my emails get bounced back, unless I also use something called a . . . what’s the term . . . oh yes: an email address. Emails go to addresses, not names.

    Further, most spam addresses are gathered by bots, not readers. They look for address-type things (like the @ sign), not for proper name-type things, which are useless.

    Finally, spam filters are available freely in most email programs and are very effective. I may get hundreds of spam messages every day; I see almost none of them, however, because my spam filter works. You should try one!

  146. 146

    Moderators, I posted this in the wrong thread: It should go here. Could you kindly delete that version and post this one? For once I am happy to have a post moderated.

    I am reproducing a section of Wes’s refutation of kf’s response, with the names changed:

    [kairosfocus] is plainly confused about a great many things.

    His own choice of analogy provides a great example of an IDC advocate shooting himself in the foot. The dart and chart situation that would have some slight analogy with “weasel” isn’t one where all the dart-holes count; it would instead be like dropping the dart N times, and then only recording where, say, the right-most dart-hole of those N holes occurred, and shifting the chart to center the drop-point on that hole for the next set of N dart drops. Obviously, the larger N is, the less likely the chart will be moved to the right and not to the left for re-positioning. [kairosfocus]‘ assertion is like saying that we should expect one or more right-ward shifts of the chart during a process where N=50 dart drops, and the chart is re-centered on the right-most dart-hole after each N drops some modest number of times. [kairosfocus] is obviously clueless; the “law of large numbers” is a perfect counter to his argument, not a vindication of it.

    300 characters in a printout of the best candidates per selected generations are not subject to change, not unless one is applying a 100% mutation rate. For a reasonable mutation rate of 4%, one would be greatly surprised if just 24 characters in so many ordinary, not best, candidates had actually undergone change of any sort in the unknown derivation from their parent strings.

    But the whopper in [kairosfocus]‘ maunderings is the bland assertion that “we have reason to believe” [the sampled best candidate strings from various generations] “are credibly uncorrelated to the system”. No, [kairosfocus], we have a tremendous expectation that those results are “correlated to the system”: they are the result of a selective process from a population of candidate strings, taken from a pool of N such candidates at each generation. And the odds for very reasonable population sizes and mutation rates are that best candidates from N alternatives are strongly in favor of keeping “correct” bases from the parent string unaltered. I specifically analyzed the situation, provided the equation that delivers the probability at issue, and [kairosfocus] cannot be bothered to address those facts.
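The scale of the selection bias Wes describes can be sketched numerically. The 4% mutation rate and the population of 50 are the figures assumed in this discussion, and the calculation below ignores compensating changes at other positions, so it illustrates the kind of probability involved rather than reproducing his exact equation:

```python
mu = 0.04                 # assumed per-letter mutation rate
pop = 50                  # assumed candidates per generation

# Chance that one copy turns a given correct letter incorrect
# (a mutated position redraws from 27 symbols, so 26/27 of redraws miss).
flip = mu * 26 / 27

# Chance that EVERY candidate in a generation loses that letter, which is
# roughly what it would take to force the best candidate to lose it:
all_flip = flip ** pop
print(all_flip)           # astronomically small

# Expected raw character changes among ~300 printed characters, before
# selection bias is even considered:
print(mu * 300)           # about 12
```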

  147. Joseph, the issue has been “latching,” not “ratcheting.”

    And kairosfocus’s point that the program is intelligently designed is pointless – so is every computer program ever written. Does this mean that computer simulations can never be used to illustrate or explore some aspect of the real world?

  148. kairosfocus @ 138
    Taking your points in turn…

    And in so writing a few — pardon — weasel words — the major issue ofer Weasel as yet another a misleading icon of ecvo mat is dodged: the BOOK, the BLIND Watchmaker is about the idea that we do not need design to get to complex info-rich bio-functionality. But a chief illustration in Ch 3 is . . . intelligent design at work.

    The Blind Watchmaker, as you say, is the book and discusses many aspects of Dawkins’s major thesis. The WEASEL program is not the book. It is an illustration of one aspect of what is discussed in the book, namely, the power of cumulative selection. Whether or not you agree with the main thrust of the argument in TBW should make no difference to whether or not the WEASEL program is an effective illustration of just one point in the book.

    PS: YOU KNOW THAT THERE IS A BIG DIFFERENCE BETWEEN INITIALS AND A NAME. THERE IS UTTERLY NO EXCUSE FOR THE SORT OF ATTEMPTED RETALIATORY OUTING AGAINST WHISTELBLOWING THAT ANTIEVO AND MR ELSBERRY HAVE INDULGED, PERIOD.

    Yes, there is a difference between your initials and your name. They are three letters which, in the search for your full name, are latched in place.

    That my name may be accessed by search in a low traffic site of the Internet [for purposes of responsibility over authorship] is no excuse for putting it up in a high traffic site to be accessible without looking for it, and in effect inviting all and sundry to launch spamming attacks or worse.

    An open website is public domain regardless of the volume of traffic.

    And, FYI, a personal name is NOT public domain information: neither you nor anyone else has a right to take it and use it as if you were me, regardless of how you come by it, or to subject me to harassment or worse — that is called identity theft, sir, or worse than mere identity theft.

    It would only be identity theft if someone were to use your name and pretend to be you for personal gain. No one here has done that as far as I am aware. Simply using your own name to refer to you is not an offense, nor is it even a breach of your privacy. Your name is out there, as I found in a simple Google search which retrieved this article by one Ian Boyne in the Jamaica Gleaner, which comments on you as follows:

    But G***** M*******, well-meaning but with a surfeit of zeal over knowledge, implies that there is a necessary conflation between theology and action.

    Note that Ian Boyne did not asterisk out your name, I did.

  149. JT (132), I commend you on not stopping with your first impression of what you thought I was saying. You continued to think about it and eventually arrived at a more accurate understanding. I certainly accept your apology.

    Yes, if any of these programs were falling under the combinatorial explosion, they simply wouldn’t work. See my comments about random typing and Shakespeare’s sonnet, a point that Dembski has made.

    In principle, there can be legitimate ways to evolve incrementally with function based selection giving a huge boost over random search, though unsurprisingly this is subject to conditions, restrictions, requirements and constraints. I know about software and I know there are also illegitimate ways to rig the system so that one gets an unwarranted advantage, i.e. one that steps outside the limitations one supposedly is modeling.

    So the question is, does a program like Zachriel play within justifiable bounds, or does it step across a line and presumptuously take an unjustified advantage?

    Just to be clear, I’ve never supposed that Zachriel is driven by proximity to a single fixed target. I know that Zachriel employs (among other things) a dictionary of possible selectable words, not just a single fixed string. Nor is that distinction (one vs. many) a necessary part of the issue I am raising. I am primarily talking about something else, namely the War Games fallacy, as I explained back in post 75.

    Please take a look again at post 75 and let me know what part of that post seemed unclear. Thanks.

  150. In contrast with the illusions of evolutionary word programs, here are two positive examples, one actual and one hypothetical, that do not commit the same errors.

    Some have used genetic algorithms to explore possible new designs for improved antennas. Starting with one or more initial functional designs, the ability to modify these in various ways (perhaps change the lengths or angles of the arms, add forks, etc.), and the ability to analyze the expected effectiveness of the results, such an algorithm can explore the space of possible antennas defined by the fitness function it has been given. It can look for optimal points as defined by that function.

    Notice that it is doing so by evaluating function at each stage (albeit as defined by the provided function and within the limits of the modifications it can make).
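    The loop described here can be sketched in a few lines. Everything in the sketch is invented for illustration: the “arm lengths” representation and the quadratic fitness function are stand-ins, not any real antenna model, but the loop itself is the pattern described (modify candidates, evaluate each by function, keep the best).

```python
import random

def fitness(arms):
    # Invented stand-in for an antenna analysis: reward each arm length
    # near a made-up resonant value (0.25), lightly penalize total wire used.
    return -sum((a - 0.25) ** 2 for a in arms) - 0.01 * sum(arms)

def mutate(arms, sigma=0.02):
    # Blind modification: jitter each arm length slightly, keep non-negative.
    return [max(0.0, a + random.gauss(0, sigma)) for a in arms]

def evolve(generations=300, pop_size=50, n_arms=4):
    # Start from random candidates, keep the fittest, breed the next
    # generation from it (an elitist 1+lambda evolutionary loop).
    pop = [[random.uniform(0.0, 1.0) for _ in range(n_arms)]
           for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)          # selection by evaluated function
        pop = [best] + [mutate(best) for _ in range(pop_size - 1)]
    return max(pop, key=fitness)

best = evolve()
```

    The key point the sketch makes concrete: the programmer specifies the representation and the fitness function, and the loop finds high points in the landscape those choices define, without the winning design being written in anywhere.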

    One can also design software to compete in games. Though I cannot immediately provide a link, I believe I’ve seen games based on battles of software tanks. One could make an evolutionary simulation in which one starts with one or more programs for such a software tank, makes truly blind and undirected random modifications to the software, and then sets that generation of tanks into competition. The top surviving tanks are reproduced, potentially with additional modifications.

    Notice that this also would have a genuine evaluation of competitive function / fitness as the criterion for preservation and propagation.

    Typically, word games such as Zachriel or WEASEL are illusions in that they do not have a legitimate way of defining preferential preservation in terms of current actual effective function.

    Since their basis for selection is not based on current function, even as examples of (non-functional) “cumulative selection” they are irrelevant to discussions of biological evolution.

    Biological evolution simply cannot operate as they do.

  151. hazel,

    Have you ever seen a ratchet in operation?

    It is a LATCHING mechanism that allows for the ratcheting.

  152. George:

    As you have the book at hand, could you provide a quote that supports this?

    Well George, first get yourself a dictionary and look up the word cumulative:

    1 a: made up of accumulated parts b: increasing by successive additions

    There is a big difference, then, between cumulative selection (in which each improvement, however slight, is used as a basis for future building), and single-step selection (in which each new ‘try’ is a fresh one). – Dawkins, TBW, page 49

    That’s ratcheting.

  153. hazel:

    Does this mean that computer simulations can never be used to illustrate or explore some aspect of the real world?

    In order for a computer to simulate anything, the programmer(s) need to have COMPLETE knowledge of what they are trying to simulate.

    For example we can simulate flying in an airplane because we understand flight well enough to do so.

    With biology we don’t even know what is responsible for the development of our eyes/ vision system so we have no idea how to simulate that.

    We have no idea what makes an organism what it is.

    We have no idea how many mutations can accumulate and what that accumulation will do.

    So again we can only simulate that which we understand.

  154. Seversky,

    The problem with cumulative (natural) selection is that it only exists in the head of Dawkins.

    The power of cumulative (natural) selection has NEVER been demonstrated in nature.

    Take away the target and cumulative (natural) selection is nothing more than a blind, random, meandering walk, right off a cliff.

  155. We have COMPLETE knowledge of flying.

    That’s news to me.

    In general, what you say is not true. Every day people program simulations based on mathematical models of real-world phenomena of which we have very incomplete knowledge.

    The computer simulations give us results that we might never have reached without the computational power of the simulation.

    However, and this is critical, we then have to take those results and test them back in the real world to see if our model is sound enough. If the results don’t match the real world, we go back and refine our model.

    To state the obvious, if we had COMPLETE knowledge of something we wouldn’t need to create a simulation.

    Duh.

  156. Joseph,

    I’m sure you’ve seen the example of antenna design by means of evolutionary algorithms.

    http://ti.arc.nasa.gov/project.....ntenna.htm

    Do you agree that:

    1) It works

    2) The process does not involve “ratcheting” toward a target specified beforehand.

    Note: I’m not asking whether you believe that this application accurately reflects what happens in nature.

  157. hazel:

    My bad.

    I should have said all simulations are only as good as our knowledge.

    So given our knowledge of biology, computer simulations of any evolutionary aspect wouldn’t be very impressive.

  158. madsen,

    1) Yes it works

    2) Target 1 – optimize the Yagi-Uda. No idea how that was handled, but they must have tested the mutants against something. And yes, I do understand that in antenna design sometimes one step back – a tweak one way – is required to get a bigger step forward – another tweak or two someplace else.

    target 2 – more optimizing of a design

    But anyway, what is your point?

    This thread is about Dawkins and weasel.

    I know people can write programs that can do things.

    What no one can demonstrate is nature, operating freely doing something like that.

  159. Thanks, Joseph, and I agree with you that real biology at the genetic level is too difficult for comprehensive, meaningful simulations both due to the huge amount of factors involved and to our lack of understanding.

    That doesn’t mean that someone hasn’t, or can’t, write simulations of some aspects of genetic biology. As I explained before, all simulations work by modeling some simplified version of reality, after which one goes back and tests the results of the simulation against reality. If the results are verified empirically, that gives one confidence that the model has some merit, so that one can refine it a bit. If the simulation results don’t match reality, then you change the model until they do.

    Dawkins “weasel” and similar programs are meant to test and illustrate a point. They are not meant to model real biology. Dawkins knows that, I’m sure.

  160. Joseph,

    But anyway, what is your point?

    This thread is about Dawkins and weasel.

    I know people can write programs that can do things.

    What no one can demonstrate is nature, operating freely doing something like that.

    Did you look at the ST5 antenna? There was no ratcheting toward a pre-specified target. Rather, just mutation and selection were involved.

    Furthermore, the two (quite different) best resulting designs were superior to human-designed antennas in certain respects.

    The reason I am bringing this point up in the weasel thread is that I want to understand more about this notion of ratcheting toward a specified target you have brought up. In particular, I would like to see if we can at least agree that evolutionary algorithms work even in the absence of such targets, the ST5 antenna being one example.

  161. madsen (and Joseph), first thanks for bringing up the antenna design example I was just alluding to.

    As a quick comment, these evolutionary algorithms work within the limits of

    a) the modification/mutations allowed by the programmer, and
    b) the fitness function defined by the programmer to evaluate each of the resulting candidate designs.

    This combination defines a landscape with peaks and valleys. The programmer doesn’t know where the peaks are in advance. (The working of the algorithm is essentially a search to find the highest points in that defined landscape.) However, everything they have specified does indeed determine where those peaks are.

    Whether you want to call that a “target specified beforehand” or not is a question of defining terms. But they have specified a landscape and at best the algorithm only finds the peaks (maxima) or else finds the low points (minima) that are determined by the model implemented.

  162. Hi ericB,

    Whether you want to call that a “target specified beforehand” or not is a question of defining terms. But they have specified a landscape and at best the algorithm only finds the peaks (maxima) or else finds the low points (minima) that are determined by the model implemented.

    Of course I agree that the fitness function determines any maxima and minima in the fitness landscape. But the programmer doesn’t actually have to know the locations of these points ahead of time for the algorithm to work. I think we both would agree to that.

    However, in view of this statement of Joseph’s:

    The way Dawkins describes cumulative selection and the way he uses it in the “weasel” program, cumulative selection is a ratcheting process.

    And it is ratcheting towards a specified target.

    I wonder if he accepts that EA’s can solve problems without the programmer explicitly telling the computer what the answer is beforehand.

  163. Also, to clarify and be fair to Joseph, I do realize he was referring to natural selection and the weasel program, and not to EA’s in that quote.

  164. Onlookers:

    It is now sadly evident that the much over-used evolutionary materialist advocates’ selective hyperskepticism threadjacking tactic of red herrings led out to ad hominem soaked strawmen and onward to ignition that clouds and poisons the atmosphere for serious discussion has reached the stage of turnabout accusations in this thread.

    Such distractive tactics also inadvertently expose just how threadbare the evo mat case is.

    In the case of Weasel, it is now abundantly clear that it uses a targeted, proximity-based search strategy that rewards non-functional configs on proximity without reference to functionality. So, it is yet another misleading icon of evolution.

    That is so whether or no it explicitly or implicitly latches or near-latches, and it is so in more modern Genetic Algorithm search simulations that are just as much characterised by active information fed in by designers.

    Further, as it is plain that the evo mat advocates here and at the Anti Evo site are intent on personalities rather than the merits, no further reasonable discussion with them is possible, sadly.

    (And if these do not know the implications of trumpeting personal information to one and all online and/or linking one side of a dispute in which the newspaper in question had to publish a corrective, they are a lot more naive than we have any reason to believe. So, let us learn from their behaviour the dangers posed by their agenda and its underlying undermining of intellectual and ethical responsibility to our civilisation. Then, let us act resolutely in the defense of that which we hold dear.)

    Let us now focus on correcting the misleading arguments they seem to be ever so fond of, as that will help us in defending ourselves from the epidemic of selective hyperskepticism that has so many influenced by evolutionary materialism firmly in its grim jaws.

    1] On the law of large numbers and practical identification of trends

    You will observe that, above, I have repeatedly adverted to a key sampling theory concept, the law of large numbers.

    In effect, as a rule, once we have sufficient samples of a population in hand and no reason to suspect undue bias, the trends of the population will come out in the sample, with pretty fair reliability. A typical threshold sample size for a linear [and that can be curvilinear too . . . ] trend is about 5 – 9 points [recall from school lab exercises] and for more stochastic situations 20 – 30 or so. Once you are much under that sort of range, you have to begin to resort to more and more exotic testing procedures. Of course, as samples cost time, effort and money, there is also a practical upper limit.

    In the case in view, we are dealing with a sample of scale 300 or so, with a trend that comes out in about 200 cases, with NO counter-instances.

    That’s a strong trend in anybody’s book.

    AND, as will be discussed a bit more below, we have good models for why that is so.

    2] Mathematical models and simulations vs thought experiments

    Mathematical and/or computer simulation exercises are interesting, but have this critical flaw: they are no better than their underlying assumptions and dynamical/ logical structure. So, GIGO — garbage in, garbage out.

    That is why physicists tend to put a more serious weight on experiments and thought experiments where actually carried out experiments are not directly feasible (or where the thought exercise is sufficient to make the point).

    So, in the thread above, I gave the cases:

    (i) of a loaded die that 2/3 of the time comes up 6s on a run of 300 tosses (this, to show that probabilistic trends can strongly come out in reasonable sized samples), and

    (ii) dropping darts to scatter holes at random across a bell-chart divided into stripes on a floor (to show how, after a reasonable number of samples, trends become more and more evident, and the skirts of the distribution begin to show up as enough sample points make low-probability outcomes more likely to become manifest).
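    Case (i) is easy to check numerically. A quick sketch, assuming one arbitrary way of loading the die (6 with probability 2/3, the remaining 1/3 split evenly among the other five faces):

```python
import random

def toss_loaded_die():
    # Loaded so that 6 comes up 2/3 of the time; the other five faces
    # share the remaining 1/3 equally (an arbitrary choice of load).
    return 6 if random.random() < 2 / 3 else random.randint(1, 5)

def count_sixes(n_tosses=300):
    # Count how many sixes appear in a run of n_tosses.
    return sum(toss_loaded_die() == 6 for _ in range(n_tosses))

# A fair die would average 50 sixes in 300 tosses (std. dev. about 6.5);
# the loaded die averages about 200, so the bias stands out in the sample
# far beyond ordinary sampling noise.
```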

    The relevance of such to the case in view in the original post, should be plain to all who have ever had much to do with real world data collection and analysis for serious decision making or for scientific investigation across time. A LOT of real science has been based on data sets of scale comparable to — or a lot smaller than — the one above in the original post, and many crucial decisions have had to be made on that scope of data or less. And, the decisions or inferences were confidently and often correctly made, too.

    The thought exercises bring out what is going on pretty well. Namely, selective hyperskepticism in the face of strong evidence that the Weasel program circa 1986 explicitly or implicitly latched [or, if you want, quasi-latched], as it ratcheted its way to the target by rewarding mere proximity in the teeth of non-function.

    That ratcheting off proximity to target in the teeth of non-function is telling, especially in light of Mr Dawkins’ description of what he did and why, above.

    Weasel is utterly irrelevant to, strawmanises, and begs the question of the need to generate functionally specific complex information to get TO the shores of islands of functionality. Instead of addressing the blatant fact that even toy-scope modest function is hard to find [1 in 10^40 is not a small searchable fraction, never mind 10^180,000], it starts on the rhetorically convenient assumption that you are at the shores of some minimal functionality and can find easy steps all the way to the peaks of optimal function.

    But even that nicely stepped path to the peak — post Behe’s Edge of Evo — is a serious question mark!

    What is clear is that Weasel is plainly NOT the work of the BLIND watchmaker of the title of Mr Dawkins’ 1986 book.

    3] But Weasel (circa 1986) does not EXPLICITLY latch!!!!

    On Mr Dawkins’ say-so as reported by Mr Elsberry and I believe Mr Kellogg above, we have accepted that; never mind that it is the most natural explanation otherwise of the observed 1986 behaviour of Weasel as reported and published by Mr Dawkins. [Which we can very safely assume was representative of the result of what he then thought of as "good" runs to target.]

    But, we have also shown that:

    1 –> At 5% mutation rate per letter per string per generation, typical strings will show 0 [~ 24% of time], 1 or maybe 2 mutations, with 3 or more being out in the low probability tail.

    2 –> Weasel, on “good” published runs circa 1986, runs to target in 40+ and 60+ generations, which means that no-change is winning some 50% of the time.

    3 –> that is consistent with generations of sufficiently large scope that 0-change strings show up reliably, but 2 and more are relatively rare. Otherwise, extreme tail end multiple mutation cases with correct letters would dominate the closest to target filter and runs to target would be fast indeed.

    4 –> This is also consistent with the emergence of implicit [quasi-]latching. That is, cases in a generation where a letter reverts and another letter advances, so that the resulting flicked-back population member becomes the champion of the generation, are very rare.

    5 –> As a result, we see steady ratcheting of progress to target, with letters that make an advance preserved from one generation to the next, i.e. latched or effectively latched.

    6 –> And, most importantly, since there is no realistic functionality requisite, there is no material resemblance to a process that allegedly uses chance variation plus non foresighted natural selection to move towards function and by hill climbing thence optimal function.
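    The per-string figures in point 1 follow from the binomial distribution: with a 5% per-letter rate over 28 letters, the chance of exactly k changed letters is C(28,k)(0.05)^k(0.95)^(28-k). A quick check:

```python
from math import comb

RATE, LETTERS = 0.05, 28  # 5% chance of change per letter, 28-letter string

def p_mutations(k):
    # Binomial probability that exactly k of the 28 letters change
    # in one mutant copy of the string.
    return comb(LETTERS, k) * RATE ** k * (1 - RATE) ** (LETTERS - k)

# p_mutations(0) comes out near 0.24, matching the ~24% no-change figure;
# 0, 1 and 2 changes dominate, with 3 or more falling off into the tail.
```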

    ++++++++++++++

    The matter is plainly over on the merits, and we need not attend further to selective hyperskepticism as it inadvertently and publicly reduces itself to self-referential absurdity and incoherence, also exposing the underlying intellectual and ethical irresponsibility that lurks in evolutionary materialism. Indeed, that has lurked in materialism ever since Lucretius’ day, some 2,000 years ago.

    GEM of TKI

    ____________

    PS Re Mr Seversky — if he had followed up just a little more at the Gleaner’s site he would have seen that the Gleaner — no friend of mine [that formerly great newspaper has long since lost its credibility (especially on its commentary pages), sadly; so much so that it is one of the motivators of my advice on spin tactics in the media here] — was led to publish a corrective article by me shortly thereafter. [And, yes, I know the corrective appears above my name. I am forced to do that, given the abuse of my name by Mr Elsberry. Hopefully the spam surge will be short enough to be tolerable. Why spamming seems connected to my name appearing in fairly high traffic locations, I know not, save that there are web crawling bots out there that will search for information on names. I hope the other Caribbean person with that name will not suffer unduly because of this. In any case, it is a matter of basic courtesy to treat people online in light of the handles they use.]

    (In fact the “corrective” was not a “response” but a PRE-sponse that anticipated what Mr Boyne said, but was [badly] edited by the Gleaner to come across as a response to Mr Boyne’s article, without notification to the reader. See what I am speaking about on loss of credibility, Mr Seversky? Did you check out the quality of the SOURCE and the implications of the context before you — and probably others at Anti Evo — pounced on the convenient quip? Whichever leg of that dilemma you take, the point on intellectual irresponsibility is underscored. In short: thank you for notifying us of your ill-informed anti-Christian bigotry, so we know what to make of your further comments here or elsewhere.)

    PPS: Onlookers, FYI: Mr Boyne, in the series of articles in question, was accusing evangelical Christians in Jamaica, utterly unjustly, of being potential theocratic tyrants willing to set out on butchery of those who dared disagree with them, much as has become a common blood slander among radical secularists in Europe and North America. For instance, here is a gem from Boyne: “[Evangelicals in Jamaica etc are] prone to bigotry, intolerance and the desire to impose their will on others just as the Islamic militants.”

    Does Mr Seversky also wish to agree with Mr Boyne’s assessment on that claim? On what evidence, please?

    In fact — and pointing to this now increasingly inconvenient fact is what provoked the attack by Mr Boyne in the first place — there has in recent decades been a studious ignoring and/or suppression of the contributions to the rise of modern liberty and democracy by those coming from the Biblical framework, not to mention the religious background of several crucial heroes of Jamaica, or even that of the writer of our National Anthem. [Onlookers, Cf here for a documentation of this point.]

    In short, Seversky (sadly, predictably) has — yet again — resorted to adverse personal commentary and selective citation or linking, without doing due diligence to first find and then present a true, well-warranted and fair view of the truth. Typical of selective hyperskepticism at work.

  165. PPPS: I am further forced to link two blog remarks on the Boyne affair, here and here [the latter being the rebuttal that I actually submitted to the Gleaner]. I trust that fair minded people will see for themselves the sort of anti-Christian bigotry we are dealing with, and will draw prudent conclusions. Again, I ask that my HANDLE be used in referring to me, or even my initials.

  166. FOOTNOTE: While I am at it, onlookers need to be pointed to this blog comment on the importance of public standards of decency, which also gives highly relevant context on why Evangelicals stand up for modesty in dress [one of Mr Boyne's points of issue in the series that provoked my response and led to his hit piece that Severski so unwisely decided to cite as though it were the unquestionable truth].

  167. madsen:

    Did you look at the ST5 antenna? There was no ratcheting toward a pre-specified target. Rather, just mutation and selection were involved.

    There was a pre-specified target. And as I said I don’t think ratcheting was involved.

    Furthermore, the two (quite different) best resulting designs were superior to human-designed antennas in certain respects.

    You mean unaided human design. Those two antennas were still designed by humans.

    The reason I am bringing this point up in the weasel thread is that I want to understand more about this notion of ratcheting toward a specified target you have brought up.

    I was talking ONLY about ONE SPECIFIC example- the example that is the topic of this thread.

    In particular, I would like to see if we can at least agree that evolutionary algorithms work even in the absence of such targets, the ST5 antenna being one example.

    But there was a target- an antenna that could do X.

    However that does not mean ratcheting was involved.

  168. hazel:

    That doesn’t mean that someone hasn’t, or can’t, write simulations of some aspects of genetic biology.

    Like what, for example?

  169. madsen:

    I wonder if he accepts that EA’s can solve problems without the programmer explicitly telling the computer what the answer is beforehand.

    But EAs are written to solve specific problems.

    Do you think that someone can write an EA that is not specified to solve a problem and somehow it will evolve the capability to do so?

    I would love to see that.

  170. kairosfocus [165],

    Your first-parody seems to be a delightful self-parody of your own excesses. Good for you! I had thought your sense of humor was utterly absent.

    I’m going to provide translations of a couple of other paragraphs here for onlookers who may not see them in terms of the long-term debate:

    In the case of Weasel, it is now abundantly clear that it uses a targetted, proximity based search strategy that rewards non-functional configs on proximity without reference to functionality. So, it is yet another misleading icon of evolution.

    Translation: I agree that Weasel works as Dawkins always said it did: as “a bit of a cheat.”

    That is so whether or no it explicitly or implicitly latches or near-latches, and it is so in more modern Genetic Algorithm search simulations that are just as much characterised by active information fed in by designers.

    Translation: While I, kairosfocus, am not conceding error on the latching issue, I insist that, if I were wrong, it wouldn’t matter.

    Why you spent so much time insisting on your case for an issue that you now say doesn’t matter in the least baffles me. All you’ve done is go back to the notion that it’s a targeted search — which has never been denied.

  171. Correction: for “first-parody” read “first paragraph.”

  172. Joseph,

    But EAs are written to solve specific problems.

    Do you think that someone can write an EA that is not specified to solve a problem and somehow it will evolve the capability to do so?

    I would love to see that.

    I agree that GA’s are written to solve specific problems. I’m not claiming anything about their ability to evolve any capabilities.

    It is now sadly evident that the much over-used evolutionary materialist advocates’ selective hyperskepticism threadjacking tactic of red herrings led out to ad hominem soaked strawmen and onward to ignition that clouds and poisons the atmosphere for serious discussion has reached the stage of turnabout accusations in this thread.

    Say what-y now?

  174. Joseph,

    If by “target”, you mean the region of the search space where the fitness function is greater than some set value, then I don’t disagree.

    The only thing I’m saying is that the two actual winning designs for the ST5 antenna didn’t have to be specified beforehand in order for the algorithm to work. They are novel designs, created by, erm, …, well, I’m not sure who created them. But the point is, you don’t know what you’re going to get until you run the algorithm.

  175. madsen:

    The only thing I’m saying is that the two actual winning designs for the ST5 antenna didn’t have to be specified beforehand in order for the algorithm to work.

    And the only thing I am saying was the target was a pre-specified result.

    They are novel designs, created by, erm, …, well, I’m not sure who created them.

    The person/ people who wrote the code.

    Computers, and therefore computer codes, are TOOLS, nothing more.

    But the point is, you don’t know what you’re going to get until you run the algorithm.

    I am pretty sure they knew they were going to get an antenna that matched the results they were looking for.

  176. madsen,

    Yes or No-

    In “The Blind Watchmaker” the example of cumulative selection known as the “weasel” program, uses a ratcheting process.

  177. David Kellogg,

    It is a targeted search that uses a ratcheting process to reach that target.

    And it would be fair to say the “ratcheting process” part has been denied.

  178. kf,

    In the absence of source code, this whole argument seems to be about whether Dawkins was cheating more than previously admitted, but it’s difficult to come to any solid conclusion while lacking all relevant information. Personally I don’t see the point, since we all know that the program is irrelevant to the main issues for reasons already discussed.

    I’ve quickly scanned this conversation and I’m looking for some clarification in order to bring this thread to an end.

    By “explicit latching” do you mean that the latching mechanism is a function in the program?

    By “implicit latching” do you mean that the runtime parameters of the program were fine-tuned to produce such results but nothing was explicitly hard-coded except that non-functional intermediates are allowed? If so, is this the scenario that Darwinists are preferring on other sites?

  179. Patrick,

    At issue is the paper by Marks and Dembski which uses the “weasel” program as an example of a partitioned search which “locks” the letters into place once they match the target.

    The anti-ID side was quick to jump on the paper because of that.

    However upon further investigation the example they reference is an example of a targeted search which uses a ratcheting process.

    And that is as described by Dawkins.

  180. The issue has been ratcheting vs latching.

    Ratcheting just means, as far as I can tell, that the program incrementally moves from the original random sequence to the “methinks” phrase, which it does. The word was first used by Joseph, and is not part of the original issue. No one denies that the process “ratchets” in this sense.

    Latching, or locking, is a word used by Dembski in the original post, to refer specifically to the idea that once a letter occurs in the correct place, such as the first m, then it never changes in further generations. This is different than ratcheting, and is the issue under discussion.

    Everyone agrees that the 1987 video clearly shows that the program does NOT latch (or lock – they mean the same thing).

    At 164, kairosfocus (GEM), quoting himself writes,

    4 –> This is also consistent with the emergence of implicit [quasi-]latching. That is, cases in a generation where a letter reverts and another letter advances, so that the resulting flicked-back population member becomes the champion of the generation, are very rare.

    5 –> As a result, we see steady ratcheting of progress to target, with letters that make an advance preserved from one generation to the next, i.e. latched or effectively latched.

    It seems clear that GEM has accepted that the latching is NOT built into the program, but is rather quasi or implicit. Since the program does eventually reach the target, all letters eventually become correct and stay correct, even though along the way they may have been correct, then changed to incorrect, then became correct again.

    So the program does not display real latching even though it ratchets its way to the target. Calling what happens quasi or implicit latching obscures rather than clarifies the distinction being argued, which is that the program does not latch (or lock) letters once they are correct.

    That’s my understanding of this long, and long-winded, discussion.

    Hope that helps. :-)

  181. Joseph,

    madsen,

    Yes or No-

    In “The Blind Watchmaker” the example of cumulative selection known as the “weasel” program uses a ratcheting process.

    Please tell me exactly what ratcheting means in the context of the weasel program, and I’ll be happy to answer.

  182. madsen:

    Please tell me exactly what ratcheting means in the context of the weasel program, and I’ll be happy to answer.

    see comment 152

  183. hazel,

    Ratcheting involves “locking into place”.

  184.

    Joseph [177],

    Yes to the first sentence if you agree with what hazel [180] says:

    Ratcheting just means, as far as I can tell, that the program incrementally moves from the original random sequence to the “methinks” phrase, which it does. The word was first used by Joseph, and is not part of the original issue. No one denies that the process “ratchets” in this sense.

    No to your second sentence: the ratcheting process has never been denied by anybody associated with Weasel. Recall that in [140], once you had a copy of TBW, you confirmed that this was the way it worked. So this has never been denied.
    David

  185. Joseph,

    The person/ people who wrote the code [created the novel designs]

    Ok, let’s stipulate that for now. As long as you allow that design can arise through the application of GA’s.

    I am pretty sure they knew they were going to get an antenna that matched the results they were looking for.

    Again, I’m talking about whether or not the particular design of the winning antenna was specified beforehand. The answer is clearly no.

  186. Joseph: Do you see the difference between the two ideas I described above? If by ratcheting you mean latching in the sense that Dembski defines it in the opening post, then the program does NOT ratchet.

    If by ratcheting you mean that EVENTUALLY all letters fall in place, even though a letter might have been in place and later out of place, then the program ratchets without latching.

    Ratcheting (your word) and latching do NOT mean the same thing.

  187. Hi,

    I’ve been lurking in this thread for a long time and I feel a little guilty for not contributing, so I thought I’d add some fuel to the fire.

    Just to be clear, I’m pro-ID, although sometimes I do waver in that. I’ve been trying to account for my own bias while reading this.

    For a while I was leaning towards agreeing with GLF that there is no letter latching going on. At this point though, I think there is implicit letter latching. I don’t think there is any way to prove or disprove it, but based on the output printed from the 1986 run, it seems unlikely that “correct” letters reverted to “incorrect” letters and then back to “correct” letters. I’m not claiming that I “know” that to be true or that my conclusion should sway anybody. That is just my conclusion based on reading the thread and looking at the charts. Different versions of the program, different configurations, or differences in whatever was used to generate randomness could have led to the (apparent) implicit letter latching.

    I don’t think Dawkins is trying to mislead people with this example. My impression is that he’s demonstrating a concept, not trying to say that it proves the concept. Obviously he hopes to persuade people, and it’s likely that some people, especially people who are predisposed to agreeing with him, will read more into the results than is warranted. Based on what I’ve read (in particular, Climbing Mount Improbable), I get the impression that he is willing to assume far too much. For example, when discussing the evolution of the eye, he concentrated on how “easy” it would be to duplicate photoreceptors, while ignoring the much bigger problem of the initial evolution of the photoreceptors. To his credit, he did acknowledge that he was ignoring that issue, but he spent significantly more time showing how the “easy” part could happen.

    This will probably sound like a personal attack, but it’s not meant as one. GLF seems more intent on proving KF wrong than he is on proving his beliefs to be right. I know it could be my own biases that are making me interpret (or misinterpret) what is being said. I try to account for that, but this seems more like a personal grudge than an attempt to show the truth. Even if it could be proven that there was no letter latching, it would have a negligible impact on the overall arguments. When I see so much effort spent on a relatively unimportant point (and I’ve had the same reaction to some pro-ID posters), it makes me question the judgement of the person making the arguments.

    Thanks.

    dl

  188. Joseph,

    In post #152 you define ratcheting as the process described by Dawkins in his book. No locking is described or implied there.

    Yet in post #183, you claim that ratcheting does involve “locking into place”.

    So I have to ask, which definition of “ratcheting” am I to use?

  189. kairosfocus @ 164

    In the case of Weasel, it is now abundantly clear that it uses a targeted, proximity-based search strategy that rewards non-functional configs on proximity without reference to functionality. So, it is yet another misleading icon of evolution.

    In other words, it does no more than Dawkins said it did: illustrates that cumulative selection converges on a target much more quickly than any random search.

    If it was elevated to “an icon of evolution” it was only in the minds of critics who either misunderstood it or saw in it an opportunity to erect a strawman which could be demolished much more easily and dramatically than the real thing.

    As for the reference to the Gleaner article, that was, like the WEASEL program, just an illustration. It demonstrated only that it was easy to find evidence of the use of your proper name in the public domain. I had no knowledge of the nature of the dispute being reported there nor did I comment on it, so your inference about my views on Christianity is presumptuous.

    As to your request that we only use your “handle” or initials in posts to this board I am happy to comply. I will only note, as before, that it is absurd to pretend that your name is not now widely known in these circles. I would also point out that any contributor who wants to preserve anonymity will usually choose a “handle” that has no obvious connection to them, as I have. If someone appends their actual initials to most of their posts and chooses a “handle” that leads straight to their website, we are bound to wonder just how serious they really are about remaining anonymous.

  190. madsen,

    “Locking” taken in context, means there isn’t any backward movement.

    Ratcheting means there isn’t any backward movement.

    The way Dawkins uses and describes the “weasel” program in TBW it is a ratcheting process in every sense of the word.

  191. The person/ people who wrote the code [created the novel designs]

    Ok, let’s stipulate that for now. As long as you allow that design can arise through the application of GA’s.

    Yes, design can arise from a GA if the purpose of the GA is to create a design.

    Again, I’m talking about whether or not the particular design of the winning antenna was specified beforehand. The answer is clearly no.

    It designed the thing it was programmed to design.

    And even though no one knew what it would be beforehand, they knew what it needed to do.

    IOW they didn’t write some general GA and then that GA evolved into an antenna- designing GA for just the application they were looking for.

  192. Good – it’s clear that you use ratcheting to mean the same as latching. That clears that up.

    Now, just to clarify, do you agree that the 1987 video shows non-latching?

  193. hazel:

    If by ratcheting you mean latching in the sense that Dembski defines it in the opening post, then the program does NOT ratchet.

    Perhaps not in the same sense but with exactly the same result.

    IOW there isn’t a coded statement that locks each letter as they appear and match.

    The process itself takes care of that.

    If by ratcheting you mean that EVENTUALLY all letters fall in place,

    No going backward.

    THAT is what I mean by ratcheting.

    The way Dawkins uses and describes it in TBW the output will always be greater than or equal to the input.

    See comment 152

  194. hazel,

    The argument ONLY pertains to the book “The Blind Watchmaker”.

    That is because it was the BOOK that was referenced in the Marks/ Dembski paper.

    The video is irrelevant.

  195. Joseph,

    “Locking” taken in context, means there isn’t any backward movement.

    Ratcheting means there isn’t any backward movement.

    The way Dawkins uses and describes the “weasel” program in TBW it is a ratcheting process in every sense of the word.

    Looking back through the thread and reading your latest comment, I think we actually agree on what is happening with the program—there isn’t anything in it which prohibits backward movement, but the probability of that happening given the parameters involved is very small.

  196.

    Apologies; I meant to post the above in the moderation policy thread. 195 refers not to madsen but to a comment in that thread.

  197. madsen: Again, I’m talking about whether or not the particular design of the winning antenna was specified beforehand. The answer is clearly no.

    Joseph: It designed the thing it was programmed to design.

    And even though no one knew what it would be beforehand, they knew what it needed to do.

    One can say, more specifically, that they predetermined the process in two ways.

    1. By the way they coded the modification / “mutation” software they controlled and defined the set of possible candidates that could be considered. (This is not by specifying a list, but rather by specifying the rules of construction, composition, variation.) The algorithm could not consider any kind of candidate design that their own rules did not allow for. This is necessary, since every candidate design must be of a kind that their fitness software will both understand and know how to score.

    2. By the way they coded the fitness function, they also predetermine the manner in which competing candidate designs will be scored and compared. They have predefined what they want for specifying which design is “better.”

    I think we all agree that they don’t know in advance which design will win. They don’t even have a list of all the possible designs. (This could be prohibitively large. The search algorithm doesn’t even consider all possible candidates. Its job is to try to efficiently seek out the optimal ones.)

    On the other hand, it cannot be that the algorithm considers designs outside of what they have expressly allowed for. It could not decide, for example, to try changing something that the programmers did not anticipate changing, e.g. changing to a different material if the program wasn’t expressly designed to search among and evaluate different materials.

    In short, genetic algorithms are simply tools for quickly searching very large sets of possibilities, specified and bounded by the programmers, for candidates that maximize (or minimize) the conditions the programmers have set.
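    The two kinds of predetermination described above can be seen in a toy sketch. Everything here is hypothetical illustration, not the antenna team's actual code: the representation and mutation rule (point 1) bound the candidate set, and the fitness function (point 2) defines "better", yet the programmers never pick the winning design.

```python
import random

# Hypothetical toy GA. A "design" is 5 numbers from 1..10. The mutation
# rule means no candidate outside this representation can ever be
# considered, and the fitness function fixes what counts as "better".
# The winner is not specified in advance, but it can only come from the
# space these two programmer choices allow.
GENES = list(range(1, 11))

def random_design():
    return [random.choice(GENES) for _ in range(5)]

def mutate(design):
    child = design[:]
    child[random.randrange(len(child))] = random.choice(GENES)  # stays within GENES
    return child

def fitness(design):
    return sum(design)  # programmer-defined stand-in objective, e.g. "antenna gain"

random.seed(0)  # arbitrary fixed seed, for a reproducible run
best = random_design()
for _ in range(1000):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(best, fitness(best))
```

    The search finds a high-fitness design nobody wrote down beforehand, but it can only ever be a list of five numbers from 1 to 10 scored by this fitness function.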

  198. ericB,

    I think we all agree that they don’t know in advance which design will win. They don’t even have a list of all the possible designs. (This could be prohibitively large. The search algorithm doesn’t even consider all possible candidates. Its job is to try to efficiently seek out the optimal ones.)

    I agree with everything you have said, and the above paragraph is the point I was trying to make.

  199. OK, I’ve gotten interested in the Weasel problem. Let me summarize a bit, and then show some actual math that will help clarify things, I think.

    1. The key issue is whether the program uses LATCHING (or locking). I take “latching” to mean that once a correct letter appears in the correct slot, it can NOT mutate again, and will therefore stay correct for the duration of the program.

    2. Everyone agrees that the 1987 video clearly shows that the program does NOT use latching: some letters which are correct in one generation are later not correct.

    As Dembski says in the opening post, “There [in the film] you see that his WEASEL program does a proximity search without locking (letters in the target sequence appear, disappear, and then reappear).”

    3. The question is whether he used the same program when he wrote the Blind Watchmaker (BWM). In the book he shows examples of every tenth generation of two runs, and in those examples there are no incidences of a correct letter mutating to an incorrect letter and then mutating back again. As Dembski says, “That leads one to wonder whether the WEASEL program, as Dawkins had programmed and described it in his book, is the same as in the BBC Horizons documentary.”

    4. Dawkins does not give many details about how his algorithm works. Some (notably Joseph and kairosfocus (GEM)) claim, based on the examples, that the BWM algorithm DID use latching (and was thus different than the video version), while others argue that the BWM algorithm and the video algorithm are the same, and that it is only the limited data set in BWM that makes us think that latching was used.

    5. Dembski’s response to this was:

    Thus, since Dawkins does not make explicit in THE BLIND WATCHMAKER just how his algorithm works, it is natural to conclude that it is a proximity search with locking (i.e., it locks on characters in the target sequence and never lets go).

    I disagree. I think the natural thing to do is assume that the two algorithms are the same, and that the apparent latching in the BWM examples is just an artifact of a limited data set.

    More importantly, I’m going to show some math that will help support my claim. I will show why a correct letter mutating to incorrect is a fairly rare event, so that we would not expect to “capture” this event by taking snapshots of every ten generations for just two runs, as illustrated in the BWM.

    Dembski is a mathematician, and his specialty is probability, so perhaps he will comment on what I present below.

    6. Here is what Dawkins says in BWM:

    a. We have 28 slots to fill and each slot can take one of 27 letters (counting a space as a letter.)

    b. We begin by choosing a random letter for each slot. There are 27^28 (about 1.2 x 10^40) ways of doing this, so the probability of getting the right phrase by pure luck is about 1 x 10^-40

    c. Each generation the letters mutate, although Dawkins does not state what mutation rate he used.

    d. Here’s the key sentence:

    The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase. [p.47 in my book]
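    As a side check, the search-space figure in 6b can be computed directly (a quick sketch; none of this is from Dawkins’ code):

```python
# Size of the search space in 6b: 28 slots, 27 possible characters each.
n_phrases = 27 ** 28
print(f"{n_phrases:.1e}")  # roughly 1.2e+40, i.e. odds of about 1 in 10^40
```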

    7. Note that Dawkins does not explain the rules the program uses to decide which phrase “most resembles the target phrase.”

    I am going to make some assumptions.

    a. The number of correct letters determines which phrase is to be kept. If the parent phrase has 14 correct letters and the child has 15 correct letters, we pick the child.

    b. However, what if one of the parent’s correct letters mutates to incorrect (remember, I am assuming no latching) and an incorrect letter also mutates to a correct one, so that both phrases have the same number of correct letters?

    I am going to assume, because I think it is the most reasonable thing to do, that in this case we keep the parent phrase.

    c. Also, if the child phrase is worse than the parent phrase because a correct letter mutated to incorrect and no incorrect letter mutated to correct, then obviously we keep the parent.

    d. So if a correct letter mutates to incorrect, the only way the child phrase could be more fit would be if two or more incorrect letters mutated to correct.

    OK now let’s do some math!

    8. First we need a mutation rate. Dawkins gives no hints as to what he used, but both Wesley Elsberry and kairosfocus (GEM) have used 5% in their examples, so I’ll use that. (Note that the argument I make is not dependent on any particular rate, but it’s much easier to make my points when we have actual numbers to think about rather than writing everything as formulas with a variable rate r.)

    So, let p = probability of a letter mutating in any one generation = 5%
    and let q = probability of a letter not mutating in any one generation = 95%.

    9. So in each generation, how many letters can we expect to mutate in the whole 28-letter phrase? The binomial probability formula nCk•p^k•q^(n-k) applies to this situation:

    Probability of no mutations = 95%^28 = 24%
    Probability of one mutation = 28C1•5%^1•95%^27 = 35%
    Probability of two mutations = 28C2•5%^2•95%^26 = 25%
    Probability of three mutations = 11%
    Probability of four mutations = 4%
    Probability of five mutations = 1%
    Probability of more than five mutations = negligible

    So note that most of the time the phrase will mutate three or fewer letters in any one generation.
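    These figures can be reproduced in a few lines (a sketch using the 5% rate assumed above):

```python
from math import comb

# P(k mutations among 28 letters) = 28Ck * 0.05^k * 0.95^(28-k)
p, n = 0.05, 28
probs = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(6)}
for k, pr in probs.items():
    print(f"{k} mutations: {pr:.0%}")  # matches the table above: 24%, 35%, 25%, ...
```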

    10. Now let’s look at the probabilities of what happens to each letter each generation. A letter can either be correct or not correct in the parent generation.

    a. Chance of correct staying correct is 95%, and chance of correct changing to incorrect is 5%. (Remember, no latching)

    b. The chance of incorrect changing to correct is different, though. The incorrect letter has to not only mutate, it has to mutate to the correct letter, which will only happen 1/26 of the time it mutates. (There are 27 letters, so there are 26 other letters it could mutate into.)

    Therefore the probability of incorrect changing to correct is 5% • 1/26 = 0.2%, or about 1/500 of the time.

    Therefore the probability of incorrect staying incorrect is 99.8%
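    In code, the two per-letter transition probabilities of point 10 come out as follows (same assumptions: 5% mutation rate, 27-character alphabet, and a mutation always changes the letter to one of the 26 others):

```python
p_mut = 0.05
p_correct_to_incorrect = p_mut             # any mutation spoils a correct letter
p_incorrect_to_correct = p_mut * (1 / 26)  # must mutate AND hit the one right character
print(p_correct_to_incorrect)              # 0.05
print(round(p_incorrect_to_correct, 4))    # 0.0019, about 1 in 500
```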

    11. Looking at these two probabilities (correct to incorrect and incorrect to correct) might lead one to think that the program would take a very long time to reach the target. However let’s now look at things generation by generation as opposed to letter by letter.

    Suppose we have a parent phrase with n correct letters so far, and suppose one of those n correct letters mutates. There are three possibilities:

    a. No incorrect letter becomes correct, and so the child phrase is worse, not better, so the parent phrase is kept and we never see that the correct letter mutated.

    b. One incorrect letter becomes correct. There are now still n correct letters, and I decided at the beginning that in the case of a tie, we keep the parent. So again we don’t see that the correct letter mutated because the parent lived, not the child.

    c. So the only way that a correct-to-incorrect mutation can survive is if two or more incorrect letters become correct at the same time, because only then would the child phrase have more correct letters than the parent.

    For this to happen, we would have to have three mutations: one from correct to incorrect and two from incorrect to correct. As we saw in 9 above, three or more mutations only happen about 15% of the time anyway, and since incorrect staying incorrect even after a mutation is much more common than incorrect becoming correct, the probability of a correct letter mutating to incorrect AND that child phrase being better than the parent is very small, because it would take two or more incorrect-to-correct mutations to compensate and improve the fitness of the child.

    The conclusion here is, then, that correct letters can mutate to incorrect but they very seldom survive because they weaken the fitness of the phrase too much.

    Even if Dawkins had printed out all of the successful generations instead of every tenth one we might not see an example of correct mutating into incorrect. Only if he showed not only the successful phrases that lived but also all the rejected phrases that died would we see all the correct to incorrect mutations.

    So latching is not needed. The process itself ensures that most of the time a correct letter will stay correct. It’s not that correct stays correct because there is some latching rule in the algorithm, but rather because correct mutating to incorrect is too detrimental to the fitness of the phrase.
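    This model can be run directly. The following is a minimal sketch under the assumptions stated above (5% per-letter mutation, one child per generation, ties and losses keep the parent, no latching anywhere); Dawkins’ actual code is unpublished, so this is not a reconstruction of it:

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "  # 27 characters
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(phrase):
    """Number of letters matching the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    """Each letter mutates with probability `rate` to a different character."""
    return "".join(
        random.choice(ALPHABET.replace(ch, "")) if random.random() < rate else ch
        for ch in phrase
    )

random.seed(1)  # arbitrary fixed seed, for a reproducible run
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = surviving_reversions = 0
while parent != TARGET:
    generations += 1
    child = mutate(parent)
    if score(child) > score(parent):  # ties and losses keep the parent
        # an accepted child in which a previously correct letter reverted
        if any(p == t != c for p, c, t in zip(parent, child, TARGET)):
            surviving_reversions += 1
        parent = child

print(generations, surviving_reversions)
```

    On a run like this, the count of surviving reversions is typically zero or a handful out of thousands of generations, which is the point: even printing every accepted phrase, let alone every tenth, would rarely show a correct letter going backward.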

    My reasoned conclusion, as I stated in the beginning, is that the algorithm did not use latching.

    [A disclaimer. I know that the situation is much more complicated than what I have explained, because you have 28 slots subject to the conditions in 10 and 11 above in each generation, and you have different numbers of correct letters at different times, and so on. Analysis of that would be extremely complicated, being full of various probability trees. I’m not going there – this has been time-consuming enough as it is.]

    Comments on this analysis?

  200. Very nice!

  201.

    Hazel, personally I enjoyed your analysis, and read every word of it. I am simply not convinced of the exercise made by Dawkins to begin with.

    I think of the analysis made by another mathematician, David Berlinski:

    - – - – - – - -

    Advent of the Head Monkey

    IT IS Richard Dawkins’s grand intention in The Blind Watchmaker to demonstrate, as one reviewer enthusiastically remarked, “how natural selection allows biologists to dispense with such notions as purpose and design.” This he does by exhibiting a process in which the random exploration of certain possibilities, a blind stab here, another there, is followed by the filtering effects of natural selection, some of those stabs saved, others discarded. But could a process so conceived — a Darwinian process — discover a simple English sentence: a target, say, chosen from Shakespeare? The question is by no means academic. If natural selection cannot discern a simple English sentence, what chance is there that it might have discovered the mammalian eye or the system by which glucose is regulated by the liver? A thought experiment in The Blind Watchmaker now follows. Randomness in the experiment is conveyed by the metaphor of the monkeys, perennial favorites in the theory of probability. There they sit, simian hands curved over the keyboards of a thousand typewriters, their long agile fingers striking keys at random. It is an image of some poignancy, those otherwise intelligent apes banging away at a machine they cannot fathom; and what makes the poignancy pointed is the fact that the system of rewards by which the apes have been induced to strike the typewriter’s keys is from the first rigged against them.

    The probability that a monkey will strike a given letter is one in 26. The typewriter has 26 keys: the monkey, one working finger. But a letter is not a word. Should Dawkins demand that the monkey get two English letters right, the odds against success rise with terrible inexorability from one in 26 to one in 676. The Shakespearean target chosen by Dawkins — “Methinks it is like a weasel” — is a six-word sentence containing 28 English letters (including the spaces). It occupies an isolated point in a space of 10,000 million, million, million, million, million, million possibilities. This is a very large number; combinatorial inflation is at work. And these are very long odds. And a six-word sentence consisting of 28 English letters is a very short, very simple English sentence.

    Such are the fatal facts. The problem confronting the monkeys is, of course, a double one: they must, to be sure, find the right letters, but they cannot lose the right letters once they have found them. A random search in a space of this size is an exercise in irrelevance. This is something the monkeys appear to know. What more, then, is expected; what more required? Cumulative selection, Dawkins argues — the answer offered as well by Stephen Jay Gould, Manfred Eigen, and Daniel Dennett. The experiment now proceeds in stages. The monkeys type randomly. After a time, they are allowed to survey what they have typed in order to choose the result “which however slightly most resembles the target phrase.” It is a computer that in Dawkins’s experiment performs the crucial assessments, but I prefer to imagine its role assigned to a scrutinizing monkey — the Head Monkey of the experiment. The process under way is one in which stray successes are spotted and then saved. This process is iterated and iterated again. Variations close to the target are conserved because they are close to the target, the Head Monkey equably surveying the scene until, with the appearance of a miracle in progress, randomly derived sentences do begin to converge on the target sentence itself.

    The contrast between schemes and scenarios is striking. Acting on their own, the monkeys are adrift in fathomless possibilities, any accidental success — a pair of English-like letters — lost at once, those successes seeming like faint untraceable lights flickering over a wine-dark sea. The advent of the Head Monkey changes things entirely. Successes are conserved and then conserved again. The light that formerly flickered uncertainly now stays lit, a beacon burning steadily, a point of illumination. By the light of that light, other lights are lit, until the isolated successes converge, bringing order out of nothingness.

    The entire exercise is, however, an achievement in self-deception. A target phrase? Iterations that most resemble the target? A Head Monkey that measures the distance between failure and success? If things are sightless, how is the target represented, and how is the distance between randomly generated phrases and the targets assessed? And by whom? And the Head Monkey? What of him? The mechanism of deliberate design, purged by Darwinian theory on the level of the organism, has reappeared in the description of natural selection itself, a vivid example of what Freud meant by the return of the repressed.

    This is a point that Dawkins accepts without quite acknowledging, rather like a man adroitly separating his doctor’s diagnosis from his own disease.(6)

    [(6) The same pattern of intellectual displacement is especially vivid in Daniel Dennett’s description of natural selection as a force subordinate to what he calls “the principle of the accumulation of design.” Sifting through the debris of chance, natural selection, he writes, occupies itself by “thriftily conserving the design work . . . accomplished at each stage.” But there is no such principle. Dennett has simply assumed that a sequence of conserved advantages will converge to an improvement in design; the assumption expresses a non sequitur.]

    Nature presents life with no targets. Life shambles forward, surging here, shuffling there, the small advantages accumulating on their own until something novel appears on the broad evolutionary screen — an arch or an eye, an intricate pattern of behavior, the complexity characteristic of life. May we, then, see this process at work, by seeing it simulated?

    “Unfortunately,” Dawkins writes, “I think it may be beyond my powers as a programmer to set up such a counterfeit world.”(7)

    [(7) It is absurdly easy to set up a sentence-searching algorithm obeying purely Darwinian constraints. The result, however, is always the same — gibberish.]

    This is the authentic voice of contemporary Darwinian theory. What may be illustrated by the theory does not involve a Darwinian mechanism; what involves a Darwinian mechanism cannot be illustrated by the theory.

  202.

    hazel, you have presented a good case. I want to suggest one possible improvement. Dawkins refers to “mutant nonsense phrases, the ‘progeny’ of the original phrase.” In other words, each chosen phrase mutates multiple times in each generation, and the choice is made not between the “parent” and a single mutated offspring but among multiple mutant offspring.

    Upright BiPed, Berlinski may once have been a mathematician, but the quote you present is not a mathematical analysis.

  203. Let’s stay on topic, please. I’m just discussing the latching issue in Weasel – we are not discussing any larger issues. This is an exercise in math, and math only.

  204.

    …my apologies

  205. Onlookers:

    I must first conclude some unfinished, rather distasteful business. Pardon.

    For, S. having dropped a clanger, I see what looks uncommonly like a discreet silence on the part of those who so enthusiastically endorsed a dismissive statement in the Jamaican press, but which turned out on closer examination to be dripping with blood slander and with endorsement of public lewdness.

    If this is so, such a tip-toeing away in silence would itself speak volumes. Sadly.

    Let us hope S et al will explain themselves here and that the denizens of Anti Evo will correct their ill-judged use of the same remarks in the Jamaican media.

    Anyway, it is time to speak to a fairly substantial pair of comments; one of which inadvertently reaches the same conclusion that J and I have long since reached on Weasel, while not recognising that we have spoken to evident latching of o/p and two different possible causes: [T2] explicit latching in program modules, [T3] implicit latching and/or quasi-latching.

    One hopes that this arrival at consensus on actual analysis will finally lay to rest the hot rhetoric both here and at Anti Evo.

    I will turn to that in a moment.

    GEM of TKI

  206. Hazel and Upright (and onlookers):

    First, Hazel, thanks for taking time to engage on substance, not rhetoric. (You have missed a key point in J’s and my case: that we embrace not only explicit but implicit latching or quasi-latching.) But the analysis allows us to move the issue forward, I believe. I believe the misunderstanding of our position is obviously inadvertent, not at all a calculated strawman.

    Upright, thanks for putting up a very telling wider analysis by Mr Berlinski.

    On points:

    1] Hazel, 200: Some (notably Joseph and kairosfocus (GEM)) claim, based on the examples, that the BWM algorithm DID use latching (and was thus different than the video version), while others argue that the BWM algorithm and the video algorithm are the same

    Both Joseph and I are more nuanced than that. The observation is evident latching on the o/p circa 1986 contrasted with a run in 1987 that on video multiply and frequently flicks back (including a “winking” effect where reversion to correct seems to happen rather quickly).

    The issue we have had — and J and I are not simply equivalent — is explicit vs implicit latching, with the possibility of a quasi latching also in play. [J has stated that implicit latching is a most likely explanation, as a byproduct of the targeted search algorithm used as described by Mr Dawkins. I have pointed out that the core issue on Weasel is that it is targeted search that rewards non-functional configs, and that this obviates any claims to being an exemplar of a BLIND watchmaker at work, ever since December last. As a secondary point I have pointed to the evident o/p latching (as have many others over these 23 years) and have suggested that the most likely explanation is latching in the program, easiest as explicit, though also possibly implicit.]

    On Mr Dawkins’ testimony as reported by Mr Elsberry [and I note, in light of a recent comment by a certain Dr Simmons, that "Mr" is strictly correct even in reference to one holding a PhD, save where disrespect is contextually evident; I have meant no disrespect], I have accepted it as reasonable to address implicit latching.

    In that context, I have pointed out that once the per-letter, per-member mutation rate and the population size are set in a range where a significant fraction of each generation is zero-change (with decreasing fractions having one or more changes), and where the population size makes the far skirt unlikely to appear (the cases where doubly changed letters substitute, so that one reverts while another advances), we will reasonably get the sort of pattern observed in the 1986 runs. In particular, I commented that if double and triple etc. mutations are common, i.e. there is a fairly large population, we will see quite fast runs to target, instead of runs consistent with “good” runs for latching, which have a median run length of 98 generations.

    It is credible that such co-tuning can be achieved with a modest number of test runs and intuitively appealing mutation rates. [The 5% rate I put up is from others, I believe tracing inter alia to Rob. It averages out at a bit more than 1 mutation per member per generation.]

    Next, I have had to point out that there is such a thing as a reasonable sample size in empirical contexts, one that will credibly be typically representative of the population as a whole. And 200-300 runs, in a context where we may safely presume the samples were selected as typical of “good” runs, is beyond reasonable doubt well within that range.
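
    To make the co-tuning argument concrete, here is a minimal Weasel-style sketch. The 5% per-letter rate and population of 100 are illustrative assumptions (Dawkins never published his code or parameters), and selection is simply best-of-generation, with no latching rule of any kind:

```python
import random
import string

# Illustrative parameters only: 5% per-letter mutation rate, 100 children
# per generation. Dawkins' actual settings were never published.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(phrase):
    # Proximity to target: count of matching positions (not "function").
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(parent, rate):
    # Each letter is independently redrawn with probability `rate`;
    # no letter, correct or not, is ever locked.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(rate=0.05, pop=100, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = reversions = 0
    while parent != TARGET:
        best = max((mutate(parent, rate) for _ in range(pop)), key=score)
        if score(best) < score(parent):
            reversions += 1   # champion lost ground: visible "unlatching"
        parent = best
        generations += 1
    return generations, reversions
```

    With these settings a zero-change copy of the parent appears in virtually every generation, so `reversions` is typically 0: the per-generation champions appear to latch even though nothing in the code locks letters.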

    2] Upright, 202 and 205: Advent of the Head Monkey

    Thank you. This aptly captures the wider context of the issue, and sets up any serious discussion on the merits. I both agree with Mr Berlinski — and yes, I know he holds a PhD (in I believe mathematics) — and find him to have admirably presented the matter.

    3] H, 200: I take “latching” to mean that once a correct letter appears in the correct slot, it can NOT mutate again, and will therefore stay correct for the duration of the program.

    This only describes a simplistic version of EXPLICIT latching; more sophisticated explicit latching is possible, and implicit latching and quasi- or imperfect latching are also possible. By way of example, in the previous thread, Apollos presented code which will explicitly latch AND will have latched letters revert to incorrect status. (This means straight off that only credible code is actually demonstrative of the status of Weasel circa 1986.)

    I have spoken to both implicit [T3] and explicit [T2] latching in processing, and to evident latching of output. The T2 and T3 options are alternative explanations for the observed output. In light of the further statement of Mr Dawkins circa 2000 received at second hand, we have in the main discussed T3 in this thread.

    In short, the issue is somewhat mis-framed in the opening points of comment no. 200.

    4] The question is whether he used the same program when he wrote the Blind Watchmaker (BWM).

    Actually, this is the secondary question, the primary one being the rhetorical status of Weasel vs its true substantial import. I and others have objected that in context it serves to lend persuasive force to the BLIND watchmaker thesis, while in fact it is an example of designed, targeted search that uses a toy example and simplifying assumptions that evade the force of the Hoylean challenge of getting TO shores of functionality before hill-climbing based on differential functionality can be brought to bear.

    On the second level point, the issue is not whether the algorithm structure circa 1986 is different from that of 1987, but — on the presumption of some version of a T3, implicit latching or quasi-latching algorithm from 1986 on — whether parameters and o/p behaviour are credibly the same. In fact, early in the thread, Mr Dembski stated that such a shift could have material impact.

    I have held that there is good reason to infer that the printed excerpts circa 1986 are representative of what was thought to be “good” performance at that time, and that this behaviour shows strong latching on the o/p, up to runs to target consistent with that. By sharp contrast, circa 1987, just as strongly, the o/p does NOT latch, but shows frequent reversions, often with winking.

    I have concluded that the program circa 1987 is materially different from that circa 1986; which on T3 can be managed by changing two parameters: generation size and per-letter, per-member mutation rate. It is probably noteworthy that an o/p that winks and runs fast, but takes fairly long to get to target, is visually impressive.

    [ . . . ]

  207. 5] I will show why a correct letter mutating to incorrect is a fairly rare event, so that we would not expect to “capture” this event by taking snapshots of every ten generations for just two runs, as illustrated in the BWM.

    In short, you agree with my co-tuning analysis of mutation rate and generation size, and that it will lead to evident o/p latching or quasi-latching under certain circumstances. Under others, especially by making the per-letter, per-member mutation rate high enough and the generation size large enough that far-skirt members are more likely to be present, we will see that rewarding mere proximity without reference to function leads to domination of succeeding generations by multiple-correct-letter members, and thus very fast runs to target.

    In an intermediate range, we should see more and more reversion of letters to incorrect status, as a double mutation substitutes another correct letter. The 1987 run seems to come from that intermediate range.

    6] Chance of correct staying correct is 95%, and chance of correct changing to incorrect is 5%. (Remember, no latching)

    No EXPLICIT latching. Latching (and quasi-latching) as I have discussed for quite some time in this thread, and in the previous one — and as J has also pointed out — can be implicit, an effect of interacting factors.

    7] the probability of incorrect changing to correct is 5% • 1/26 = 0.2%, or about 1/500 of the time. Therefore the probability of incorrect staying incorrect is 99.8%

    Thus, implicit quasi-latching, and effective latching in the case of appropriately co-tuned population size and mutation rates. (Which can probably be done by trial and error, looking for “good” results.)
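
    The arithmetic in the two quoted points can be checked directly, under the model the quote assumes (a 5% per-letter mutation rate, with a mutated letter drawn from 26 alternatives):

```python
# Per-letter transition probabilities under the quoted model.
rate = 0.05  # per-letter mutation probability, as quoted

p_correct_to_incorrect = rate                    # 5%: a correct letter mutates
p_incorrect_to_correct = rate * (1 / 26)         # ~0.19%, "about 1/500"
p_incorrect_stays = 1 - p_incorrect_to_correct   # ~99.8%
```

    These tiny per-letter odds, multiplied across a modest population, are exactly what makes reversions rare enough to be invisible in two ten-generation-interval snapshots.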

    8] the probability of a correct letter mutating to incorrect AND that child phrase being better than the parent are very small because it would take at least two or more incorrect to correct mutations to compensate and improve the fitness of the child

    Oops.

    This unfortunately neglects the case of the child simply being equal in [Hamming] distance to target as the parent, the latter being the champion from the previous generation.

    This raises the interesting issue of a tie between zero-change members and a double-change member that substitutes a new correct letter for an old correct letter that has reverted. There may be a randomiser in the program [flip a coin and pick], or a line that selects the changed phrase by preference in such cases.

    It is in the case of the double mutation with such substitution that we are most likely to see reversion behaviour. (My point of concern on this is that in the 1987 run, we see a further winking effect, a very rapid reversion to correct, which seems odd if the letter is now simply on the long odds of getting back correct by mere chance. But I am willing not to revert to the easiest [T2, Apollos variant] explanation for such, on grounds of taking testimony as given, absent specific and strong grounds for rejecting it.)

    9] The conclusion here is, then, that correct letters can mutate to incorrect but they very seldom survive because they weaken the fitness of the phrase too much.

    This is exactly why Joseph and I have spoken of two patterns of algorithms, those that explicitly latch, and those that implicitly latch.

    In short, your explicit binomial theorem based mathematical, probability analysis agrees with our narrative analysis based on programming patterns and population behaviour patterns. Indeed, you have confirmed that the implicit latching alternative is a viable mechanism for the o/p behaviour circa 1986.

    HOWEVER, YOU HAVE ALSO UNFORTUNATELY MANAGED AT THE OUTSET TO MISS THE POINT WE HAVE REPEATEDLY MADE HERE AND IN THE PREVIOUS THREAD, THAT THERE IS IN ADDITION TO EXPLICIT LATCHING, IMPLICIT LATCHING (AND QUASI-LATCHING).

    ______________

    So, your analysis agrees with ours, but your conclusion as stated is unfortunately flawed because you missed the point that we have looked at BOTH explicit latching and implicit latching. Also, I again call attention to Apollos’ code example that shows that EXPLICIT latching can be set up to mimic implicit quasi latching — with reversions.

    So, can we agree that the Weasel program circa 1986 was probably a version of T3, with implicit latching, but in 1987, parameter changes would have made the latching shift to quasi-latching, without a tearaway streak to the target?

    GEM of TKI

  208. I cannot believe this thread is still going. Dawkins described what is known in the evolutionary computation community as a (1,n)-ES, where the ES stands for evolution strategy. I informed one of Dembski’s friends of this in email dated July 31, 2008.

    In a (1,n)-ES, 1 parent generates n offspring in each generation, and the parent of the next generation is the fittest of the n offspring in the present generation. In a (1+n)-ES, the parent of the next generation is the fittest of the n+1 individuals in the present generation. That is, in the “comma” strategy, the parent “dies” after generating offspring, and there is a loose analogy to annual plants. In the “plus” strategy, there is a loose analogy to perennials.
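
    For onlookers unfamiliar with the terminology, the two strategies differ only in whether the parent competes with its own offspring. A generic sketch (not Dawkins' code; `mutate` and `fitness` are caller-supplied):

```python
# Single-parent evolution strategy steps, following the description above.

def step_comma(parent, n, mutate, fitness):
    """(1,n)-ES: the parent 'dies'; the fittest of its n offspring survives."""
    offspring = [mutate(parent) for _ in range(n)]
    return max(offspring, key=fitness)

def step_plus(parent, n, mutate, fitness):
    """(1+n)-ES: the fittest of the parent plus its n offspring survives,
    so fitness can never decrease from one generation to the next."""
    offspring = [mutate(parent) for _ in range(n)]
    return max(offspring + [parent], key=fitness)
```

    Note that under the "plus" strategy fitness is monotone by construction, while under the "comma" strategy it can regress; that distinction is the formal hook for the whole latching debate above.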

    There is a large body of formal analysis of evolution strategies. I believe that most of it involves Markov chain analysis. Some very bright people have already answered many fundamental questions about the behavior of evolution strategies. There is no need to reinvent the wheel.

    Why Dembski is going on about “locking,” I have no idea. Until recently, the Big Three of evolutionary computation were evolutionary programming, evolution strategies, and genetic algorithms (including genetic programming). I find it difficult to believe that a critic of EC such as Dembski does not recognize an evolution strategy when he sees one.

    At any rate, Google and ye shall find.

  209. By the way, a (1,n)-ES is not a search algorithm in the sense of Dembski and Marks. I see no way to say anything about the active information of a “comma” ES without augmenting the ES with an extrinsic entity that registers the fittest parent of all generations. Of course, then the active information is not measured on the ES, but on the augmented ES.

    It seems to me that the analytic framework of Dembski and Marks does not apply to the Weasel program — unless the framework is actually a Procrustean bed.

  210. SG:

    Weasel’s downfall is that it implements acknowledged targeted search that rewards NON-FUNCTIONAL “nonsense” phrases on mere proximity to target. This is NOT a matter of increments in current fitness being rewarded, in any sense of “fitness” worth using.

    For, Weasel begs the question of achieving significantly complex, information-based functionality, in its leap to increment the proximity of the latest generation to target by selecting on nearness to target for explicitly non-functional phrases. Thus, we see highlighted the key issue raised by two forms of the design inference: complex functionality based on information is hard to find by random search, and irreducible complexity, whether by direct means or by adaptation of existing parts to form a new function, is also hard to achieve by the engine of variation, chance. (And, without first having function, on pain of question-begging, we cannot see nature culling based on better or worse degrees of function leading to differential success and reproduction.)

    Further to this, in the recent Marks-Dembski paper, the T2, explicit latch version of the Weasel family of algorithms and implementations is addressed.

    Searching for target zones and/or target points is . . . search in configuration spaces. (That is, this thread is over an incidental issue, whether in Weasel circa 1986 the correct letters to date are explicitly or implicitly latched, i.e. locked or nearly locked in place. A glance at Dawkins’ published runs of 1986 shows that once a letter is correct, it seems to be latched in place as Weasel marches on towards its target.)

    Markov processes and evolutionary fitness strategies may be interesting but they are on a tangent to the key issue here.

    GEM of TKI

    PS: Onlookers: the above by SG, and my recent response to Dr S in the mod pol thread, underscore how Weasel often succeeds rhetorically by such misdirection while failing to address the issue of increments in complex, functional information to get TO shorelines of islands of function. (The specific issue raised by Hoyle at the beginning, and to which Weasel was claimed to respond.)

  211. to David Kellogg: thanks for pointing out that each parent has multiple children. I had never thought about this Weasel program until yesterday when I got interested in the math, so I didn’t know that. But I understand now some aspects that weren’t clear to me before.

    Of course this just makes my case stronger.

    And this should lay to rest, I think, the question of whether the BWM algorithm contained a specific rule about latching, or not.

    My conclusion is that it did not, and that the BWM algorithm and the video algorithm are the same. This was the issue in Dembski’s opening post:

    Thus, since Dawkins does not make explicit in THE BLIND WATCHMAKER just how his algorithm works, it is natural to conclude that it is a proximity search with locking (i.e., it locks on characters in the target sequence and never lets go).

    I disagreed with Dembski, and I think I’ve proved my point.

  212. hazel:

    On that, the issue is that Weasel circa 1986 and circa 1987 have significantly different output characteristics. Shifting the parameters controlling mutation rate and generation size would account for that and, for the purposes of our discussion, would be materially different.

    GEM of TKI

  213. madsen:

    Looking back through the thread and reading your latest comment, I think we actually agree on what is happening with the program—there isn’t anything in it which prohibits backward movement, but the probability of that happening given the parameters involved is very small.

    Given a target, a small enough mutation rate AND a large enough sample size, there will NEVER be any regression. NEVER.

    IOW ratcheting toward the target, ie locking the correct letters in place, is inevitable.

  214. hazel:

    And this should lay to rest, I think, the question of whether the BWM algorithm contained a specific rule about latching, or not.

    As I have already stated the program does not need a coded statement for something it does on its own.

    I take it you cannot understand that.

    Not my problem…

  215. Sal Gal:

    Why Dembski is going on about “locking,” I have no idea.

    It’s like this- Marks and Dembski wrote a paper.

    In that paper they referenced “The Blind Watchmaker” pertaining to the “weasel” program Dawkins describes in it.

    In their paper they used it as an example of a partitioned search- meaning once the letters matched the target they get “locked into place”.

    The anti-IDists have made a big to-do about it.

    However, when looking at ALL the data it is obvious that the program used and described by Dawkins in TBW uses a ratcheting process.

    And that ratcheting process locks the matched letters in place.

  216. There is a big difference, then, between cumulative selection (in which each improvement, however slight, is used as a basis for future building), and single-step selection (in which each new ‘try’ is a fresh one).- Dawkins TBW page 49

    That’s ratcheting.

  217. Joseph,

    Given a target, a small enough mutation rate AND a large enough sample size, there will NEVER be any regression. NEVER.

    Please prove this mathematically.

  218. Now that I’ve gotten involved, I’ve gone back and read the start of the thread. I see that ROb made the same point I am making in post 5. All I did was supply a little math that shows why no latching rule is necessary, responding mainly to Dembski’s original point that it was “natural” to assume that a latching rule was in place.

    That’s all.

    Joseph writes,

    As I have already stated the program does not need a coded statement for something it does on its own.

    I take it you cannot understand that.

    Not my problem…

    I was responding to Dembski’s opening post, in which he referred to a partitioned search, “i.e., it locks on characters in the target sequence and never lets go).” I was not responding to you. If you accept that there is no latching rule in the algorithm, then we agree.

  219. hazel:

    I was responding to Dembski’s opening post, in which he referred to a partitioned search, “i.e., it locks on characters in the target sequence and never lets go).”

    And that is what happens as described by Dawkins in TBW.

    That is what happens with ratcheting.

  220. madsen,

    If you are REALLY interested just do it-

    That means if you try you could find the exact parameters in which regression never takes place.

    Ya see if the mutation rate is small enough it would mean that only one character could change- mutate.

    It would also mean that there would be exact copies of the parent.

    So if there are exact copies and mutated copies then AT A MINIMUM, the exact copies would be chosen.

    Pretty basic actually.
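
    Joseph's "exact copy" argument can be made quantitative. The function below is a sketch (the parameter values are illustrative, not Dawkins'): it gives the probability that a generation of n children contains at least one unchanged copy of the parent, in which case the champion's score cannot fall below the parent's.

```python
def p_exact_copy(u, n, length=28):
    """Probability that at least one of n children is an unchanged copy of
    the parent, given per-letter mutation probability u on a 28-letter phrase.
    """
    p_unchanged = (1 - u) ** length        # one child escapes all mutations
    return 1 - (1 - p_unchanged) ** n      # at least one such child among n
```

    For, say, u = 0.04 and n = 100 this probability is indistinguishable from 1, which is why no regression is observed at such settings; strictly, though, the guarantee is "never" in the practical rather than the mathematical sense, which is the point madsen presses.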

  221. Joseph,

    madsen,

    If you are REALLY interested just do it-

    That means if you try you could find the exact parameters in which regression never takes place.

    Ya see if the mutation rate is small enough it would mean that only one character could change- mutate.

    It would also mean that there would be exact copies of the parent.

    So if there are exact copies and mutated copies then AT A MINIMUM, the exact copies would be chosen.

    Pretty basic actually.

    While I accept that by adjusting the mutation rate and perhaps some of the other parameters, you can arrange that the probability of regression will be arbitrarily small, I don’t think you will be able to prove it will “NEVER” happen.

    If you can present a proof, though, I’d be interested in seeing it.

  222. David Kellogg:

    the ratcheting process has never been denied by anybody associated with Weasel.

    Yes it has:

    All seven characters are ratcheted into place.- Dembski/ Marks

    IOW “ratchet”ing was stated in the paper that you guys are trying to derail.

    Read it for yourself:

    Conservation of Information in Search: Measuring the Cost of Success page 5

    That said it looks like they were correct as ratcheting does take place as described by Dawkins.

  223. madsen

    While I accept that by adjusting the mutation rate and perhaps some of the other parameters, you can arrange that the probability of regression will be arbitrarily small, I don’t think you will be able to prove it will “NEVER” happen.

    I know you can never prove that regression can occur given the parameters I stated.

    As I said, given a small enough mutation rate and a large enough sample size, the “worst” that can happen is the output = input.

    But you could prove me wrong by going to one of the sites that have the program and running it over and over again, until you find a regression.

    I wrote comment #178 because I noticed people seemed to be generally agreeing in principle yet, not realizing this, were still arguing, apparently because they objected to certain terminology and descriptors. Never mind personal grievances.

    To finish this thread, I’ll quickly do away with comment #212:

    Hazel, you know you’re taking what Dembski said out of context. Dembski just noted that, based upon ONLY the information in the book, it’s reasonable to presume a hard-coded latching function. The BBC video makes it obvious that this is not so, so he raises the possibility that there are multiple versions of the program and that the issue should be investigated. As in, Dembski is leaving with a question, not an assertion. But given that you and several others have already found that an explicit latching function is unnecessary given fine-tuned conditions*, and that Dawkins specifically says that explicit latching was not used, the best explanation at this point is that the program in the book and in the video were likely the same.

    *Although as kf notes it’s possible to get the same results with multiple other approaches.

    I think that about finishes this thread.

  225. Joseph,

    I know you can never prove that regression can occur given the parameters I stated.

    But it’s your claim that regression can never happen. It’s up to you to demonstrate this.

  226. Joseph, the example about Weasel in the paper you cited shows clearly that Dembski’s assumed that there was an explicit rule that locks letters in place. Here’s what he wrote:

    Partitioned search [12] is a “divide and conquer” procedure best introduced by example. Consider the L = 28 character phrase METHINKS*IT*IS*LIKE*A*WEASEL (19) Suppose the result of our first query of L = 28 characters is SCITAMROFNI*YRANOITULOVE*SAM (20) Two of the letters, {E,S}, are in the correct position. They are shown in a bold font. In partitioned search, our search for these letters is finished. For the incorrect letters, we select 26 new letters and obtain OOT*DENGISEDESEHT*ERA*NETSIL (21) Five new letters are found bringing the cumulative tally of discovered characters to {T,S,E,*,E,S,L}. All seven characters are ratcheted into place. Nineteen new letters are chosen and the process is repeated until the entire target phrase is found.

    Notice that when you get two correct letters, you only “mutate” the remaining 26, and once you get five more correct, you only mutate the remaining 19.

    This is an explicit rule. It is NOT quasi-latching or implicit latching or latching because it is highly improbable that a correct letter mutated to incorrect will get passed on. It is a RULE that says once we have found a letter it is not even considered for further mutation.

    Period. This is absolutely clear.
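
    For clarity, here is a minimal sketch of the partitioned search the quoted passage describes; the 27-character alphabet (A-Z plus space) and uniform redraw for unmatched slots are assumptions for illustration:

```python
import random
import string

# Partitioned search: positions that already match the target are never
# queried again -- an explicit ratchet, exactly as described in the quote.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def partitioned_search(seed=0):
    random.seed(seed)
    phrase = [random.choice(ALPHABET) for _ in range(len(TARGET))]
    queries = 1
    while "".join(phrase) != TARGET:
        for i, t in enumerate(TARGET):
            if phrase[i] != t:                 # only unmatched slots redrawn
                phrase[i] = random.choice(ALPHABET)
        queries += 1
    return "".join(phrase), queries
```

    The `if` test is the explicit rule in code form: once a slot matches, it is locked by construction, not by improbability.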

  227. Thanks for the summary, Patrick.

  228. Hazel:

    On the evidence of the o/p circa 1986, the simplest explanation is explicit latching of letters once they go correct. Implicit latching and/or quasi-latching is also possible, and that was noted before this thread ever began. In the D-M paper, they looked at the former case, which is a legitimate case of Weasel, given the many versions floating out there.

    The 1987 o/p does not latch, and indeed seems to show a puzzling wink-and-get-back pattern. On the preponderance of evidence, including Mr Dawkins’ reported claim that he did not explicitly latch the program, implicit latching as outlined many times above is a reasonable explanation.

    In either case, latching or quasi-latching are secondary to Weasel’s real downfall: it is targeted search that rewards proximity without functionality. So, it is not a reasonable part of any case pointing to a BLIND watchmaker; the presumable aim of the book, BW.

    That point has been made ever since December last, but it seems that raising a secondary debate over what is highly evident from the 1986 o/p is highly distractive from that core flaw.

    Bottom line: It is high time Weasel and kin were retired, as serving only rhetorical, not illuminative, purposes.

    GEM of TKI
    ________________

    PS: Seversky, Kellogg and co associated with Anti Evo, I think you need to address the citation with evident approval here and also at Anti Evo, of someone indulging in, frankly, anti-Christian blood slander and through such ad hominems, dismissal of legitimate concerns over public lewdness. (As I recall, some Dancehall entertainers have actually been censured by the Jamaican courts over their beyond belief on-stage behaviour.)

  229. kf@229, I agree with your first two paragraphs, so I think we can consider the discussion finished. I’ve never been interested in any of the other issues you mention, though, so I have no comment there.

  230. Patrick,

    In essence, if ratcheting is implemented, isn’t that what is going on?

    That is with respect to parent and offspring.

    IOW once a “parent” gets a matching letter, its offspring will also have those matching letters.

    It is just a product of the process.

    IOW once the letters are matched the search for them is over- ratchet clicks and fade…

  231.

    Kellogg at 202

    I’m not sure on what grounds one could insinuate that David’s mathematical acumen is somehow deteriorated.

    You are correct however that his analysis was not pure mathematics, but, I am quite certain that was his specific intent.

    He makes the larger point clear: there is a target in Dawkins’ exercise. The simulation reaches a predefined goal. Life, or more appropriately evolution, does not work that way. There is no target.

    Produce “methinks its a weasel” without that phrase existing anywhere in the system, then that will be impressive.

    Until then, Berlinski is 100% correct.

  232.

    Joseph [223], I meant ratcheting in the sense defined by hazel. I’ll just refer to hazel [227] for clarification.

    kairosfocus [229], I neither know nor care what you are talking about in your PS.

    UB [232], Berlinski has a Ph.D. in philosophy and has done some work in mathematics but hasn’t contributed to the mathematical literature as far as I know. That’s what I meant. I would bet his mental faculties are as fine, or not, as ever. My point was that his essay does not address the mathematics of Dawkins’s program (the immediate point of discussion) or even its claims; it just makes an interpretation as to its value.

  233. kairosfocus:

    On the evidence of the o/p circa 1986, the simplest explanation is explicit latching of letters once they go correct.

    How is the hypothesis that Dawkins added latching simpler than the hypothesis that he didn’t add latching?

    The 1987 o/p does not latch, and indeed seems to have a puzzling wink and get back pattern.

    This is explained by noting that the display cycles through all of the candidate strings in the population, not just the winner.

    Bottomline: It is high time Weasel and kin were retired, as serving only rhetorical not illuminative, purposes.

    I disagree that it’s not illuminative. It shows how the results of the oracle (or environment) simply communicating a fitness level can fool some people into thinking that more information is being communicated. Dembski, kairosfocus, and others thought that the oracle must not just be communicating the number of correct letters, but also which letters are correct. If nothing else, it shows that intuition isn’t reliable when it comes to genetic behavior.

    Note that the artificiality of the fitness measure — i.e. the fitness measure has nothing to do with any observable functionality — has never been in dispute.

    And I agree that Weasel and kin should be retired. Tell that to Marks and Dembski.

  234. Nice post, R0b. It’s clear that you understood all this from the beginning, but I learned some things thinking through them myself.

    I would add that our intuition is also often wrong when it comes to probability. Probability is a complicated subject and leads to many correct but counter-intuitive results.

  235. R0b, KF, hazel, gpuccio, joseph, Patrick, et al.,

    I finished coding Weasel Ware 2.0 this weekend, with lots of new features and goodies. I still have some testing to do and such, but one of the features I added was the ability for users to create their own fitness functions, via interpreted javascript. So we can see exactly how much active information a Proximity Reward fitness matrix contributes to the search and users can replace this with a reward matrix generated by a fitness function of their choice.

    I expect a lot of new discussion to open up once this is released.

    Oh, and R0b, you now will have both latching and non-latching versions side-by-side, for whichever you want to assume that Dawkins used for The Blind Watchmaker.

    Atom

  236.

    kairosfocus [229], out of curiosity, I tried to determine what you were talking about with respect to “blood slander.” I did a little reading, using your blog and the AtBC discussion as a guide.

    I think the term “blood slander” is awfully strained in your usage; I’d even call it hyperbolic. Of course, there might be hyperbole on the other side as well, but I don’t know enough to judge.

    I did want to thank you, though, for (perhaps inadvertently) bringing my attention to the beautiful Redemption Song Monument; it appears to be a powerful and moving sculpture.

  237. Rob wrote:

    “It shows how the results of the oracle (or environment) simply communicating a fitness level can fool some people into thinking that more information is being communicated. Dembski, kairosfocus, and others thought that the oracle must not just be communicating the number of correct letters, but also which letters are correct.”

    Actually latching behavior requires less information: smaller code, less memory allocated, and the code is much faster and more efficient. Coding latched behavior is the shortest route to reproduce the output, so the conclusion that explicit latching is used is most reasonable based purely on examination, absent source code.

    The non-latching implementation wastes a lot of time and memory to avoid locking letters. A side effect of hurling large mutating populations at what would otherwise be a simple search-and-latch implementation is that due to an unsophisticated target comparison a reversion is seen from time to time.

    “If nothing else, it shows that intuition isn’t reliable when it comes to genetic behavior.”

    There has been little disagreement here that Weasel exhibits no such behavior.

  238. Apollos:

    Actually latching behavior requires less information: smaller code, less memory allocated, and the code is much faster and more efficient.

    That’s all true. But I didn’t say anything about code size, memory footprint, or efficiency.

    In Dawkins’ non-latching code, the only information that the oracle communicates is the fitness level of the candidate. This is significant because the same is true of the environment in evolutionary theory. That is, the environment determines how successfully a given organism will reproduce, but it doesn’t tell the organism which genes to exempt from mutation.

    WRT Weasel, the number of correct letters falls in the range of 0 to N, where N is the number of letters in the sequence. That means that the oracle communicates log2(N+1) bits of info in the non-latching algorithm (the score takes one of N+1 values). In the latching algorithm, on the other hand, the oracle communicates N bits of info (one bit per position).
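
    The bit counts can be made concrete for the 28-letter target; strictly, a score ranging over 0..N takes one of N+1 values, hence log2(N+1) bits:

```python
from math import log2

N = 28  # length of METHINKS IT IS LIKE A WEASEL

bits_nonlatching = log2(N + 1)  # score alone: one of N+1 values, ~4.9 bits
bits_latching = N               # which positions are correct: 28 bits
```

    The gap (roughly 5 bits versus 28 per query) is the information the latching oracle gives away that the plain fitness oracle does not.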

    “If nothing else, it shows that intuition isn’t reliable when it comes to genetic behavior.”

    There has been little disagreement here that Weasel exhibits no such behavior.

    Weasel is a genetic algorithm, so I’m not sure what your comment means.

  239. Atom:

    Oh, and R0b, you now will have both latching and non-latching versions side-by-side, for whichever you want to assume that Dawkins used for The Blind Watchmaker.

    Very cool, Atom.

    You’ll probably also want to change the “The Math” page, that claims to analyze Dawkins’ algorithm, but in fact analyzes an algorithm that has a mutation rate of 100% for unlatched letters, 0% for latched letters, and a population of 1.

  240. George L Farquhar

    Joseph

    IOW there doesn’t have to be a coded statement that locks the matching letters.

    The locking is a byproduct of the program.

    Atom

    Oh, and R0b, you now will have both latching and non-latching versions side-by-side, for whichever you want to assume that Dawkins used for The Blind Watchmaker.

    Atom, how did you know what behaviour to implement in the original version – where did you get the idea of latching letters (or not) from in the first place? Why did you do it that way?

    Joseph,
    If there is no coded statement in the program that locks the matching letters, how is it possible you need two different programs (as Atom says) to represent latching and non-latching behaviour?

    Kairosfocus

    To get implicit latching without reversions, the population as well as mutation rate actually need to be mutually tuned, so that you get steady advances without letter substitutions [one flicks back while another moves ahead].

    Same question as to Joseph.

    Tune the population and settings, you say? What settings, what program?

  241. George L Farquhar

    Atom,
    Would it be possible for you to add a “FSCI meter” in the new version you are going to make available?

    It would be interesting to see the FSCI values for the various strings change over time.

    I realise this is “feature creep” but I think it would be a great feature!

    So we can see exactly how much active information a Proximity Reward fitness matrix contributes to the search and users can replace this with a reward matrix generated by a fitness function of their choice.

    Can we measure the active information in FSCI terms?

  242. R0b,

    I’m not in charge of the “Math” pages, I just code the GUIs. I’m sure Dr. Marks will update the pages, however, to include descriptions of the new features.

    GLF,

    I got the idea of latching from Dr. Marks and Dr. Dembski, who got that idea from the apparent (and possibly real) locking in the book version. With a low enough mutation rate and large enough population size (1 mutation per generation, 100 offspring) you get apparent locking behavior, as Joseph pointed out, even though no locking mechanism is in place for Proximity Reward Search.
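Atom's implicit-latching point is easy to check empirically. Below is a minimal non-latching Weasel sketch (function names, the seed, and exact settings are illustrative; this is not the EIL GUI code), using roughly his stated parameters of about 1 expected mutation per offspring and 100 offspring per generation:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # number of positions that match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate, rng):
    # every position may mutate; there is no locking mechanism anywhere
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

def run(pop_size=100, rate=1 / 28, seed=1):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = reversions = 0
    while parent != TARGET:
        generations += 1
        best = max((mutate(parent, rate, rng) for _ in range(pop_size)),
                   key=fitness)
        # a reversion: a letter correct in the parent goes wrong in the winner
        reversions += sum(p == t != b
                          for p, t, b in zip(parent, TARGET, best))
        parent = best
    return generations, reversions
```

With these settings the winning child only rarely loses an already-correct letter, so a run tends to look latched even though nothing in `mutate()` ever locks a letter.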

    As for the FSCI meter and feature creep: Sounds like something of an interesting idea, but the strings you search for can be any string, meaningful or not. I’d have to think through what FSCI would mean in that context (if it even applied), before I could even begin to code something like that. (FSCI requires a definition of functional states…I guess I could make the target the single functional state?)

    Anyway, I have real (paid) work to do in the meantime, so I have to put off any major improvements or new features until I get a chunk of free time.

    But there’ll be plenty to play with and discuss in the new version.

    Atom

  243. R0b wrote:

    In Dawkins’ non-latching code, the only information that the oracle communicates is the fitness level of the candidate.

    …and this “fitness level” encodes quite a bit of information about how far away a string is from the target; it is exactly 28 minus that distance, if I am not mistaken.

    Without the information about target location encoded in the fitness reward matrix, the same algorithm sputters and can’t find the target well, if at all, even with reproduction, mutation and selection. In version 2.0 you can see this for yourself by choosing (or creating) a fitness function that has limited information about the target location.

    Atom

  244. Upright BiPed:

    He makes the larger point clear: there is a target in Dawkins’ exercise. The simulation reaches a predefined goal. Life, or more appropriately evolution, does not work that way. There is no target.

    Don’t tell Marks and Dembski this. If evolution, according to MET, has no target, then Marks and Dembski can’t apply their active information measure to biology without begging the question.

    Produce “methinks its a weasel” without that phrase existing anywhere in the system, then that will be impressive.

    Until then, Berlinski is 100% correct.

    There are, of course, many such algorithms. A few years back, there was a kerfuffle in the ID debate over an algorithm that found the Steiner tree for a given set of points. This algorithm, like all useful searches, successfully inverted its objective function, providing solutions that were unknown to the designer of the program.

    If Berlinski’s point is that Weasel doesn’t do this, then he’s trivially correct. Weasel is useless except as a pedagogical tool.

  245. Dawkins evidently did not know, almost 25 years ago, that he had implemented an evolution strategy (ES). This does not give us license to ignore today that the Weasel program is an instance of an ES.

    The shortcoming in the Weasel program is not in the ES, but in the fitness function. No evolutionist believes that the (un-)fitness of an organism is its distance in some space from a target.

    The ES itself knows nothing about targets. It knows only that the solution space is the set of all length-28 sequences of uppercase letters and blanks. The ES passes candidate solutions to the fitness function, in which the details of the problem are hidden from the ES proper. The Weasel fitness function may be written

    w(s) = 28 – Hamming(s, T),

    where s is the candidate solution (a sentence of 28 letters and blanks), T is the target sentence, and Hamming(s, T) is the Hamming distance — the number of mismatches — between s and T. The ES uses the fitness values w(s) of progeny s to select the parent of the next generation, but has no “idea” how w(s) is computed.
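Sal Gal's fitness function transcribes directly into code; a sketch (the helper names are mine, not from any published Weasel source):

```python
def hamming(s, t):
    # number of positions at which the two strings differ
    return sum(a != b for a, b in zip(s, t))

def w(s, target="METHINKS IT IS LIKE A WEASEL"):
    # Weasel fitness per Sal Gal: 28 minus the Hamming distance to the target
    return 28 - hamming(s, target)
```

The ES proper only ever sees the value w(s); the target T lives entirely inside this function, which is Sal Gal's point about the problem details being hidden from the ES.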

    In short, you get Dawkins’ Weasel program by plugging a particular fitness function into a generic ES. I can see only propaganda purposes in attacking the Weasel program instead of the combination of generic ES and fitness function. Do Dembski and Marks really want to make scholarly contributions to evolutionary informatics, or do they seek instead to make a big show of setting up and knocking over an ancient straw man?

    The Weasel fitness function is very similar to one of the more heavily studied functions in the theory of evolutionary computation. The only difference between the Weasel function and the ONEMAX function is that ONEMAX restricts the characters to 0s and 1s, and the target is all 1s. It’s a fair guess that some, if not most, ONEMAX analyses generalize easily to non-binary alphabets. In other words, there is quite a body of theory to draw upon in analysis of an ES operating with the Weasel fitness function.

    I believe that Dembski and Marks have known for quite some time that the Weasel program is an ES. So what game is Dembski playing? If you are a legitimate scholar and you know that you are analyzing an ES, you go to the ES literature to find prior analyses. You certainly do not conceal the prior work by making up new terminology like “proximity search” and “locking.”

    I call shenanigans.

  246. R0b says,

    Weasel is useless except as a pedagogical tool.

    Thank you. I had written something very similar before seeing your comment: Dawkins made it very clear that he was using the Weasel program as a pedagogical tool.

    Anyone who has worked with evolution strategies — this should include any evolutionary informaticist — asks first about an ES whether parents compete with progeny for survival into the next generation. Dawkins answers this clearly enough in The Blind Watchmaker:

    The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.

    I have to ask if Dembski has ever implemented an evolution strategy for himself. I cannot imagine that he would have found Dawkins’ explanation ambiguous if he had.

  247. Sal Gal wrote:

    You certainly do not conceal the prior work by making up new terminology like “proximity search” and “locking.”

    I made up the phrase “Proximity Reward” search, not Dembski. I use the phrase since it clearly explains to the layman what the fitness function in the Weasel example is doing. It rewards a string based on proximity to a target, hence “Proximity Reward Search”.

    If that’s “shenanigans”, then so be it. I think it is clear.

    Atom

  248. Sal Gal,

    Is your representation of Evolution Strategies correct? It seems ES deals with real numbers. Hence, Weasel is not an ES in the terminology of some. If so, your assertion of shenanigans is premature at best and possibly incorrect altogether.

    Are you prepared to argue Dawkins uses Gaussian Random Noise onto real numbers?

    :-)

    From Data Structures

    Evolution Strategy or Evolutionsstrategie

    A search technique first developed in Berlin. Each point in the search space is represented by a vector of real values. In the original Evolution Strategy, (1+1)-ES, the next point to search is given by adding gaussian random noise to the current search point. The new point is evaluated and if better the search continues from it. If not the search continues from the original point. The level of noise is automatically adjusted as the search proceeds.

    Evolutionary Strategies can be thought of as like an analogue version of genetic algorithms. In (1+1)-ES, 1 parent is used to create 1 offspring. In (μ+λ)-ES and (μ,λ)-ES, μ parents are used to create λ children (perhaps using crossover).

    Also from Wiki:

    Evolution strategy – Works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates;
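For concreteness, here is a bare (1+1)-ES of the kind the quoted glossary describes, minimising over a vector of reals. The automatic adjustment of the noise level (e.g. the 1/5 success rule) is omitted, and all names are illustrative:

```python
import random

def es_1plus1(f, x0, sigma=0.5, iters=2000, seed=0):
    """Bare (1+1)-ES: add Gaussian noise to the current point, keep it if better."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy <= fx:  # minimisation: the search continues from the better point
            x, fx = y, fy
    return x, fx

def sphere(v):
    # simple test objective: squared distance from the origin
    return sum(vi * vi for vi in v)
```

Weasel fits this template only if the real vector and Gaussian noise are swapped for a string and letter mutation, which is why whether Weasel "is" an ES turns on how broadly the term is read.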

  249. Damn, back at 228 I actually thought I agreed with kairosfocus about something, but now I see I was wrong.

    He wrote,

    On the evidence of the o/p circa 1986, the simplest explanation is explicit latching of letters once they go correct. Implicit latching and/or quasi-latching is also possible, and that was noted before this thread ever began. In the D-M paper, they looked at the former case, which is a legitimate case of Weasel, given the many versions floating out there.

    No, no, no. The simplest explanation is NOT explicit latching. There is no reason to think that Dawkins programmed such a rule into Weasel. The simplest explanation is that there never was such a rule, and as I showed (and as many people much more qualified than I already knew), correct letters mutating back to incorrect is very rare, although possible.

    Dembski and Marks’ paper assumed that an explicit latching rule existed, but I think that assumption is much less warranted than my conclusion, which is that no such rule ever existed.

    I don’t mean to start this discussion up all over again, but I need to make it clear that I retract my statement that I agreed with kairosfocus.

  250.

    hazel, I was wondering about that, and thought about mentioning it in [232] and [236]. Thanks for the clarification.

  251. Upright BiPed:

    Produce “methinks its a weasel” without that phrase existing anywhere in the system, then that will be impressive.

    1. No human has written a static board evaluator for checkers that yields expert-level play (better than 99% of humans) with shallow minimax search of the game tree. Yet the Blondie24 static evaluator does just that, and was obtained by coevolution. Humans and computer programs that play better than Blondie24 look further ahead into the game before making moves. That is, Blondie24’s board evaluation is qualitatively unlike that of human and human-programmed players. So what is the source of the knowledge implicit in this novel approach to play?

    2. Fogel took a public-domain chess program that played at the master level, and adapted the static board evaluator, which had been improved by “intelligent” humans over a period of years, with an algorithm for coevolution. In short order, the program improved to grand master performance — the rating increased by over 300 points. There was nothing “chess oriented” in the algorithm. All it did was to adjust the parameters that humans had been adjusting. What was the source of the implicit gain in “understanding” of the relative values of different board configurations?

    3. Back in the 1990s, a population of 20 thousand artificial neural networks was evolved to predict annual sunspot counts. The approach was purely statistical — there was no input of what little knowledge there was of solar weather. The predictions of the neural nets were combined in a stacked generalization scheme. The resulting scalar predictions were much more accurate than any previously reported — including those for models developed by human experts. What was the source of “knowledge” of solar dynamics?

    The second example is the one IDists should give the most thought. When an evolutionary algorithm begins with an artifact that a team of humans has struggled to make perform very well, and makes it perform fabulously, that is significant — particularly when there is no problem-specific input to the algorithm.

  252. Atom,

    Why did Dembski and Marks not explain to you that you had implemented an evolution strategy?

    Now that you know that you have implemented an evolution strategy, will you cite Rechenberg and Schwefel, and call their algorithm by its established name?

  253. Sal Gal,

    I appreciate your condescension. I think the terminology I’m going with fits the purpose of explaining what the GUI does well. If Dr. Marks wants to cite some of the prior literature, he is free to or better yet, you can feel free to link to the GUI once its up and provide more background detail on Weasel, or Evolutionary Search, if you’d like. That is always an option.

    Atom

  254. Onlookers:

    It is clear that the substantial matter is more or less settled:

    1 –> Weasel’s real downfall is targeted search without functionality.

    2 –> The circa 1986 o/p [which can be seen per law of large nos etc as representative of what Mr Dawkins saw as “good” o/p at that time] can most easily be accounted for on explicit latching; but implicit [quasi-] latching will also work.

    3 –> The 1987 o/p plainly shows the latter, and on the strength of Mr Dawkins’ reported testimony that he did not use explicit latching, the latter is the best explanation on preponderance of evidence. (Though that winking behaviour still sticks out as a puzzle.)

    4 –> To see what is going on, I think we can visualise:

    – a lens-shaped disk, one side red, the other blue; loaded so that the blue side is uppermost say 95% of the time on tossing. [This is a 2-sided, loaded die, or an extension of a flippable coin.]

    – then, look at a string of letters and spaces, X1, X2 . . . X28. Toss the disk and according as the sides come up red/blue, change at random across the set {A, B, C, . . . , Z, *}.

    – repeat for say 20 or 50 or whatever times.

    – the closest to “Methinks” wins and becomes the new string.

    – repeat until the Methinks sentence appears.

    5 –> The key problem here is that non-functional phrases are rewarded on mere proximity to Methinks, so that the issue of needing to achieve a minimum reasonable threshold of complex function is a begged question. This is multiplied by the implications of such a broadcasting oracle that in effect, through intelligent design, attracts arbitrary initial configurations, step by step.

    6 –> Weasel is NOT illustrative of the “BLIND watchmaker” of the title of the book in question.

    GEM of TKI

    ________________

    PS: On the ad hominem side issue that came up once Severski cited with approval Mr Boyne's remarks equating Evangelical Christians an Islamic militants (in a context of dismissing our very legitimate concerns on public lewdness in the Dancehall subculture that was at that time triggering court actions in Jamaica in defense of public morality):

    Re DK, 236: I think the term “blood slander” is awfully strained in your usage; I’d even call it hyperbolic.

    EXCUSE ME!

    Mr Kellogg, have you been attending to the headlines in recent years on the activities of “Islamist militants,” e.g. 9/11, 7/7 and the Taliban regime?

    Mr Boyne EXPLICITLY equated Jamaica’s Evangelical Christians with such, and when that was pointed out, in trying to deny it, he did just that again.

    But, in fact, evangelicals and our progenitor dissenters of old have both had much to do with the rise of modern liberty and democracy, and with the rise of liberty in Jamaica specifically. [We have some national heroes of Jamaica to prove that – all martyrs.]

    Not to mention, there simply is no proper comparison between the list of Islamist militants as outlined above and members of “the Firstborn Church of God in Christ, Spirit Filled and Triumphant” or some similar typical small Jamaican countryside or ghetto area church. If one cannot see that to make such an equation is beyond all limits of civility and intends to smear Christians with the blood spilled by Islamist terrorists and tyrants [all through the wonderful power of that ever so handy smear-word, “fundamentalism”], something is wrong, deadly seriously wrong. (And, this is all in a context of plainly legitimate concern over out of control lewdness in the Dancehall subculture: public, onstage “patting” and “dry humping” are no laughing matter, sir. Nor is that monstrosity that now sits at the corner of Knutsford Boulevard and Oxford Road, put there at a cost to the public of J$4.5 million, by the same artist who put a nude female figure arched over rearwards such that the exaggerated pelvic region was at eye level and only a few metres from the entry of the student chapel at U Tech, Jamaica.)

    Your own dismissal of such tells me all I need to know, sir; about Anti Evo, about Mr Severski who has seemingly vanished, and your own perceptions.

    Sadly. PLEASE, re-think.

  255. Pardon: Sev’s cite was of remarks in a context that equated . . .

  256. Hazel:

    Look again at the 1986 o/p.

    Do you see any of the letters that become correct ever reverting?

    Think about the simplicity of coding of an explicit latch vs a pgm that ends up implicitly latching (as Apollos, Atom etc have shown).

    Think about sample size and implications thereof when 200 samples show a pattern, with no exceptions.

    GEM of TKI

  257.

    kairosfocus [253], I don’t mean to continue a side debate, but I read nothing in which Mr. Boyne “equat[ed] Evangelical Christians an[d] Islamic militants” — in what I read, he was careful to distinguish the two. But that was in a response to something I did not manage to dig up. Perhaps I missed it.

    Our disagreement over the value of the Redemption Song monument is a mere difference of taste — de gustibus.

    The issue of public lewdness is not really one I have a stake in, which is why I didn’t comment on it.

  258. kf writes,

    Think about the simplicity of coding of an explicit latch vs a pgm that ends up implicitly latching (as Apollos, Atom etc have shown).

    No. Obviously it is simpler to NOT write a rule than to write a rule. If you don’t write a rule for latching, then what you call quasi-latching just happens.

    Also, if you have an explicit latching rule, then you have to mark each slot with a marker showing whether it is correct or not so that the GA side of the program (as opposed to the fitness function) knows to not subject that slot to further mutation.

    On the other hand, if you don’t have an explicit rule, the GA side of the program doesn’t need to know such information. The fitness function just adds up the number of correct characters, but it doesn’t need to pass back information about which characters are correct.

    The latter case is obviously simpler.

    Having a rule is NOT simpler.

  259. Hazel:

    Please, look again at the output circa 1986. You will see that as soon as a letter goes correct, it latches.

    The issue is, why?

    The simplest answer is that it does so by a metric on letterwise distance to target which, once the letters hit home, locks them. That is, partitioned search on a letterwise basis.

    Moreover, Weasel c 1986 is not like the GA’s you are thinking of.

    It is not measuring a figure of merit that is independent of a specified target point in the config space [unlike, say the directivity of an antenna], then tapping out the way to a local peak by successively identifying sectors that climb most steeply when rings of tests are successively thrown out.

    It measures straight [quasi-]Hamming distance to target point then rewards the closest in each generation with the title champion; then sets out on the next round, with that as the start point for further increments to target. So, it naturally ALREADY has in it the information on where the current champion is relative to the target, on a letter by letter basis.

    As was pointed out above and elsewhere, it takes no great further effort to then use that information to lock up zero-distance letters, implementing letterwise partitioned search. [An easy metric is 1 if non correct, 0 if correct, with a line of code or two to lock from further mutation on zero. You can even put in fairly simple code to change the otherwise locked, as Apollos did.]
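The letterwise partitioned search described here (distance 1 if incorrect, 0 if correct, locked on zero) can be sketched as follows. This is a hypothetical reconstruction for illustration, not Dawkins' actual code:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def partitioned_step(current, target, rng):
    # letters at zero distance (already correct) are locked and never touched;
    # every incorrect slot gets a fresh uniform draw from the alphabet
    return "".join(c if c == t else rng.choice(ALPHABET)
                   for c, t in zip(current, target))
```

Run in a loop, a correct letter can never revert, which is exactly the latching signature visible in the 1986 output.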

    To do the implicit latch case, you are actually tuning performance on the parameters.

    Some of that may be fairly intuitive, e.g. 5% p(change) means that a bit over 1 letter on avg will change per population member, with a significant fraction not changing. (And note how much information about the target is captured in that simple setting of a parameter!)

    And of course, population size to get the sort of published run length requires a fair amount of tweaking to avoid a tearaway rush as newly multiply correct code pop members dominate on the “nearest to target” metric. You may not see such fine-tuning as programming, but it is.

    A further degree of that tuning would be to set up so we see [quasi-]latching as opposed to flicking back as a fairly frequent phenomenon. That is in fact a significant and observable published o/p difference 1986 vs 1987.

    And of course, all of this is on a secondary point. Weasel’s downfall is the targeted search that rewards non-functional configs. By doing that it cannot reasonably be viewed as a BLIND — non-purposive — watchmaker in action using a good analogue of natural selection.

    I suspect that a pairwise partitioned search — i.e 14 pairs of characters — that requires pairs of letters to be correct before the metric on distance decreases from 1 to 0 would show an interesting degradation of performance in terms of number of generations to target. And, that would be for just 1 of 27^2 as a crude functionality metric. Quad-letter matching or 7-letter or 14 letter matching requirements would then show increasing degradation.

    All of which would point out that once realistic functionality requisites are there, Weasel falls apart.
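The pairwise variant amounts to swapping the letterwise count for a block count; an illustrative sketch with block size k (k = 2 gives the 14-pair case, and the name is mine):

```python
def block_fitness(s, target, k=2):
    # one point per aligned k-letter block that matches the target in full
    return sum(s[i:i + k] == target[i:i + k]
               for i in range(0, len(target), k))
```

With k = 1 this reduces to the ordinary Weasel count; at k = 28 nothing scores until the whole sentence is right, so intermediate steps earn no reward at all.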

    Hey, ATOM, are you going to write that capability into your new version’s distance to target metric? [It would be real fun to watch how metrics that are 1-letter, 2, 4, 7, 14 and 28 compare, live. And, with variable generation pop size, variable mutation rates and with on/off on explicit latching.]

    GEM of TKI

  260. hazel wrote:

    No. Obviously it is simpler to NOT write a rule than to write a rule. If you don’t write a rule for latching, then what you call quasi-latching just happens.

    It was actually easier to write Partitioned Search than it was to write the Proximity Reward Search (non-latching Weasel.) The latter required arrays of offspring, more user interface components, functions for mutating strings, and of course a fitness function.

    Partitioned Search required minimal UI components, a single string to check against, and a single random letter function. (Which the mutate strings function above uses anyway.)

    Just my two cents, since y’all are talking about what is simpler to implement.

    Atom

  261. KF,

    That’s what we refer to as “feature creep” :) In all seriousness, though, I believe you can do what you’re asking in Weasel 2.0 by simply writing a custom fitness function that assigns values based on pairs of letters, rather than single ones. It should be simple to code that function in javascript (if you don’t know javascript, I’ll code an example version for you via email…just let me know.)

    But yeah, you should be able to do what you’re asking already. You can then set a multi-run of 10,000 searches, view the results as CSV (for easy import into spreadsheet apps), and explore the performance issues involved with testing against pairs (or triples, or whatever) of letters vs. just one letter.

    Atom

  262. Hi Atom. Yes, I’m interested in the question of which would be simpler to implement. I have gotten interested in this because of the math and logic involved. I had never paid any attention to this Weasel program until the subject came up here. I taught simple BASIC and Pascal programming back in the mid ’80s, about the same time Dawkins was writing Weasel, but have done no programming in any more complicated languages and have not thought about programming for years.

    So I’d like to explore your comment that implementing a rule would be simpler than not implementing a rule, or even more basic, what the difference would be in the two situations.

    You write,

    It was actually easier to write Partitioned Search than it was to write the Proximity Reward Search (non-latching Weasel.) The latter required arrays of offspring, more user interface components, functions for mutating strings, and of course a fitness function.

    Partitioned Search required minimal UI components, a single string to check against, and a single random letter function. (Which the mutate strings function above uses anyway.)

    I don’t think I see why some of what you say is true. Let’s look at each part.

    1. As you say, each type requires a mutation function

    a. Partitioned (using your term for a latching rule in place) requires that you look at a letter, see if it is correct, which requires checking it with either the target itself or a previously stored bit of information flagging that it is correct, and then, if it is incorrect, subjecting it to a possible mutation based on the mutation rate being used.

    b. Proximity (no latching rule in place) requires that you just subject each letter to a possible mutation based on the mutation rate being used.

    Seems clear to me that the mutation function is simpler for Proximity.

    2. Each type requires a fitness function. In both cases you have to check the child (the phrase being evaluated) against the target string, and in both cases you have to add up the number of correct letters so that you can later decide which child in the generation is the most fit.

    And in both cases, you are going to have to go through the child phrase letter by letter.

    a) In Proximity, all you have to do is decide whether the letter is correct, increment the correct letter counter if it is, and store (temporarily) the number of correct letters. Basically the routine would be

    For each letter,

    * check the letter against the target letter
    * if they match, increment the correct letter counter

    b. In Partitioned, you could do exactly the same thing.

    Or, in Partitioned, you conceivably could use the previous information stored in the child as to whether a letter was correct or not, and only check those that had been incorrect, but you still have to add up the number of correct letters. Here the routine would be something like

    For each letter,

    * check to see if the letter has already been marked as correct
    * if so, increment the correct letter counter
    * if not, check the letter against the target letter
    * if it is now correct, increment the correct letter counter
    * also, if it is now correct, flag it as correct.

    This second option is certainly not simpler than just checking each letter against the target.

    3. User Interface.

    Why do the two methods differ here? You just input your basic info (mutation rate and population size) and decide what output you want to display: every child and its associated number of correct letters, or just the best surviving candidate for the next generation.

    I don’t see any user interfaces differences.

    4. Arrays of offspring

    You say that only Proximity requires an array of offspring, but I don’t understand that.

    In both cases you have to store information about all the children in a generation in order to decide which is the best and thus survives to be the next parent. After you have picked the survivor you can erase that array to use in the next generation. Presumably you would want to keep an array of the survivors so you could go back and look at the progression from beginning phrase to target string, but this has nothing to do with the type of search you used.

    Also,

    a. In Proximity, all the array needs to hold for each child is the string itself, and perhaps the result of the correct letter counter from the fitness function (depending on how you implement a routine to choose the best child.)

    b. In Partitioned, the array for each child must hold the string itself and a correct flag to mark those letters that no longer mutate, and perhaps the result of the correct letter counter from the fitness function (depending on how you implement a routine to choose the best child).

    Therefore, Proximity is again simpler.

    Therefore, overall, Proximity is simpler.
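The routines compared above transcribe almost line for line into code; a sketch in which the names and the locked-flags list are illustrative, not taken from any actual Weasel implementation:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate_proximity(child, rate, rng):
    # 1b: no per-letter state; every slot is a mutation candidate
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in child)

def mutate_partitioned(child, target, rate, rng):
    # 1a: correct slots are exempt from mutation
    return "".join(c if c == t else
                   (rng.choice(ALPHABET) if rng.random() < rate else c)
                   for c, t in zip(child, target))

def proximity_fitness(child, target):
    # 2a: just count matches; nothing per-letter is stored or returned
    return sum(c == t for c, t in zip(child, target))

def partitioned_fitness(child, target, locked):
    # 2b: 'locked' is the stored per-letter correct flag, updated in place
    correct = 0
    for i, (c, t) in enumerate(zip(child, target)):
        if locked[i] or c == t:
            locked[i] = True
            correct += 1
    return correct
```

As written, the proximity pair needs no per-letter state at all, while the partitioned pair must carry and update the locked list, which is the bookkeeping the comparison above turns on.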

    I welcome your comments, and would hope you could address the specific points I’ve made.

    hazel

  263. Atom

    Thanks.

    I suspect we pretty well know what to expect already. (And, the fact that it is EIL that is providing the “roll yer own” version of Weasel should more or less tell us where that will most likely point.)

    GEM of TKI

  264. So little time…

    Part of the Marks/Dembski paper discusses a “partitioned search”.

    To illustrate a partitioned search they refer to the book “The Blind Watchmaker” and the use of the “weasel” program.

    In TBW Dawkins uses “weasel” to illustrate cumulative selection.

    “Cumulative” means “increasing by successive additions”.

    INCREASING BY SUCCESSIVE ADDITIONS.

    “Ratchet” means to “move by degrees in one direction only”.

    Increasing by additions means to move by degrees in one direction only.

    Dawkins NEVER mentions that one or more steps can be taken backward. He never says anything about regression.

    Therefore cumulative selection is a ratcheting process as described and illustrated by the “weasel” program in TBW.

    That is, once a matching letter is found the process keeps it there. No need to search for what is already present.

    Translating over to nature this would be taken to mean once something useful is found it is kept and improved on.

    IOW it is not found, lost, and found again, this time with improvements. Reading TBW, that doesn’t fit what Richard is saying at all.

    And he never states that he uses the word “cumulative” in any other way but “increasing by successive additions”.

    How can a process be “cumulative” and at the same time allow you to keep losing what you have?

    We would call that the “yo-yo” selection process.

    And then no one would infer it is a partitioned search.

  265. Joseph

    Well said, thanks.

    GEM of TKI

  266. PS: Severski et al — why the silence for days now? I think there are some serious things you [and your ilk at Anti Evo] need to account for above given the context for your enthusiastically cited dismissal of me.

  267. kairosfocus @263

    PS: Severski et al — why the silence for days now?

    Probably, like me, they are all being moderated to the point that participation here is nearly impossible.

    I think there are some serious things you [and your ilk at Anti Evo] need to account for above given the context for your enthusiastically cited dismissal of me.

    The fact remains that your claim that Dawkins explicitly latched correct letters in his implementation of Weasel is completely unfounded, and refuted both by video evidence and Dawkins’ own statements.

    When are you going to admit that you were wrong?

    JayM

  268.

    kairosfocus, I can’t speak for others, but I have been delayed by the moderators, to whom I have repeatedly appealed.

  269. Severski et al — why the silence for days now?

    I hope they haven’t been put on moderation. I have enjoyed their contributions.

  270. Which also raises the question of what I need to do to be taken off moderation.

  271. I have zero interest in the Weasel Program and haven’t followed this thread, except that there seems to be a controversy over whether, once a correct letter is reached, it is maintained. Here is an analysis from a couple of years ago which says the new string is kept if it is a better fit than the current string. Thus it could be that some letter went off but some other letter came on.

    http://vlab.infotech.monash.ed.....in-weasel/

    Here is a quote from the site

    “Note that this demo works slightly different than the model described by Dawkins in his book. We are grateful to Dr. W. R. Elsberry for pointing this out and for highlighting the differences. In the original model the letters do not become fixed. Instead, at each generation (i.e. step) a number of mutant strings are produced from the current copy by randomly changing some letters. The mutants are considered to be chosen for the next generation. The chosen string is the one that most resembles the target string.

    The results of the model presented here and of the original Dawkins model are essentially the same: at each step either a “better” string is produced or something quite similar to the current version is retained. As a usual consequence, current strings are replaced with new strings that have at least as many matches as the previous one. We have created a more elaborated version of the weasel – the genetic algorithm weasel. That version incorporates the selection process as it is described by Dawkins and extends his model with some features commonly found in genetic algorithms.”

  272. to Atom: in 262 I wrote about four ways that a Proximity (non-latching) search would be simpler to implement than a Partitioned (latching) search, and I’d like to add a small comment about arrays here.

    But first let me say that I know you are busy, and that responding to me might be a fairly low priority for you. However, given that the question of which method is more natural and/or simpler has been the subject of quite a bit of discussion, and since you are the programmer of Weasel Ware (I gather), I would hope you would find time sometime in the next few days to respond to me. I feel I have some good points, so I would very much like to hear your specific responses, either showing me where I am wrong, or perhaps agreeing with me in places.

    With that said, let’s talk about arrays.

    First, of course, every phrase (a 28-character string) is a one-dimensional array, and both methods need those. I, however, had been thinking of the set of all the children of each generation (let’s say n = 100, for example). I had been thinking that the program would generate all 100 children, store those in a 28 x 100 array, and then go back and use the fitness function to decide which child is best, based on the highest number of correct letters. (Obviously, there would have to be some way of deciding among children with the same highest score, but that is an arbitrary feature not important for the discussion.)

    But now I see that such an array is not necessary, as you can check each child right after it is “born”, so to speak, against the current best child: if the new child is better, it becomes the new best child, and if not it is discarded and the next child is born. Therefore, if one was concerned solely with counting how many generations it took to reach the target, and one had no desire to save any historical data, one wouldn’t need anything but a few 1 x 28 arrays to store just a few strings: the target, the current parent, the current best child, and the current newborn child. No large arrays necessary at all.
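    The array-free bookkeeping described above can be sketched in Python. This is my own illustrative code, not Atom’s or Zachriel’s; the 27-character alphabet (capitals plus space), the mutation rate, and the function names are all my assumptions:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 capitals plus the space
TARGET = "METHINKS IT IS LIKE A WEASEL"

def fitness(s, target=TARGET):
    """Number of positions where s matches the target."""
    return sum(a == b for a, b in zip(s, target))

def mutate(parent, rate=0.05):
    """Copy the parent, re-drawing each letter with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def next_generation(parent, n_children=100, rate=0.05):
    """Breed n children one at a time, keeping only the best seen so far.
    No 28 x n array is needed: at any moment only the parent, the current
    best child, and the newborn child exist."""
    best = mutate(parent, rate)
    for _ in range(n_children - 1):
        child = mutate(parent, rate)
        if fitness(child) > fitness(best):
            best = child
    return best
```

    With a mutation rate of zero every child is a clone of the parent, which makes the bookkeeping easy to check by hand.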

    And none of these things has anything to do with which kind of search you use, as far as I can see.

    So please consider this comment added to (and supplanting in places) the comment on arrays in post 262 above.

  273. My apologies for the gap in my posting but I can assure you it was not due to moderation or to my having fled the scene, I just found myself otherwise engaged for a while.

    As far as the WEASEL program is concerned it seems to me that, whatever the suspicions of some here, there is no evidence that Dawkins included latching. The appearance of latching can be explained just as easily as an artefact of the sampling in the printed runs. Even if he had included latching, though, would it have made much difference if the purpose was, as stated, simply to illustrate the advantage of cumulative selection over random searching in reaching a target?

    On the question of targets, the fact that WEASEL searches for a target phrase specified by the programmer is often urged as evidence that undermines its value as an illustration of evolution. Critics argue that evolution denies the influence of a Designer and that without one there can be no purpose and no targets; if it is just a question of blind chance and necessity how could creatures like ourselves have emerged?

    This misses the point that there is no requirement for an Intelligent Designer to set targets. Nature, in the form of the environment, is quite capable of doing it without any help. Suppose, for example, a particular region begins to get slowly colder after a long period of balmy warmth. This sets a new target for the local flora and fauna: surviving in lower temperatures. Thus we might find animal physiology and morphology converging on solutions that aid heat conservation, like a stockier build, surrounding the body core with a layer of fat, and growing a much thicker coat of fur.

    Yes, you can argue that an omniscient Designer foresaw this possibility and ‘frontloaded’ a genetic ‘toolkit’ and stored it away in amongst the “junk” DNA against just such an eventuality. But this requires that the genetic software be conserved unchanged over millions or possibly billions of years. This is at least as unlikely as the evolutionary ‘just so’ stories are supposed to be by critics.

    On the other issue of the article by Ian Boyne, let me reiterate that it was cited only to illustrate that kairosfocus’ proper name could easily be found in the media. I did not read the article in detail. I got the impression that it was something to do with Christianity, but not really much more than that.

  274.

    Atom (and kairosfocus, and hazel), Zachriel has written a simple non-latching WEASEL program that does not require an array. You can get the Excel file at

    http://www.zachriel.com/weasel/weasel.xls

    Take a look at the VBA code. It seems pretty simple. No latching, no array, just a few lines. Is your code simpler?

    Zachriel also finds that letters revert. Here is what he says in a post at AtBC:

    Here is a quicky analysis of Letter Reverts (over 10 trials):

    Pop, Mut, Rev
    50, 5%, 1.3
    50, 10%, 19.1
    25, 5%, 2.2

    Trials with higher Letter Reverts also have Fitness Retreats.

    Interestingly, with Pop=25, Mut=10%, fitness rises, then bounces around 20-22. Reverts are *normal*.
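    For readers who want to reproduce this kind of tally, here is one hypothetical way to count letter reverts for a single trial in Python (my own sketch, not Zachriel’s VBA; the 27-character alphabet, the parameter names, and best-child-replaces-parent selection are my assumptions; Zachriel’s figures average 10 trials):

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def mutate(parent, rate):
    """Copy the parent, re-drawing each letter with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def count_reverts(pop=50, rate=0.05, max_gens=10000):
    """Run one Weasel trial; count generations in which a letter that was
    correct in the parent is incorrect in the chosen best child."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    reverts = 0
    for _ in range(max_gens):
        children = [mutate(parent, rate) for _ in range(pop)]
        # The fittest child replaces the parent even if it lost a letter.
        best = max(children,
                   key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
        reverts += any(p == t != b
                       for p, b, t in zip(parent, best, TARGET))
        parent = best
        if parent == TARGET:
            break
    return reverts
```

    With the mutation rate set to zero no letter can ever revert, which gives a quick sanity check.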

  275. Anything that allows for a reversion is NOT an example of CUMULATIVE selection.

    Unreasonably high mutation rates go against what Dawkins is talking about in TBW: slight changes, each an improvement.

    His point on CUMULATIVE selection is lost if traits (characters) can be found, lost, found again, only to have something else lost, etc.

  276. hazel,

    Sorry for the delay. You guessed correctly: I was tied up all day at work and didn’t have an opportunity to respond. I will explain the two implementations in pseudo-code, as I think it will answer a lot of your questions.

    Partitioned Search

    1) Start with a random string the same length as the target.

    2) For each letter in the string, check if it matches that position of the target. If so, do nothing. If not, mutate that letter.

    3) Repeat until string matches target.

    (UI components needed: User specified target.)
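    The three steps above might look like this in Python. This is a minimal sketch under my own assumptions (a 27-character alphabet of capitals plus space, one re-draw per wrong letter per iteration, and an iteration cap), not Atom’s actual Weasel Ware code:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def partitioned_search(target=TARGET, max_iters=100000):
    """Mutate only the letters that do not yet match; matching letters
    are left alone, so they are locked in place (the ratchet)."""
    # Step 1: start with a random string the same length as the target.
    phrase = [random.choice(ALPHABET) for _ in range(len(target))]
    for iteration in range(1, max_iters + 1):
        # Step 2: keep matching letters, re-draw the rest.
        phrase = [c if c == t else random.choice(ALPHABET)
                  for c, t in zip(phrase, target)]
        # Step 3: repeat until the string matches the target.
        if "".join(phrase) == target:
            return "".join(phrase), iteration
    return "".join(phrase), max_iters
```

    Because a correct letter is never touched again, this version can only ratchet toward the target, which is exactly the latching behaviour discussed in this thread.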

    Proximity Reward Search

    1) Begin with a random string the same length as the target.

    2) Using the user set values for offspring, create an array of N progeny strings that are clones of the parent.

    3) For each child, mutate letters using the number set by the user for “Mutation”.

    4) Pass each child string into a fitness function, comparing them against the target string. Using the distance from that target (checking each letter for correctness at that position), assign values to the strings. Retain the string with the lowest error value.

    5) Repeat, using the best string as the new parent, until the best string has no errors.

    (UI components needed: User Specified target, Offspring number setting section, mutation number setting section.)
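    And the five steps of the Proximity Reward Search might be sketched like so. Again this is my own hypothetical Python with assumed parameter values (Weasel Ware itself is javascript); the error count plays the role of the fitness function:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def errors(s, target=TARGET):
    """Distance from the target: number of positions that do not match."""
    return sum(a != b for a, b in zip(s, target))

def mutate(parent, n_mutations):
    """Clone the parent, then randomly re-draw n_mutations positions."""
    child = list(parent)
    for _ in range(n_mutations):
        child[random.randrange(len(child))] = random.choice(ALPHABET)
    return "".join(child)

def proximity_search(offspring=100, n_mutations=1, max_gens=100000):
    # Step 1: begin with a random string the same length as the target.
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for gen in range(1, max_gens + 1):
        # Steps 2-4: breed N mutated clones, retain the lowest-error child.
        best = min((mutate(parent, n_mutations) for _ in range(offspring)),
                   key=errors)
        # Step 5: the best child becomes the new parent, even if it is
        # worse than the old parent, so correct letters can revert.
        parent = best
        if errors(parent) == 0:
            return parent, gen
    return parent, max_gens
```

    Nothing here locks a correct letter in place; selection merely makes it unlikely that the best of N children has lost one.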

    So from my point of view, Partitioned Search was much easier to implement. I’m dealing with only one string vs. a community of strings, and a simple update at each step, rather than multiple mutations followed by a total comparison of the whole community of offspring.

    I hope that answers your questions.

    Atom

  277. to Joseph: When you climb a mountain, you occasionally go downhill before going up again. Just because you have an occasional reversal doesn’t mean you aren’t accumulating fitness in respect to the target. (And we’re not talking about “traits” here, just letters in Weasel.)

    to David, for clarification: is the following a correct interpretation of what you are saying?

    The first set of data, 50, 5%, 1.3, means that on average 1.3 of the 50 children in a generation have a letter reversal, even though the best child of the 50 wouldn’t have a letter reversal.

    And Zachriel’s remark that “Trials with higher Letter Reverts also have Fitness Retreats” means, I think, that in situations like the second one (50, 10%, 19.1) there might be generations where in fact the best child is less fit than the parent. Is this a correct understanding?

    Thanks for the info.

  278.

    hazel, Zachriel has tried (unsuccessfully) to post here. Without speaking for him, this is my interpretation. To this point:

    The first set of data, 50, 5%, 1.3, means that on average 1.3 of the 50 children in a generation have a letter reversal, even though the best child of the 50 wouldn’t have a letter reversal.

    The first part of that sentence is correct. I think the second part may or may not be correct. It could be the case that, under some conditions, depending on the population of children and the mutation rate, the best child had one correct letter revert but also had two incorrect letters mutate to correct ones.

    Probably not in most cases, though: in most cases the best child would not be among those with a letter reversion, but it happens. In other words, reversion happens in the program; it’s trivial, and it’s in fact harder to program it to prevent reversion. Selection for best fit, however, prevents such reversions from having much effect.

  279. If anyone wants to see the Weasel generator that was created a couple of years ago at Monash University in Australia, follow the link I put up above and press demo at the bottom. It uses a latching function. Here is the link again

    http://vlab.infotech.monash.ed.....in-weasel/

    The lowest I got was 57 iterations and the highest was 123, in about 15 tries. You can then view each selected parent at each iteration. On one of the runs it got to all but one letter and then took 25 tries before reaching the total phrase.

    You can enter any phrase you wish and see how long it takes. One could actually do a study and see how an added letter affects run time. Since the program only works with lower-case letters, when I put in a capital the program went into an infinite loop because it could never match the letter. I assume all 28-character sequences of letters and spaces will take about the same amount of time. But somehow weasel seems like an appropriate word for Dawkins.

    There is a version like Dawkins’ model that has population size and mutation rate as inputs, as well as other variables you can change. Follow the link for the weasel genetic algorithm.

  280. “there might be generations where in fact the best child is less than fit the parent. Is this a correct understanding?”

    That is certainly true in the Monash example. It gives the fitness function value at each iteration, and it sometimes regresses. It seems to do a random walk around a certain value but with a generally upward direction. It also seems to make rapid progress at first, and then the later steps seem to take the longest.

    This process, while fun to watch and run, has nothing to do with evolution. Methinks “methinks it is like a weasel” is Much Ado about Nothing.

  281. David Kellogg,

    Zachriel is discarding the children as soon as he checks whether they are better than the best (which is actually almost identical to my implementation; I do the same thing), but the idea is the same as my pseudo-code above: you have to evaluate the correctness of each letter on each child to get a fitness value, then compare the fitness values of the child strings to a current champion.

    Partitioned Search, on the other hand, only checks the correctness of each letter for one string.

    As the pseudo-code above shows, it is the simpler concept to implement.

    The relevant code for implementing a partitioned search is very small. My function for generating the next string in a Partitioned search, GenerateNextPartStep(), is eight lines of code with one call to GetRandomLetter(), a basic function that returns a random letter.

    If we exclude both Zach’s code for getting a random letter and mine, his relevant code section is roughly 16 lines; this could be misleading, however, since we’re comparing VBA to javascript. In any case, although we could both write shorter versions (I’m sure), coding it the natural way we each did, I find that Proximity Search is not any simpler than Partitioned Search, and is in fact somewhat more involved.

    Atom

  282. Correction to my last post: thinking about it some more, VBA uses extra lines for ending loops and incrementing, so you could count it in different ways. Excluding that difference, his code becomes only slightly longer, the difference being negligible. Again, what is important is not how tightly we can compress our code, but which concept has the simpler basic algorithm, which can be expressed in any language.

  283.

    Below I offer a complete run I did of Zachriel’s WEASEL. This was the second time I ran the program. I used a population of 50 children and a mutation rate of 5%. The rows show the best fit from each generation.

    Note what happens between generations 32 and 33. In generation 32, the best fit has an E where the E of LIKE should be. But in generation 33, the best fit includes a J in that space, and it doesn’t become an E again for a long, long time.

    1 .E….N…………………
    2 .E….N……..L…………
    3 .E….N..I…..L…………
    4 .E…ND..I..S..L……….E.
    5 .ET..ND..I..SA.L………FE.
    6 .ET..ND.@I..SA.L…..@…FE.
    7 .ET..ND.@I..SA.L…..@…FE.
    8 .ET..ND.@I..SA.L…..@…FE.
    9 .ET..ND.@I..SA.LI….@…FE.
    10 .ET..ND.@I..SA.LI.B..@…SE.
    11 .ET..ND.@I..IG.LI.B..@…SE.
    12 .ET..ND.@IO.IG.LI.B..@…SE.
    13 .ET..ND.@IO.IS.LI.B..@…SE.
    14 .ET..[email protected].B..@.E.SE.
    15 .ET..[email protected]..@.EASE.
    16 .[email protected]..@TEASE.
    17 .[email protected]..@TEASE.
    18 .[email protected]@.@TEASE.
    19 .[email protected]@.@TEASE.
    20 .[email protected]@.@TEASE.
    21 .[email protected]@.@WEASE.
    22 .[email protected]@.@WEASE.
    23 [email protected]@.@WEASE.
    24 [email protected]@.@WEASE.
    25 [email protected]@.@WEASE.
    26 [email protected]@.@WEASE.
    27 METLENKK@IYBISQLIHE@.@WEASE.
    28 METLENKK@IYBISQLIHE@.@WEASE.
    29 METLENKK@IYBISQLIHE@.@WEASE.
    30 METLENKK@IYBISQLIHE@.@WEASE.
    31 METLENKK@IYBISQLIHE@.@WEASE.
    32 METLENKK@IYBISQLIHE@.@WEASE.
    33 METLENKS@IY@ISQLIHJ@.@WEASE.
    34 METLENKS@IY@ISQLIKJ@.@WEASE.
    35 METLENKS@IY@ISQLIKJ@.@WEASE.
    36 METLENKS@IY@ISQLIKJ@.@WEASE.
    37 METLENKS@IT@ISQLIKJ@.@WEASET
    38 METLENKS@IT@ISQLIKJ@.@WEASET
    39 METLENKS@IT@ISQLIKJ@.@WEASET
    40 METLINKS@IT@ISQLIKJ@.@WEASET
    41 METLINKS@IT@ISQLIKJ@.@WEASET
    42 METLINKS@IT@ISQLIKJ@.@WEASET
    43 METLINKS@IT@ISQLIKJ@.@WEASET
    44 METLINKS@IT@ISQLIKJ@.@WEASET
    45 METLINKS@IT@ISQLIKJ@.@WEASET
    46 METLINKS@IT@ISQLIKJ@.@WEASET
    47 METLINKS@IT@ISQLIKJ@.@WEASET
    48 METLINKS@IT@ISQLIKJ@.@WEASET
    49 METLINKS@IT@IS@LIKJ@.@WEASET
    50 METLINKS@IT@IS@LIKJ@.@WEASET
    51 METHINKS@IT@IS@LIKJ@.@WEASET
    52 METHINKS@IT@IS@LIKJ@.@WEASET
    53 METHINKS@IT@IS@LIKJ@.@WEASET
    54 METHINKS@IT@IS@LIKJ@.@WEASET
    55 METHINKS@IT@IS@LIKJ@.@WEASEL
    56 METHINKS@IT@IS@LIKJ@.@WEASEL
    57 METHINKS@IT@IS@LIKC@.@WEASEL
    58 METHINKS@IT@IS@LIKC@.@WEASEL
    59 METHINKS@IT@IS@LIKC@.@WEASEL
    60 METHINKS@IT@IS@LIKC@A@WEASEL
    61 METHINKS@IT@IS@LIKC@A@WEASEL
    62 METHINKS@IT@IS@LIKC@A@WEASEL
    63 METHINKS@IT@IS@LIKC@A@WEASEL
    64 METHINKS@IT@IS@LIKC@A@WEASEL
    65 METHINKS@IT@IS@LIKC@A@WEASEL
    66 METHINKS@IT@IS@LIKC@A@WEASEL
    67 METHINKS@IT@IS@LIKF@A@WEASEL
    68 METHINKS@IT@IS@LIKF@A@WEASEL
    69 METHINKS@IT@IS@LIKF@A@WEASEL
    70 METHINKS@IT@IS@LIKF@A@WEASEL
    71 METHINKS@IT@IS@LIKF@A@WEASEL
    72 METHINKS@IT@IS@LIKF@A@WEASEL
    73 METHINKS@IT@IS@LIKF@A@WEASEL
    74 METHINKS@IT@IS@LIKF@A@WEASEL
    75 METHINKS@IT@IS@LIKF@A@WEASEL
    76 METHINKS@IT@IS@LIKF@A@WEASEL
    77 METHINKS@IT@IS@LIKF@A@WEASEL
    78 METHINKS@IT@IS@LIKF@A@WEASEL
    79 METHINKS@IT@IS@LIKF@A@WEASEL
    80 METHINKS@IT@IS@LIKF@A@WEASEL
    81 METHINKS@IT@IS@LIKF@A@WEASEL
    82 METHINKS@IT@IS@LIKF@A@WEASEL
    83 METHINKS@IT@IS@LIKF@A@WEASEL
    84 METHINKS@IT@IS@LIKF@A@WEASEL
    85 METHINKS@IT@IS@LIKF@A@WEASEL
    86 METHINKS@IT@IS@LIKF@A@WEASEL
    87 METHINKS@IT@IS@LIKF@A@WEASEL
    88 METHINKS@IT@IS@LIKF@A@WEASEL
    89 METHINKS@IT@IS@LIKF@A@WEASEL
    90 METHINKS@IT@IS@LIKF@A@WEASEL
    91 METHINKS@IT@IS@LIKP@A@WEASEL
    92 METHINKS@IT@IS@LIKP@A@WEASEL
    93 METHINKS@IT@IS@LIKP@A@WEASEL
    94 METHINKS@IT@IS@LIKP@A@WEASEL
    95 METHINKS@IT@IS@LIKP@A@WEASEL
    96 METHINKS@IT@IS@LIKP@A@WEASEL
    97 METHINKS@IT@IS@LIKP@A@WEASEL
    98 METHINKS@IT@IS@LIKV@A@WEASEL
    99 METHINKS@IT@IS@LIKV@A@WEASEL
    100 METHINKS@IT@IS@LIKV@A@WEASEL
    101 METHINKS@IT@IS@LIKC@A@WEASEL
    102 METHINKS@IT@IS@LIKC@A@WEASEL
    103 METHINKS@IT@IS@LIKC@A@WEASEL
    104 METHINKS@IT@IS@LIKC@A@WEASEL
    105 METHINKS@IT@IS@LIKC@A@WEASEL
    106 METHINKS@IT@IS@LIKC@A@WEASEL
    107 METHINKS@IT@IS@LIKC@A@WEASEL
    108 METHINKS@IT@IS@LIKC@A@WEASEL
    109 METHINKS@IT@IS@LIKC@A@WEASEL
    110 METHINKS@IT@IS@LIKC@A@WEASEL
    111 METHINKS@IT@IS@LIKC@A@WEASEL
    112 METHINKS@IT@IS@LIKC@A@WEASEL
    113 METHINKS@IT@IS@LIKC@A@WEASEL
    114 METHINKS@IT@IS@LIKH@A@WEASEL
    115 METHINKS@IT@IS@LIKH@A@WEASEL
    116 METHINKS@IT@IS@LIKH@A@WEASEL
    117 METHINKS@IT@IS@LIKH@A@WEASEL
    118 METHINKS@IT@IS@LIKH@A@WEASEL
    119 METHINKS@IT@IS@LIKH@A@WEASEL
    120 METHINKS@IT@IS@LIKH@A@WEASEL
    121 METHINKS@IT@IS@LIKH@A@WEASEL
    122 METHINKS@IT@IS@LIKH@A@WEASEL
    123 METHINKS@IT@IS@LIKH@A@WEASEL
    124 METHINKS@IT@IS@LIKH@A@WEASEL
    125 METHINKS@IT@IS@LIKH@A@WEASEL
    126 METHINKS@IT@IS@LIKH@A@WEASEL
    127 METHINKS@IT@IS@LIKH@A@WEASEL
    128 METHINKS@IT@IS@LIKH@A@WEASEL
    129 METHINKS@IT@IS@LIKH@A@WEASEL
    130 METHINKS@IT@IS@LIKH@A@WEASEL
    131 METHINKS@IT@IS@LIKH@A@WEASEL
    132 METHINKS@IT@IS@LIKH@A@WEASEL
    133 METHINKS@IT@IS@LIKH@A@WEASEL
    134 METHINKS@IT@IS@LIKH@A@WEASEL
    135 METHINKS@IT@IS@LIKH@A@WEASEL
    136 METHINKS@IT@IS@LIKH@A@WEASEL
    137 METHINKS@IT@IS@LIKH@A@WEASEL
    138 METHINKS@IT@IS@LIKO@A@WEASEL
    139 METHINKS@IT@IS@LIKO@A@WEASEL
    140 METHINKS@IT@IS@LIKO@A@WEASEL
    141 METHINKS@IT@IS@LIKO@A@WEASEL
    142 METHINKS@IT@IS@LIKO@A@WEASEL
    143 METHINKS@IT@IS@LIKO@A@WEASEL
    144 METHINKS@IT@IS@LIKO@A@WEASEL
    145 METHINKS@IT@IS@LIKO@A@WEASEL
    146 METHINKS@IT@IS@LIKO@A@WEASEL
    147 METHINKS@IT@IS@LIKO@A@WEASEL
    148 METHINKS@IT@IS@LIKO@A@WEASEL
    149 METHINKS@IT@IS@LIKO@A@WEASEL
    150 METHINKS@IT@IS@LIKO@A@WEASEL
    151 METHINKS@IT@IS@LIKO@A@WEASEL
    152 METHINKS@IT@IS@LIKR@A@WEASEL
    153 METHINKS@IT@IS@LIKR@A@WEASEL
    154 METHINKS@IT@IS@LIKO@A@WEASEL
    155 METHINKS@IT@IS@LIKO@A@WEASEL
    156 METHINKS@IT@IS@LIKO@A@WEASEL
    157 METHINKS@IT@IS@LIKO@A@WEASEL
    158 METHINKS@IT@IS@LIKO@A@WEASEL
    159 METHINKS@IT@IS@LIKO@A@WEASEL
    160 METHINKS@IT@IS@LIKO@A@WEASEL
    161 METHINKS@IT@IS@LIKO@A@WEASEL
    162 METHINKS@IT@IS@LIKO@A@WEASEL
    163 METHINKS@IT@IS@LIKO@A@WEASEL
    164 METHINKS@IT@IS@LIKO@A@WEASEL
    165 METHINKS@IT@IS@LIKO@A@WEASEL
    166 METHINKS@IT@IS@LIKO@A@WEASEL
    167 METHINKS@IT@IS@LIKO@A@WEASEL
    168 METHINKS@IT@IS@LIKO@A@WEASEL
    169 METHINKS@IT@IS@LIKO@A@WEASEL
    170 METHINKS@IT@IS@LIKO@A@WEASEL
    171 METHINKS@IT@IS@LIKO@A@WEASEL
    172 METHINKS@IT@IS@LIKO@A@WEASEL
    173 METHINKS@IT@IS@LIKO@A@WEASEL
    174 METHINKS@IT@IS@LIKO@A@WEASEL
    175 METHINKS@IT@IS@LIKO@A@WEASEL
    176 METHINKS@IT@IS@LIKO@A@WEASEL
    177 METHINKS@IT@IS@LIKO@A@WEASEL
    178 METHINKS@IT@IS@LIKO@A@WEASEL
    179 METHINKS@IT@IS@LIKO@A@WEASEL
    180 METHINKS@IT@IS@LIKO@A@WEASEL
    181 METHINKS@IT@IS@LIKO@A@WEASEL
    182 METHINKS@IT@IS@LIKF@A@WEASEL
    183 METHINKS@IT@IS@LIKF@A@WEASEL
    184 METHINKS@IT@IS@LIKF@A@WEASEL
    185 METHINKS@IT@IS@LIKF@A@WEASEL
    186 METHINKS@IT@IS@LIKF@A@WEASEL
    187 METHINKS@IT@IS@LIKI@A@WEASEL
    188 METHINKS@IT@IS@LIKL@A@WEASEL
    189 METHINKS@IT@IS@LIKL@A@WEASEL
    190 METHINKS@IT@IS@LIKL@A@WEASEL
    191 METHINKS@IT@IS@LIKC@A@WEASEL
    192 METHINKS@IT@IS@LIKC@A@WEASEL
    193 METHINKS@IT@IS@LIKC@A@WEASEL
    194 METHINKS@IT@IS@LIKC@A@WEASEL
    195 METHINKS@IT@IS@LIKC@A@WEASEL
    196 METHINKS@IT@IS@LIKC@A@WEASEL
    197 METHINKS@IT@IS@LIKC@A@WEASEL
    198 METHINKS@IT@IS@LIKC@A@WEASEL
    199 METHINKS@IT@IS@LIKC@A@WEASEL
    200 METHINKS@IT@IS@LIKC@A@WEASEL
    201 METHINKS@IT@IS@LIKC@A@WEASEL
    202 METHINKS@IT@IS@LIKC@A@WEASEL
    203 METHINKS@IT@IS@LIKC@A@WEASEL
    204 METHINKS@IT@IS@LIKC@A@WEASEL
    205 METHINKS@IT@IS@LIKC@A@WEASEL
    206 METHINKS@IT@IS@LIKC@A@WEASEL
    207 METHINKS@IT@IS@LIKC@A@WEASEL
    208 METHINKS@IT@IS@LIKC@A@WEASEL
    209 METHINKS@IT@IS@LIKC@A@WEASEL
    210 METHINKS@IT@IS@LIKQ@A@WEASEL
    211 METHINKS@IT@IS@LIKQ@A@WEASEL
    212 METHINKS@IT@IS@LIKQ@A@WEASEL
    213 METHINKS@IT@IS@LIKK@A@WEASEL
    214 METHINKS@IT@IS@LIKK@A@WEASEL
    215 METHINKS@IT@IS@LIKK@A@WEASEL
    216 METHINKS@IT@IS@LIKK@A@WEASEL
    217 METHINKS@IT@IS@LIKK@A@WEASEL
    218 METHINKS@IT@IS@LIKK@A@WEASEL
    219 METHINKS@IT@IS@LIKK@A@WEASEL
    220 METHINKS@IT@IS@LIKK@A@WEASEL
    221 METHINKS@IT@IS@LIKK@A@WEASEL
    222 METHINKS@IT@IS@LIKK@A@WEASEL
    223 METHINKS@IT@IS@LIKK@A@WEASEL
    224 METHINKS@IT@IS@LIKK@A@WEASEL
    225 METHINKS@IT@IS@LIKK@A@WEASEL
    226 METHINKS@IT@IS@LIKK@A@WEASEL
    227 METHINKS@IT@IS@LIKK@A@WEASEL
    228 METHINKS@IT@IS@LIKK@A@WEASEL
    229 METHINKS@IT@IS@LIKK@A@WEASEL
    230 METHINKS@IT@IS@LIKK@A@WEASEL
    231 METHINKS@IT@IS@LIKK@A@WEASEL
    232 METHINKS@IT@IS@LIKM@A@WEASEL
    233 METHINKS@IT@IS@LIKM@A@WEASEL
    234 METHINKS@IT@IS@LIKM@A@WEASEL
    235 METHINKS@IT@IS@LIKM@A@WEASEL
    236 METHINKS@IT@IS@LIKM@A@WEASEL
    237 METHINKS@IT@IS@LIKM@A@WEASEL
    238 METHINKS@IT@IS@LIKM@A@WEASEL
    239 METHINKS@IT@IS@LIKI@A@WEASEL
    240 METHINKS@IT@IS@LIKI@A@WEASEL
    241 METHINKS@IT@IS@LIKI@A@WEASEL
    242 METHINKS@IT@IS@LIKI@A@WEASEL
    243 METHINKS@IT@IS@LIKI@A@WEASEL
    244 METHINKS@IT@IS@LIKI@A@WEASEL
    245 METHINKS@IT@IS@LIKI@A@WEASEL
    246 METHINKS@IT@IS@LIKI@A@WEASEL
    247 METHINKS@IT@IS@LIKI@A@WEASEL
    248 METHINKS@IT@IS@LIKC@A@WEASEL
    249 METHINKS@IT@IS@LIKC@A@WEASEL
    250 METHINKS@IT@IS@LIKC@A@WEASEL
    251 METHINKS@IT@IS@LIKW@A@WEASEL
    252 METHINKS@IT@IS@LIKW@A@WEASEL
    253 METHINKS@IT@IS@LIKO@A@WEASEL
    254 METHINKS@IT@IS@LIKO@A@WEASEL
    255 METHINKS@IT@IS@LIKO@A@WEASEL
    256 METHINKS@IT@IS@LIKO@A@WEASEL
    257 METHINKS@IT@IS@LIKO@A@WEASEL
    258 METHINKS@IT@IS@LIKO@A@WEASEL
    259 METHINKS@IT@IS@LIKO@A@WEASEL
    260 METHINKS@IT@IS@LIKO@A@WEASEL
    261 METHINKS@IT@IS@LIKO@A@WEASEL
    262 METHINKS@IT@IS@LIKO@A@WEASEL
    263 METHINKS@IT@IS@LIKO@A@WEASEL
    264 METHINKS@IT@IS@LIKO@A@WEASEL
    265 METHINKS@IT@IS@LIKO@A@WEASEL
    266 METHINKS@IT@IS@LIKO@A@WEASEL
    267 METHINKS@IT@IS@LIKO@A@WEASEL
    268 METHINKS@IT@IS@LIKO@A@WEASEL
    269 METHINKS@IT@IS@LIKO@A@WEASEL
    270 METHINKS@IT@IS@LIKO@A@WEASEL
    271 METHINKS@IT@IS@LIKO@A@WEASEL
    272 METHINKS@IT@IS@LIKO@A@WEASEL
    273 METHINKS@IT@IS@LIKO@A@WEASEL
    274 METHINKS@IT@IS@LIKO@A@WEASEL
    275 METHINKS@IT@IS@LIKO@A@WEASEL
    276 METHINKS@IT@IS@LIKO@A@WEASEL
    277 METHINKS@IT@IS@LIKO@A@WEASEL
    278 METHINKS@IT@IS@LIKO@A@WEASEL
    279 METHINKS@IT@IS@LIKO@A@WEASEL
    280 METHINKS@IT@IS@LIKO@A@WEASEL
    281 METHINKS@IT@IS@LIKO@A@WEASEL
    282 METHINKS@IT@IS@LIKO@A@WEASEL
    283 METHINKS@IT@IS@LIKO@A@WEASEL
    284 METHINKS@IT@IS@LIKO@A@WEASEL
    285 METHINKS@IT@IS@LIKO@A@WEASEL
    286 METHINKS@IT@IS@LIKO@A@WEASEL
    287 METHINKS@IT@IS@LIKO@A@WEASEL
    288 METHINKS@IT@IS@LIKO@A@WEASEL
    289 METHINKS@IT@IS@LIKO@A@WEASEL
    290 METHINKS@IT@IS@LIKO@A@WEASEL
    291 METHINKS@IT@IS@LIKO@A@WEASEL
    292 METHINKS@IT@IS@LIKO@A@WEASEL
    293 METHINKS@IT@IS@LIKO@A@WEASEL
    294 METHINKS@IT@IS@LIKE@A@WEASEL

  284. Thanks, Atom; your response makes clear something that has not been clear at all.

    I see that you are doing something very different in the Partitioned search than you are in the Proximity search: in the Partitioned search you are just having one child per generation, while in the Proximity search you are having multiple children per generation. Of course, the first is simpler, but not because of the type of search you are using. We’re not comparing apples to apples here.

    My analysis was assuming that both types of searches had multiple children per generation, out of which the most fit child was picked to become the next parent. I am pretty certain that this is what everyone has been talking about in this thread, not the situation where there is only one child per generation.

    It is also clear that Dawkins in TBW was referring to the multiple-children-per-generation situation, not the single-child-per-generation situation, when he wrote,

    The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase. In this instance the winning phrase of the next generation happened to be …” [pp 47-48]

    Note the plural “progeny”, the phrase “chooses the one”, and the phrase, “the winning phrase of the next generation.” It is clear that Dawkins was using the multi-child situation.

    Therefore, your partitioned search is not really doing what Weasel does, and furthermore, it is simpler, I repeat, not because Partitioned is simpler than Proximity, but rather because one-child per generation is simpler than multi-child per generation.

    I appreciate your replying with specifics, because it brought to light issues that I didn’t even know were part of the discussion.

    Also I see you write in a later post,

    Zachriel is discarding the children as soon as he checks if they are better than the best (which is actually almost identical to my implementation…I do the same thing)…

    No, you are not doing the same thing, I don’t think. You are comparing each child to the parent and making an immediate decision as to whether the child is more fit or not; and if so, making the child the new parent.

    Zachriel is taking a population of children and finding the best of the children to replace the parent (as Dawkins does.) My understanding is that Zachriel does this by checking each child against the current best child, and then only at the end checking to see if the best child is more fit than the parent. He is not doing the same thing you are.

  285.

    Atom, jerry, and Joseph: Zachriel has responded but can’t post here.

    Atom, you write:

    Zachriel is discarding the children as soon as he checks if they are better than the best (which is actually almost identical to my implementation…I do the same thing),

    Zachriel responds:

    Okay, but that’s not quite what you said before.

    You write:

    you have to evaluate the correctness of each letter on each child to get a fitness value, then compare the fitness values of the child strings to a current champion.

    Zachriel responds:

    Of course: an evolutionary algorithm can only work with the total fitness, so it has to be converted into a simple scalar before comparison with competitors. That’s what makes Weasel an evolutionary algorithm, and why Partitioned Search is not.

    Jerry, you write:

    If anyone wants to see the Weasel generator that was created a couple of years ago at Monash University in Australia, follow the link I put up above and press demo at the bottom. It uses a latching function.

    Zachriel responds:

    Then it isn’t an evolutionary algorithm, and it isn’t Weasel.

    Joseph, you write:

    Anything that allows for a reversion is NOT an example of CUMULATIVE selection.

    Zachriel responds:

    Learn the 3-Step. Two steps forward, one step back, and you’ll cross the dance floor!

  286. Small correction: the next to last line in 282 should read “… then only at the end checking making the best child the new parent.” Although there are some algorithms that allow the parent to survive if no child is better, Weasel is of the type where a child must replace the parent.

  287. Arrgg. I meant “… then only at the end making the best child the new parent.”

  288. hazel wrote:

    No, you are not doing the same thing, I don’t think. You are comparing each child to the parent and making an immediate decision as to whether the child is more fit or not; and if so, making the child the new parent.

    No, I am doing the exact same thing. (I have my source code in front of me.) I don’t use the new string to generate the next child/mutant, but the “current string”, from which all the children in a generation derive.

    Again, some pseudo-code for Proximity Search:

    1) Get Mutated String from current string.

    2) Check against the best. If the mutated child is better or equal to best, keep it. If not, get next mutated string, again based on the “current string” parent, not the best string.

    3) At the end of this round, return the best string, which will become the new “current string” parent for the next generation.

    Atom

  289. Dembski and Marks write,

    Partitioned search [12] is a “divide and conquer” procedure best introduced by example. Partitioned search can be applied at different granularities.

    This is intellectual dishonesty. Reference [12] is The Blind Watchmaker, of course. The sentence suggests that the term partitioned search comes from Dawkins. Nothing that follows indicates that the term is actually the invention of Dembski and Marks. Worse yet, I believe that at least one of the authors knew by August 2008 that the Weasel program implements a (1,n)-ES, not partitioned search.

    I see in retrospect that I was a fool in believing that the EvoInfo Lab would be a source of honest scholarship. I regret that I lent my name — not that it’s much of a name — to the Lab. I do NOT regret that I corresponded with regents and administrators of Baylor University on behalf of academic freedom for Bob Marks. It certainly would have been an embarrassment for the institution to host pathetic propaganda like Weasel Ware, but the fact is that it is the right of tenured full professors to present as crackpots. (I understand that Marks had tested limits in various ways, but it should have sufficed for him to put the AAUP-approved disclaimer on his Web pages.)

    While I’m at this, I should say that I have seen no evidence whatsoever that tenure review goes differently for IDists than for other faculty with unorthodox views. Almost all tenure-probationary faculty have the sense to keep their heads down. I’m not saying that it is right that they should have to, but that the “Expelled: No Intelligence Allowed” propaganda depends entirely on selective presentation. Do not take my word for this — check out The Tenure Track discussion board at the website of The Chronicle of Higher Education. The notion that untenured faculty other than IDists can express their views without fear of reprisal is sheer nonsense. Academia is not an ivory tower.

  290. Meant to paste only the first sentence above — picked up two columns copying from PDF.

  291. For any interested, I’m posting two different implementations of weasel-type programs. These posts will follow. I’m not making a point, I just thought some might want to see how these can be implemented, and the output generated. They are both one-method programs coded in C++.

    For brevity and clarity the programs employ no object abstractions, and use the minimum code necessary to produce weasel-like output (including metrics). The fitness functions are implicit in the body of the code, and an inverse Hamming value (of sorts) is used for target evaluation in both cases. I cherry-picked both outputs to get sample sizes between 1000 and 2000 generations.

  292. This is the partitioned search, to use terminology consistent with above comments. It latches and mutates in the evaluation loop. Almost no additional logic is required for latching, rather it is part of the fitness evaluation — only characters that don’t match the target are allowed to mutate. An additional segment of code is appended to the evaluation loop to provide a tendency toward character reversion. The source code is here: partitioned example

    Gen 1: urdfjedhdowenq xzktyqeykkpcu fit: 3%
    Gen 100: tvwgiijop ilei lpkssproqccyk fit: 14%
    Gen 200: nclhiwqxoembi lcko qriiysel fit: 35%
    Gen 300: yudhipihxacwis lnkp x yeaded fit: 42%
    Gen 400: ntohifknxmlmis llkk s deareb fit: 46%
    Gen 500: yjphidkhgbwzis sjke r keatee fit: 46%
    Gen 600: rcphixksqromis hike k weajel fit: 60%
    Gen 700: lrvhinksjjtsis hike c weasel fit: 71%
    Gen 800: edhinksjitrps gike weasel fit: 75%
    Gen 900: weihinksoiwqqs nikb a weasel fit: 71%
    Gen 1000: meahinksbiafis eike a weasel fit: 82%
    Gen 1100: mezhinkstiz is xike a weasel fit: 85%
    Gen 1200: medhinkscic fs dike a weasel fit: 82%
    Gen 1300: menhinks ii ps eike a weasel fit: 85%
    Gen 1400: mebhinks iz xs hike a weasel fit: 85%
    Gen 1500: methinks iq gs like a weasel fit: 92%
    Gen 1600: methinks it fs like a weasel fit: 96%
    Gen 1609: methinks it is like a weasel fit: 100%
    — — — — —
    Target reached
    — — — — —
    Generations: 1609
    Total population: 1609
    Iterations: 45052
    Run time: 44ms
    — — — — —

    Here a mutation rate of 5% was used. The population is always 1, since it’s unnecessary to implement any sort of propagation. This algorithm requires more generations to reach the target, but a total population of no more than the number of generations. Also notable is the metric for iterations, which counts the number of loop iterations for the course of the program. Since only one inner loop is used, the number is relatively low.
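    The latching behaviour described here (only characters that do not already match the target are allowed to mutate) can be sketched as a mutation step. This is an editor’s illustration, not the linked source; the lowercase target and character set are assumptions matching the output above.

```cpp
#include <cstdlib>
#include <string>

const std::string TGT   = "methinks it is like a weasel";
const std::string CHARS = "abcdefghijklmnopqrstuvwxyz ";

// Partitioned-search style mutation: a character that already matches the
// target is latched and skipped; only non-matching positions may mutate.
void mutateLatched(std::string& s, double mrate) {
    for (size_t i = 0; i < s.size(); ++i) {
        if (s[i] == TGT[i]) continue;             // latched: never changes
        if (rand() / (RAND_MAX + 1.0) < mrate)    // mutate non-matches only
            s[i] = CHARS[rand() % CHARS.size()];
    }
}
```

    With latching alone, the matched-character count can never decrease; the dips visible in the run above would have to come from the additional reversion code Apollos mentions, which is not reproduced here.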

  293. This is a proximity reward search. There is no latching of characters, but only 1 out of 500 progeny is selected for the next generation, according to its fitness. The same evaluation is used for target matching as the partition example. proximity example

    Gen 1: wapgbwjsgijiva x knpugbykgen fit: 17%
    Gen 100: mezhcnkszit is like a weasel fit: 89%
    Gen 200: methinkscit is like a ueasel fit: 92%
    Gen 300: qethinks iu iw yike a weasel fit: 85%
    Gen 400: uetognkssit is like b weasel fit: 82%
    Gen 500: methinksvit is uike a weasel fit: 92%
    Gen 600: meuainks it is like a weasel fit: 92%
    Gen 700: methinks it isrlike a wejrel fit: 89%
    Gen 800: methsnkr it is tike a weasel fit: 89%
    Gen 900: methineswit ts likeaa wequel fit: 78%
    Gen 1000: metzinxs it is like f wlaskl fit: 82%
    Gen 1100: methinks ir if likeqw wmasel fit: 82%
    Gen 1200: methi ks kt is fike a weasel fit: 89%
    Gen 1261: methinks it is like a weasel fit: 100%
    — — — — —
    Target reached
    — — — — —
    Generations: 1261
    Total population: 630500
    Iterations: 3.65545e+007
    Run time: 1273ms
    — — — — —

    Here a mutation rate of 5% and a population size of 500 were used. There is a notable difference in the numbers for Total population and Iterations compared to the partition search. The population total is a product of Generations and the population size. The iterations are extreme because two, 2-dimensional inner loops are required to traverse 500 strings for each generation. The source code footprint is only nominally larger than the previous example; however here the amount of data being exchanged in CPU registers is generally an order of magnitude higher or more, by casual observation.

  294. My apologies for the lack of columnar formatting in the output. Preformatted text tags work in the preview, but apparently not in the post itself.

    The output can be pasted into a text editor for a little more clarity.

  295. Folks:

    Looks like much of the above is now debating on code and coding.

    One reminder: the latching and/or quasi-latching can be implemented explicitly or implicitly, as I have specifically pointed out ever since what is now 346 – 7 in the previous thread.

    As a second, Joseph reminded us yesterday, at 264, that BW speaks in terms that the program marched forward inexorably, latching and ratcheting the slightest increments in proximity as it went. So, the letter-wise partitioning search interpretation is a very natural — “simple” — one; once we look at the published o/p circa 1986.

    On a third point, notice that the runs published in 1986 were 40+ and 60+ generations, consistent with partitioned search a la Dembski-Marks [median run length 98 generations] and presentation of good runs. By contrast we seem to be looking at 1,000+ gens here on the current Apollos coding, even with 5% mutation rates, for proximity search. Any ideas why, Apollos? [By not blocking the partitioned search by gens, A. makes it hard to compare, but we have a parallel case in the EIL partitioned search example, which is consistent with the 1986 runs and the math of the median run length for partitioned search, 98.]

    A further note is the cluster of reversions captured in the proximity example just above:

    Gen 600: meuainks it is like a weasel fit: 92%
    Gen 700: methinks it isrlike a wejrel fit: 89%
    Gen 800: methsnkr it is tike a weasel fit: 89%
    Gen 900: methineswit ts likeaa wequel fit: 78%

    Observe, here, the overt instability of multiple letters and spaces, and how that is easily captured in a skipped-generation sample. This tends to reinforce the point that in Weasel 1986, the observed o/p latching pattern is real, not a mere artifact of sampling.

    It still seems to me that the most likely explanation of Weasel 1986 on balance of evidence is that implicit latching (or quasi-latching) was achieved by a combination of mutation rate and generation size working with a proximity reward filter. (It would be interesting to see the gen size and per letter mutation rate that makes that tend to happen.)

    For Apollos’ partitioned search case, I suspect that if we were to divide by 10, and block by generations that way, it would be consistent with a 100 - 200 gens run [cf. the EIL case]. (That is, the generation numbers, I think, reflect not blocking out as generations and choosing the best of each generation. On that hyp, it may be that a generation size of 10 gives us all the advantages we would need.)

    So, if Pendulum’s generation count report that appears at no 1 above is so: 2485, we see that Weasel 1987 is generally consistent with its being a proximity reward search that is sufficiently detuned not to implicitly latch. The fast-long pattern evident in the video would also make a lot of sense.

    Food for further thought all around.

    But, remember: the fundamental downfall of Weasel is that it implements a targeted search that rewards mere proximity of non-functional strings. As such it is irrelevant to the issue of natural selection, which must work by rewarding differences in functionality. And, adding in a requirement of significant functionality before proximity is recognised will plainly cause the generation run length needed to achieve even a rather modest increment of information in a Weasel-type program to explode combinatorially.

    GEM of TKI

  296. PS: Now, re Sev @ 273:

    On a matter that is distasteful but important, to correct the tendency of selective hyperskepticism using ad hominems that divert the thread and poison its atmosphere. Citing:

    On the other issue of the article by Ian Boyne let me reiterate that it was cited only to illustrate that kairosfocus proper name could easily be found in the media. I did not read the article in detail. I got the impression that it was something to do with Christianity but no really much more than that.

    Severski, you cited an extract that was condescendingly personally abusive and dismissive, in 148 above. That is obvious to anyone who can read English and understand it.

    This example, as I recall, was also much bounced about at Anti Evo in recent weeks, on their taking note of me; specifically to try to dismiss what I have to say by attacking me as a person. (Which speaks volumes on mentality as revealed by behaviour, and the quality — or want of it — of the case being made at that site.)

    So, the idea that you “only” cited to show that my name was accessible on the net — which has never been at issue [Onlookers: I have pointed out that there is a BIG difference between what can be searched out in low traffic corners of the Net and what is being trumpeted in high traffic sites; and that in a context of ad hominems] — simply does not pass the smell test.

    FYI, yet again, I have contact information up so that responsible individuals may communicate with me. This information is in a low traffic corner of the Internet, but one that has enough traffic to do what I need to do. Some of that information is also accessible in archival Internet pages in various locations, which are similarly low traffic. When my name appears in the high traffic ‘net, that tends to surge my spam inbox and some often breaks through to my main mail box. So, I have used the common Internet civility rule that allows one to have a handle, to buffer my inbox. That you and others choose to disrespect that privacy and courtesy, is telling.

    Worse, as noted, the example you chose is not innocuous, not by a long shot.

    But, there is a patently obvious duty of care regarding accuracy, context and balance in making adverse comments. So, I am entirely in order to note on the context and content of the criticism made by Mr Boyne, complete with the fact that the Gleaner — which is not exactly going to be a high traffic web site on archived stories — was forced to publish a corrective by the undersigned. In particular:

    1 –> Mr Boyne was propagating a now all too common blood slander that targets Bible-believing Christians, even going so far as to speculate on how such could easily find Bible verses “enough” to use to justify “butcher[ing]” those who differ from them, while making direct comparisons between Islamist terrorists and Bible believing Christians as enemies of liberty. Even when he tried to deny making an immoral equivalency claim like that, he gave a further example: “[Evangelicals in Jamaica etc are] prone to bigotry, intolerance and the desire to impose their will on others just as the Islamic militants.” [Kindly note that telling "JUST AS."]

    2 –> Particular, published examples of the “liberty” that Christians in Jamaica were said to object to were [immodest] dress and the dancehall subculture. (At the general time in question, there was a rash of court cases over onstage lewdness, including “cr_tch patting” [a euphemism!] and “dry h_mping” of patrons at shows. Methinks that is libertinism and/or license or licentiousness, not liberty! Can you understand the crucial difference? And, why there is a need for a protected public space where we can reasonably be assured that especially children will not be subjected to such lewd conduct? I hardly need to point out that dignified protest through letters to the editor bears no material comparison to suicide bombing or honour killings or to videotaped beheadings of kidnapped civilians. Mr Boyne’s examples and alleged parallels to Islamist militants were utterly out of order; to the point of being outright blood slander.)

    3 –> Then, when he tried to dismiss me as making an ill-informed, unwarranted inference from biblical theology to conduct, he was of course gliding over certain inconvenient points that were highlighted in the PRE-sponse:

    [BOYNE:] "The world is a much safer place today because the totalitarian ideology of the Christian Crusaders and the Roman Church was decisively routed by the secular state".

    [GEM of TKI:] . . . Nor is he addressing the vast gap between the biblically illiterate, Christianity of the Middle Ages and the world that resulted from having the reformation sola scriptura principle joined to putting the Bible in the hands of the ordinary man: liberation.

    4 –> That is, even through the bad Gleaner editing at work, you can see that I was pointing to the implications of biblical illiteracy for much of the horrors of the Middle Ages [including among the CLERGY!], the corrective put up by the Biblically based Reformation [starting with Luther's 95 theses of intended debate, which were motivated by Tetzel's abuses and evident Simony over indulgences to help fund church building programmes in Rome; which have no biblical warrant] and, most importantly, the historically vital contrasting fact that the absolutist state of the early modern era was in significant part opposed by the Bible-armed ordinary man in the context of a reformation led by theologians of the ilk of a Samuel Rutherford, whose Lex Rex is a classic of opposition to such tyranny; indeed, when it came out King Charles I was heard to remark that it would hardly get an answer. And, nearly fifteen years on, after the execution of Charles I, the book was burned by the public hangman in Edinburgh, and Rutherford only escaped a treason trial by “inconveniently” dying first. Indeed, his reply to those who came to fetch him was a classic, Biblically based rebuke to corrupt officialdom! (Cf the specific response to Mr Boyne the Gleaner elected not to publish.) But in the end, the Bible-rooted ideas in Lex Rex, and further developed by Locke in his essays on civil government [which are also explicitly shaped by biblical insights at key turns in the argument], were a key step in the rise of modern liberty and democracy. (Cf my discussion here.)

    5 –> In short, Mr Boyne — a singularly well informed commenter on public affairs — was using dismissive mocking rhetoric to glide over facts inconvenient to his case. Facts that show that Bible-rooted thought and action had a lot to do with the rise of modern liberty and democracy. Facts that underscore that Bible believing Christians today are not to be broad-brush dismissed as being morally equivalent to Islamist militants.

    6 –> Not to mention, we have just passed through a century in which the secularist state has killed in excess of 100 millions. (Vox Day’s contrast to the track record of Christendom’s princes in a now bygone era is interesting. On average, your C20 secularist tyrant was odds on likely to be responsible for over 20,000 unjustifiable deaths. The typical prince of the era of vanished Christendom was far, far less likely to be bloody, over a far longer run.)

    In that context, the fact that you chose to excerpt and use such claims, in such a context, to try to dismiss my concerns on basic Internet civility, tells us a lot, sir. A lot.

    Cho, man, do betta dan dat!

    PLEASE.

    GEM of TKI

  297. My apology on a formatting error, etc.

  298. hazel:

    When you climb a mountain, you occasionally go downhill before going up again. Just because you have an occasional reversal doesn’t mean you aren’t accumulating fitness in respect to the target.

    And sometimes you never make it to the top.

    Also Dawkins should have made it CLEAR that CUMULATIVE selection is really yo-yo selection.

    The way DAWKINS describes and illustrates CUMULATIVE selection, there isn’t anything that would lead anyone to infer that regression takes place.

    As a matter of fact, the only inference one can come away with is that cumulative selection is a ratcheting process that does not allow regression.

  299. hazel:

    When you climb a mountain, you occasionally go downhill before going up again.

    And I bet they don’t refer to such a process as “cumulative climbing”.

    Just because you have an occasional reversal doesn’t mean you aren’t accumulating fitness in respect to the target.

    And reversal is the opposite of accumulation.

  300. I have accumulated a certain amount of money in my savings account. Does that mean I have never had a moment when my savings account had less money than it did the day before? :-)

  301. Joseph says,

    The way DAWKINS describes and illustrates CUMULATIVE selection, there isn’t anything that would lead anyone to infer that regression takes place.

    I saw immediately that the fitness of the parent could decrease from one generation to the next. You should say that it is not evident to the general reader.

    Don’t you think it’s just a bit silly to analyze something from a pop science book as though it came from a technical monograph? (But if you don’t have the wherewithal to read technical monographs, I suppose you can keep putting on a show for the “onlookers” by propping up the straw man and knocking him over.)

    How much teaching have you done, Joseph? A good teacher, when trying to get across a new concept, does not go into all the details. How satisfying is it for you to hear now that, although the fitness of the parent is not strictly increasing, the probability of termination of the program is strictly increasing in the number of generations? The probability of termination approaches 1 as the number of generations goes to infinity. I’ll bet that gives you a warm feeling, deep inside.

    By the way, do you understand that “locking” amounts to teleological suspension of the Second Law of Thermodynamics? That is, errors in genetic transcription are ensured by the Second Law. If you lock in certain letters, you are saying that SLoT is suspended for them, but not for the others. Furthermore, “locking” contradicts a key point of “Darwinian dogma,” which Dawkins presents in the book, that mutation is neutral with respect to fitness.

    What “locking” really explains is Dembski’s idée fixe that everyone applying evolutionary algorithms “smuggles in” information. He’s so sure he’ll see a crime that he’s blind to his own misunderstanding. He had absolutely no business pinning “partitioned search” on Dawkins.

  302. Your point about the flaw in locking is very good, as well as the whole post. I’ve appreciated what you’ve contributed to this conversation – I’m learning a lot.

  303. Sal Gal:

    I saw immediately that the fitness of the parent could decrease from one generation to the next.

    If you saw that from reading “The Blind Watchmaker” then please present the quote. Or even an example- something.

    Don’t you think it’s just a bit silly to analyze something from a pop science book as though it came from a technical monograph?

    I think it’s a bit silly for anti-IDists to jump on an alleged mistake when, upon analysis, no mistake was made.

    The ONLY reason to analyze that silly book is because that silly book was in question.

    That is how it is done- if there is an issue you resolve it using whatever is at your disposal.

    A good teacher, when trying to get across a new concept, does not go into all the details.

    A good teacher doesn’t mislead. A good teacher makes the new concept clear.

    I have done quite a bit of teaching. And I know better than to leave ANYTHING to chance.

    If Dawkins wants us to believe we can fill a tub full of water if we can spill all the water before reaching the tub, he should NOT use a loaded word like “cumulative”.

  304. FYI: Wesley Elsberry has pointed out some flaws in Apollos’ program at http://www.antievolution.org/c.....ntry140931.

    Sorry about the link – I can’t quite figure out the url syntax.

    And to Atom@285 – can you clarify: does your Partitioned search algorithm create a set of children every generation, out of which the best is chosen to be the next parent? I am trying to understand this one point.

    Thanks.

  305. kairosfocus quotes Apollos’ results, and comments,

    A further note is the cluster of reversions captured in the proximity example just above:

    Gen 600: meuainks it is like a weasel fit: 92%
    Gen 700: methinks it isrlike a wejrel fit: 89%
    Gen 800: methsnkr it is tike a weasel fit: 89%
    Gen 900: methineswit ts likeaa wequel fit: 78%

    Observe, here, the overt instability of multiple letters and spaces, and how that is easily captured in a skipped-generation sample. This tends to reinforce the point that in Weasel 1986, the observed o/p latching pattern is real, not a mere artifact of sampling.

    But just a little bit of thought should show that something is wrong with Apollos’ results.

    If the mutation rate = 5%, then 95% of the time a letter doesn’t mutate. At Gen 600, 92% of the letters are correct, so only about 8% x 28 = 2 letters are still unmatched. A child is identical to its parent only if none of its 28 letters mutates, which happens with probability 95%^28 ≈ 24%. If even one child has no mutations, there is no way the best child can be less fit than the parent. Since the chance that a given child has at least one mutation is 100% – 24% = 76%, the chance of all 500 children having a mutation would be 76%^500 ≈ 10^-60, which all of us would consider impossibly low.

    So the reversions in Apollos’ results must be the result of an error of some kind in the implementation of his code.


  307. hazel,

    I don’t doubt that I’m capable of programming with bugs. I was clear above that I didn’t intend to make a point, but to contrast the two approaches.

    I appreciate that Mr. Elsberry took the time to go through my code and point some things out.

    However the fRand() bug that Mr. Elsberry highlights doesn’t exist when the programs are compiled with MS Visual C++ 2008 Sp1.

    With reference to the code “if( fRand() < mrate )” Mr. Elsberry writes:

    Unfortunately, that will always be true, and every position in every child mutates.

    A simple check of the convenience function would have revealed this.

    I did check the macro to assure the returned values were in the range of [0, 1). Here are a few samples:

    0.14032
    0.159851
    0.766693
    0.795013
    0.127777
    0.483246
    0.0346985
    0.989258
    0.41156

    I just now ran a few trials to test the proportion of true results for the expression to the total number of comparisons.

    76401/1540000 = 0.0497
    27158/560000 = 0.0485
    30312/607600 = 0.0499
    110410/2212000 = 0.0499
    45694/924000 = 0.0495

    The actual rate is a little low, but reasonably close to the specified 0.05 rate.

    The macro is defined this way: #define fRand() (float)rand() / (RAND_MAX + 1)
    Here is a code snippet from the MSDN documentation for rand() usage: (double)rand() / (RAND_MAX + 1)

    I’ll post on the second reference when I get it sorted.

    Thanks, Atom. Bugs are inevitable – I know that, and I didn’t mean to offend you. I really appreciate that you’re willing to be part of this conversation. Getting a program that works so we can test some of the ideas being discussed would be useful.

    Now, a question:

    Do you agree with my analysis that under a partitioned search, reversals in fitness, especially towards the end when most letters are in place, would be extremely unlikely?

  309. Oops, I meant thanks Apollos.

  310. Atom and others (Joseph, jerry, kairosfocus): I posted a run of Weasel that reverts above [283] and relayed some responses from Zachriel in [285]. I mention it now because both posts were caught in moderation for a while and might otherwise be lost. Thanks.

  311. This is confusing – it seems like there are now posts upstream that weren’t there before. When someone has posts being moderated, when they finally go through they appear back at the time they were posted, so unless one keeps looking back one doesn’t know they happened. This is really unsatisfactory.

    Not only that, post numbers change, so if one has referred to post 285, for instance, the reference is now wrong because a moderated post has been added.

    Why are posts being held up in moderation at all???!!! This is really weird.

  312. With reference to the code “pop[i] = pop[winner]” there is definitely a logic error here. Wesley R. Elsberry writes:

    for( int i = 0; i < popSize; i++ )
    {
    pop[i] = pop[winner];

    Once i > winner, what is used as a template for additional candidates in the new population is the just-mutated new candidate at the winner index. This means that every time that the first candidate in a population wins, whatever mutation that happens to it will be the basis for the whole new population.

    The problem in my own words: once i reaches winner, if pop[i] mutates then pop[winner] is also mutated (they are the same element), and the mutation will then propagate onward. This still leaves a reasonable chance that at least one offspring will be a duplicate of the winner from the previous generation. Here’s some output:

    Begin Gen
    ——–
    Prior winner: mbehrozskiyddrktyf rqwwikfho
    ——–
    After mutate: mbehrozskiyddrktyf rqwwikfho
    After mutate: ykqhrozskiyddrktyf rqwkikfho *
    After mutate: mbehrozskiyddrktyf rqwwikbho *
    After mutate: mbehrozskiyddrktyf rhwwikfho *
    After mutate: mbehrozskiyddrktyf rqwwikfho
    After mutate: mbehrozskiyddrktyf rqwwikfhy *
    After mutate: mbesrozskiyddrktyk rqwwikfho *
    After mutate: mbehrozskiyddrktyf rqwwikfqo *
    After mutate: mbghrozskiyddlmtyf rqwwikfvo *
    After mutate: mbehrooskiyddrktyf rqwwijfho *

    Asterisks indicate strings which differ from the winner. This doesn’t turn out to be frequent enough to overcome the problem. This is because the evaluation loop always keeps the lowest winner index, instead of the highest, so the winner is copied and mutated sooner rather than later in the propagation loop.

    Making this change:

    from if( score > highScore )
    to if( score >= highScore )

    causes the problem to almost vanish, even with the bug still present.

    The real fix is simple:

    string strWinner = pop[winner];
    for( int i = 0; i < popSize; i++ )
    {
    pop[i] = strWinner;

    Thanks to Mr. Elsberry for spending time with the code. It’s good to know that someone actually went through it, even if the intention wasn’t constructive criticism. Thanks to hazel for bringing it to my attention, no offense was taken.

    My apologies for the error, without excuse. ;-)
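    The corrected propagation pattern can be isolated in a few lines; an editor’s illustration (variable names follow the snippets above, but this is not the actual program):

```cpp
#include <string>
#include <vector>

// Corrected propagation: snapshot the winning string BEFORE overwriting
// the population, so that even after pop[winner] is replaced and later
// mutated, every child still starts from the true, unmutated winner.
void propagate(std::vector<std::string>& pop, int winner) {
    const std::string strWinner = pop[winner];
    for (size_t i = 0; i < pop.size(); ++i)
        pop[i] = strWinner;   // mutation is applied to pop[i] afterwards
}
```

    Copying through `pop[winner]` inside the loop is correct only while i is below winner; once i reaches winner, any mutation applied at that slot leaks into every later copy, which is the behaviour identified above.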

  313. It was requested that I post output from the corrected program. This is the proximity reward search with population 500 and mutation rate 0.05:

    Gen 1: mezqjbfsarrvcq vxzxjj kewhrr fit: 21%
    Gen 27: methinks it is like a weasel fit: 100%
    ————–
    Target reached
    ————–
    Generations: 27
    Total population: 13500
    Iterations: 742000
    Run time: 33ms
    ————–

    This is consistent with the expected results, including hazel’s comments above.

  314. David Kellogg,

    “Jerry, you write:

    If anyone wants to see the Weasel generator that was created a couple of years ago at Monash University in Australia, follow the link I put up above and press demo at the bottom. It uses a latching function.

    Zachriel responds:
    Then it isn’t an evolutionary algorithm, and it isn’t Weasel.”

    The comment seems to imply that I was wrong about something. What did I say that was untrue or misleading? The site said that their generator was like Dawkins’ program and that Elsberry had consulted with them. I passed this information on. They had two generators, one fairly simple looking and one more complicated, which they said was similar to Dawkins’ model.

    I am sorry but all I was trying to do was shed some light on the nature of the program by people who had input from Elsberry. There seemed to be a lot of effort to prove some minutiae about this program and I thought this might help clear it up.

    These programs may be fun but has anyone shown that they mean anything?

  315. I have found all this not only fun, but meaningful. I understand, of course, that such little programs as Weasel are not models of evolution per se, but they have evolved [pun intended] into very powerful problem-solving techniques and they have taught us a great deal about the power of iterative systems.

    As I and others have repeatedly said, models of reality have to be tested against reality, and no model can encompass all of the reality that it is trying to represent, but techniques such as evolutionary algorithms, fractal generators, and other mathematical tools that have become feasible with the advent of computers have given us many powerful new tools for understanding the world.

    So, yes, I think this all means something.

    I’d also like to point out the value of the kind of discussion some of us have been having – one where questioning and answering about specifics, rather than engaging in rhetoric, has led to greater understanding, even if we continue to disagree about things such as the larger implications of the topic.

    And I’d really like to see someone take charge and un-moderate people so we can live up to the new moderation policy offered by BarryA on another thread. Barry wrote,

    We have no interest in censoring viewpoints, because we believe ID is true and consequently in any full and fair debate we will win — and if we don’t win we either need to learn to debate better or change our position.

    I don’t think moderating posts so they don’t show up for hours is in keeping with the spirit of this quote.

  316. P.S. OK, I’ll quit complaining now. :-)

  317.

    hi jerry,

    Thanks for the response. If you read the description at Monash, you’ll see what Zachriel means.

    Note that this demo works slightly different than the model described by Dawkins in his book. We are grateful to Dr. W. R. Elsberry for pointing this out and for highlighting the differences. In the original model the letters do not become fixed. Instead, at each generation (i.e. step) a number of mutant strings are produced from the current copy by randomly changing some letters. The mutants are considered to be chosen for the next generation. The chosen string is the one that most resembles the target string.

    The results of the model presented here and of the original Dawkins model are essentially the same: at each step either a “better” string is produced or something quite similar to the current version is retained. As a usual consequence, current strings are replaced with new strings that have at least as many matches as the previous one. We have created a more elaborated version of the weasel — the genetic algorithm weasel. That version incorporates the selection process as it is described by Dawkins and extends his model with some features commonly found in genetic algorithms.

    In other words, their two demos reach the target in about the same time, but by different methods. And it is the method that determines whether a demo models evolution, even slightly, or not. If the critics don’t understand how it works, their critiques are going to be not so much wrong (though they may be that) as beside the point. This is why the Monash programmers admit that the non-latching one — and that one only — is the one that “incorporates the selection process as it is described by Dawkins.”

    Here’s how Weasel works: Changes take place randomly, at the level of the letter (analogous to the gene); selection takes place holistically, at the level of the whole phrase (analogous to the organism). The problem with the “latching” version is that selection takes place at the level of the letter, and so the model completely misrepresents what Dawkins originally described. If you write a program that chooses what letters survive, you haven’t modeled anything; if you write a program that chooses what phrase survives, you’ve created a model — maybe a crude one, but a model nonetheless — of what happens in nature.

    I’m certainly no expert in this area, and I was relaying Zachriel’s point, but I believe that the debate speaks to a longstanding misunderstanding by the ID critics of what Weasel specifically (and an evolutionary algorithm generally) does, and how such an evolutionary algorithm models evolution even to a limited extent. I believe that this has been pointed out to Dr. Dembski for years, and yet he has persisted in treating Weasel, in his writing and in the EIL program, as though it latched letters.

    David

  318. Okay:

    Initial note: I observe there has been a bug in Apollos’ pgm, for which he has apologised and put up a correction.

    Appreciated.

    The corrected pgm implicitly quasi-locks, it seems at 5%, 500. [And, with such a large population, it may well have multiple- newly- correct- letter, far- skirt members of the binomially distributed population speeding up the approach to the target. That is, this one is tuned for speed, as pointed out above. (It seems clear as well -- even from the buggy example -- that a sufficiently detuned Weasel will show a very slow approach and multiple letter flickbacks. Thus, the point on the material difference between W 1986 and W 1987, per highly contrasting o/p patterns, still stands.)]

    On the underlying issues:

    1 –> Weasel, circa 1986, has explicitly rewarded mere proximity to target, without reference to functionality; as the reference in BW at that time to “nonsense” phrases makes clear.

    2 –> As such, Weasel cannot be a reasonable analogue to natural selection [i.e. Mr Dawkins' 'BLIND watchmaker'], as NS must cull based on differential reproductive success, i.e. differential functionality in the environment.

    3 –> In that context, it cannot reasonably be claimed that Weasel illustrates anything properly didactic, as presenting this as a premier example of the blind watchmaker at work, when it is in fact a foresighted watchmaker at work, is plainly highly misleading. (We can presume on charity that the fallacy involved was unintentional. Apparent confirmations of our expectations, after all, can be quite misleading. Indeed, that issue is a major part of the context of Popper’s work on falsification, and more broadly of the value of testability.)

    4 –> Now, of course Weasel circa 1986 is intended to illustrate the power of cumulative selection to find an otherwise hard to reach target. What it succeeds in showing is the power of foresighted cumulative selection to do that, i.e., it exemplifies the power of intelligent — though sub-optimal [there are better search strategies out there] — design. (And yes, that shows the basic problem of the panda’s thumb as an anti-design argument.) Weasel knocks over a question- begging strawman and so fails to address the Hoylean challenge of the threshold of complexity for first life, and onward for body plan level biodiversity. The issue of the origins of functionally specific complex information and evidently irreducibly complex structures in life forms — e.g. the algorithmic information storing and processing system in the heart of the cell — stands unanswered by Weasel . . . and, for that matter, its more sophisticated kin of today. (These are of course the key challenges raised by the rising challenger to the neo-Darwinian school, Intelligent Design. Weasel and kin, properly understood, illustrate intelligent design in action, not chance variation + natural selection.)

    5 –> Unfortunately, ever since 1986, Weasel and kin have persuaded a great many onlookers that Weasel, as the iconic example, illustrates BLIND cumulative selection. Thus, it has had far more of a rhetorical than a properly didactic impact.

    6 –> This should be acknowledged by those who have championed Weasel, and corrected; not excused by remarks on how teachers cannot tell their students “everything” or the like. Sorry, that is simply not good enough. Period.

    7 –> Going beyond that, as Joseph has documented [and as is evident even from the Wiki extracts that were used to try to justify Weasel], given the published o/p of Weasel circa 1986, and the explicit statements in BW, it is objectively a legitimate interpretation that Weasel in the original form implemented partitioned, explicitly latched search. So, the attempts to bash and dismiss those who have taken Mr Dawkins at his word circa 1986 should now cease.

    8 –> On the context of possibilities for implicit latching, and in light of the strength of Mr Dawkins’ reported statement that in fact he did not explicitly latch correct letters circa 1986, we have discussed the issue of implicit latching and/or quasi-latching as the most likely explanation of the published 1986 o/p; and I have accepted that this is — on preponderance of evidence — the best current explanation. This too, especially given the law of large numbers [onlookers observe the evidently studied silence on that in the past few days, after many attempted dismissals], is a legitimate understanding of the 1986 o/p.

    9 –> There have also been attempts to state or imply that the 1987 o/p’s were essentially the same as those of 1986. But, since an implicitly latched program with de-tuned parameters will plainly show the multiple flick-back patterns circa 1987 AND a sample of 300 changeable letters with 200 showing “go-correct and hold” is also on the table, such a claim clearly lacks warrant.

    10 –> Finally, let us note that “latching” is a secondary issue; one that came up because in a previous thread GLF tried a threadjack based on trying to argue that pointing to this as an aspect of Weasel circa 1986 was a fundamental and discrediting error on my part. Plainly, I made no error on the evidence in hand and reasonable interpretations of Mr Dawkins’ words in BW. The primary issue is that Weasel is not at all illustrative of a BLIND watchmaker, but instead of a BLIND FORESIGHTED watchmaker.

    _____________

    Thanks are due to the poster of this continuation on a second thread, once the previous one ran over 600 posts. (And, SG, the issue is not at all that of a fixed notion on WmAD’s part, but clarifying and correcting an issue that had become contentious and sidetracked another thread of high significance in its own right; never mind that it illustrated aptly the pattern of selective hyperskepticism in action.)

    GEM of TKI

  319. David Kellogg,

    The issue is NOT whether or not one can make the program “revert”.

    Obviously if you program unreasonably high mutation rates it could do so.

    That misses the point:

    Part of the Marks/ Dembski paper discusses a “partitioned search”.

    To illustrate a partitioned search they refer to the book “The Blind Watchmaker” and the use of the “weasel” program.

    In TBW Dawkins uses “weasel” to illustrate cumulative selection.

    “Cumulative” means “increasing by successive additions”.

    INCREASING BY SUCCESSIVE ADDITIONS.

    “Ratchet” means to “move by degrees in one direction only”.

    Increasing by additions means to move by degrees in one direction only.

    Dawkins NEVER mentions that one or more steps can be taken backward. He never says anything about regression.
    Therefore cumulative selection is a ratcheting process as described and illustrated by the “weasel” program in TBW.

    That is, once a matching letter is found, the process keeps it there. No need to search for what is already present.

    Translating over to nature this would be taken to mean once something useful is found it is kept and improved on.

    IOW it is not found, lost, and found again, this time with improvements. Reading TBW, that doesn’t fit what Richard is saying at all.

    And he never states that he uses the word “cumulative” in any other way but “increasing by successive additions”.

    How can a process be “cumulative” and at the same time allow you to keep losing what you have?

    We would call that the “yo-yo” selection process.

    And then no one would infer it is a partitioned search.

  320. David, at what is currently 317, makes a critical point about this latching/non-latching issue.

    He writes,

    Here’s how Weasel works: Changes take place randomly, at the level of the letter (analogous to the gene); selection takes place holistically, at the level of the whole phrase (analogous to the organism). The problem with the “latching” version is that selection takes place at the level of the letter, and so the model completely misrepresents what Dawkins originally described. If you write a program that chooses what letters survive, you haven’t modeled anything; if you write a program that chooses what phrase survives, you’ve created a model — maybe a crude one, but a model nonetheless — of what happens in nature.

    This point has also been made by several others upstream in the thread. I’d like to add my 2 cents.

    In the unlatched, proximity case the mutation routine knows nothing about the fitness routine: it is truly random in respect to fitness because nothing about the fitness routine influences how the phrase mutates. The mutation function operates letter by letter without any knowledge of the fitness function, and then later the fitness function acts holistically on the phrase as a whole without regard to what just did or did not mutate. The functions act independently of each other. This accurately models an important principle about how evolution works.

    On the other hand, in the Dembski latching partitioned search, the mutation routine has to constantly know about, or have received information from, the fitness routine in order to function, because it has to ask “is this letter correct?” before it even considers applying the possibility of mutation to the letter. This is teleological: the mutation function has foreknowledge about what will make the phrase more fit (in the form of knowing which letters are already correct), rather than just doing its thing and letting fitness sort itself out later. This is not how evolution works.
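    For contrast, here is a minimal sketch of the latched, partitioned variant being described. This is my own illustrative Python, not the EIL code; the point is only that the mutation step itself consults the target letter by letter:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def latched_mutate(phrase):
    # The mutation step peeks at the target: a letter that is already
    # correct is never touched, so selection acts letter by letter.
    return "".join(c if c == t else random.choice(ALPHABET)
                   for c, t in zip(phrase, TARGET))

def partitioned_search():
    phrase = "".join(random.choice(ALPHABET) for _ in TARGET)
    steps = 0
    while phrase != TARGET:
        phrase = latched_mutate(phrase)
        steps += 1
    return steps
```

    Under this scheme matched letters can only accumulate; the output of such a program can never show a reversion, by construction.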

    ============
    Now before kairosfocus fires off another couple thousand words repeating himself again, let me say that I and everyone else involved here knows that there are some very important ways in which Weasel does not – I repeat, not, model evolution.

    Weasel is a simple pedagogical model (remember it was written over 20 years ago on an Apple in BASIC), and like all models it can only capture a part of reality.

    But this non-latching/latching issue is a critical aspect of reality that Weasel does capture: in Dawkins’s non-latching model, mutation is random in respect to fitness, and in Dembski’s latching model it is not. The former is truly evolutionary, and the latter is not.

  321.

    kairosfocus [318], for this entire thread it has been clear that you are wrong. Your statistics have been wrongly applied (as Wesley Elsberry showed repeatedly), your assumptions have been taken out of thin air, and your evidence has been fantasy (elevating a pedagogical example to a random sample). Why do you continue to write as though you were correct all along? The first point in your latest list would seem to make your labors on latching irrelevant rather than wrong. Which is it?

    Joseph [319], is 5% an unreasonably high mutation rate? Dawkins assumes that readers are able to follow him. He never sets limits on what can mutate, and he mentions that the chosen string is closest to the target — meaning closest as a whole. Letters can, therefore, revert, even though the whole will continue to move closer to the target.

    Cumulative can include losses. Zachriel’s text provides (with citations) the following examples from mountain climbing, which you claimed [299] above would never be used that way:

    Walkers: ‘Easy’ means that the cumulative height climbed during the walk is less than 300ft ‘Moderate’ means that the cumulative climb is between 300ft and 1000ft and ‘hilly’ means the cumulative climb is over 1000ft.

    Runners: The cumulative climb for this run is 16,000 feet and only two people have completed this challenging run!

    Bikers: From the trail head, it’s about 1.5 miles to the new bridge. There are some technical switchbacks on the 5 climbs of this section (cumulative climb 800 feet).

    Climbers: My avocet watch indicated a cumulative climb of 28,650ft. We read the words on the summit memorial to the 2 skiers killed by avalanche in 1988.

  322. hazel:

    But this non-latching/latching issue is a critical aspect of reality that Weasel does capture: in Dawkins non-latching model, mutation is random in respect to fitness, and in Dembski’s latching model it is not.

    Prove it.

    IOW prove that Dembski’s random mutation is any different from Dawkins’s.

    Ya see in the “weasel” program there is a target- and that means FOREKNOWLEDGE.

    All offspring are compared to the target. And a target equals teleology.

    As for modeling something- Dawkins was NOT modeling evolution with the weasel program.

    He was ILLUSTRATING cumulative selection.

    And the way he illustrated AND described cumulative selection it is a ratcheting process that locks the matching letters into place.

    Deal with it.

  323. Hazel:

    1 –> FAIR COMMENT: Weasel is highly misleading if intended to be pedagogical. (So, it only serves a rhetorical purpose, not a properly educational one.)

    2 –> JUSTIFICATION, STEP 1: DUTY NO 1 of the educator is not to mislead or manipulate; for by definition he deals with the ones who are least likely to be able to spot and correct error.

    3 –> That holds for the public — pop sci — educator as well.

    4 –> BASIC EVIDENCE: Holistic selection or letter by letter selection [implicit or explicit] makes little or no difference. By Mr Dawkins’ direct statement in BW ch 3, courtesy Wiki, Weasel rewards mere proximity not differential function:

    We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before … it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.

    5 –> IMPLICATION: This is plainly foresighted choice based on mere proximity increment, not differential functionality, by programmed — intelligently preset — choice.

    6 –> WARRANTED CONCLUSION: So Weasel cannot properly model the claimed blind watchmaker, natural selection. And, that is long before implicit or explicit latching or quasi-latching become an issue.

    ______________

    RECOMMENDATION: WEASEL SHOULD BE WITHDRAWN AND CORRECTED, if it is intended to serve a legitimate didactic purpose.

    GEM of TKI

  324. PS: Illustrate, Am HD:

    il·lus·trate
    v. il·lus·trat·ed, il·lus·trat·ing, il·lus·trates
    v.tr.
    1.
    a. To clarify, as by use of examples or comparisons: The editor illustrated the definition with an example sentence.
    b. To clarify by serving as an example or comparison: The example sentence illustrated the meaning of the word.
    2. To provide (a publication) with explanatory or decorative features: illustrated the book with colorful drawings.
    3. Obsolete To illuminate.
    v.intr.
    To present a clarification, example, or explanation.

    –> that which is materially diverse to the point of being misleading, CANNOT properly illustrate.

  325. David Kellogg,

    So what I said about what was on the Monash website was not only correct, but what I pointed to was essentially the same thing as the Dawkins weasel program.

    The two programs are very different, and one only has to run them to see that they take different amounts of time and numbers of iterations and proceed quite differently. I suggest that everyone here who has followed this discussion go to the Monash website and try out the two different versions. The Dawkins version has all sorts of variations, and you can put in any word string you want.

  326.

    jerry, yes, the Monash website contains both a Weasel and non-Weasel version. I believe Zachriel wanted to stress that the differences are not trivial and that the one you most highlighted was the non-Weasel version.

  327. When I wrote, “But this non-latching/latching issue is a critical aspect of reality that Weasel does capture: in Dawkins non-latching model, mutation is random in respect to fitness, and in Dembski’s latching model it is not,” Joseph wrote,

    Prove it.

    First of all, I’m not sure what Joseph wants me to prove. If he is challenging the idea that in the real world of evolution mutation is always random in respect to fitness, then that is a much broader question than I am discussing.

    However, if he wants me to prove that “in Dawkins non-latching model, mutation is random in respect to fitness, and in Dembski’s latching model it is not,” I’m not sure what more I could write than I already did. Maybe Joseph would be willing to answer a few questions about the specifics of the situation.

    1. In the non-latching method, every letter in the phrase has the possibility of mutating, whether it is right or wrong. In other words, the fitness function does not affect the process of mutation while the mutating is going on. Or, still another way of saying this is that the mutation function has no knowledge of the fitness function.

    This is what we mean when we say mutation is random in respect to fitness.

    Is there some part of this, as a description of the non-latching method, that you disagree with? If so, can you state what you think is the case, at approximately the same level of specificity?

    2. In the latching method, the mutation function does have knowledge of the fitness function while mutating is going on, as it has to know to not mutate a correct letter, so the fitness function does affect the mutation process.

    Therefore, the mutation process is not independent of the fitness function, and therefore mutation is not entirely random in respect to fitness.

    Is there some part of this, as a description of the latching method, that you disagree with? If so, can you state what you think is the case, at approximately the same level of specificity?

  328. David Kellogg,

    “jerry, yes, the Monash website contains both a Weasel and non-Weasel version. I believe Zachriel wanted to stress that the differences are not trivial and that the one you most highlighted was the non-Weasel version.”

    Weaselly done. Or done like a true weasel? Methinks it is like a weasel.

  329.

    kairosfocus and others:

    An important update: Patrick May, who first sent me the video link that prompted this discussion, has chimed in with a very interesting post on his blog. By using the text of The Blind Watchmaker as a guide to creating a Weasel code, May convincingly answers the question of whether Dawkins implies or suggests latching in the book.

    Answer: he does not.

    He also answers the question of why someone would be fooled into thinking there was latching by virtue of the very limited samples provided in the book. His answer resembles the one I’ve provided upthread. But I’ll let him speak:

    The first conclusion that can be drawn from these results is that the correct letters do appear to be latched in the output, even though there is no explicit latching in the program. The primary reason for this is that only the best phrase from every ten generations is shown. That coarse sampling introduces a bias.

    Careful readers will have noted another source of bias. If more than one phrase in the set of progeny has the same fitness, the selection function arbitrarily keeps the first one found. A different selection algorithm might show more reversion.

    In short, the perception of latching is an artifact of the implementation and sampling bias. Running the program repeatedly and monitoring all generations will, in fact, show occasional reversions of correct letters. The odds of such a reversion are simply rather low.
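    May’s explanation can be checked directly. The hypothetical sketch below (my own Python, with assumed parameters, not May’s code) runs a non-latching Weasel to completion and counts how often the per-generation champion loses a previously correct letter; printing only every tenth champion, as the book does, would tend to hide such rare events:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(p):
    # Number of positions matching the target.
    return sum(a == b for a, b in zip(p, TARGET))

def mutate(p, rate):
    # Uniform per-letter mutation; correct letters are not protected.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in p)

def count_reversions(pop_size=50, rate=0.05):
    # Run to the target, tallying every generation in which the new
    # champion has lost a letter that the old champion had right.
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    reversions = 0
    while current != TARGET:
        best = max((mutate(current, rate) for _ in range(pop_size)),
                   key=fitness)
        if any(c == t and b != t
               for b, c, t in zip(best, current, TARGET)):
            reversions += 1
        current = best
    return reversions
```

    With a large population the count is usually zero or small, which is the quasi-latching effect discussed upthread; shrinking pop_size or raising rate makes reversions much more frequent.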

    Please, read the post and answer: why did you think it should have latching in the first place again?

  330. Hazel:

    Joseph seems to be busy elsewhere.

    I will note a few points, on clarification:

    1 –> There are two varieties of latching that are potentially at work: explicit and implicit — the latter will ratchet just as effectively as the former. The latter works by so matching per-generation population size and mutation rate (probably through trial and error) that there is an inexorable, cumulative progress toward the target. (On preponderance of evidence it is believed that this is what was at work in 1986.)

    2 –> The latter may also be de-tuned enough [by matching mutation rates and population size] to quasi-latch and then to only weakly approach the target after many generations. (This is what I believe to have been at work in the 1987 videotaped run. Cf Apollos’ flawed, accidentally rather detuned run and the generation count and flicking back patterns reported at comment no 1 for the 1987 run.)

    3 –> In the case of partitioned, explicitly latched search we may not just use one member per generation, but we may use a population of competing candidates to be the new champion, with each letter varied at random, save for those letters that are already correct. (The one-member case is the lower extremum.)

    4 –> Such can be “justified” based on the idea that that which is fit is — just that: already fit — so it will survive unchanged. (Thence: simplified illustration and all that . . .)

    5 –> Also, the point Joseph has made in recent days, that Dawkins’ remarks in BW ch 3 lend themselves to the understanding that he explicitly latched the search, should not be forgotten. Even at Monash University, absent specific correction from Mr Elsberry, they naturally inferred an explicitly latched, partitioned search. [My just-now run: partitioned, 88 gens to target; Genetic Algor Weasel, 1676 gens to target (with a lot of flicking back and preservation of at least current proximity). Which resembles 1986 and which resembles 1987, as published and videotaped respectively? Why?]

    6 –> Again: the bottom line is that Weasel implements proximity-reward search that does not constrain for functionality. So, as foresighted, targeted search based on reward of proximity without reference to adequacy of functionality or differential functionality, it cannot be a proper illustrative analogue for natural selection, the “blind watchmaker” at the heart of the book in which Weasel first appears. (And BTW, the want of algorithmic functionality as a requirement at each stage [recall, DNA has to code for the components assembled to form structures in the real body, which then have to work well enough to be selected for by NS] vitiates Monash U’s attempted IC counterexample model too.)

    ________________

    In short, Weasel and its progeny reflect a systematic failure to reckon with the full depth of the information generation and functionality challenges posed by models of OOL and of body plan level biodiversity that seek to explain by chance + necessity.

    GEM of TKI

  331. Hazel #249,

    Haven’t checked back in a while, so this may have been discussed already:

    The simplest explanation is NOT explicit latching. There is no reason to think that Dawkins programmed such a rule into Weasel.

    kf was referring to the examples of iterations in the book itself, where letters do not change once they reach the target. So if that’s all the evidence you have to work with, explicit locking would be the simplest explanation. It’s not until the video is seen that this presumption would be challenged, or until the program is 100% recreated and then analyzed with various runtime parameters.

    I see that a further discussion launched after this point, but this seems to be a silly argument to make.

  332. Patrick

    Thanks.

    GEM of TKI

    PS: DK, your linked blogger lost me in the first sentence on his incivility, with condescending and slanderous language: if he does not know — or more likely refuses to know — that design thought is not to be equated with creationism, he has no more credibility with me. Full stop.

    It is very plain from the text of BW, inclusive of descriptions as cited by Joseph and by Wiki, that Weasel circa 1986 shows ratcheting: the published sample shows 200 instances of latched letters, with no reversals, in a space of 300 characters, as the o/p moves inexorably towards its target. So Patrick’s bottom line on this secondary issue is correct. On the report that Dawkins did not EXPLICITLY latch, we have concluded that implicit latching is the best explanation.

    As to the primary question, Mr Dawkins himself settled at the outset in 1986, that Weasel exhibits proximity based reward without reference to functionality. It thus cannot be the blind watchmaker at work or a reasonable illustration of it. It begs, as opposed to answers the Hoylean challenge of getting to functional, complex bio-information.

  333. PPS: DK, As for “IDiots” as a reference to people who differ with him, sorry. Whatever possessed you to send me to a site that STARTS with that kind of language, DK?

  334.

    Patrick [331], no. As Patrick May demonstrates in the blog to which kairosfocus objects, a person using Dawkins’s text as a guide to coding would arrive at a version without latching. Also, as I have pointed out for some time, the examples of iterations in the text are nonrandom: they are heavily biased, being selected from among the progeny.

  335.

    kairosfocus, you’re wrong: the blog post does not use the term “IDiots” (or “IDiot”), either on that page, or on his site at all. It no more uses that term than Weasel uses latching.

  336. Non-latching implies that mutation is random in respect to fitness.

    Latching implies that mutation is not random in respect to fitness.

    kf and others: do you agree with those statements, yes or no?

    And if not, why not?

    And I’d like to encourage you to focus on just this question.

  337. Hazel:

    Since there is an implicit latching [then quasi-latching, then detuned . . . ] case, your question is mis-framed.

    That is, there is no simple yes-no.

    GEM of TKI

  338. P.S.: kf, i do not see the word IDiot on the blog page that David linked to. I see IDCists: perhaps you misread.

  339. In the non-latching case, the mutation function can mutate both incorrect and correct letters. That is, the mutation function has no knowledge of the fitness function nor is it influenced by the fitness function as it chooses which letters to mutate.

    It is in this sense that we say that mutation is random with respect to fitness.

    Do you agree with this sentence? If not, why not?

  340. Hazel:

    In the implicit latch and/or quasi-latch cases, the trick is that the multi-mutation skirts of the population are sufficiently excluded that the proximity filter locks out regressions, as 0-mutations and one or so good mutations will be caught with practical certainty.
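    This “skirt exclusion” point can be put quantitatively. Under assumed, purely illustrative Weasel parameters (per-letter mutation rate mu, 28-letter phrase, population pop), the chance that at least one offspring per generation is a mutation-free copy of the current champion is very high, so the new champion can essentially never score below the old one:

```python
def p_no_regress(mu=0.05, length=28, pop=100):
    # Probability a single copy comes through with zero mutations,
    # then probability that at least one of pop copies does; if one
    # does, the selected champion cannot score below its parent.
    p_clean = (1.0 - mu) ** length
    return 1.0 - (1.0 - p_clean) ** pop
```

    With mu = 0.05 and pop = 100 this probability differs from 1 by less than one part in ten billion, which is one concrete way an unlatched program can still look latched in its published o/p.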

    “Don’t look behind the curtain” does not work here. The concern is the program as a whole.

    On the EXPLICIT latch case, the point can be argued that “all” that is being done is letting what reached to the correct value by random changes plus filtering stay there as we “know” optimal mutations are preserved by the envt.

    But the decisive objection is that the proximity reward on the non-functional invalidates the whole exercise from the start. Latching, etc. are very much secondary issues, as I have pointed out from December on. Such do, however, point to the key problem: targeted, foresighted, non-functionality-constrained search.

    Weasel is misleading, at its root; begging the very Hoylean question of origination of complex, functional bio-information it was supposedly set up to answer.

    GEM of TKI

    PS: I will not go back to that page to see if these old dyslexic eyes misread or not (which is possible . . . ); but the difference in context between the two is a distinction without a difference. NEITHER is materially better; BOTH are equally slanderously false, uncivil and condescending.

  341. “I see that a further discussion launched after this point, but this seems to be a silly argument to make.”

    The whole discussion is silly. “To latch or not to latch, that is the folly.”

    “The most exquisite folly is made of wisdom too fine spun” Ben Franklin

  342. kf: I notice that you didn’t answer the question, but rather repeated points you have made about some other issues.

    Is there some difficulty in just looking at the relationship between the mutation function and fitness function and assessing whether the mutation is random in respect to fitness?

    Let me repeat, more explicitly: the overall net effect of the process each generation or over generations is NOT the issue I am asking you about, and even more so the overall relevance of this exercise to the real world is not what I am asking about.

    I am asking about a much simpler issue: when a phrase mutates in the non-latching situation, is it true or not that the mutation routine does not reference the fitness routine, and is thus random with respect to fitness?

    Can you focus on, and answer, just this one question?

  343. Hazel:

    Pardon; you can properly ask me a question, but you cannot justifiably demand that I answer to fit a flawed, probably rhetorically loaded framing.

    Recall, onlookers: all of this back-and-forth over latching etc. is in light of a rhetorically very loaded context: GLF’s attempt to discredit the undersigned by raising a distractive point on a thread on the self-referentially absurd implications of selective hyperskepticism. GLF’s foolish wager-boast gambit still stands exposed for the hollow rhetorical tactic it was.

    And, onlookers, note how there has been silence on the point that there is such a thing as the law of large numbers, which makes the o/ps of 1986 credibly good sampling data on the latching of the 1986 o/ps as published. You will search above in vain for a retraction of the former loudly announced claims that the published samples in question were too small to be likely to be truly representative of the o/p. Likewise, you will search in vain for an apology or correction over the violation of my privacy. Not to mention, for an explanation of why a denunciation and rhetorical dismissal of me by name, by someone indulging in blood slander against Christians and associated enabling behaviour for lewdness in public in the Jamaican media [and for which the newspaper in question was forced to publish a corrective; though they damage-controlled even that . . . ], was latched on to and trumpeted, here and at Anti Evo.

    Hazel, do you see why I HAVE to treat this as a rhetorically loaded context, not a quiet collegial afternoon exchange over sipped cups of tea in a Departmental Seminar Room?

    Now, back on your point: I pointed out already, that the root issue on the secondary question of latching is not just module action but module interaction in light of parameter settings etc. Hence, IMPLICIT latching.

    Emergence of — sometimes unexpected — behaviours through such interactions is a well-known characteristic of systems.

    (Indeed, this is one reason why the commonly met with co-optation argument against irreducible complexity is so simplistic — it grossly underestimates the challenges of coupling, interfacing and interaction; as any experienced system designer and developer can tell you. Join that to the key-lock fitting, wholistic behaviour of proteins on folding and agglomerating based on their AA sequences, and you see that the co-optation story is a little less than realistically credible.)

    So, you can allow at-random mutation among members of the population of a generation all you want. But, once pop size, mutation rate and — most importantly — filtering based on mere proximity without a realistic functionality criterion are imposed, under relevant co-tuned conditions [probably accessible through trial runs], the program will credibly lock or quasi-lock letters as they reach their individual targets.
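    The quasi-locking effect claimed here can be illustrated numerically. Under assumed settings (a 4% per-letter mutation rate and 50 children per generation; neither figure is from TBW), the chance that at least one child in a generation leaves every currently correct letter untouched is effectively certain, which is the raw ingredient of the behaviour being described:

```python
def p_no_reversion_in_best(correct, rate=0.04, pop=50):
    """Probability that at least one child in the generation leaves
    all currently-correct letters untouched (a conservative figure,
    since a mutated position could even redraw the same letter).

    correct : number of letters already matching the target
    rate    : per-letter mutation probability (assumed)
    pop     : children per generation (assumed)
    """
    p_child_clean = (1.0 - rate) ** correct       # one child mutates no correct letter
    return 1.0 - (1.0 - p_child_clean) ** pop     # at least one such child exists
```

For example, even with 27 of 28 letters already correct, `p_no_reversion_in_best(27)` under these assumed settings is greater than 0.9999999: some zero-reversion child is essentially always available for the proximity filter to pick.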

    Next, you can try to make a fine distinction between at random mutations that affect the string as a whole all the way through to the final version — and, how does the pgm know to stop mutating at that point, is that “random” or “blind”? — and the possibility of explicitly latching correct letters by partitioning the search so that once a letter reaches its functionality target that is recognised by locking. Indeed, one can always search for lawyering emanations of penumbras of hints in the text of TBW, circa 1986, to one’s heart’s content.

    But all of that is immaterial.

    For, on a plain, direct and reasonable, objective plain-meaning reading of the text [as Joseph has underscored in recent days], the explicitly latched version of the program is directly and strongly supported by the statements and o/p excerpts published in 1986. The example of Monash University over in Australia — which supports “your” side of this exchange — shows that beyond reasonable doubt. They had to be “corrected” by Mr Elsberry to see that they had “missed” the “correct” evolutionary materialist interpretation of Weasel circa 1986. (And, yes, I am using “scare quotes” to highlight the point.)

    This is therefore what is relevant: is there evidence that explicit latching is a reasonable interpretation of Mr Dawkins’ description and published o/p at that time?

    ANS: Plainly, yes. It is on further reports that Mr Dawkins states that he did not use explicit latching, that we have explored how else he could have programmed Weasel AT THAT TIME. And, on the balance of the evidence, the best explanation is IMPLICIT LATCHING. (In that context, the 1987 o/p is most likely a de-tuned for video impact version.)

    Back to the original frame: The latest rabbit trail over which is more or less random is a third level distraction from the core issue with Weasel.

    For, once mere proximity to target is rewarded without reference to realistic function, then Weasel begs rather than answers the Hoylean question of origin of complex bio-functional information.

    For, it is not a BLIND watchmaker at work.

    Weasel is not a proper illustration of claimed evolutionary processes, and begs the question of getting TO biofunction. It should therefore be withdrawn as a yet another flawed icon of modern evolutionary thought.

    And, on the question of latching, which was raised as a distractor in a previous thread, in an attempt to discredit the undersigned in the first place [and by extension once his name came up Mr Dembski], once we see that the text supports this as a reasonable interpretation, and that here are two credible mechanisms to achieve such, which are both consistent with the published results circa 1986, the exchange should have ended at that point.

    That it has not, simply reveals to the astute observer, that the evident latching of the o/p in Weasel circa 1986 — cf the original post in this thread — is a strong hint on what was fundamentally wrong with this icon of evolution.

    So, clouds of rhetorical squid ink notwithstanding, we know that there is a squid trying desperately to get away behind all the distractive clouds of contention and debate points.

    Hazel, you have raised some significant points that have contributed to a clear enough conclusion. For that, I give you thanks.

    GEM of TKI

  344.

    kairosfocus, did you ever define “implicit latching”? What would it mean for a letter to latch “implicitly”? I can’t find such a definition.

    As for the rest, your paragraph beginning “And, onlookers, note how there has been a silence” is full of what I view as errors and misrepresentations. I have to do an errand now, but it’s likely I’ll still be in the moderation pile when this gets posted. Such delays have made it possible for you to ignore (or perhaps be unaware of) various points that I have made in this thread. In any event, you have not responded to them.

  345. “Implicit latching” sounds to me like being “a bit pregnant”. Either a correct letter locks in place until the whole target phrase is reached or it does not. If correct letters can be deselected for whatever reason then there is no latching.

  346. I take it you aren’t going to answer my question. I realize that of course you don’t have to, but since you once again just spent 1000 words repeating things you have said many times before, and have made it clear that you’re not interested in a conversation on specifics, I gather there is no sense in my repeating myself and asking you again, so I won’t.

  347. The above was to kairosfocus: probably obvious, but I should have been clearer.

  348.
    George L Farquhar

    Kariosfocus,
    At this URL
    http://www.uncommondescent.com.....ent-300338

    You said

    Weasel sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met. That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.

    Concentrate on the first sentence. I include the second as your usual response here is to accuse me of quotemining. The second sentence is irrelevant at the moment.
    Now you say:

    Recall, onlookers: all of this back-forth over latching etc is in light of a rhetorically very loaded context, GLF’s attempt to discredit the undersigned by raising a distractive point on a thread on the self-referentially absurd implications of selective hyperskepticism.

    No, it is you who have attempted to distract with your “Hoylean challenge, islands of functionality, etc.” strawmen.

    The issue is quite clear.

    I offered $100,000 if you could provide a quote from Richard Dawkins that backed up your original position.

    You have not been able to do so.

    You will search above in vain for a retraction on the former loudly announced claims that the published samples in question were too small to be likely to be truly representative of the o/p.

    You did not respond to my question at the time. You claim that 200 out of 300 sample points showing no reverting proves your point.

    I asked you at the time – what was the total population of letters? It was not 300 was it?

    You ignore relevant questions and then claim victory? You ignore the fact that people are talking about this exact issue in mathematical terms and yet proclaim you are right, and that it’s been proven?

    Here
    http://tinyurl.com/cytao2
    The probability of a candidate changing a parent’s correct base to an incorrect base is discussed.

    If you are so sure you are right, why not prove it with maths? Who could argue with that?
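    For what it is worth, the per-letter quantity discussed at that link can be written down directly. Assuming each mutated position is redrawn uniformly from a 27-character alphabet (an illustrative assumption, not a quote from Dawkins), the probability that a child changes a parent’s correct letter to an incorrect one is:

```python
def p_correct_to_incorrect(rate=0.04, alphabet=27):
    """Per-letter probability that a child turns a parent's correct
    letter incorrect: the position must mutate, and the uniform
    redraw must land on a different character.
    (rate and alphabet size are illustrative assumptions.)"""
    return rate * (alphabet - 1) / alphabet
```

Multiplying this by the number of correct-letter slots in a sample gives the expected number of visible reversions, which is the kind of calculation the dispute turns on.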

    the explicitly latched version of the program is directly and strongly supported by the statements and o/p excerpts published in 1986.

    This is simply untrue, as has been documented in detail. Simply ignoring when people point it out does not advance your case.

    Why have you ignored so many questions addressed to you?

    This is therefore what is relevant: is there evidence that explicit latching is a reasonable interpretation of Mr Dawkins’ description and published o/p at that time?

    Quite right. And Richard Dawkins has said that he did not use latching, that latching would have been against the point he was trying to make and he did not even consider using latching.

    Therefore your interpretation is incorrect.

    Accept it. And then tell me how you intend to pay me my $100,000 for winning our bet.

  349. Hazel:

    I take it you aren’t going to answer my question.

    You seem a little slow, Hazel. Over how many weeks, across multiple threads, has KF circumlocuted questions regarding latching? Sure, he inserts a few “quasis” and “implicits”, but at this point it is safe to assume he isn’t going to be painted into a corner over the issue of Weasel’s latching. When it becomes obvious even to a relatively uneducated onlooker like myself, I think the point has been made. Persisting further is only gilding the lily. Or perhaps gilding the shadow of the lily on the wall of Plato’s Cave.

  350. kairosfocus @343

    Hazel, do you see why I HAVE to treat this as a rhetorically loaded context, not a quiet collegial afternoon exchange over sipped cups of tea in a Departmental Seminar Room?

    KF, I used to think you were one of the most decent, gentlemanly participants here, but your behavior has gone beyond the ridiculous on this topic.

    The only reason you won’t answer Hazel’s simple, non-loaded question is because you are constitutionally incapable of admitting even the slightest error. It is painfully clear that your claims of explicit latching are unfounded. Rather than admit this rather minor mistake, you spew copious amounts of unrelated verbiage in an attempt to distract from that core, essential, point and treat polite correspondents like Hazel rudely.

    I expected better of you.

    JJ

  351.

    kairofocus [343], hello again. I want to point out that the claims you have made are subject to alternate reading. I don’t expect you’ll agree on any of this, and I don’t claim that my views are uncontestable. Moreover, I put this in a separate comment in the hope that you can deal with the latching issue in its own post without digression or accusation.

    And, onlookers, note how there has been a silence on the point that there is such a thing as the law of large numbers, which makes the o/p’s of 1986 credibly good sampling data on the latching of the 1986 o/ps as published.

    Wesley Elsberry has argued that the law of large numbers works against you. You have not responded to that mathematical argument on its merits.

    You will search above in vain for a retraction on the former loudly announced claims that the published samples in question were too small to be likely to be truly representative of the o/p.

    I have pointed out that the data are highly non-representative because they are the products of a large selection bias (only the best sample from each generation is provided). This was obvious from the text of TBW.

    Likewise, you will search in vain for the apology or correction of violation of my privacy.

    The only person who keeps talking about your name at UD is you. I have mentioned that spam does not go to a proper name but to an email, so your claim of more spam from your proper name being put on another board is spurious. As far as AtBC, that’s not my board. I’m not going to ask them to change a policy that has nothing to do with me.

    Not to mention, for the explanation of why a denunciation and rhetorical dismissal of me by name by someone indulging in blood slander against Christians

    The person in question was responding to something that you signed and published. There was no “blood slander.” There was a claim about the evangelical community that is subject to dispute.

    and associated enabling behaviour for lewdness in public in the Jamaican media [and for which the newspaper in question was forced to publish a corrective; though they damage controlled even that . . . ], was latched on to and trumpeted, here and at Anti Evo.

    That’s a non-issue for me.

    David

  352.

    It isn’t 1986 any more. Many independently written and tested implementations of Dawkins’ simple “Weasel program” teaching aid are now publicly available. It’s a simple program, and a working script can be written by any reasonably bright teenager in a few minutes.

    The program gives a simple demonstration of the process which Darwin termed “Descent with modification” and with which all animal breeders are familiar.

    It’s a simple filter with two components. First, variation is generated: Dawkins chooses a copying algorithm that randomly introduces “mutations”–imperfections–into the copy. Second, a selection is made: Dawkins uses a predefined target and a simple distance metric suited to the search domain to select the “most fit” copy from which to make the next generation. Anyone who has bred pigeons, dogs or other animals for show, where published breed standards apply, will recognise the technique.

    Not surprisingly, the simple filter converges quickly on the target, just as competitive breeding of animals produces examples of an animal that match the parameters of the breed remarkably faithfully.
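    The two-component filter just described fits in a short, non-latching Python sketch. The 4% mutation rate and population of 100 are illustrative choices, not Dawkins’ original parameters:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(phrase):
    # Selection component: a simple distance metric, letters in place.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.04):
    # Variation component: imperfect copying. Any letter, correct or
    # not, may change; the copier knows nothing about the target.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def weasel(pop_size=100, rate=0.04, max_gens=10_000):
    # Start from a random phrase; each generation, breed pop_size
    # mutant copies and keep the one closest to the target.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for generation in range(1, max_gens + 1):
        children = [mutate(parent, rate) for _ in range(pop_size)]
        parent = max(children, key=fitness)
        if parent == TARGET:
            return generation
    return None  # did not converge (vanishingly unlikely at these settings)
```

With these settings a run typically converges in well under a few hundred generations, and nothing in the code prevents a correct letter from mutating away; any apparent “latching” in the output is an effect of the selection step, not of the copying step.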

    Notice that I haven’t referred to natural selection anywhere here. The point is that variation exists in nature and can be selected. This was already well established–had been known by breeders for centuries–before Darwin was born.

    The example doesn’t demonstrate natural selection because it isn’t intended to do so. It is as applicable to demonstrating the mechanism by which domesticated animals are bred as to anything else.

    I don’t know why intelligent design advocates would have any problem with this. The Weasel Program demonstrates nothing remotely controversial and does so in a simple and straightforward way that is difficult to misunderstand.

  353. Onlookers:

    Re 344 on:

    There is a successor thread, which answers to new objections.

    In particular, the idea that the observed latching in the o/p of Weasel circa 1986 is not real has long since been addressed, in terms not only of the fact that we have a more than adequate sample and direct statements by Mr Dawkins on the matter, but also that we provided two possible mechanisms, with sufficiently detailed explanations. Onlookers, simply scroll up to the original post above and count: of 300+ places where letters could change, 200+ — well beyond where the law of large numbers weighs in on the likely representativeness of a sample — show letters that, once they go correct, remain so. A dominant feature of the Weasel o/p circa 1986, and one which Mr Dawkins made clear reference to, lawyering over emanations of penumbras of the text notwithstanding.

    With zero exceptions.

    On preponderance of evidence, Weasel circa 1986 latches implicitly, off per-letter mutation rates interacting with per-generation population size and, most importantly, a selection filter that rewards mere proximity in the teeth of non-functionality. Circa 1987, Weasel was evidently detuned, so that we see a different pattern of behaviour: multiple reversions that occur fairly frequently.

    This and other successive variations and versions of Weasel and neo-Weasels should not be confused with the issue that was raised in an agenda-serving, originally threadjacking attempt to discredit those who pointed out the obvious fact of o/p latching in Weasel circa 1986. (That was brought up in a thread that was discussing the pervasive problem of Cliffordian evidentialist-form selective hyperskepticism, which continues to be a central intellectual challenge for the evolutionary materialists. Complete with shoals of red herrings led out to ad hominem-soaked strawmen, ignited to cloud and poison the atmosphere for discussion that might lead to inconvenient truth.)

    Going all the way back to December last, it is that proximity-without-functionality filter that has been highlighted as utterly gutting Weasel of any proper didactic or illustrative credibility. Weasel always has been a rhetorical exercise in question-begging and misdirection. That is why it has always been controversial, bland declarations to the contrary notwithstanding.

    This can be seen both from what GLF cited and twisted yet again from 111 in the December thread, and from what he did not cite from 107, e.g.:

    [107:] the problem with the fitness landscape is that it is flooded by a vast sea of non-function, and the islands of function are far separated one from the other. So far in fact — as I discuss in the linked in enough details to show why I say that — that searches on the order of the quantum state capacity of our observed universe are hopelessly inadequate. Once you get to the shores of an island, you can climb away all you want using RV + NS as a hill climber or whatever model suits your fancy.

    But you have to get TO the shores first. THAT is the real, and too often utterly unaddressed or brushed aside, challenge.

    [111, excerpted paragraph used by GLF in his threadjack:] Weasel sets a target sentence [check] then once a letter is guessed it preserves it for future iterations of trials [just look at the o/p above in the original post to see that; the issue is not what but how] until the full target is met [cf Dawkins, 1986, Ch 3 TBW: “The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.”]. That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.

    As to GLF’s infamous $100k offer, it was plainly never a serious offer, and was clearly only intended to serve as a gambit on which he hoped to boast that no-one could take him up. In that, he has been sadly exposed for the hollowness of the rhetoric involved. As can be abundantly seen here and in the previous thread.

    As to the recycling of long since adequately answered objections [apparently an evolutionary materialist advocacy strategy against the ID movement; probably on the idea that an often-repeated claim, even if unwarranted, may often be perceived as true], you can simply scroll up and see how the cycle of selective hyperskepticism has played out over and over again.

    In short, more than enough has been said, many times over, for any reasonable person to see that Weasel is not legitimate, and has never been, regardless of the issue of “latching.” On this secondary issue, it is plain that Weasel 1986 latches and that we have a reasonable mechanism — or two — to account for that.

    On either explanation [explicit or implicit], the latching points straight back to the core problem: Weasel is targetted, foresighted, designed, active information based search, not a reasonable analogue of any BLIND watchmaker, especially the much vaunted natural selection.

    GEM of TKI

    PS: Onlookers will also observe that to date, there has been no responsible accountability over the violation of my privacy, or over gleefully citing an abusive dismissal of me in the Jamaican media that the newspaper in question had to publish a corrective over. Note as well, that dismissal was in service to the blood slander of plastering Evangelical Christians with the terrorism of IslamIST radicals, and thereby to enable public lewdness for profit (with amateur night also horrendously in play, including corruption of minors).

  354. KF, you are incorrigibly long-winded, verbose, convoluted, and unable to stay on any one topic.

    Kairos, be a gem and focus. :-)

  355. David #334

    Patrick [331], no. As Patrick May demonstrates in the blog to which kairosfocus objects, a person using Dawkins’s text as a guide to coding would arrive at a version without latching.

    Did I not say just that?

    “Or if the program is 100% recreated and then analyzed with various runtime parameters.”

    Learn to read carefully before you respond.

    Also, as I have pointed out for some time, the examples of iterations in the text are nonrandom: they are heavily biased, being selected from among the progeny.

    That is the source of the problem. Dawkins’s unwitting mistake has led others into error. An error that has been noticed and dealt with, but people like you seem to have trouble realizing that.

    In any case, this thread is degenerating into a mud-slinging contest so I’m ending it.