Uncommon Descent | Serving The Intelligent Design Community

The Weasel lives on, now at PNAS



ID critics often complain that ID advocates go ON AND ON (and ON) worrying about Weasel-type models of evolution, as illustrations of how undirected variation and selection can rapidly converge to apparently designed outcomes. No one takes such models seriously as biology, the critics say. Weasels are toys with a strictly limited teaching purpose.

Over to the latest issue of the Proceedings of the National Academy of Sciences (PNAS). Looks like a weasel in the tall grass:

Suppose that we are trying to find a specific unknown word of L letters, each of the letters having been chosen from an alphabet of K letters. We want to find the word by means of a sequence of rounds of guessing letters. A single round consists in guessing all of the letters of the word by choosing, for each letter, a randomly chosen letter from the alphabet. If the correct word is not found, a new sequence is guessed, and the procedure is continued until the correct sequence is found. Under this paradigm the mean number of rounds of guessing until the correct sequence is found is indeed K^L.

But a more appropriate model is the following: After guessing each of the letters, we are told which (if any) of the guessed letters are correct, and then those letters are retained. The second round of guessing is applied only for the incorrect letters that remain after this first round, and so forth. This procedure mimics the “in parallel” evolutionary process.

See Herbert S. Wilf and Warren J. Ewens, “There’s plenty of time for evolution,” PNAS Early Edition. For those who cannot access PNAS behind its paywall, an earlier version of the same paper is available here.
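For readers who want to see the contrast concretely, here is a minimal simulation sketch of the two guessing procedures quoted above. This is not code from the paper; the alphabet size, word length, and trial counts are illustrative choices of my own, kept small so the blind search finishes quickly.

```python
import math
import random

K = 10   # alphabet size (illustrative)
L = 4    # word length (illustrative)
ALPHABET = [chr(ord("a") + i) for i in range(K)]
TARGET = [random.choice(ALPHABET) for _ in range(L)]

def blind_rounds():
    """Guess the whole word afresh each round; the mean number of rounds is K**L."""
    rounds = 0
    while True:
        rounds += 1
        if [random.choice(ALPHABET) for _ in range(L)] == TARGET:
            return rounds

def retaining_rounds():
    """Re-guess only the letters not yet correct (the 'in parallel' model).
    The paper's point is that this grows roughly like K * ln(L), not K**L."""
    unsolved = set(range(L))
    rounds = 0
    while unsolved:
        rounds += 1
        for i in list(unsolved):
            if random.choice(ALPHABET) == TARGET[i]:
                unsolved.discard(i)
    return rounds

print("blind search, mean rounds over 50 trials:",
      sum(blind_rounds() for _ in range(50)) / 50, "(K**L =", K ** L, ")")
print("retaining search, mean rounds over 1000 trials:",
      sum(retaining_rounds() for _ in range(1000)) / 1000)
```

Even at these toy sizes the retaining model finishes in a couple of dozen rounds while the blind model needs thousands; that gap is the whole argument of the paper.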

Comments
"This is Teleology, and it makes evolutionists look stupid..."
Many are stupid. The rest are either con artists or, like Dawkins, neither stupid nor a con artist but a person who cannot bring himself to accept the fact that he pissed away his life on a lie.
GilDodgen
December 19, 2010 at 04:55 PM PDT

The fatal error in R. Dawkins's "Weasel Generator" has been cited over and over again. It is embarrassing for evolutionists to have him shop that tired old nonsense around. The Weasel Generator knows beforehand which letters to save and which to discard. This is Teleology, and it makes evolutionists look stupid to first claim "teleology has no place in evolution" and then to have a huckster trying to pretend otherwise.
Rhod
December 16, 2010 at 02:30 AM PDT

The premise of the article, of course, would not work for IC systems.
@4: "NS is a mechanism that is meant to forestall degradation of the genome." It certainly appears that way, with all the error-correction mechanisms and copying safeguards; it is rather incongruous for a system that is astonishingly good at preserving the status quo to also be the actual mechanism of change.
avocationist
December 15, 2010 at 05:04 PM PDT

Another issue: in order to say there is enough time to do something, you have to know whether the job can be done at all and what it would take to do it.
Joseph
December 15, 2010 at 01:06 PM PDT

That paper is really a shame. I had already read the preprint. It seems that now this garbage has safely found its way to official publication, peer reviewed and, I suppose, peer acclaimed.
gpuccio
December 15, 2010 at 10:33 AM PDT

I think the real question is whether or not NS is even a mechanism. The gist of this paper is that those who question evolution on the grounds that there simply hasn't been enough time are making the [egregious?] error of not correctly taking NS into account. The implication seems to be that when NS is properly understood and taken into account, everything becomes crystal clear.

Oh really? Try finding a clear explanation of what it is that NS is supposed to have done and how. NS is like a magic wand waved over this or that evolutionary problem, and suddenly everything is explained. At the most basic level, a mechanism is something that performs a task. What task does NS perform, and exactly and precisely how does it go about doing it? The authors of the paper cited in the OP want you to believe that NS magically knows in advance what the outcome is supposed to look like, so it can conserve the good and filter out the bad.

If NS as a mechanism were as well understood as we're led to believe, then we ought to see all sorts of accurate predictions about the effects that this or that selective pressure would produce. Hmmm... where are those, I wonder? Instead, NS is little more than a handy descriptive phrase used to describe certain observations after the fact! "Oh, this trait popped up... well, NS did that!" "Really, how?" "We're not sure, but that's how it works, see? All cleared up now?"
DonaldM
December 15, 2010 at 09:43 AM PDT

"But a more appropriate model is the following: After guessing each of the letters, we are told which (if any) of the guessed letters are correct, and then those letters are retained. The second round of guessing is applied only for the incorrect letters that remain after this first round, and so forth. This procedure mimics the 'in parallel' evolutionary process."

This is not how natural selection works. To make it more like natural selection, we start by generating several words. This is the first generation. For each generation: each word is assigned a reproduction weight based on how close it is to the target word; we are not told which letters are right. Randomly choose from the weighted pool and make a duplicate of the chosen word, with each letter of the word having a small chance of alteration when instantiated in the next generation. Repeat selections from the pool until the next generation is full. Then repeat the generational process. This will, of course, require modification for sexually reproducing creatures.
EvilSnack
December 15, 2010 at 05:30 AM PDT

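A minimal sketch of the unlatched, weighted-pool procedure EvilSnack describes above: no letter is ever locked in, and fitness only biases which words get copied into the next generation. The target phrase, population size, mutation rate, and the exponential weighting are my own illustrative assumptions, since the comment leaves them unspecified.

```python
import math
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # illustrative target phrase
ALPHABET = string.ascii_uppercase + " "
POP_SIZE = 200        # assumed population size
MUTATION_RATE = 0.01  # assumed per-letter copying error rate
SELECTION = 0.5       # assumed strength of the fitness weighting

def fitness(word):
    """Closeness to the target: number of matching positions (used only as a weight)."""
    return sum(a == b for a, b in zip(word, TARGET))

def mutate(word):
    """Copy a word; each letter independently has a small chance of alteration."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in word)

# First generation: random words. No position is ever reported as "correct".
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(POP_SIZE)]

generation = 0
while TARGET not in population:
    generation += 1
    # Reproduction weight rises with closeness to the target; nothing is latched.
    weights = [math.exp(SELECTION * fitness(w)) for w in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [mutate(p) for p in parents]
    if generation % 50 == 0:
        print(generation, max(population, key=fitness))

print("target reached in generation", generation)
```

With these particular parameter choices the run still converges, typically in a few hundred generations; how quickly, and whether it converges at all, depends heavily on the mutation rate and the steepness of the weighting.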
"This paper is Dawkins's Blind Watchmaker argument redux."

That was my reaction too, although the authors don't mention any weasels, and they analyse a latched model, not an unlatched one. (Uh oh, I'm about to start a long argument, aren't I?) Hasn't Dr. Dembski got the same result, but in the guise of active information?
Heinrich
December 15, 2010 at 12:35 AM PDT

Paul, the Darwinian notion that random errors filtered by natural selection (throwing out the bad error stuff and reproducing the good error stuff) can explain everything concerning living systems -- in light of the information age and the recognition that bio-systems are information-processing systems with inconceivably sophisticated error-detection and repair algorithms, not to mention the machinery with which this technology is implemented and realized, plus much more -- is simply preposterous, ludicrous, illogical, and completely divorced from any acquaintance with the discoveries of modern science, mathematics, and information theory. (My apologies for the Proust-like sentence -- it seemed appropriate at the time.) The inability of the Darwinist (especially in academia) to recognize this is either the result of a philosophical commitment that will not allow the evidence to speak for itself, or ignorance of the aforementioned hard-science disciplines.
GilDodgen
December 14, 2010 at 07:37 PM PDT

I looked at Dr. Behe's post at ENV. Like Paul Nelson and Dr. Behe, my first impression is, indeed, Dawkins redux. In my prior post, I noted that their logic should push them in the exact opposite direction from where they want to go with their mathematical result. But there are more problems.

Take, e.g., Dr. Behe's findings in The Edge of Evolution. It takes 10^20 replications to arrive at the two a.a. substitutions that protect the parasite from chloroquine. According to Wilf and Ewens's mathematical results, this should have happened within one infection of one person (10^8 replications). But it didn't. It took twenty to thirty years for the resistance to develop. We have a saying for stuff like this: "Too good to be true!"

But, secondly, let's notice this: nowhere in these equations is NS to be found! They're just equations that could apply to anything. Wilf uses a mean distribution of independently and identically distributed (iid) random variables. These could be ANY random variables. What strict application does this have to the biological realm, other than considering genes to be iid and then (which is not mentioned at all) assuming permutations without substitution to be taking place? Again, this is an analysis that applies to any random variables, and would therefore apply to dice, to roulette wheels, and to coin flips. Why hasn't this ease of "searching" been seen before?
PaV
December 14, 2010 at 03:47 PM PDT

This paper is Dawkins's Blind Watchmaker argument redux. Here's a quote:

"In a population admitting a million births in any year, we may expect something on the order of five million such de novo mutations, or about 250 per gene in a genome containing 20,000 genes. There is then little problem about a supply of new mutations in any gene. However only a small proportion of these can be expected to be favorable. We formalize this in the calculations below."

Are we not right in asking the authors: if, using your logic, you claim that it takes only 390 "rounds" [do they mean generations? why aren't they more clear?] of "testing" to find the correct grouping of 20,000 random variables, each one representing a gene, from an alphabet of 40 letters representing the available mutations, and if you state that "only a small proportion" of the mutations are beneficial, which is really another way of saying that most mutations are harmful, then doesn't your logic imply that most organisms would fail, and fail in fewer than 390 "rounds"? But, of course, we see life all around us. So obviously something is wrong.

All of this serves to point out what the brilliant mathematician and astrophysicist Sir Fred Hoyle concluded: NS is a mechanism that is meant to forestall degradation of the genome, not its further transformation.
PaV
December 14, 2010 at 02:26 PM PDT

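A quick arithmetic check on the "390 rounds" figure PaV quotes above, assuming (my reading of the preprint, so treat it as an assumption) that the paper's retained-letters model needs on the order of K times the natural log of L rounds, versus K^L rounds for blind guessing:

```python
import math

K = 40       # "alphabet" of candidate mutations per gene, from the quoted passage
L = 20_000   # number of genes, from the quoted passage

# Order-of-magnitude comparison only, not the paper's exact formula.
parallel_rounds = K * math.log(L)                 # about 396, in the ballpark of "390 rounds"
blind_digits = math.floor(L * math.log10(K)) + 1  # number of decimal digits in K**L

print("retained-letters model, roughly K * ln(L) rounds:", round(parallel_rounds))
print("blind guessing, K**L rounds is a number with about", blind_digits, "digits")
```

The first number lands near the 390 figure quoted from the paper; the second (a number more than 32,000 digits long) is the quantity the paper argues is the wrong benchmark.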
Dr. Behe has a piece up on ENV comparing this PNAS paper to Groundhog Day with Bill Murray. Perhaps this scene in particular clearly lays out the foresight Darwinists try to smuggle into evolution:
Groundhog Day: http://www.youtube.com/watch?v=2FumzFeb6q8
bornagain77
December 14, 2010 at 01:44 PM PDT

I'm telling you, some people really think natural selection is a magical ratchet. "Poof the Magic Mutant, a-t-g-c..." Just sayin'.
Joseph
December 14, 2010 at 01:14 PM PDT

"After guessing each of the letters, we are told which (if any) of the guessed letters are correct, and then those letters are retained."

Obviously the verbs "told" and "retained" presuppose someone with basic intelligence. I don't see how natural selection can "tell" which letters are from Shakespeare and which are not. Obviously one has to be an "evolutionary biologist" to use such peculiar typewriter fancies as evidence for evolution.
VMartin
December 14, 2010 at 12:54 PM PDT

