Uncommon Descent Serving The Intelligent Design Community

Dawkins Weasel vs. Blind Search — simplified illustration of No Free Lunch theorems


I once offered to donate $100 to Darwinist Dave Thomas' favorite Darwinist organization if he could write a genetic algorithm to solve a password. I wrote a 40-character password on paper and stored it in a safe place. To get the $100, his genetic algorithm would have to figure out what the password was. I was even willing to let him have more than a few shots at it. That is, he could write an algorithm that would propose a password, it would connect to my computer, and my computer, which had a copy of the password, would simply say "pass or fail". My computer wouldn't say "you're getting closer or farther" from the solution; it would merely say "pass or fail". But he wasn't willing even to go that far. He declined my generous offer. 🙂
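The pass-or-fail setup can be sketched in a few lines of Python (the function names and the 26-letter uppercase alphabet below are illustrative assumptions, not the terms of the actual wager):

```python
import math
import random
import string

ALPHABET = string.ascii_uppercase  # 26 symbols, an illustrative assumption

def oracle(guess, secret):
    """Pass/fail oracle: True only on an exact match.
    It leaks no 'warmer/colder' information a search could exploit."""
    return guess == secret

def blind_search(secret, max_tries):
    """Uniform random guessing -- with only a pass/fail oracle,
    no black-box algorithm can expect to do better on average."""
    for t in range(1, max_tries + 1):
        guess = "".join(random.choice(ALPHABET) for _ in secret)
        if oracle(guess, secret):
            return t
    return None

# For a 40-character password the space is 26**40: astronomically large.
space = 26 ** 40
print(f"26^40 ~ {space:.3e} candidates, {math.log2(space):.0f} bits")
```

With no "closer or farther" signal there is nothing for selection to act on, so random guessing against roughly 4 × 10^56 candidates is the best any black-box search can expect.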

Dave Thomas, like Richard Dawkins, advertises the supposed mighty power of genetic algorithms, but when pressed to solve the sort of problems that are relevant to evolution, they are nowhere to be seen.

Complex functional proteins for a system are notoriously difficult to construct. They are like passwords that must be matched to the right login.

Evolving a functional protein in one context to become functional in another context is not so easy. This is akin to taking a functional password for one account and presuming we could evolve it in steps to become a functional password for another account. Thankfully this doesn’t happen, otherwise thieves could be evolving their bank account passwords to be able to compromise your bank account passwords!

In general, transitionals from one functional protein to another are not selectively favored so as to coax evolution toward a new functional target. If each attempt to evolve a new protein is met with "pass or fail" rather than "you're getting closer or farther", the search is effectively as blind as a random search. The evolutionary search for new functional proteins fails for the same reasons thieves cannot evolve their functional passwords into your functional passwords.

The fact that Dave Thomas declined my offer indicates deep down he understands the fallacious claims of Darwinism and that Dawkins Weasel is a misleading picture of how natural selection in the wild really works when trying to solve problems like protein evolution. He knew he couldn’t take his passwords and evolve them into mine.

Despite this, we hear evolutionists proudly proclaim "evolution doesn't evolve proteins from scratch, it evolves them from existing ones". This claim may look promising on the surface, but let me pose a rhetorical question to the readers. You have a functioning password that works for your account; it may even be strikingly similar to the passwords other people have for their accounts. Does that fact give you a better chance of solving their passwords than blind luck? No. But that is effectively what evolutionary biologists are saying when they say "evolution evolves one protein from another." So if Darwinian evolution will not evolve proteins, what will? Surprise, there is a new mechanism of evolution — POOF….

But these considerations do not hinder Dawkins from advertising Weasel as the way evolution works. In contrast, as reported at UD, real evolution destroys complexity over time. The average of all reported real-time or near-real-time lab and field observations is that most adaptive evolution is loss of function, not acquisition of function — Behe's rule. In fact, real evolution is worse than blind search: it can't even hold on to the complexity that already exists, much less create it. The Blind Watchmaker would dispose of lunches even if they were free.

The No Free Lunch theorems are the formalization showing that Darwinian search is no better than blind search for cases like solving passwords. No Free Lunch would assert that if Dave Thomas' genetic algorithm solved my password, he was likely privy to specialized information which a random search algorithm didn't have. By way of analogy, in the case of Dawkins Weasel, if we view the phrase "METHINKS IT IS LIKE A WEASEL" as the target password, then Dawkins pretty much front-loaded the desired password to begin with. But Dawkins and Thomas will have no such luck if they don't have the desired password up front.
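For contrast, here is a minimal reconstruction of the Weasel scheme (not Dawkins' published code; the population size of 100 and the 5% per-character mutation rate are assumed settings). It converges quickly precisely because its fitness function reports per-character closeness, the very feedback a pass/fail oracle withholds:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = string.ascii_uppercase + " "

def fitness(s):
    # "Warmer/colder" feedback: count of characters already correct
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate):
    # Each character independently has a `rate` chance of being replaced
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

def weasel(pop_size=100, rate=0.05, seed=1):
    random.seed(seed)
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    generations = 0
    while fitness(parent) < len(TARGET):
        generations += 1
        parent = max((mutate(parent, rate) for _ in range(pop_size)),
                     key=fitness)
    return generations

print(weasel())  # converges quickly; swap fitness() for a pass/fail
                 # test and the same loop degenerates into blind search
```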

But I didn’t give Dave the target answer, hence there was no free lunch ($100 worth) for Dave Thomas.

Comments
Rajan, by denying NFL's applicability to the problem of population genetics you shoot yourself in the foot. That is called special pleading. If the fitness function changes over time, it is an observation rather than a law by which organisms organize themselves. Gravity never changes; the mechanics of flight never change. These are natural laws, i.e. immutable. If the fitness function changes, then the function is not a natural law by definition but merely an ad hoc mechanism to explain the system. Stop funding evolutionary biology classes and build another Hilton Palace.
Loghin
November 25, 2013 at 01:53 PM PDT
Just in time: Cornelius Hunter gives some insight into why evolutionists believed proteins could evolve so easily. They once thought the sequences of amino acids were random! If they were random, evolution would not face the problems of No Free Lunch. Here is Dr. Hunter's post: http://darwins-god.blogspot.com/2013/11/fred-sanger-protein-sequences-and.html
scordova
November 24, 2013 at 03:03 PM PDT
scordova #48
How the paradox comes into play. What happens when we have a colony of cells that came from one cell. Now we have more homochiral molecules. Do we count this large number as being more improbable? I do, but if I do, this raises the problem of CSI (from homochirality) growing and growing because the cells keep multiplying. One resolution to the paradox is to postulate that CSI of this variety can grow in an open system, that is the solution to the paradox that I accept.
Here I don't agree. I don't understand why you count the increase of homochiral molecules due to the reproduction of cells as new CSI. Cells have the potentiality to reproduce. This reproduction generates new homochiral molecules. No wonder. But no new organization is produced. Simply the potentiality existing from the beginning in the cells is actualized. This is not an example "that CSI of this variety can grow in an open system". For this reason, at the question "Do we count this larger number [of homochiral molecules] as being more improbable?" I answer "I don't". Differently, if from the colony of cells in the environment (considered as an open system) arises a different kind of organism not present in the potentiality of the primitive cells, then we could say that "new CSI has grown in an open system". This is what evolutionists hope (they put their trust in "open systems"), but it seems to me that you and I agree that such an event is not possible.
niwrad
November 24, 2013 at 01:29 PM PDT
The bounder!
Axel
November 24, 2013 at 08:00 AM PDT
Very amusing comment by a Peter Wadeck, below, and all the more so for being entirely plausible. It is currently the last post on the page: 'The problem with evolution is that it is controlled by biologists that are on the low end of the intelligence spectrum in the scientific community. I am sure you have heard Dawkins complain that there is too much mathematics coming into biology. He is complaining because a rigorous science would have to reject his theological theory of evolution.' Read more: http://www.ncregister.com/blog/mark-shea/intelligent-design-vs.-the-argument-from-design#ixzz2lZulxmHK
Axel
November 24, 2013 at 07:57 AM PDT
gpuccio, So nice to hear from you! If I may offer a little personal history about the 2000-coin example. The coin example is obviously analogous to the problem of homochirality in biology. Not only are amino acids racemic in all lab OOL experiments, even if an OOL experiment miraculously generated homochiral amino acids, they won't stay homochiral for very long; they will racemize according to various half-lives. In the case of survival, homochirality is critical, since proteins will not fold properly if the amino acids are not homochiral. There is probably some great importance also for homochirality in DNA. From a probability standpoint, for the 2000 coins heads, the number of coins that are heads is the number of bits of CSI — 2000. It will pass the EF as such. Analogously, for a minimal biological organism, a very large number of amino acids must be homochiral. Let's suppose some absurdly small organism of a million amino acids; the probability of homochirality is on the order of 1 out of 2^1,000,000. The example is close to my heart because it was the beginning of my journey from near agnosticism to strong belief in ID and the Christian faith. There are also examples of design in biology that may not be functional, such as the hierarchical organization that Linnaeus and other creationists perceived. It might be possible in principle that some simple chemical process can induce homochirality in a pre-biotic soup, but that is only a speculation, and attempts to bias the ratio of L and D amino acids result in potentially lethal conditions for life; not to mention, homochirality dissipates over time anyway (some half-lives are on the order of decades). Furthermore, Fox unwittingly demonstrated that polymerization through heat destroys homochirality. As with the 2000 coins heads, some might argue a simple mechanism created it, but the homochirality argument was the spark of hope that the Designer left evidence for us to discover that we were designed.

He could have chosen to leave us in the dark, but he did not. That's probably why I have focused a little more on these very simple, and somewhat unspectacular, examples that will pass the EF but are not necessarily considered functional. It has some personal significance to me in my journey through ID — it was my starting point.

How does the paradox come into play? What happens when we have a colony of cells that came from one cell? Now we have more homochiral molecules. Do we count this larger number as being more improbable? I do, but if I do, this raises the problem of CSI (from homochirality) growing and growing because the cells keep multiplying. One resolution to the paradox is to postulate that CSI of this variety can grow in an open system; that is the solution to the paradox that I accept. Obviously most IDists at UD (except maybe myself and Mapou) reject the postulate that CSI can grow in an open system via the agency of cells (which we view as AI systems).

I posted this thread partly to affirm that I support the NFL theorems in the case of blind search. The theorems' applicability is blatantly obvious for the problem of blind search. But it was also a good opportunity to point out that the application of NFL to open systems might require some consideration and caution. I don't feel I can defend the applicability of NFL theorems in open systems with the same force with which it can be defended in the case of blind search. But in sum, Dave Thomas doesn't get a free lunch. :-)
scordova
November 24, 2013 at 07:45 AM PDT
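The arithmetic in scordova's comment above is easy to check: n independent fair binary events all coming out one way has probability 2^-n, i.e. n bits of surprisal. A quick sketch, assuming (as the comment does) equal L/D odds at each position:

```python
import math

def all_heads_bits(n):
    """Surprisal, in bits, of n fair coins all landing heads:
    -log2((1/2)**n) = n bits (returned analytically, since 0.5**n
    underflows to 0.0 in floating point for large n)."""
    return n

print(all_heads_bits(2000), "bits for 2000 coins all heads")

# Same arithmetic for the homochirality illustration: a million chiral
# centres, each assumed L or D with equal odds, gives p = 2**-1_000_000.
aa = 1_000_000
print(f"log10(2^{aa}) = {aa * math.log10(2):.0f}")  # denominator has ~301,030 digits
```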
Sal: Here I am about dFSCI. I will try to answer your question about CSI in "open systems". In your other post, you write:
Suppose I have four sets of fair coins and each set of coins contains 500 fair coins. Let us label each set A, B, C, D. Suppose further that each set of coins is all heads. We assert then that CSI exists, and each set has 500 bits of CSI. So far so good, but the paradoxes will appear shortly.
I don't think that the problem is correctly defined here. First of all, I would point out that dFSCI (I will use my restricted definition from now on) is not a property of an isolated object: it can only be assessed for an object in relation to a specifically defined system, which is the system that we believe generated the object in its present form. IOWs, a long protein exhibits dFSCI if we consider the natural system of our planet with its natural resources. Defining a system allows us to consider two important variables: the probabilistic resources of the system (IOWs, its natural capacities of implementing random variation), and potential deterministic algorithms that act in the system. If we correctly take into account those variables, then the assessment of dFSCI is perfectly rigorous. In the case of your coin sets, we must specify the setting. Are the coin sets the result of individual coin tosses, and are we sure that the coins are perfectly fair? These questions are especially relevant, because you presented an example of complexity which is not necessarily an example of dFSCI. First of all, a series of all heads is not a very good example of "function", but let's accept that it can be defined as a function in a very wide sense. But the most important point is that the information here, while certainly complex, is highly compressible. Therefore, we must really ask ourselves if some necessity algorithm in the system can be responsible for the result. Even if we are sure that the coins were individually tossed, and that no kind of reproducible effect controlled the tossing, unfair coins are a simple possible "necessity cause".

What I mean is that, while considering a result whose functional complexity is highly compressible, we must be really sure that no necessity algorithm in the system can generate that result: in this case, we must be sure that the coins are perfectly fair, and that the tossing is really random, and not subject to systematic, reproducible effects. This answers also your "problem" about one setting being the copy of another. That is a false problem. Again, what we must ask ourselves is: was one set copied from the other by some copying mechanism already present in the system? In that case, no new information has been generated. Not so, instead, if both sets independently arose from random variation, that is, if each of them was generated independently by true random tossing of perfectly fair coins. In that case, the improbability of the whole result multiplies, and a design inference is warranted (for example, in the form of some kind of foul play). Let's make the example of a protein. Indeed, there is no surprise that a protein is synthesized in a cell which already contains: a) the genetic information for it (the protein gene); b) the transcription and translation system. The existence of billions of molecules of hemoglobin on our planet is certainly not a sign of new dFSCI each time a new molecule is synthesized. We know how each molecule is generated in each case. The appearance of new dFSCI is when, for the first time, a molecule of hemoglobin appeared on our planet, beyond the capacities of random variation, and without any reasonable algorithm in the system that could create a new functional protein, without any functional precursor. IOWs, as I have always stated, it's the appearance of new basic protein domains, at multiple, definite times in natural history, that is truly an example of new dFSCI in a system which cannot explain it, and therefore supports a design inference. IOWs, a copy of Hamlet is not new dFSCI (although it implies the complexity of the copying system).

But Shakespeare writing Hamlet in the beginning definitely is.
gpuccio
November 24, 2013 at 02:29 AM PDT
Here of course scordova is perfectly right. EAs can optimize what is specified in the fitness function. If this is unspecified or poorly specified, an EA optimizes nothing or next to nothing. Natural selection has a poorly specified "fitness function" (survival), hence it is a bad EA, absolutely incapable of creating ex novo the least system, let alone the giant functional hierarchies of organisms. No EA builds any new organization with a fitness function of simple "survival". What a survival EA can do is tune a parameter of a pre-existing system to help it survive. A trivial job that has nothing to do with the creation of new CSI systems. Natural selection (meant as a survival EA) in the wild does exactly such a trivial job.
niwrad
November 24, 2013 at 12:56 AM PDT
Sal: A few thoughts. Your post illustrates very well the impossibility for any random/algorithmic system to create new CSI. In essence, genetic algorithms cannot work when the complex functional information cannot be "deconstructed" into functional transitional forms whose sequence distance is in the range of random variation. This is another way to say that they cannot work in a "pass or fail" context; they absolutely need a "you're getting closer or farther" context. I have repeatedly stated that no such "you're getting closer or farther" context exists naturally for truly complex digital information, such as protein sequences or software algorithms. The absolute lack of simpler functional intermediates for any basic protein domain is a clear empirical demonstration of that. The argument that evolution has no specific target is an old one, and essentially irrelevant. I have many times repeated that, in an already existing biological context, which is already built on very complex solutions, the concept of "any possible function" is of little help. Only a few very specific biological and biochemical functions will work there, and will give the reproductive advantages necessary for NS to take place. Even summing the probabilities of all possible functions that could have that result, the complexity remains so huge that RV is out of the game. Strangely enough, biological evolution does not use a hacksaw or acid. Instead, for billions of years the emergence of new functions has been realized by the appearance of new, complex, functional proteins with highly sophisticated biochemical activities that exist nowhere else, and, even more surprisingly, by long protein cascades whose irreducible complexity is beyond any doubt for all reasonable people. Finally, I don't agree with you about the "paradoxes" of assessing CSI in "open systems". I see no difficulty at all in that.

In my personal formulation of CSI (dFSCI), complete with rigorous definitions of dFSCI itself, and of how to assess it in some definite system, I believe there is no such difficulty. I would like to write about that, but at the moment I don't have the time. Maybe I can do that later.
gpuccio
November 24, 2013 at 12:43 AM PDT
nullasalus:
And would your position therefore be that ‘genetic algorithms’ don’t have much to do with evolution?
I'm agnostic about genetic algorithms, because I have not studied them enough. As I see it, evolution is driven by change in the environment. How well a simulation corresponds to evolution would depend on how realistically it simulates environmental change.
Neil Rickert
November 23, 2013 at 09:35 PM PDT
If you really can’t see that a search for an ultra-specific pre-specified sequence with no fitness function at all is not a very good analog for biological evolution then I really can’t see the point of corresponding.
On the contrary, the expected result of "a search for an ultra-specific pre-specified sequence with no fitness function at all" is exactly what we see in the wild today. There is no fitness function evolving new proteins in the present day. The model I suggest accords with what is actually observed, not with what evolutionary biologists speculate happened in the past. Why the present day doesn't line up with the claims of evolutionary biology is something evolutionary biologists have to contend with. But sensible theory and real-time or near-real-time data are not cooperating with the claims of evolutionary biologists. I'd find it more palatable if evolutionary biologists admitted there was an unknown mechanism that is the primary cause of protein evolution. To keep insisting it is something like selection is to insist on something contrary to theory and empirical evidence.
scordova
November 23, 2013 at 09:17 PM PDT
If you really can’t see that a search for an ultra-specific pre-specified sequence with no fitness function at all is not a very good analog for biological evolution then I really can’t see the point of corresponding.
This is nonsense on the face of it. 1. There is a fitness function: either you pass or you don't. 2. All sequences are pre-specified, whether there is only one or many. In biology, any existing DNA sequence is specified by virtue of having been selected by whatever method. If it can be found, it follows that it was in the search space, whether or not anybody knew it beforehand. Your objection is noted and rejected.
Mapou
November 23, 2013 at 08:25 PM PDT
If you really can't see that a search for an ultra-specific pre-specified sequence with no fitness function at all is not a very good analog for biological evolution then I really can't see the point of corresponding.
wd400
November 23, 2013 at 07:59 PM PDT
Of related note: some proteins are now shown to be absolutely irreplaceable in their specific biological/chemical reactions for the first cell.

Without enzyme, biological reaction essential to life takes 2.3 billion years: UNC study: In 1995, Wolfenden reported that without a particular enzyme, a biological transformation he deemed "absolutely essential" in creating the building blocks of DNA and RNA would take 78 million years. "Now we've found a reaction that – again, in the absence of an enzyme – is almost 30 times slower than that," Wolfenden said. "Its half-life – the time it takes for half the substance to be consumed – is 2.3 billion years, about half the age of the Earth. Enzymes can make that reaction happen in milliseconds." http://www.med.unc.edu/www/news/2008-news-archives/november/without-enzyme-biological-reaction-essential-to-life-takes-2-3-billion-years-unc-study/?searchterm=Wolfenden

"Phosphatase speeds up reactions vital for cell signalling by 10^21 times. Allows essential reactions to take place in a hundredth of a second; without it, it would take a trillion years!" Jonathan Sarfati http://www.pnas.org/content/100/10/5607.abstract
bornagain77
November 23, 2013 at 07:47 PM PDT
I know the objection to the link below, so here: objection overruled! Because "the fitness function defines the problem to be solved, not the way to solve it, and it therefore makes little sense to talk about the programmer fine-tuning the fitness function in order to solve the problem." Antenna using Genetic algorithms
selvaRajan
November 23, 2013 at 07:44 PM PDT
Coincidentally, I was just reading about Lenski's work. It's interesting in that actual data and observation are involved rather than simply hopeful conjectures, evolving expectations, and emphatic arm-waving. I was left with the following impressions:
- The bacteria adapted and optimized for their environment.
- All the adaptations involved a loss or degradation of function or complexity.
- No novel structures formed: no light-sensitive spots, no injection mechanisms, no new motive structures.
- Perhaps the experiment should have involved a greater variety of environmental challenges.
- Introducing and tracking DNA fragments from dead organisms should also be interesting.
-Q
Querius
November 23, 2013 at 07:29 PM PDT
From NFL applicability:

NFL applies only to algorithms meeting the following conditions:
• The algorithm must be a black-box algorithm, i.e. it has no knowledge about the problem it is trying to solve other than the underlying structure of the phase space and the values of the fitness function at the points it has already visited.
• In principle, there must be a finite number of points in the phase space and a finite number of possible fitness values. In practice, however, continuous variables can be approximated by rounding to discrete values.
• The algorithm must not visit the same point twice. This can be avoided by having the algorithm keep a record of all the points it has visited so far, with their fitness values, so it can avoid repeated visits to a point. This may not be practical in a real computer program, but most real phase spaces are sufficiently vast that revisits are unlikely to occur often, so we can ignore this issue.
• The fitness function may remain fixed throughout the execution of the program, or it may vary over time in a manner which is independent of the progress of the algorithm. These two options correspond to Wolpert and Macready's Theorems 1 and 2 respectively. However, the fitness function may not vary in response to the progress of the algorithm. In other words, the algorithm may not deform the fitness landscape.

5.3 The No Free Lunch Theorems

NFL is not applicable to biological evolution, because biological evolution cannot be represented by any algorithm which satisfies the conditions given above. Unlike simpler evolutionary algorithms, where reproductive success is determined by a comparison of the innate fitness of different individuals, reproductive success in nature is determined by all the contingent events occurring in the lives of the individuals. The fitness function cannot take these events into account, because they depend on interactions with the rest of the population and therefore on the characteristics of other organisms which are also changing under the influence of the algorithm. In other words, the fitness function of biological organisms changes over time in response to changes in the population (of the same species and of other species), violating the final condition listed above.
selvaRajan
November 23, 2013 at 06:57 PM PDT
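The averaging claim behind NFL (the static-fitness Theorem 1 quoted above) can be verified exhaustively on a toy problem. The three-point space and the two fixed visit orders below are illustrative assumptions; fixed non-revisiting orders are a simple special case of the black-box algorithms the theorem covers:

```python
from itertools import product

# Toy NFL check: a search space of 3 points with fitness values in {0, 1}.
# Two fixed, non-revisiting visit orders stand in for two search algorithms.
ORDER_A = [0, 1, 2]
ORDER_B = [2, 0, 1]

def best_after_k(order, f, k):
    """Best fitness found after k evaluations along a fixed visit order."""
    return max(f[p] for p in order[:k])

all_fitness_functions = list(product([0, 1], repeat=3))  # all 8 of them

for k in (1, 2, 3):
    avg_a = sum(best_after_k(ORDER_A, f, k) for f in all_fitness_functions)
    avg_b = sum(best_after_k(ORDER_B, f, k) for f in all_fitness_functions)
    # Averaged over every possible fitness function, the two "algorithms"
    # perform identically -- no free lunch.
    print(k, avg_a / 8, avg_b / 8)
```

Averaged over all fitness functions, no visit order beats any other; an algorithm only wins on a particular problem by exploiting information about that problem.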
They demonstrate the space of solutions is small to begin with, and when in the context of complex systems with matching parts, the solutions space for a given protein is even smaller, not to mention all the other necessary features that need to be in place such as regulation.
A very important point. Even if the (alleged) algorithm of NS stumbles upon a protein which the system needs, the new protein will be disruptive to homeostasis and subsequently kill the organism if it is not perfectly regulated. A most important point which is often ignored by our Darwinian friends.
Box
November 23, 2013 at 05:55 PM PDT
If genetic algorithms can’t solve passwords, why should they be expected to solve complex proteins. That’s a pretty great non sequitur.
No it's not, because a functional protein is composed of an alphabet of amino-acid "letters". Only certain combinations of letters are functional, just like only certain combinations of characters are functional passwords for a given system. It is amazing you can't see the analogy, since you know proteins are described with alphabetic characters. If a system needs insulin proteins, bone morphogenic proteins won't do.
If you think this example (an unknown pre-specified target and no fitness landscape) has anything to do with evolutionary biology you’ll need to explain how.
Physics and engineering principles pre-specify what will and what will not be functional ahead of time. They demonstrate the space of solutions is small to begin with, and in the context of complex systems with matching parts, the solution space for a given protein is even smaller, not to mention all the other necessary features that need to be in place, such as regulation.
And organisms don’t “solve” protein structures.
Agreed; they either maintain existing proteins or lose them. We don't observe organisms creating new proteins of any serious degree of complexity above their ancestral form in the field or lab, do we? If the evolutionists didn't like my single-password challenge to Dave Thomas, I could have made a space of possible solutions with multiple passwords. If the suite of passwords is collectively difficult to solve, he still won't be getting a free lunch.
scordova
November 23, 2013 at 05:36 PM PDT
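The password/protein analogy in scordova's comment above comes down to comparable search-space sizes; a rough sketch (the 100-residue protein length is an assumed, deliberately small figure):

```python
import math

# Rough search-space sizes for the analogy: a 40-character uppercase
# password vs. an assumed, deliberately small 100-residue protein drawn
# from the 20 standard amino acids.
password_space = 26 ** 40
protein_space = 20 ** 100

for label, space in (("26^40 password", password_space),
                     ("20^100 protein", protein_space)):
    print(f"{label}: about 10^{math.log10(space):.0f} "
          f"({math.log2(space):.0f} bits)")
```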
If a chef creates a new secret recipe for a special sauce, does he or she create new information? I think so. I can easily imagine an intelligent robot doing the same thing.
Mapou
November 23, 2013 at 05:22 PM PDT
In my opinion, an AI can increase information by asking simple questions such as "what would happen if I combine A with B and/or C?"
Mapou
November 23, 2013 at 05:18 PM PDT
If genetic algorithms can't solve passwords, why should they be expected to solve complex proteins. That's a pretty great non sequitur. If you think this example (an unknown pre-specified target and no fitness landscape) has anything to do with evolutionary biology you'll need to explain how. And organisms don't "solve" protein structures.
wd400
November 23, 2013 at 05:17 PM PDT
Alan Fox:
Evolutionary processes are not searches.
Wow. This is the first I've heard of this. Of course evolutionary processes are searches. They don't know what they are looking for in particular, but they are certainly searching for anything that survives in an astronomically large search space. "Needle in a haystack" does not do it justice.
Mapou
November 23, 2013 at 05:11 PM PDT
Richard Lenski’s work shows the power of evolutionary processes in an asexual environment. Given eukaryotes and sexual reproduction, the sky’s the limit!
Lenski's experiments resulted in one (!) significant change (usage of citric acid as a carbon source) after 31,500 generations of bacteria. And this wasn't even a "new invention", since the bacteria already had this ability when no oxygen was present. The change was caused by a simple gene duplication that moved the citT gene to a promoter that's active in the presence of oxygen, which then allowed the gene to be expressed in this strain. So, is this supposed to be the "power" of evolutionary processes?
Sebestyen
November 23, 2013 at 04:39 PM PDT
There are an infinite number of ways to build houses of cards; that doesn't make them highly probable based on random positions, orientations, and initial velocities of cards. The protein problem, the password problem, and the house-of-cards problem have comparable statistics of improbability. Lenski's work is an embarrassment to evolutionary claims. ID proponents love his work!
scordova
November 23, 2013 at 04:07 PM PDT
Neil,
Evolution is not a search. Confining it to a fixed formal setup is unrealistic.
What makes it unrealistic? Can you explain to me the sort of things we cannot or should not expect to evolve, given your objection? What is it about highly specified passwords in the situation Sal has described that makes evolution incapable of 'finding the solution'? And would your position therefore be that 'genetic algorithms' don't have much to do with evolution? Those are, after all, searches.
nullasalus
November 23, 2013 at 04:01 PM PDT
Thanks for the post at 20, Chance: https://uncommondescent.com/computer-science/dawkins-weasel-vs-blind-search-simplified-illustration-of-no-free-lunch-theorems/#comment-481017 I strongly agree with you. i.e. "Information does not magically materialize" (William Dembski)
bornagain77
November 23, 2013 at 03:51 PM PDT
Maybe it will invent a hacksaw. Or maybe it will invent some acid that dissolves the lock on the briefcase. Evolution is not a search. Confining it to a fixed formal setup is unrealistic.
The problem is that life implements extravagant solutions (like proteins necessary for multicellular life). If life evolved, there needs to be an explanation for why it chose the extravagant solutions, and why evolution in the present day is disposing of extravagance rather quickly (Behe's rule).
scordova
November 23, 2013, 03:48 PM PDT
Chance,
Depending on how one calculates the CSI of 2000 coins (https://uncommondescent.com/computer-science/the-paradox-in-calculating-csi-numbers-for-2000-coins/), one might get different answers to the question of AI in an open environment.
This automaton is fascinating: see Jaquet-Droz The Writer, and Video: The Writer. It produces a handwritten message with ink and quill on paper, and it's a product of clock-making genius. However, it doesn't actually produce any information; rather, it's programmed with cams and cam followers to transfer preexisting information from one form/medium to another.
Very similar to Eric Anderson's question about the copies of War and Peace that inspired the 2000-coin paradox. One view would say more copies create more CSI; another view would say not. You'll notice there isn't agreement about how much CSI there is in 2000 coins -- that is the point. That is the source of the irresolution on this question... I wrote this thread to point out that there is one area, closed systems, where the NFL arguments clearly are in play and are relevant to evolution. I think other areas may not be so clear cut.

scordova
November 23, 2013, 03:45 PM PDT
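The two views scordova describes for n identical copies of a specified sequence can be tallied in a few lines. This is my own toy illustration, not a definition from the thread: the "compressed" view charges only a description-length bonus of about log2(n) bits for the extra copies.

```python
from math import log2

# Toy tally of the CSI-of-copies disagreement (illustrative only).

BITS_PER_SEQUENCE = 2000  # 2000 fair coin flips specify 2000 bits
n_copies = 3

# View 1: each copy counts in full, so more copies means more CSI.
additive_bits = n_copies * BITS_PER_SEQUENCE

# View 2: copies are compressible; describing "the sequence, n times"
# costs the sequence itself plus roughly log2(n) extra bits.
compressed_bits = BITS_PER_SEQUENCE + log2(n_copies)

print(additive_bits, round(compressed_bits, 2))  # 6000 2001.58
```

The gap between 6000 bits and roughly 2001.6 bits for the same physical arrangement is exactly the kind of irresolution the 2000-coin thread was pointing at.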
as to:
"But evolutionary processes don’t find particular solutions like passwords. They find any solution!"
Well, 'any solution' (not necessarily the solution to the immediate problem evolution NEEDS solved at a particular time) would still be a 1 in 10^70 to 1 in 10^77 chance:
Proteins Did Not Evolve Even According to the Evolutionist’s Own Calculations but so What, Evolution is a Fact - Cornelius Hunter - July 2011 Excerpt: For instance, in one case evolutionists concluded that the number of evolutionary experiments required to evolve their protein (actually it was to evolve only part of a protein and only part of its function) is 10^70 (a one with 70 zeros following it). Yet elsewhere evolutionists computed that the maximum number of evolutionary experiments possible is only 10^43. Even here, giving the evolutionists every advantage, evolution falls short by 27 orders of magnitude. The theory, even by the evolutionist’s own reckoning, is unworkable. Evolution fails by a degree that is incomparable in science. Scientific theories often go wrong, but not by 27 orders of magnitude. And that is conservative. http://darwins-god.blogspot.com/2011/07/response-to-comments-proteins-did-not.html Estimating the prevalence of protein sequences adopting functional enzyme folds: Doug Axe: Excerpt: The prevalence of low-level function in four such experiments indicates that roughly one in 10^64 signature-consistent sequences forms a working domain. Combined with the estimated prevalence of plausible hydropathic patterns (for any fold) and of relevant folds for particular functions, this implies the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences. 
http://www.ncbi.nlm.nih.gov/pubmed/15321723 Correcting Four Misconceptions about my 2004 Article in JMB — May 4th, 2011 by Douglas Axe http://www.biologicinstitute.org/post/19310918874/correcting-four-misconceptions-about-my-2004-article-in Show Me: A Challenge for Martin Poenie - Douglas Axe August 16, 2013 Excerpt: Poenie wants to be free to appeal to evolutionary processes for explaining past events without shouldering any responsibility for demonstrating that these processes actually work in the present. That clearly isn't valid. Unless we want to rewrite the rules of science, we have to assume that what doesn't work didn't work. It isn't valid to think that evolution did create new enzymes if it hasn't been demonstrated that it can create new enzymes. And if Poenie really thinks this has been done, then I'd like to present him with an opportunity to prove it. He says, "Recombination can do all the things that Axe thinks are impossible." Can it really? Please show me, Martin! I'll send you a strain of E. coli that lacks the bioF gene, and you show me how recombination, or any other natural process operating in that strain, can create a new gene that does the job of bioF within a few billion years. http://www.evolutionnews.org/2013/08/a_challenge_for075611.html
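The "27 orders of magnitude" in the Hunter excerpt above is just the gap between the two quoted exponents, which can be checked directly (a sketch; both input figures come from the quote, not from any calculation of mine):

```python
from math import log10

# Figures taken from the Hunter excerpt quoted above.
trials_needed = 10 ** 70    # evolutionary experiments required
trials_possible = 10 ** 43  # maximum experiments available

shortfall = log10(trials_needed) - log10(trials_possible)
print(shortfall)  # 27.0 -- the "27 orders of magnitude" in the quote
```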
But hey, even if we take the extremely unrealistic low-end probability estimate of evolutionists (1 in a trillion) for finding a specific protein domain, we would still be faced with this completely absurd scenario:
How Proteins Evolved - Cornelius Hunter - December 2010 Excerpt: Comparing ATP binding with the incredible feats of hemoglobin, for example, is like comparing a tricycle with a jet airplane. And even the one in 10^12 shot, though it pales in comparison to the odds of constructing a more useful protein machine, is no small barrier. If that is what is required to even achieve simple ATP binding, then evolution would need to be incessantly running unsuccessful trials. The machinery to construct, use and benefit from a potential protein product would have to be in place, while failure after failure results. Evolution would make Thomas Edison appear lazy, running millions of trials after millions of trials before finding even the tiniest of function. http://darwins-god.blogspot.com/2010/12/how-proteins-evolved.html
But that absurd scenario is not what we find in biological life; instead we find an extreme level of fidelity in protein synthesis:
The Ribosome: Perfectionist Protein-maker Trashes Errors Excerpt: The enzyme machine that translates a cell's DNA code into the proteins of life is nothing if not an editorial perfectionist...the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products... To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is "shocking" and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis. http://www.sciencedaily.com/releases/2009/01/090107134529.htm
And exactly how is the evolution of new life forms supposed to 'randomly' occur if it is prevented from 'randomly' occurring to the proteins in the first place? Indeed, I want to know how this marvel of the ribosome, the only known protein factory in the world, came to be in the first place:
LIFE: WHAT A CONCEPT! Excerpt: The ribosome,,,, it's the most complicated thing that is present in all organisms.,,, you find that almost the only thing that's in common across all organisms is the ribosome.,,, So the question is, how did that thing come to be? And if I were to be an intelligent design defender, that's what I would focus on; how did the ribosome come to be? George Church - Harvard Wyse Institute http://www.edge.org/documents/life/church_index.html Honors to Researchers Who Probed Atomic Structure of Ribosomes - Robert F. Service Excerpt: "The ribosome’s dance, however, is more like a grand ballet, with dozens of ribosomal proteins and subunits pirouetting with every step while other key biomolecules leap in, carrying other dancers needed to complete the act.” http://creationsafaris.com/crev200910.htm#20091010a Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems - 2012 David J D’Onofrio1*, David L Abel2* and Donald E Johnson3 Excerpt: The DNA polynucleotide molecule consists of a linear sequence of nucleotides, each representing a biological placeholder of adenine (A), cytosine (C), thymine (T) and guanine (G). This quaternary system is analogous to the base two binary scheme native to computational systems. As such, the polynucleotide sequence represents the lowest level of coded information expressed as a form of machine code. Since machine code (and/or micro code) is the lowest form of compiled computer programs, it represents the most primitive level of programming language.,,, An operational analysis of the ribosome has revealed that this molecular machine with all of its parts follows an order of operations to produce a protein product. This order of operations has been detailed in a step-by-step process that has been observed to be self-executable. 
The ribosome operation has been proposed to be algorithmic (Ralgorithm) because it has been shown to contain a step-by-step process flow allowing for decision control, iterative branching and halting capability. The R-algorithm contains logical structures of linear sequencing, branch and conditional control. All of these features at a minimum meet the definition of an algorithm and when combined with the data from the mRNA, satisfy the rule that Algorithm = data + control. Remembering that mere constraints cannot serve as bona fide formal controls, we therefore conclude that the ribosome is a physical instantiation of an algorithm.,,, The correlation between linguistic properties examined and implemented using Automata theory give us a formalistic tool to study the language and grammar of biological systems in a similar manner to how we study computational cybernetic systems. These examples define a dichotomy in the definition of Prescriptive Information. We therefore suggest that the term Prescriptive Information (PI) be subdivided into two categories: 1) Prescriptive data and 2) Prescribed (executing) algorithm. It is interesting to note that the CPU of an electronic computer is an instance of a prescriptive algorithm instantiated into an electronic circuit, whereas the software under execution is read and processed by the CPU to prescribe the program’s desired output. Both hardware and software are prescriptive. http://www.tbiomed.com/content/pdf/1742-4682-9-8.pdf
bornagain77
November 23, 2013, 03:44 PM PDT