Uncommon Descent - Serving The Intelligent Design Community

On the non-evolution of Irreducible Complexity – How Arthur Hunt Fails To Refute Behe


I do enjoy reading ID's most vehement critics, both in formal publications (such as books and papers) and on the somewhat less formal Internet blogosphere. Part of the reason is the reassurance that comes from observing the vacuous nature of many of the critics' attempted rebuttals to the challenge that ID poses to neo-Darwinism, and from watching them compensate for its sheer lack of explanatory power with the religious ferocity of the associated rhetoric (to paraphrase Lynn Margulis). The prevalent pretense that the causal sufficiency of neo-Darwinism is an open-and-shut case (when no such open-and-shut case for the affirmative exists) never ceases to amuse me.

One such forum where esteemed critics lurk is the Panda's Thumb blog, a website devoted to holding the Darwinian fort and endorsed by the National Center for Selling Evolution Science Education (NCSE). Since many of the Darwinian heavy guns blog for this website, we may conclude that, if demonstrably faulty arguments are common play there, the front-line Darwinism defense lobby is in deep water.

Recently, someone referred me to two articles (one, two) on the Panda's Thumb website (from back in 2007), by Arthur Hunt (professor in the Department of Plant and Soil Sciences at the University of Kentucky). The first is entitled "On the evolution of Irreducible Complexity"; the second, "Reality 1, Behe 0" (the latter posted shortly following the publication of Behe's second book, The Edge of Evolution).

The articles purport to refute Michael Behe’s notion of irreducible complexity. But, as I intend to show here, they do nothing of the kind!

In his first article, Hunt begins,

There has been a spate of interest in the blogosphere recently in the matter of protein evolution, and in particular the proposition that new protein function can evolve. Nick Matzke summarized a review (reference 1) on the subject here. Briefly, the various mechanisms discussed in the review include exon shuffling, gene duplication, retroposition, recruitment of mobile element sequences, lateral gene transfer, gene fusion, and de novo origination. Of all of these, the mechanism that received the least attention was the last – the de novo appearance of new protein-coding genes basically “from scratch”. A few examples are mentioned (such as antifreeze proteins, or AFGPs), and long-time followers of ev/cre discussions will recognize the players. However, what I would argue is the most impressive of such examples is not mentioned by Long et al. (1).

There is no need to discuss the cited Long et al. (2003) paper in any great detail here, as this has already been done by Casey Luskin here (see also Luskin’s further discussion of Anti-Freeze evolution here), and I wish to concern myself with the central element of Hunt’s argument.

Hunt continues,

Below the fold, I will describe an example of de novo appearance of a new protein-coding gene that should open one’s eyes as to the reach of evolutionary processes. To get readers to actually read below the fold, I’ll summarize – what we will learn of is a protein that is not merely a “simple” binding protein, or one with some novel physicochemical properties (like the AFGPs), but rather a gated ion channel. Specifically, a multimeric complex that: 1. permits passage of ions through membranes; 2. and binds a “trigger” that causes the gate to open (from what is otherwise a “closed” state). Recalling that Behe, in Darwin’s Black Box, explicitly calls gated ion channels IC systems, what the following amounts to is an example of the de novo appearance of a multifunctional, IC system.

Hunt is making big promises. But does he deliver? Let me briefly summarise the gist of Hunt's argument, and then weigh in on it.

The cornerstone of Hunt's argument is the gene T-urf13 which, contra Behe's delineated 'edge' of evolution, is supposedly a de novo mitochondrial gene that evolved very rapidly from other genes specifying rRNA, together with some non-coding DNA elements. The gene specifies a transmembrane protein that facilitates the passage of hydrophilic molecules across the mitochondrial membrane in maize, opening only when bound on the exterior by particular molecules.

The protein is specific to the mitochondria of maize carrying Texas male-sterile cytoplasm, and it has been implicated in causing male sterility and sensitivity to T-cytoplasm-specific fungal diseases. Two portions of the T-urf13 gene are homologous to other sequences in the maize genome, with a further component being of unknown origin. Hunt maintains that this demonstrates that the gene evolved by Darwinian-like means.

Hunt further maintains that T-urf13 embodies at least three "CCCs" (recall Behe's argument, advanced in The Edge of Evolution, that a double "CCC" is unlikely to be feasible by a Darwinian pathway). Two of these "CCCs", Hunt argues, arise from the binding of each subunit to at least two other subunits in order to form the heteromeric complex in the membrane. This entails that each subunit must possess at least two protein-binding sites.

Hunt argues for the presence of yet another “CCC”:

[T]he ion channel is gated. It binds a polyketide toxin, and the consequence is an opening of the channel. This is a third binding site. This is not another protein binding site, and I rather suppose that Behe would argue that this isn’t relevant to the Edge of Evolution. But the notion of a “CCC” derives from consideration of changes in a transporter (PfCRT) that alter the interaction with chloroquine; toxin binding by T-urf13 is quite analogous to the interaction between PfCRT and chloroquine. Thus, this third function of T-urf13 is akin to yet another “CCC”.

He also notes that,

It turns out that T-urf13 is a membrane protein, and in membranes it forms oligomeric structures (I am not sure if the stoichiometries have been firmly established, but that it is oligomeric is not in question). This is the first biochemical trait I would ask readers to file away – this protein is capable of protein-protein interactions, between like subunits. This means that the T-urf13 polypeptide must possess interfaces that mediate protein-protein interactions. (Readers may recall Behe and Snokes, who argued that such interfaces are very unlikely to occur by chance.)

[Note: The Behe & Snoke (2004) paper is available here, and their response (2005) to Michael Lynch’s critique is available here.]

Hunt tells us that "the protein dubbed T-urf13 had evolved, in one fell swoop by random shuffling of the maize mitochondrial genome." If three CCCs really evolved in "one fell swoop" by specific but random mutations, then Behe's argument is in trouble. But does any of the research described by Hunt make any progress towards demonstrating that this is even plausible? Short answer: no.

Hunt does have a go at guesstimating the probabilistic plausibility of such an event of neo-functionalisation taking place. He tells us, "The bottom line – T-urf13 consists of at least three 'CCCs'. Running some numbers, we can guesstimate that T-urf13 would need about 10^60 events of some sort in order to occur."

Look at what Hunt concludes:

Now, recall that we are talking about, not one, but a minimum of three CCC’s. Behe says 1 in 10^60, what actually happened occurred in a total event size of less that 10^30. Obviously, Behe has badly mis-estimated the “Edge of Evolution”. Briefly stated, his “Edge of Evolution” is wrong. [Emphasis in original]
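Before weighing in, it helps to see where the two exponents in this exchange come from. The following is a back-of-envelope sketch added for illustration, assuming Behe's per-CCC estimate of roughly 1 in 10^20 (an assumption imported from The Edge of Evolution, not from Hunt's post) and treating the three CCCs as independent events, which is how a figure like 1 in 10^60 arises:

```python
# Illustrative arithmetic only. Assumptions: Behe's per-CCC odds of ~1 in 1e20,
# and independence of the three CCCs that Hunt counts in T-urf13.
p_single_ccc = 1e-20            # assumed odds of one chloroquine-complexity cluster
n_cccs = 3                      # Hunt counts at least three CCCs in T-urf13

p_three_cccs = p_single_ccc ** n_cccs    # ~1e-60, matching the quoted "1 in 10^60"
available_events = 1e30                  # Hunt's stated upper bound on the event count

shortfall = (1 / p_three_cccs) / available_events
print(f"Combined odds for three CCCs: 1 in {1 / p_three_cccs:.0e}")
print(f"Events available (per Hunt):  {available_events:.0e}")
print(f"Shortfall factor:             {shortfall:.0e}")     # ~1e30
```

The disagreement that follows is therefore not over this arithmetic, but over whether the origin of T-urf13 really did require three independent CCC-level steps.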

Readers trained in basic logic will take quick note of the circularity involved in this argumentation. Does Hunt offer any evidence that T-urf13 could have plausibly evolved by a Darwinian-type mechanism? No, he doesn’t. In fact, he casually dismisses the mathematics which refutes his whole argument. Here we have a system with a minimum of three CCCs, and since he presupposes as an a priori principle that it must have a Darwinian explanation, this apparently refutes Behe’s argument! This is truly astonishing argumentation. Yes, certain parts of the gene have known homologous counterparts. But, at most, that demonstrates common descent (and even that conclusion is dubious). But a demonstration of homology, or common ancestral derivation, or a progression of forms is not, in and of itself, a causal explanation. Behe himself noted in Darwin’s Black Box, “Although useful for determining lines of descent … comparing sequences cannot show how a complex biochemical system achieved its function—the question that most concerns us in this book.” Since Behe already maintains that all life is derivative of a common ancestor, a demonstration of biochemical or molecular homology is not likely to impress him greatly.

How, then, might Hunt and others successfully show Behe to be wrong about evolution? It's very simple: show that adequate probabilistic resources existed to facilitate the plausible origin of these types of multi-component-dependent systems. If, indeed, fitness peaks are separated from one another by more than a few specific mutations, it remains difficult to envision how the Darwinian mechanism might adequately facilitate the transition from one peak to another within any reasonable time frame. Douglas Axe, of the Biologic Institute, showed in a recent paper in the journal BIO-Complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalisation. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is about six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not including the duplication itself). Another study, published in Nature in 2001 by Keefe & Szostak, documented that more than a million million random sequences had to be searched in order to stumble upon a functioning ATP-binding protein, a protein substantially smaller than the transmembrane protein specified by the gene T-urf13. Douglas Axe has also documented (2004), in the Journal of Molecular Biology, the prohibitive rarity of functional enzymatic folds within the vast combinatorial sequence space of a 150 amino-acid domain (beta-lactamase).
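To put the figures cited in this paragraph on a common scale, here is a small calculation added for illustration; the odds are simply taken from the citations as read above (they are assumptions of the sketch, not re-derived values), and each is converted into bits, i.e. log2 of the odds:

```python
import math

# Odds as cited in the paragraph above; treated here as given, not re-derived.
cited_odds = {
    "Keefe & Szostak (2001): random sequences per ATP binder": 1e12,
    "Hunt's guesstimate for T-urf13 (three CCCs)": 1e60,
}

for label, odds in cited_odds.items():
    bits = math.log2(odds)      # rarity expressed as bits of specified information
    print(f"{label}: ~1 in {odds:.0e}, i.e. about {bits:.0f} bits")
```

Nothing in this sketch settles the biology; it only makes explicit how far apart the cited rarities are.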

What, then, can we conclude? Contrary to his claims, Hunt has failed to provide a detailed and rigorous account of the origin of T-urf13. He supplies no mathematical demonstration that the de novo origin of such genes is sufficiently probable to be justifiably attributed to an unguided or random process, nor does he demonstrate that a step-wise pathway exists in which novel utility is conferred at every step (with steps separated by no more than one or two mutations) on the way to the emergence of the T-urf13 gene.

The Panda’s Thumb are really going to have to do better than this if they hope to refute Behe!

Comments
MathGrrl [177]:
Ray’s Tierra routinely evolves digital organisms with a number of specifications. One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The length of the shortest parasite is at least 22 bytes. By your definition, CSI has been generated by known, observed evolutionary mechanisms with no intelligent agency required.
This statement really surprises me. You say you're familiar with Dembski's work and, in particular, No Free Lunch. Yet anyone with even a fleeting acquaintance with Dembski's work would know that, no matter how well something is 'specified', it does not constitute CSI unless its "complexity" is of the order of 1 in 10^150. In terms of bits, that's around 500. You say 22 bytes; I'll assume 8 bits to the byte, which gives 176 bits. This is well below the required 500 bits. You should know this. Why don't you?

As to Tierra, I looked at a PowerPoint slideshow Ray provides. Excuse me, but it's not impressive in the least. And how is it that he was looking for "parasites"? Because "parasites" feed off other organisms, and that was the only way he could end up with anything at all (i.e., he couldn't really produce novel function, only one thing piggy-backing on another thing's function, said function having been inputted). And what was the process by which this wonderful creature, this parasite, "evolved"? By wholesale elimination of some subroutines, or some such thing: that is, by the ELIMINATION of information! This is, of course, exactly what Behe proposes that "evolution" does when faced with a challenge (see his QRB paper). These programs all have hidden information in them. Dembski and Marks are doing wonderful work pointing all this out.

In the meantime, you keep insisting on knowing the who, what, where, when and how of the Designer's designing, and insisting you be given a mathematically rigorous definition of CSI. As to the "what, where, when and how", let me ask you a very simple question: we know that the Cambrian Explosion happened (the fossil record bears this out). Please, then, tell me exactly how, where, using what, and when the great proliferation of body plans (basically phyla) came about. To tell me that "Darwinian evolution" did it is to tell me nothing, for you first assume these mechanisms and then simply turn around and posit them as an explanation. Please give me extensive evidence documenting just how Darwinian processes brought this about. If you can't, then I think all your protestations here have been highly disingenuous.

As to the definition: Dembski's definition, if properly understood, is sufficiently useful and sufficiently rigorous. As to Schneider, he has no conception at all of what a "specification" is. As to why mathematicians of supposed worth have such difficulty understanding it . . . well, one is only left to surmise that it is because they don't care to understand it. I suspect you're really and truly in this group. I had a go-around online with none other than Jeffrey Shallit about such things, and he, too, had no conception whatever of what a "specification" is. In fact, when I asked him what he considered to be a specification, the first example he tossed out as 'coming off the top of his head' turned out to lead to a near impossibility, forcing him to redefine his original specification. But his revised specification also had its problems, which I very happily pointed out to him. This was after I pointed out his error in an example he gave as an instance of a very simple algorithm actually producing CSI. The problem your side has is that it doesn't understand what a "specification" is.

So far, I've only seen one person who seems to really understand it (I think it was Sober), and he didn't question Dembski about the triviality that most on your side take to be a "specification", but rather questioned Dembski's assumption that biologically functioning entities are "specified". To some degree this was a fair criticism; but a fair-minded reader would not, I think, see it as much of a problem. One final thing, as to Shallit: I gave him two 150-bit strings, one randomly generated, the other humanly generated (CSI). He couldn't tell the difference. Here's what a "specification" truly is: once I tell you which of the two strings possesses CSI, and 'how' it was generated, then you can "see" the "pattern". IOW, the bit string can be "translated", or "converted"; whereas a random bit string is meaningless, and "converts" into nothing.
PaV, March 7, 2011 at 02:38 PM PDT
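The arithmetic PaV appeals to in the comment above can be checked directly. The short sketch below is added for illustration (it is not part of the original exchange) and compares Dembski's 10^150 universal probability bound, expressed in bits, with the 22-byte Tierra parasite MathGrrl cites:

```python
import math

upb_odds = 1e150                 # Dembski's universal probability bound, as cited
upb_bits = math.log2(upb_odds)   # ~498 bits, usually rounded to 500

parasite_bits = 22 * 8           # 22 bytes at 8 bits per byte = 176 bits

print(f"Universal probability bound: ~{upb_bits:.0f} bits")
print(f"22-byte Tierra parasite:      {parasite_bits} bits")
print(f"Clears the CSI threshold?     {parasite_bits > upb_bits}")   # False
```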
MathGrrl: The counter to Dr. Schneider's assessment is twofold: (1) his understanding of what a pattern consists of is flawed; and (2) given his example, this flawed understanding is on display. How do I "prove" it is flawed? Well, I simply point to No Free Lunch and how Dembski defines and describes it there. The onus is on those scientists/mathematicians who want to disagree with Dembski to understand him correctly. Just read the book. Secondly, you could have disputed my claim that a pattern, to be "specified", has to have meaning or value, and that in the case of biological function this meaning or value is implicit. You chose not to dispute that. Since you did not dispute it, then, very clearly, unless Schneider can point to some kind of "function" (outside of a computer) that his sequence performs, my criticism of him stands.
PaV, March 7, 2011 at 02:38 PM PDT
Upright BiPed, Thank you for the straightforward answer. Unfortunately, I find it more confusing than enlightening. Many ID proponents, including Dembski and kairosfocus, indicate that CSI can be calculated for real world genomes. Do you disagree? If not, where does the "system" come into the calculation? Why, exactly, is it impossible in principle for evolutionary mechanisms, which are known to modify genomes in populations, to generate some level of CSI?MathGrrl
March 7, 2011 at 10:09 AM PDT
PaV,
I’ll leave off here and await your response.
I appreciate that you took the time to reply, but there isn't enough to respond to in your post. You seem to disagree with Dr. Schneider's assessment, but you haven't provided a rigorous definition of CSI to counter his claims. I would be interested to see how you would calculate CSI for the four scenarios I described above.MathGrrl
March 7, 2011 at 10:08 AM PDT
Joseph,
It is only possible to correctly simulate things you understand.
Evolutionary mechanisms are observed and documented. We understand them sufficiently to model them in simulations. Thus far no one has defined CSI rigorously enough to say the same about it.
Interestingly, Tom Schneider's ev has been shown to be a targeted search
I'm familiar with ev, to the extent that I'm working on a variant of it in my spare time. I assure you that there is no explicit target in the simulation. Where do you get the idea that there is?MathGrrl
March 7, 2011 at 10:08 AM PDT
"Nothing you have said answers the very simple question: Can CSI, by your yet to be detailed definition, be generated by known evolutionary mechanisms, even in principle? No. It requires the system, and the system cannot be accounted for by evolutionary processes - by definition. FSCI is a subset of information. Information requires both semiotic representations and rules in order to exist at all. THAT is the very simple answer to the "very simple question" you ask. You may ignore it, but that does not make it go away.Upright BiPed
Upright BiPed, March 6, 2011 at 05:43 PM PDT
F/N 2: MG, you should not put words in my mouth that do not belong there, twisting what I have repeatedly said into a strawman. Observe, I have pointed out that the challenge is not to move around and climb hills inside islands of function with nice little fitness functions, but to get the high degree of functionally specific complex information required to set up a system on such an island. That is why I have spoken of the 409 kbytes in the ev binary file, and compared it to the quantum of information needed to get a body plan. I then suggested an exercise: write an ev by chance and mechanical necessity, then allow it to move around in an island of function once it has been so formed. That would indeed be possible as a simulation exercise, and it would indeed be comparable, as an informational exercise, to the origin of a body plan by chance plus necessity.

You have raised the issue of weather simulations and whether such sims imply designed weather. Weather is simulated, and such sims have a fair degree of success. But weather is not the result of the claimed spontaneous origin of information systems that work by reading and executing coded information. Instead we are dealing with dynamical systems modelled by sets of differential equations and boundary conditions. Reading and executing coded information is exactly what life is based on. That you would even try to compare the two inadvertently speaks volumes on the failure to understand the issue of digitally coded information in living systems and its significance. That is a sufficiently important distinction to break the analogy you were trying to draw. GEM of TKI
kairosfocus, March 6, 2011 at 05:36 PM PDT
F/N: On ev et al, I have taken abundant time to explain why there is no informational free lunch, and why the whole procedure used begs key questions. I need not repeat myself at this time.kairosfocus
March 6, 2011 at 05:22 PM PDT
MG: Again, DNA functions in living forms, and that function is based on a highly specific organisation of bases. Similarly, proteins specified by DNA fall into fold domains, and carry out specific functions based on the order of their AAs. At a simple level this can be readily measured in functionally specific bits, as with any type of file. At the next level, Durston et al have published, in the peer-reviewed literature, measurements of FSC in FITS for 35 protein families. All of this was pointed out already. All of this you pass over in silence, in your haste to try to impugn Dembski's CSI metric as described here, esp. pp. 17 - 24. (Cf UD WAC 27 here. And BTW, the Durston et al FSC metric is related to this.) You go on to say it has never been applied to a biosystem; then, when you are corrected, you say it is not acceptable by your standards. Do you not see that the above comes across as an exercise in selective hyperskepticism on your part? GEM of TKI
kairosfocus, March 6, 2011 at 05:20 PM PDT
MathGrrl: Here's a quote from the site you linked: "Although there is no information about how I got these, since both have a pattern and both have nice significant sequence logos, we have to conclude that both have 'Specified Complexity'. According to Dembski, we must conclude that they both were generated by an 'intelligent designer'."

This equivocal use of "pattern" seems to be the typical mistake your side makes. A simple specification - a simple string of letters - of amino acids is, in terms of Dembski's formulation, meaningless. For a "pattern" to truly exist it must possess some kind of meaning or value. E.g., roll a die a thousand times: per Shannon information, and per Dembski, the result would be highly "complex"; but it would have NO "specificity". In the example your friend is using, if you compare the two "patterns", they don't match at all. But there is also this other huge difference: one is a REAL amino acid sequence corresponding to a functioning protein, whereas the other is simply some kind of sequence (a monkey could have typed it). That is not specificity.

It seems most opponents of Dembski fail to understand what Dembski means by "specificity", AND they "specifically" (pun intended) fail to understand why Dembski presumes that biological patterns (sequences) are "specified". Isn't it obvious that a coding sequence found in nature has meaning or value, and that it contains true information? The analogue, of course, is the input one would give to a computer: you input a list of letters and symbols and punctuation marks and, lo and behold, the computer does something. But change that sequence - make an error - and nothing happens: the famous "junk in, junk out." I'll leave off here and await your response.
PaV, March 6, 2011 at 05:00 PM PDT
What a joke: MathGrrl: Are you claiming that it is impossible, even in principle, to test evolutionary mechanisms in digital simulations? It is only possible to correctly simulate things you understand.
By the same logic, are you arguing that the weather is a product of intelligent agency since meteorologists make extensive use of intelligently designed simulations?
Only because we have a pretty good understanding of weather patterns.
Yes, I am familiar with No Free Lunch. The explanation of CSI there is not sufficiently mathematically rigorous to test the claims being made here.
Strange - CSI has more mathematical rigor than anything your position has to offer. People in glass houses, and all. Interestingly, Tom Schneider's ev has been shown to be a targeted search and he doesn't like that at all.
Joseph, March 6, 2011 at 04:43 PM PDT
PaV, Yes, I am familiar with No Free Lunch. The explanation of CSI there is not sufficiently mathematically rigorous to test the claims being made here. Interestingly, Tom Schneider of ev fame commented on the same problem, but he attempted to calculate CSI based on Dembski's explanation. His result is here: http://www-lmmb.ncifcrf.gov/~toms/paper/ev/dembski/specified.complexity.html I suspect that most ID proponents will not agree with his conclusions, hence my request here for a more rigorous formulation.MathGrrl
March 6, 2011 at 04:35 PM PDT
MathGrrl:
I have read Dembski’s writings on CSI and, to my knowledge, he has never shown how to calculate it for real biological artifacts taking into consideration observed evolutionary mechanisms.
In NFL (No Free Lunch), Dembski performs a calculation for a biological system. Are you familiar with that calculation?PaV
March 6, 2011 at 04:14 PM PDT
kairosfocus,
How many ways do we need to say: THERE IS NO FREE INFORMATIONAL LUNCH?
You keep saying that, but until you define your terms it is impossible to test your claims.
I repeat: the GAs all set up the comparable algorithmic flowcharts [program execution flow and signalling networks . . . ], putting us on deeply isolated islands of function.
You are still missing the distinction between the simulator itself and the model being simulated. Are you claiming that it is impossible, even in principle, to test evolutionary mechanisms in digital simulations? By the same logic, are you arguing that the weather is a product of intelligent agency since meteorologists make extensive use of intelligently designed simulations? That's not a facetious question. I would genuinely like to know if you think that any testing of models is inherently impossible.MathGrrl
March 6, 2011 at 03:35 PM PDT
vjtorley, Thank you for your courteous response.
I’m not a biologist, but if you’re looking for a mathematically rigorous definition of CSI, may I suggest that you check out the scientific articles listed at this Website of mine: http://www.angelfire.com/linux/vjtorley/ID.html
I've bookmarked your page since it contains a good selection of ID papers. Unfortunately, I have read those that deal with CSI and none of them provide a mathematically rigorous definition that I could use to model and test some of the claims being made here. I also very much appreciate your attempt to answer the who, what, when, where, and how questions. Again, unfortunately, I don't see any empirical observations that objectively indicate that intelligent agency is required for biological evolution. Behe's examples are the closest, but as has been pointed out by several reviewers, his concept of irreducible complexity fails to take into consideration known evolutionary mechanisms. Evolution can proceed by adding components, removing components, or modifying components. Behe ignores the second two processes, which is why exaptation manages to construct "irreducibly complex" structures. Typically a scientific hypothesis is based on some observations that are not adequately explained by the prevailing theories. In the case of ID, my understanding prior to actively participating here at UD was that CSI was observed in real biological systems. It turns out that not only can no ID proponent show that to be the case, there isn't even a rigorous mathematical definition of CSI or any of its variants that would allow such a measurement to be made. I am genuinely interested in testing the claim that CSI beyond a certain level is an indicator of intelligent agency. Without that rigorous definition, I can't do so. Further, without that definition no one can make any claims about what CSI does or does not indicate.MathGrrl
March 6, 2011 at 03:33 PM PDT
Collin,
I wish I could come up with a mathematically rigorous definition of CSI. I thought that Dembski attempted that at one time. I'll admit that I haven't read any of his books.
I have read Dembski's writings on CSI and, to my knowledge, he has never shown how to calculate it for real biological artifacts taking into consideration observed evolutionary mechanisms.
But I would also ask if anyone has ever come up with a mathematically rigorous definition of natural selection. It just seems like a heuristic to me.
Mathematical models are used extensively in various subdisciplines of biology. I recommend looking up "population genetics" for some very interesting examples.MathGrrl
March 6, 2011 at 03:31 PM PDT
kairosfocus,
As to whether an explicit or implicit map of an island of function, joined to a hill climbing algorithm that runs you up the hill is a creation of new information that did not previously exist, I think this summary answers the point. And, this has been also said over and over by various persons.
Nothing you have said answers the very simple question: Can CSI, by your yet to be detailed definition, be generated by known evolutionary mechanisms, even in principle? I tried to use your definition from 167, but you refused to explain why my conclusions based on that definition were incorrect. I ask again, if you disagree with my conclusions, please demonstrate how you would calculate the CSI from the four scenarios I described (gene duplication leading to increased protein production, ev evolving binding sites, Tierra evolving parasites, and GAs evolving solutions to the Steiner Problem). That will provide more details on how to calculate the metric that you claim is indicative of intelligent agency.MathGrrl
March 6, 2011 at 03:29 PM PDT
F/N 2: The other problem is the migration from one general function to the next by small steps, maintaining function all the way. For three letter words, this is easy enough:
man --> can --> tan --> tam --> tat --> cat --> mat --> bat --> bit --> bot . . .
(Such a simple space is indeed spannable by a branching network of stepping stones.) But, when we move up to the level of, say, this post and ask for incremental changes that keep function all the way while writing a new text, we begin to look at impossibilities. Texts of sufficient length naturally come in islands of function. (The same holds for computer programs.) Maybe we can make a duplicate, then let the duplicate tag along with the existing function and vary at will. But then we are looking at needing to cross the functionality threshold in an increasingly large config space, and at the need for integrated function. Another empirical search-space barrier emerges. And so on. Islands of function are credibly the natural situation, and they pose a serious challenge.
kairosfocus, March 6, 2011 at 10:55 AM PDT
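The three-letter word ladder in the comment above can be generated mechanically. The sketch below is added for illustration, with a tiny hard-coded word list standing in for a real dictionary; it runs a breadth-first search for a path of single-letter changes in which every intermediate step remains a 'functional' word:

```python
from collections import deque

# Toy "island of function": the words from the ladder quoted above.
WORDS = {"man", "can", "tan", "tam", "tat", "cat", "mat", "bat", "bit", "bot"}

def neighbours(word):
    """All words in WORDS reachable by changing exactly one letter."""
    for i in range(len(word)):
        for c in "abcdefghijklmnopqrstuvwxyz":
            candidate = word[:i] + c + word[i + 1:]
            if candidate != word and candidate in WORDS:
                yield candidate

def ladder(start, goal):
    """Breadth-first search for the shortest all-functional path."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(ladder("man", "bot")))   # man -> mat -> bat -> bit -> bot
```

The question the comment raises is whether this kind of connectedness, easy to arrange among three-letter words, persists when the functional texts are hundreds of kilobits long.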
F/N: Un-begging the question is simple to do, as with the second law of thermodynamics and perpetuum mobiles of the second kind: empirically set up an infinite monkeys test, in silico or in vivo or in vitro, and let's see how it goes. Remember, the issue is to get to at least 1,000 bits of functional information by chance and mechanical necessity without intelligent direction. Here is the Wikipedia article on the results to date:
The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[19] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d... Due to processing power limitations, the program uses a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator "detects a match" (that is, the RNG generates a certain value or a value within a certain range), the simulator simulates the match by generating matched text . . .
In short, a space of order 10^50 is searchable, but we are talking about spaces of order 10^300, in a cosmos that could have perhaps 10^150 states.kairosfocus
March 6, 2011 at 10:28 AM PDT
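The search-space figures quoted above (a 24-character match, 10^50 as a practically searchable space, 10^300 as an unsearchable one) can be sanity-checked with elementary counting. This calculation is added for illustration and assumes a 26-letter alphabet with spaces and punctuation ignored:

```python
import math

ALPHABET = 26                  # assumption: letters only, no spaces or punctuation

def configs(length, alphabet=ALPHABET):
    """Number of distinct strings of the given length."""
    return alphabet ** length

# Space searched for the 24-character partial match quoted from Wikipedia.
print(f"26^24  = 10^{math.log10(configs(24)):.1f}")     # about 10^34

# The 1,000-bit threshold used in the thread, expressed on the same scale.
print(f"2^1000 = 10^{1000 * math.log10(2):.1f}")        # about 10^301
```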
Dr Bot: Pardon, but the questions ARE being begged. Once we have a known, causally sufficient source for functionally specific, complex information, and analytical reasons -- backed up by the infinite monkeys tests -- to doubt that the other major source of high contingency (i.e., chance) is causally sufficient, we have a good reason to see that such FSCI is a good sign of intelligent cause.

When it comes to GAs and the like, I repeat, I am sorry, but the question is being begged. Again, you have a complex, functionally specific system, and you incorporate in it a sandbox-level (comparatively speaking) random element that you use to walk around and hill-climb. Problem is, you have an intelligently designed sub-function that is mapping that hill. [Here is a related question: does the Mandelbrot set plot contain more functional information than is in the program that specifies it? If you were to use a random walk to pick your points to test, would that change the amount of information in the set plot? After all, the interesting part of the set is really a function map.] You are not creating new function, you are playing with parameters within existing function, seeking a peak. And obviously, since Newton and so on, we have been using algorithms that seek peaks, and these can incorporate random walks as part of the algorithms.

Next, you are sharply constraining the degree of variability -- you are operating within a sub-space that is in a hot or target zone. You are not addressing a relevant config-space gamut: you are not dealing with tens of MB of variable information, or hundreds of kbits. So, we are looking at a strawman, comparable at best to specialisation and adaptation [say, how a tomcod becomes adapted to dioxin in its river but in so adapting loses a certain degree of vigour, so that except in that river the ordinary variety prevails], not to the origin of novel body plans. And certainly not to the algorithms and control mechanisms, as well as the system organisation, needed to get to the relevant body plans. You will notice that somewhere above I pointed out that ev was at 409 kB. This is comparable to a body-plan origin challenge. So, again, let us see a case of an ev writing itself out of noise filtered by functionality testing, and then we can talk seriously. GEM of TKI
kairosfocus, March 6, 2011 at 10:16 AM PDT
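The bracketed Mandelbrot question above can be made concrete: the entire 'contour map' is implicit in a membership test only a few lines long, and sampling points (randomly or on a grid) merely makes parts of that map explicit. A minimal escape-time sketch, added for illustration:

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test: c is treated as in the set if z = z*z + c stays bounded."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# The whole "map" is specified by the few lines above; plotting only samples it.
print(in_mandelbrot(complex(-1, 0)))   # True: inside the set
print(in_mandelbrot(complex(1, 1)))    # False: escapes after a couple of iterations
```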
KF I would also like your thoughts on my other observation above:
single bit changes of a configuration do not always equate to single bit changes in the measurable FCSI of that configuration.
Flipping a bit in a computer memory can render a piece of software non-functional - the software may have 1Mb of functional information but flip one bit and it has zero bits of functional information - is this correct or have I misunderstood?DrBot
March 6, 2011 at 07:23 AM PDT
KF:
It seems we are going in circles, right from your outset.
Not at all. I am going in a straight line, slowly and methodically. Everyone else apart from MG keeps shooting off at tangents. This is most frustrating; I wish you would stay on topic and address the issues on their merits. What I have been trying to do here is take a systematic and critical look at the claims being made about FCSI so that we come to a detailed understanding of what can and can't generate FCSI, and under what conditions. I am being scientific. I make no apologies for insisting on rigour; I would demand the same from ANYONE discussing a scientific theory. In order to get to a discussion about the origins of FCSI we need to have a thorough look at whether and how things that already have FCSI can increase or decrease it, what generates new FCSI, and under what constraints. WHEN we have mapped out this space - when we have tested the ideas by applying them to things we know - we have a solid foundation to build on and we can move on to the question of the origins of FCSI. You keep talking about tornadoes in factories and demand that I explain how a computer could be made under those circumstances - let me spell out for you AGAIN things I have said plenty of times before: I do not believe natural OOL theories can account for the origin of life. I believe an intelligence is required. Now can you go back and read my comment at 193 and give me your opinions about the conditions and limitations under which FCSI can increase in a system that already has FCSI.
So, no, you are not creating new information, you are only making it explicit in a way that uses some randomness and creates the misleading impression that chance variation is generating functional information.
I'm not sure this makes sense - think about it - you are saying that moving from the shore of an island of functionality to a hill (by definition an increase in functional specified complexity) does not involve any additional information because there exists an island of functionality. The island of functionality is just a space of possible configurations that can be reached by small changes to any specific configuration on the shore of that island. If we explicitly design a system that is on that shore, then adjust the design so it is now on the top of a hill, then by your definition we have not added any new information. This actually extends to the whole search space - we are talking about a space of all possible configurations of matter. It consists of seas of non-function populated by islands of functionality. If, as you imply, finding and moving between locations in this search space does not entail the creation of information (because the search space exists), then it is impossible for us as intelligent agents to create any new information when we design something, because by definition new information would have to lie outside the space of possible configurations, and that places it outside the space of configurations possible in the entire universe. What we are actually talking about is a space of possible configurations, most of which don't exist. When a designer, or a natural process, changes a system in a way that moves it up a hill of functionality to a part of the space that has so far not been explored, then I would say that information has been added - something that didn't exist (but could in theory) now exists.
Within that island of function, there is an implicit or explicit contour map, or an equivalent function that incrementally generates such a map.
We are ultimately talking about biology. This island of function is a space of possible self-replicators traversable in small steps. What defines the shape of the island are the physical properties of the creature achievable in small steps and the environment it exists in. Changes to the creature (by accident or design) alter its ability to function. Changes to the environment (also by accident or design) also alter its ability to function.
But, all along we have the preloaded contour map, or the function that gives us the contour map incrementally on demand. All we have added is a technique for moving around on the map and picking up where the map climbs. (And this does not materially change if you insert a mechanism for adjusting the map midstream.)
The world exists, yes. We can observe that it changes due to the actions of intelligent agents (us) and natural forces. Some natural processes that exist in our world are also mechanisms that allow for movement across functional islands. If you are arguing that some natural processes - "a technique for moving around on the map and picking up where the map climbs" - are the product of design because the universe was designed, then I would agree, because I believe the universe was designed. But when it comes to talking about whether natural processes or intelligence can generate FCSI, you are basically saying that any natural process is the result of design - so, by implication, anything produced by natural processes is designed. This renders the debate meaningless, because ultimately there are no natural processes - everything is ultimately design, even erosion, because God created water - and any OOL scenario where life emerges without explicit design is actually design, because the universe was designed. This is why I am insisting on rigour - let's understand what can be done within the laws of the designed universe (natural forces), and from that what can't be done and would therefore require intelligence.

Back to the question again: to what degree, and in what circumstances, can natural forces increase the FCSI of something that already has FCSI? Hill climbing (evolution) - YES. Functional context shifting - YES. FCSI CAN increase due to natural forces in something with pre-existing FCSI over a threshold. If we can agree on that, then we can move on to the second question - can natural forces generate any FCSI if no FCSI already exists?
DrBot, March 6, 2011 at 07:19 AM PDT
PS: Recall, too, the FSCI threshold limit, beyond about 500 - 1,000 bits. As you will recall from discussions on the infinite monkeys theorem, a space of about 10^50 elements is empirically shown to be searchable. One of 10^300 or more is not. (In short, you are strawmannising FSCI above, too.)
kairosfocus, March 6, 2011 at 12:53 AM PDT
Dr Bot: It seems we are going in circles, right from your outset. How many ways do we need to say:
THERE IS NO FREE INFORMATIONAL LUNCH?
(Maybe I could invite you to explain how a network of algorithmic processes that is reflexive and exhibits complex integrated specific function, as we may see here in Figs G.8(a) - (c) and G.9, assembles itself out of lucky noise filtered only by trial and error, without intelligent direction? Perhaps that will help you see what I am getting at when I speak of the centrality of the problem of getting to the shores of an island of function.)

I repeat: the GAs all set up the comparable algorithmic flowcharts [program execution flow and signalling networks . . . ], putting us on deeply isolated islands of function. Through intelligent design. Within that island of function, there is an implicit or explicit contour map, or an equivalent function that incrementally generates such a map. Then, they start from some point or other low on the map, and feed in a carefully measured quantum of randomness. (Walk a few steps in any direction and see if that puts you up or down slope. How do you know you are on a slope? How do you deal with real life, where minor mutations accumulate to embrittle the genome, and the few that may move you uphill are battling an overwhelming slow deterioration of the genome pointing to embrittlement and extinction?) A subroutine picks the best-performing results - the ones that are higher [already, we are assuming the island rises from sea level to a central hill or at least a cluster of such hills]. Blend in various techniques to move uphill, and repeat.

But, all along, we have the preloaded contour map, or the function that gives us the contour map incrementally on demand. All we have added is a technique for moving around on the map and picking up where the map climbs. (And this does not materially change if you insert a mechanism for adjusting the map midstream.) All of this is designed and purposeful. The controlled injection of random variation could be replaced by running a specified ring on a search-pattern grid and picking up the same warmer/colder oracular signals, and the result would not be materially different, as the driving force in the result is the map, not the means to move about on it. And, underneath, the fact that you have already placed the entity on the island of function. So, no, you are not creating new information; you are only making it explicit in a way that uses some randomness and creates the misleading impression that chance variation is generating functional information.

Again, when you have an ev or avida or the like that assembles itself out of lucky noise and sets up its mapping function the same way, then we can talk. Otherwise, you would simply be showing what is not in dispute: functional entities created by intelligent designers can be designed to adapt within limits to changes in their environment, or even to seek desirable optima. Modern Young Earth Creationists would accept this. That is how far removed the issue is from what is really at stake. So, first, let us un-beg a few questions . . . [And after you show us an ev or avida that assembles itself out of digital noise filtered for performance, then we can talk about the real problem: assembling the PC for it to run on by passing a tornado through the Dell plant at Round Rock, TX.] GEM of TKI
kairosfocus, March 5, 2011 at 11:06 AM PDT
KF. Excellent, we have made some progress. I hope now we can agree on the following general principles:

1: A system with CSI can acquire new CSI under certain conditions. Those conditions include: A: Intelligent intervention. B: Descent with modification and selection.
2: The extent to which CSI can increase under B is not clear and may be severely restricted.

Now we can make an observation about another way that CSI can increase (and also decrease) and add it as cause C above: function is defined by context, therefore a change in context can change function. Take the text string: jsiengoga0. It is functional if it is the password for some on-line service; functionality of the string is defined by that context - if the password is changed then that text string becomes non-functional. One single-bit change to the following: jsiengoga1 takes us from 80 bits of functional information to zero. BUT, if the passcode (the thing that checks for a correct password) flips a bit in the same character, then that one-bit change takes us from zero function to full function - but the text string has not changed, only the context within which we measure functionality.

Now we can conceive of an organism in an environment, and that organism has never been exposed to a particular chemical with toxic properties, but it has, as a byproduct of some other metabolic feature, an ability to resist the chemical's toxic effects. This (I assume) is not functional as a feature - it doesn't do anything, and removing it does not affect the creature. But then its environment (the context) changes: erosion caused by a river unlocks a deposit of this toxic chemical, which begins seeping into the river. Now this previously non-functional trait gives the organism an advantage - function has been acquired, but the creature did not change. The question here is whether the organism has increased its CSI (or perhaps better to say FCSI) without changing itself, but by the fact of its environment - the thing that helps define context - changing instead.

If a scenario like this can increase FCSI then we ought to add category C to the principles above: A: Intelligent intervention. B: Descent with modification and selection. C: A change in the context that defines function. And from this recognize that under certain conditions FCSI can increase due to pure chance events, and further (looking back at the password example) single-bit changes to a configuration do not always equate to single-bit changes in the measurable FCSI of that configuration. And to pre-empt your response - this is not a problem for the ID hypothesis; it is all about changes in FCSI in systems that already have FCSI. It is important to be rigorous when critically analyzing these things. If we can agree on what I have outlined above for already-living systems then we have a good basis to move the discussion on to talk about the origin of FCSI.
DrBot, March 5, 2011 at 07:23 AM PDT
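DrBot's password example can be written out directly. The sketch below is added for illustration (the password string and checker are made up, not a real login system); it shows that a one-character change on either side of the comparison toggles the string between 'functional' and 'non-functional' even though the string's raw bit count never changes:

```python
def is_functional(candidate, stored):
    """A candidate string 'functions' only if the checker (the context) accepts it."""
    return candidate == stored

print(is_functional("jsiengoga0", stored="jsiengoga0"))  # True:  80 bits, functional
print(is_functional("jsiengoga1", stored="jsiengoga0"))  # False: one character off
print(is_functional("jsiengoga1", stored="jsiengoga1"))  # True:  same string, changed context
```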
Dr Torley: 1,000 bits is of course 125 bytes or 143 or so ASCII characters. It is manifest that a net list of the embryological development regulatory network charts, or the overall metabolism of life chart -- I understand a wall size version is given pride of place in a front office in many Biochem Depts at unis -- will very rapidly run past the 1,000 bit threshold. The sort of complex, integrated functional systems we are looking at reek of design, save to the utterly indoctrinated. That is why my fig I.2 compares a petroleum refinery. (cf this chart here) G
kairosfocus, March 5, 2011 at 03:14 AM PDT
Hi kairosfocus, Thank you for your posts. I was bowled over by your metabolic diagram. It certainly is a "fat chart." I'm afraid I can't supply you with an estimate of how long my network would take to reach the 1000-bit threshold. You hit the nail on the head when you wrote: "That tweredun is logically prior to whodunit, when or how." Answers to what/when/where/how questions are bound to be provisional, and a wide range of answers is certainly possible, in the light of what we know. Having said that, it would be a feather in the cap of the ID movement if it could make retrospective predictions about the order (and perhaps timing) of appearance of certain organisms in the fossil record, based purely on considerations relating to FCSI. Just a thought.vjtorley
March 4, 2011 at 05:53 PM PDT
PS: In case it is needed again, here is a summary of the cosmological inference on fine tuning, with a particular focus on the implications of H2O and the role of C in light of the C/O resonance as underscored by Hoyle.
kairosfocus, March 4, 2011 at 05:00 PM PDT
Dr Torley, Neat nodes, arcs and interfaces diagram . . . How long do you think it would take for the network shown to run past 1,000 bits of information capacity [= basic yes/no questions] when restated as a net list? (In short, FSCO/I, so designed. Some aspects are probably going to be IC too, which would be a related inference to design. And of course this is about organism feasibility, so again an issue of getting to islands of function.) The cell's metabolic reactions network here [warning, fat chart] -- Fig. I.2 here (as repeatedly linked) -- is also a similarly impressive illustration of a complex, functionally specific network that is well past 1,000 bits of specifying information.

I see your who-what-where-when-how stuff, and say: that is one way of reading it. There are other ways, doubtless, and as I pointed out to MG long since, any sufficiently sophisticated nanotech lab several generations beyond Venter could have done it. But of course, once we lift our eyes to the heavens and understand the complex, fine-tuned functional organisation needed to make a C-chemistry, cell-based-life-supporting cosmos, we see that we already have a serious reason to infer to an extra-cosmic necessary being [thus immaterial, as matter is radically contingent a la E = m*c^2 . . . as already pointed out], with power, knowledge, skill and intent to design and implement such a cosmos. With such in hand, it is very reasonable indeed to infer to a design of original life, and of subsequent body plans up to and including our own. It is ideology, not science, that stands in the way of general acceptance of this sort of frame of thought.

However, in no wise does a who-what-where-when-why-how model detract from the basic design inference, from signs of design to the credible conclusion that something with these signs was designed. Different who-what-where-when models could fit one and the same inference. That tweredun is logically prior to whodunit, when or how. (Someone -- above? -- said we don't go looking for murderers if we have reason to accept that a death was natural.) GEM of TKI
kairosfocus, March 4, 2011 at 04:49 PM PDT