
On the non-evolution of Irreducible Complexity – How Arthur Hunt Fails To Refute Behe


I do enjoy reading ID's most vehement critics, both in formal publications (such as books and papers) and on the somewhat less formal Internet blogosphere. Part of the reason is that it gives one a measure of reassurance to observe the vacuous nature of many of the critics' attempted rebuttals to the challenge ID poses to neo-Darwinism, and to observe how neo-Darwinism's sheer lack of explanatory power is compensated for by the religious ferocity of its associated rhetoric (to paraphrase Lynn Margulis). The prevalent pretense that the causal sufficiency of neo-Darwinism is an open-and-shut case (when no such open-and-shut case for the affirmative exists) never ceases to amuse me.

One such forum where esteemed critics lurk is the Panda's Thumb blog, a website devoted to holding the Darwinian fort, and one endorsed by the National Center for Selling Evolution Science Education (NCSE). Since many of the Darwinian heavy guns blog for this website, we can conclude that, if demonstrably faulty arguments are common play there, the front-line Darwinism defense lobby is in deep water.

Recently, someone referred me to two articles (one, two) on the Panda's Thumb website (from back in 2007) by Arthur Hunt (a professor in the Department of Plant and Soil Sciences at the University of Kentucky). The first is entitled "On the evolution of Irreducible Complexity"; the second, "Reality 1, Behe 0" (the latter posted shortly after the publication of Behe's second book, The Edge of Evolution).

The articles purport to refute Michael Behe’s notion of irreducible complexity. But, as I intend to show here, they do nothing of the kind!

In his first article, Hunt begins,

There has been a spate of interest in the blogosphere recently in the matter of protein evolution, and in particular the proposition that new protein function can evolve. Nick Matzke summarized a review (reference 1) on the subject here. Briefly, the various mechanisms discussed in the review include exon shuffling, gene duplication, retroposition, recruitment of mobile element sequences, lateral gene transfer, gene fusion, and de novo origination. Of all of these, the mechanism that received the least attention was the last – the de novo appearance of new protein-coding genes basically “from scratch”. A few examples are mentioned (such as antifreeze proteins, or AFGPs), and long-time followers of ev/cre discussions will recognize the players. However, what I would argue is the most impressive of such examples is not mentioned by Long et al. (1).

There is no need to discuss the cited Long et al. (2003) paper in any great detail here, as this has already been done by Casey Luskin here (see also Luskin’s further discussion of Anti-Freeze evolution here), and I wish to concern myself with the central element of Hunt’s argument.

Hunt continues,

Below the fold, I will describe an example of de novo appearance of a new protein-coding gene that should open one’s eyes as to the reach of evolutionary processes. To get readers to actually read below the fold, I’ll summarize – what we will learn of is a protein that is not merely a “simple” binding protein, or one with some novel physicochemical properties (like the AFGPs), but rather a gated ion channel. Specifically, a multimeric complex that: 1. permits passage of ions through membranes; 2. and binds a “trigger” that causes the gate to open (from what is otherwise a “closed” state). Recalling that Behe, in Darwin’s Black Box, explicitly calls gated ion channels IC systems, what the following amounts to is an example of the de novo appearance of a multifunctional, IC system.

Hunt is making big promises. But does he deliver? Let me summarise the gist of Hunt's argument, and then briefly weigh in on it.

The cornerstone of Hunt's argument is the gene T-urf13, which, contra Behe's delineated 'edge' of evolution, is supposedly a de novo mitochondrial gene that evolved very quickly from other genes specifying rRNA, together with some non-coding DNA elements. The gene specifies a transmembrane protein that facilitates the passage of hydrophilic molecules across the mitochondrial membrane in maize, opening only when bound on the exterior by particular molecules.

The protein is specific to the mitochondria of maize with Texas male-sterile cytoplasm, and has been implicated in causing male sterility and sensitivity to T-cytoplasm-specific fungal diseases. Two parts of the T-urf13 gene are homologous to other sequences in the maize genome, with a further component being of unknown origin. Hunt maintains that this proves the gene evolved by Darwinian-like means.

Hunt further maintains that T-urf13 consists of at least three "CCCs" (recall Behe's argument, advanced in The Edge of Evolution, that a double "CCC" is unlikely to be feasible by a Darwinian pathway). Two of these "CCCs", Hunt argues, come from the binding of each subunit to at least two other subunits in order to form the heteromeric complex in the membrane. This entails that each subunit must have at least two protein-binding sites.

Hunt argues for the presence of yet another “CCC”:

[T]he ion channel is gated. It binds a polyketide toxin, and the consequence is an opening of the channel. This is a third binding site. This is not another protein binding site, and I rather suppose that Behe would argue that this isn’t relevant to the Edge of Evolution. But the notion of a “CCC” derives from consideration of changes in a transporter (PfCRT) that alter the interaction with chloroquine; toxin binding by T-urf13 is quite analogous to the interaction between PfCRT and chloroquine. Thus, this third function of T-urf13 is akin to yet another “CCC”.

He also notes that,

It turns out that T-urf13 is a membrane protein, and in membranes it forms oligomeric structures (I am not sure if the stoichiometries have been firmly established, but that it is oligomeric is not in question). This is the first biochemical trait I would ask readers to file away – this protein is capable of protein-protein interactions, between like subunits. This means that the T-urf13 polypeptide must possess interfaces that mediate protein-protein interactions. (Readers may recall Behe and Snokes, who argued that such interfaces are very unlikely to occur by chance.)

[Note: The Behe & Snoke (2004) paper is available here, and their response (2005) to Michael Lynch’s critique is available here.]

Hunt tells us that "the protein dubbed T-urf13 had evolved, in one fell swoop by random shuffling of the maize mitochondrial genome." If three CCCs really evolved in "one fell swoop" by specific but random mutations, then Behe's argument is in trouble. But does any of the research described by Hunt make any progress towards demonstrating that this is even plausible? Short answer: no.

Hunt does have a go at guesstimating the probabilistic plausibility of such an event of neo-functionalisation taking place. He tells us, "The bottom line – T-urf13 consists of at least three 'CCCs'. Running some numbers, we can guesstimate that T-urf13 would need about 10^60 events of some sort in order to occur."

Look at what Hunt concludes:

Now, recall that we are talking about, not one, but a minimum of three CCC's. Behe says 1 in 10^60, what actually happened occurred in a total event size of less than 10^30. Obviously, Behe has badly mis-estimated the "Edge of Evolution". Briefly stated, his "Edge of Evolution" is wrong. [Emphasis in original]
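Hunt's own figures make the arithmetic easy to check. Here is a minimal sketch using only the numbers quoted above (Behe's 1-in-10^20 estimate per "CCC" and Hunt's 10^30 event bound are the disputed inputs, not settled measurements):

```python
# Toy arithmetic behind the Behe/Hunt exchange, using the figures quoted above.
p_ccc = 1e-20              # Behe's estimate: one "CCC" is roughly a 1-in-10^20 event
p_three_ccc = p_ccc ** 3   # three independent CCCs: 1 in 10^60

events_available = 1e30    # Hunt's stated upper bound on the total "event size"

expected = events_available * p_three_ccc
print(expected)            # 1e-30
# If the three CCCs really had to arise as independent random events, the
# expected number of successes in 10^30 trials is ~10^-30 -- which is why the
# author argues that Hunt's own numbers presuppose, rather than demonstrate,
# a Darwinian origin for T-urf13.
```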

Readers trained in basic logic will take quick note of the circularity involved in this argumentation. Does Hunt offer any evidence that T-urf13 could have plausibly evolved by a Darwinian-type mechanism? No, he doesn’t. In fact, he casually dismisses the mathematics which refutes his whole argument. Here we have a system with a minimum of three CCCs, and since he presupposes as an a priori principle that it must have a Darwinian explanation, this apparently refutes Behe’s argument! This is truly astonishing argumentation. Yes, certain parts of the gene have known homologous counterparts. But, at most, that demonstrates common descent (and even that conclusion is dubious). But a demonstration of homology, or common ancestral derivation, or a progression of forms is not, in and of itself, a causal explanation. Behe himself noted in Darwin’s Black Box, “Although useful for determining lines of descent … comparing sequences cannot show how a complex biochemical system achieved its function—the question that most concerns us in this book.” Since Behe already maintains that all life is derivative of a common ancestor, a demonstration of biochemical or molecular homology is not likely to impress him greatly.

How, then, might Hunt and others successfully show Behe to be wrong about evolution? It's very simple: show that adequate probabilistic resources existed to facilitate the plausible origin of these types of multi-component-dependent systems. If, indeed, each fitness peak is separated from the next by more than a few specific mutations, it remains difficult to envision how the Darwinian mechanism might facilitate the transition from one peak to another within any reasonable time frame. Douglas Axe, of the Biologic Institute, showed in a recent paper in the journal BIO-Complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalisation. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is about six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not including the duplication itself). Another study, published in Nature in 2001 by Keefe & Szostak, documented that more than a million million random sequences were required in order to stumble upon a functioning ATP-binding protein, a protein substantially smaller than the transmembrane protein specified by T-urf13. Douglas Axe has also documented (2004), in the Journal of Molecular Biology, the prohibitive rarity of functional enzymatic folds within the vast combinatorial sequence space of a 150 amino-acid domain (beta-lactamase).
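The flavour of this "probabilistic resources" argument can be conveyed with a toy waiting-time model (a sketch only: the mutation rate, population size, and generation count below are illustrative assumptions of mine, not Axe's published parameters, and the model ignores sequential fixation of neutral intermediates):

```python
# Toy "probabilistic resources" model: how many specific, co-ordinated point
# mutations (d) can a neutral duplicated gene plausibly accumulate before a
# bacterial population exhausts its supply of trials?
# All parameter values are illustrative assumptions.

mutation_rate = 1e-10   # per site per replication (typical bacterial order of magnitude)
population = 1e9        # replicating individuals
generations = 1e7       # generations available

trials = population * generations   # ~1e16 replication events

d = 1
while mutation_rate ** d * trials >= 1:
    d += 1
print(f"co-ordinated mutations affordable with these resources: {d - 1}")  # prints 1
# The budget collapses after one or two co-ordinated changes -- the qualitative
# point of the waiting-time argument, however one haggles over the inputs.
```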

What, then, can we conclude? Contrary to his claims, Hunt has failed to provide a detailed and rigorous account of the origin of T-urf13. He supplies no mathematical demonstration that the de novo origin of such genes is sufficiently probable to be justifiably attributed to an unguided or random process, nor does he demonstrate that a step-wise pathway exists in which novel utility is conferred at every step (each step separated by no more than one or two mutations) on the way to the emergence of the T-urf13 gene.

The Panda’s Thumb are really going to have to do better than this if they hope to refute Behe!

Comments
Mathgrrl: I'm not a biologist, but if you're looking for a mathematically rigorous definition of CSI, may I suggest that you check out the scientific articles listed at this Website of mine: http://www.angelfire.com/linux/vjtorley/ID.html If you'd prefer a paper by a scientist who isn't in any way sympathetic to ID, try this one: http://www.pnas.org/content/104/suppl.1/8574.full (Hazen, R.M.; Griffin, P.L.; Carothers, J.M.; Szostak, J.W. 2007, Functional information and the emergence of biocomplexity, Proc Natl Acad Sci U S A, 104 Suppl 1, 8574-81.) The mathematical articles by Professor Dembski should give you a clear understanding of the logic behind the design inference.

You asked for details of "who, what, when, where, and how." Some of these requests are reasonable; others are not. If scientists found an advanced technical artifact buried under the Antarctic ice, their first conclusion would be THAT it was designed. The "who, what, when, where, and how" questions would take a lot longer to answer. It's hardly surprising that we don't have good answers to these questions yet. Nevertheless I shall have a go at answering them.

Who: Some Intelligent Being outside this cosmos. (I'd be happy to call this Being God, myself.) Evidence: the fact that the cosmos itself is fine-tuned for life. For an up-to-date presentation of the fine-tuning argument, see: The Teleological Argument: An Exploration of the Fine-Tuning of the Universe by Robin Collins. In The Blackwell Companion to Natural Theology. Edited by William Lane Craig and J. P. Moreland. 2009. Blackwell Publishing Ltd. ISBN: 978-1-405-17657-6. For background reading on the fine-tuning argument, see: Universe or Multiverse? A Theistic Perspective by Robin Collins. Well worth reading. See especially Part VI on the beauty of the laws of Nature. God and the Laws of Nature by Robin Collins. (Scroll down and click on the link.)

What: See the inside flap of The Edge of Evolution, by Professor Michael Behe, where he lists the various features of the universe which appear to have been fine-tuned, in decreasing order of generality. I've highlighted the biological ones. Here goes:

Laws of nature
Physical constants
Ratios of fundamental constants
Amount of matter in the universe
Speed of expansion in the universe
Properties of elements such as carbon
Properties of chemicals such as water
Location of solar system in the galaxy
Location of planet in the solar system
Origin and properties of Earth/Moon
Properties of biochemicals such as DNA
Origin of life
Cells
Genetic code
Multiprotein complexes (see here)
Molecular machines
Biological kingdoms
Developmental genetic programs
Integrated protein networks
Phyla
Cell types
Classes

Here are three more that Behe thinks the Intelligent Being might or might not have designed:

Orders
Families
Genera

Regarding cell types: in his book, The Edge of Evolution, Professor Michael Behe points out that classes of vertebrate differ in the number of distinct cell types they have: "Although amphibians have about 150 cell types and birds about 200, mammals have about 250" (2008, Free Press, paperback edition, p. 199). Each cell type is quite distinct from the other types in its group. For instance, the cells of the mammary, lacrimal and ceruminous glands share the property of being specialized for secretion through ducts (exocrine secretion), but the substances they secrete are very different: milk, tears and ear wax respectively.

Professor Behe argues that the gene regulatory network that is required to specify each cell type is irreducibly complex. There is an old Chinese proverb that a picture is worth a thousand words, and it's certainly true in this case. Readers who want to see what a gene regulatory network looks like for a tissue type called endomesoderm, in simple sea urchins, can click here. It's well worth having a look at. The resemblance to a logic circuit is striking, and the impression of design overwhelming. Behe estimates that the number of protein factors involved in the gene regulatory network for each cell type is about ten, and argues that it appears to be irreducibly complex. On the basis of scientific observations of a very large number of mutations in Nature (especially in the parasite Plasmodium falciparum, the HIV virus and the bacterium Escherichia coli), Behe calculates that during the history of life on earth, Darwinian evolution would be unable to generate a system with more than three inter-dependent components. He concludes that the cell types that characterize a class of organisms are very likely to be designed (ibid., pp. 198-199).

When: Front-loading supporters argue that it could have been as far back as the Big Bang. While this is true for the laws of Nature, I do not think this is possible for the emergence of life on Earth. I used to be a front-loader myself, but after reading physicist Robert Sheldon's thought-provoking article, The Front-Loading Fiction (July 1, 2009), I now believe that the Intelligent Designer has manipulated DNA and proteins on millions of occasions in the Earth's history. (I say millions, because there are millions of kinds of proteins in Nature, and the other biological tasks listed above seem to have required far fewer acts of manipulation by the Intelligent Designer.) And if each and every family of organisms was designed by the Intelligent Being, as Behe seems to believe, then the last act of manipulation could have been no more than 10 million years ago - say, back in the Miocene. (I don't know of any families that have appeared since then.) "Millions of manipulations" might sound like a lot of work for a Designer, but I would argue that: (i) front-loading would have been even more work, as it would have involved designing everything in Nature to a ridiculously high level of precision to ensure that some billiard ball-style collisions between atoms shortly after the Big Bang would lead to the emergence of life on Earth ten billion years later. Nature isn't that precise; length is quantized in units of 1.616 x 10^-35 meters (a Planck length). (ii) the Earth is billions of years old, so millions of manipulations still only works out at one every 1,000 or so years. Hardly difficult for a Deity.

Where: Depends on which act of manipulation we're talking about. Can any evolutionary biologist tell me where the first bird or mammal originated?

How: By manipulating genes. Sorry I can't be much more specific than that, but I'm not a biologist, and I would hardly expect to know the M.O. of aliens whose minds were millions of years ahead of mine - much less the modus operandi of a cosmic Designer.

Hope that answers some of your questions.
vjtorley
March 4, 2011, 1:37 PM PDT
Dr Bot:
I will spell out the question being debated again. Can an evolutionary mechanism generate ANY NEW CSI in a population of self replicating organisms or does the creation of ANY NEW CSI always require the intervention of a designer – This directly addresses the question of whether CSI is always and reliably an indicator of design
1- You are equivocating by using "evolutionary mechanism"- ID is not anti-evolution. Replace evolutionary with blind watchmaker. 2- No intervention required. Did Dawkins intervene with his "weasel" program once it started? No. So there is the problem, right there. You obviously don't understand ID and think the way to learn about it is by visiting blogs and asking random questions. And the biggest concern is they don't even need to consider ID. If ID didn't exist they still wouldn't have any positive evidence for their position. But I guess they like trolling...
Joseph
March 4, 2011, 1:03 PM PDT
Bot,
UB: Along with this, you did also state (repeatedly) that ID claims a designer “intervenes” in the course of evolution of life on this planet. You were flatly told that ID does not make that claim.
Plenty of ID proponents believe that gaps in the fossil record are best explained by intervention – e.g. the Cambrian Explosion.
There are those in the ID camp who think the Cambrian could have been the result of intervention. There are others that think the Cambrian is the result of front-loading. Others very likely think something else entirely. So what? None of these arguments are central to the thesis that design can be detected in nature.
Those that promote and pursue ID as a theory, and who believe this, are therefore making that very claim.
What shall we say of a materialist who writes in his or her next profound best-seller that biological observation x provides evidence that no designer God is necessary to explain life? Shall we re-write the definition of the Theory of Evolution? Will the Theory itself need to be reformulated to say "change in allele frequencies over time, plus observation x"? Or, is the theory one thing, and the speculations which extend from that theory another thing?

In any case, my comments to Mathgrrl stand unchanged. She wants to dismiss and repeat the mistakes in her assumptions, she wants to ignore being corrected, she wants to ask something from ID that she cannot answer herself, and more than anything, she wants to make conclusions about the output of a system while ignoring that it is the system itself that must be explained. What do we say about someone who (for the sake of rhetoric) pursues an almost trivial question, while demonstrating incorrect assumptions, and blatantly ignoring the critical issues at hand?
Upright BiPed
March 4, 2011, 12:38 PM PDT
MathGrrl, I wish I could come up with a mathematically rigorous definition of CSI. I thought that Dembski attempted that at one time. I'll admit that I haven't read any of his books. But I would also ask if anyone has ever come up with a mathematically rigorous definition of natural selection. It just seems like a heuristic to me.
Collin
March 4, 2011, 12:06 PM PDT
PS: In short, we are asking about body-plan level evo, not OOL. When you answer at this level we can go on to getting the PC out of a tornado at Round Rock, comparable to Darwin's warm little pond.
kairosfocus
March 4, 2011, 10:01 AM PDT
Kairos, thanks for the link: http://www.angelfire.com/pro/kairosfocus/resources/Info_design_and_science.htm#dna_optim I'm using the wrong terminology because I only try to explain it mechanistically. I'm wondering if DrBot or Mathgrrl see forward error correction in the process described in #170?
Eugen
March 4, 2011, 9:13 AM PDT
MG: Please look at your behaviour above, when you have repeatedly been informed on the relevant definitions. You are now indulging in projection. As to whether an explicit or implicit map of an island of function, joined to a hill climbing algorithm that runs you up the hill, is a creation of new information that did not previously exist, I think this summary answers the point. And, this has been said over and over by various persons. Once you define an objective function and an optimising algorithm, you have loaded in a lot of information. You are simply using a random number generator to do your hill climbing. Again, where you need to really go is to show us an ev or a Tierra or an Avida that writes itself, algorithms, data structures and codes, out of lucky noise; i.e. gets to the island of function by chance plus necessity without intelligent guidance. Moving around in a pre-set-up environment on a designed algorithm simply shows the power of design, and how designs can incorporate a random search element. We will give you the PC instead of asking you to get it out of a tornado through the Dell plant at Round Rock. GEM of TKI

PS: And Dr Bot, that is your answer too, an answer you have been refusing to face.
kairosfocus
March 4, 2011, 8:50 AM PDT
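To make the "designed hill climbing" under discussion concrete, here is a minimal Dawkins-style "weasel" climber, the very program Joseph mentions above (a sketch of mine, not ev or Avida; note that the target string and the scoring function -- the "island of function" -- are supplied by the programmer before any "evolution" begins):

```python
import random

# A minimal Dawkins-style "weasel" hill climber (illustrative sketch).
# Both TARGET and fitness() are chosen by the programmer in advance --
# the pre-loaded "objective function" at issue in the comments above.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Count character matches against the designer-supplied target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Randomly perturb each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

current = "".join(random.choice(ALPHABET) for _ in TARGET)
steps = 0
while fitness(current) < len(TARGET):
    candidate = mutate(current)
    if fitness(candidate) >= fitness(current):   # keep any non-worse mutant
        current = candidate
    steps += 1
print(current, steps)
```

Random mutation does the walking, but the map of the hill is written in before the program starts; whether that pre-loading disqualifies such simulations as evidence is exactly what the two sides of this thread dispute.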
DrBot,
I will spell out the question being debated again. Can an evolutionary mechanism generate ANY NEW CSI in a population of self replicating organisms or does the creation of ANY NEW CSI always require the intervention of a designer – This directly addresses the question of whether CSI is always and reliably an indicator of design.
Thank you, that is exactly the question I would like some assistance in answering.
MathGrrl
March 4, 2011, 8:12 AM PDT
kairosfocus,
Why do you insist on twisting the words of those you deal with?
Why do you insist on replying rudely rather than addressing the points I am making? Is the civility requirement here not applicable to ID proponents? I am doing you the courtesy of taking your claims seriously enough to spend time testing them. In order to do that, I need to know the mathematically rigorous definition of CSI. You replied that CSI can be measured simply as the number of bits in the string exhibiting the specified function. Based on your definition, I demonstrated how both biological evolution and simulations of biological evolution can generate CSI. If you disagree with my conclusions, please demonstrate how you would calculate the CSI for the four scenarios I described (gene duplication leading to increased protein production, ev evolving binding sites, Tierra evolving parasites, and GAs evolving solutions to the Steiner Problem). That will provide more details on how to calculate the metric that you claim is indicative of intelligent agency. If you don't wish to assist me with this effort, simply say so. Politely, if you can.
MathGrrl
March 4, 2011, 8:11 AM PDT
KF, MathGrrl is asking if an evolutionary process can move you around on an island of functionality, and if moving up a hill on that island by means of an evolutionary process means that the organism generates CSI. You seem to accept that, but instead of simply saying - yes, moving up a hill of functionality involves an increase in (the generation of) CSI and an evolutionary mechanism can achieve this - you try and switch the question to one about finding an island of functionality, and lace your replies with ad hominems. MathGrrl is not addressing the question of finding islands of functionality, she is addressing the question of travel ON islands of functionality. Please can you address MathGrrl's points on their merits! I will spell out the question being debated again. Can an evolutionary mechanism generate ANY NEW CSI in a population of self replicating organisms or does the creation of ANY NEW CSI always require the intervention of a designer - This directly addresses the question of whether CSI is always and reliably an indicator of design. UB:
Along with this, you did also state (repeatedly) that ID claims a designer “intervenes” in the course of evolution of life on this planet. You were flatly told that ID does not make that claim.
Plenty of ID proponents believe that gaps in the fossil record are best explained by intervention - e.g. the Cambrian Explosion. Those that promote and pursue ID as a theory, and who believe this, are therefore making that very claim.
DrBot
March 4, 2011, 7:40 AM PDT
MG: Why do you insist on twisting the words of those you deal with? [I now have to seriously ask: are you simply playing the troll? If your level of behaviour does not improve shortly, I will -- regretfully -- have to resort to "don't feed dah trollz."]

Above, I pointed out that the type of bits measure we commonly encounter in dealing with ICTs is a measure of functionally specific bits. For instance, I just uploaded a gif of 5 kbits to my blog. That picture of the Alice programming icon is quite specific and functional, thank you -- took some searching. The simple FSCI metric extends this -- as can be seen above, and in the always linked and in the UD weak arguments correctives you have been pointed to but have just as often ignored -- by inserting a threshold, 500 - 1,000 bits. Beyond that point, we are dealing with config spaces of 2^1,000 or more possibilities. That is, 1.07*10^301, where the whole cosmos we live in of 10^80 atoms would only go through 10^150 Planck-time states across its thermodynamic lifespan, where a Planck time is 10^20 times faster than the fastest nuclear interactions. In short, the cosmos search capacity relative to such a space is an effective zero, as is discussed here and here, the latter of which will give you a specific definition of CSI, not just the easier to work with FSCI. If the cosmos cannot search the space worth more than an effective zero, then we can be confident that the only credible source of a functionally specific config in that space will be intelligence.

It is blatantly obvious that neither you nor any of your fellow evo mat advocates can show us a case where FSCI beyond that threshold is produced by undirected chance plus necessity. GAs playing as evolution sims, like ev etc., are all designed and do hill climbing exercises in very carefully designed sandboxes: moving around INSIDE islands of function, not getting to them. Why are such then triumphantly announced to the world -- through a bait and switch sales tactic -- as though they show how chance plus necessity can create functional info and support the claims of macroevolution on chance variation and natural selection? BECAUSE YOU HAVE NO REAL EVIDENCE. That is the take-home message, MG. Don't you feel ashamed of associating yourself with such shoddy salesman tactics to promote what plainly cannot stand on its merits? GEM of TKI
kairosfocus
March 4, 2011, 6:33 AM PDT
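The magnitudes in the comment above are easy to verify (a sketch; the 10^45 states-per-second and 10^25-second lifespan factors are the standard order-of-magnitude assumptions behind the 10^150 figure, not values given explicitly in the thread):

```python
from math import log10

# 500-1,000 bit threshold: size of the configuration space
print(f"2^1000 ~ 10^{log10(2 ** 1000):.0f}")        # ~10^301

# Rough cosmic "search capacity" (log10 of each assumed factor):
atoms = 80            # atoms in the observable universe
planck_per_sec = 45   # Planck-time states per second (~1 / 5.4e-44 s)
lifespan = 25         # assumed thermodynamic lifespan in seconds

states = atoms + planck_per_sec + lifespan
print(f"max states ~ 10^{states}")                   # 10^150
print(f"searchable fraction ~ 10^{states - 301}")    # 10^-151, the "effective zero"
```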
kairosfocus,
For instance, the commonly encountered metric of functionally specific bits can be very simply assessed for protein coding DNA, at 2 bits per base;
If this is your definition of CSI, known evolutionary mechanisms are demonstrably capable of generating it in both real and simulated environments. Consider the specification of "Produces X amount of protein Y." A simple gene duplication, even without subsequent modification of the duplicate, can increase production from less than X to greater than X. By your definition, CSI has been generated by a known, observed evolutionary mechanism with no intelligent agency involved.

Schneider's ev uses the specification of "A nucleotide that binds to exactly N sites within the genome." Using only simplified forms of known, observed evolutionary mechanisms, ev routinely evolves genomes that meet the specification. The length of the genome required to meet this specification can be quite long, depending on the value of N. By your definition, CSI has been generated by those mechanisms. (ev is particularly interesting because it is based directly on Schneider's PhD work with real biological organisms.)

Ray's Tierra routinely evolves digital organisms with a number of specifications. One I find interesting is "Acts as a parasite on other digital organisms in the simulation." The length of the shortest parasite is at least 22 bytes. By your definition, CSI has been generated by known, observed evolutionary mechanisms with no intelligent agency required.

The Steiner Problem solutions described at the site linked above use the specification "Computes a close approximation to the shortest connected path between a set of points." The length of the genomes required to meet this specification depends on the number of points, but can certainly be hundreds of bits. By your definition, these GAs generate CSI via known, observed evolutionary mechanisms with no intelligent agency required.

By the standard you set here, CSI is by no means an indicator that an intelligent agent is involved in the creation of a particular artifact.
MathGrrl
March 4, 2011, 5:46 AM PDT
Eugen: This, from the always linked: ____________ >> even more interesting is the observation by Hurst, Haig and Freeland, that the actual protein-forming code used by DNA is [near-] optimal. As Vogel reports (HT: Mike Gene) in the 1998 Science article "Tracking the History of the Genetic Code," Science [281: 329]: . . . in 1991, evolutionary biologists Laurence Hurst of the University of Bath in England and David Haig of Harvard University showed that of all the possible codes made from the four bases and the 20 amino acids, the natural code is among the best at minimizing the effect of mutations. They found that single-base changes in a codon are likely to substitute a chemically similar amino acid and therefore make only minimal changes to the final protein. Now [circa 1998] Hurst's graduate student Stephen Freeland at Cambridge University in England has taken the analysis a step farther by taking into account the kinds of mistakes that are most likely to occur. First, the bases fall into two size classes, and mutations that swap bases of similar size are more common than mutations that switch base sizes. Second, during protein synthesis the first and third members of a codon are much more likely to be misread than the second one. When those mistake frequencies are factored in, the natural code looks even better: Only one of a million randomly generated codes was more error-proof. [3] [Emphases added] As the pseudonymous Mike Gene then summarises, when various biosynthetic pathway restrictions [the codes seem to come from families sharing an initial letter] and better metrics of amino acid similarity are factored in, it is arguable that the code becomes essentially optimal. So, he poses the obvious logical question: . . . the take home message from these studies, and several others, is that nature's code is very good at buffering against deleterious mutations. This theme nicely fits with many other findings that continue to underscore how cells have layers and layers of safeguards and proof-reading mechanisms to ensure minimal error rates. Thus, contrary to Miller's assertion, the "universal code" is easily explained from an ID perspective - if you have designed a code that is very good at buffering against deleterious mutations, why not reuse it again and again? In short, not only is the DNA code a code that functions in an algorithmic context, but of the range of possible code assignments, the actual one we see seems very close to optimal against the impacts of random changes. Further, the codons themselves fall into a highly structured pattern, as "amino acids from the same biosynthetic pathway are generally assigned to codons sharing the same first base." [Taylor and Coates 1989, cited, Freeland SJ, Knight RD, Landweber LF, Hurst LD. 2000 in "Early fixation of an optimal genetic code." Mol Biol Evol 17(4):511-8. (HT: MG.)] That is, the DNA code itself is significantly non-random in how it assigns base pairs to amino acids. (But also, I must note that such suggests an inference: (a) the coding assignments are not driven by the mechanical necessity of the underlying chemistry of chaining either nucleic acids or proteins, and (b) they are not a matter of random chance. An observation of (c) a structured coding pattern tied to the one-stage-removed chemistry of synthesis of the amino acids that are components to be subsequently chained to form proteins therefore strongly supports that (d) the code is an intelligent act of an orderly-minded, purposeful designer. 
For, of the three key causal factors, if neither chance nor necessity is credibly decisive, that lends itself to the conclusion that intentional choice (here, tied to a prior component assembly stage!) is at work. In short, intelligent design.) Gene tellingly concludes: . . . there are two very good (and obvious) reasons for a designer to have employed the same code in bacteria and eukaryotes: 1) The code is extremely good at preventing deleterious amino acid substitutions and; 2) the shared code allows for the lateral transfer of genetic material and facilitates symbiotic unions. That Miller thought ID incapable of explaining the code, and Pace thought the shared code proved the common descent of bacteria and eukaryotes, only shows how an a priori commitment to non-teleological explanations creates a large intellectual blind spot. >> _____________ Food for thought. GEM of TKI
kairosfocus
March 3, 2011, 11:47 PM PDT
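The Freeland/Hurst result quoted above used weighted measures of amino-acid similarity and realistic mistranslation biases. A far cruder version of the same experiment -- what fraction of single-base changes are synonymous under the natural code versus under randomly shuffled codon assignments -- can be sketched as follows (an illustration of the comparison's logic, with a randomization scheme of my own choosing, not a reproduction of the published analysis):

```python
import random
from itertools import product

BASES = "TCAG"
# Standard genetic code, codons enumerated in TCAG order ('*' = stop)
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
NATURAL = dict(zip(CODONS, AA))

def synonymous_fraction(code):
    """Fraction of all single-base substitutions that leave the amino acid unchanged."""
    same = total = 0
    for codon in CODONS:
        for pos in range(3):
            for b in BASES:
                if b != codon[pos]:
                    mutant = codon[:pos] + b + codon[pos + 1:]
                    total += 1
                    same += code[codon] == code[mutant]
    return same / total

natural_score = synonymous_fraction(NATURAL)
better = 0
for _ in range(1000):                  # 1,000 random codon reassignments
    shuffled = list(AA)
    random.shuffle(shuffled)
    if synonymous_fraction(dict(zip(CODONS, shuffled))) >= natural_score:
        better += 1
print(f"natural: {natural_score:.3f}; random codes at least as buffered: {better}/1000")
```

Even this crude mutation-count metric typically finds very few random codes matching the natural one; the published analyses, which also weigh how chemically similar the substituted amino acids are, sharpen that to roughly one in a million.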
I have no idea why the comment before 174 was not accepted, but it's pointless letting 174 through without it, don't you think?
zeroseven
March 3, 2011, 6:16 PM PDT
Sorry, UB, not UP.
zeroseven
March 3, 2011, 1:04 PM PDT
Mathgrrl at 163,
As I thought I made clear, I’m discussing biological evolution, not abiogenesis.
Your attempt to extricate yourself from the entanglement of your game-playing is duly noted. Unfortunately for you, the actual text of this conversation does not allow it. Here is the exact wording of the question you sought to have answered when you joined this thread: "Can you explain, in empirical, observational evidence backed up steps, how the organised, functionally specific complexity of the flagellum, the eyes and immune systems originated per the design thesis?" Please note the emphasized word. Along with this, you did also state (repeatedly) that ID claims a designer "intervenes" in the course of evolution of life on this planet. You were flatly told that ID does not make that claim. However, you were entirely unimpeded by that correction, and went on making the statement over and over again. So on the one hand, you ask for evidence of a claim that ID doesn't make, then on the other hand you ignore the evidence for the claim that ID does make, and between the two you can always feign being misunderstood. One has to wonder, why should you be taken seriously at all?
Upright BiPed
March 3, 2011, 9:29 AM PDT
PS 2: Wiki on genome size:
Genome size correlates with a range of features at the cell and organism levels, including cell size, cell division rate, and, depending on the taxon, body size, metabolic rate, developmental rate, organ complexity, geographical distribution, and/or extinction risk (for recent reviews, see Bennett and Leitch 2005;[8] Gregory 2005[9]). Based on completely sequenced genome data currently (as of April 2009) available, log-transformed gene number forms a linear correlation with log-transformed genome size in bacteria, archaea, viruses, and organelles combined whereas a nonlinear (semi-natural log) correlation in eukaryotes (Hou and Lin 2009 [10]). The nonlinear correlation for eukaryotes, although claim of its existence contrasts the previous view that no correlation exists for this group of organisms, reflects disproportionately fast increasing noncoding DNA in increasingly large eukaryotic genomes. Although sequenced genome data are practically biased toward small genomes, which may compromise the accuracy of the empirically derived correlation, and the ultimate proof of the correlation remains to be obtained by sequencing some of the largest eukaryotic genomes, current data do not seem to rule out a correlation.
Sounds like a body-plan complexity linked scaling of regulatory software to me.
kairosfocus
March 3, 2011, 9:11 AM PDT
PS: Of course, over the past few years, a lot of junk DNA is turning out not to be so junky anymore, but to be involved in regulatory functions. Bricks by themselves do not a house make.
kairosfocus
March 3, 2011, 9:05 AM PDT
Hi Kairos, I was looking at the codon table and noticed something interesting.

a -- The redundancy of the codon to amino acid mapping is typical of forward error correcting (FEC) methods. These methods are absolutely critical in modern digital data transfer and streaming. They are used in one-way data transfers, and that is exactly what we have in this case.

b -- There is variable strength to the error correction capability in the codon to amino acid map. Some amino acids are assigned 6 codons, as if there was a need to make sure these are properly translated. It's possible some amino acids are more important or critical than others. There is some optimization at work here.

c -- The property of the group of amino acids that uses the most codons is polar and small sized. There has to be a good reason why they get the largest share of codon assignments, but I don't know it now.

d -- If the error rate is high, there could be another layer of redundancy to this system, as several amino acids have the same property. Possibly any amino acid of the same property will do the same job in the chain which will make the protein.

e -- If the error rate is low, then this layer of redundancy by group property could be used to fine-tune the protein by modulating folding tension. This could be done by selecting a different sized amino acid from the same property group.

One more thing: I wonder why I was moderated out a couple of days ago. I know I did not say anything inappropriate.
Eugen
March 3, 2011, 7:58 AM PDT
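Eugen's points (a) and (b) about unequal codon assignments can be tallied directly from the standard code table (a sketch; the 64-character string encodes the standard genetic code with codons enumerated in TCAG order, as in the earlier sketch):

```python
from collections import Counter

# Standard genetic code, codons enumerated in TCAG order ('*' = stop)
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

degeneracy = Counter(aa for aa in AA if aa != "*")
for aa, n in degeneracy.most_common():
    print(aa, n)
# Leu (L), Ser (S) and Arg (R) get six codons apiece, while Met (M) and
# Trp (W) get one -- the uneven "error-correction budget" Eugen describes.
```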
F/n 2: genome sizes for bacteria and kin:
The size of Bacterial chromosomes ranges from 0.6 Mbp to over 10 Mbp, and the size of Archael chromosomes range from 0.5 Mbp to 5.8 Mbp . . . . The smallest Archae genome identified thus far is from Nanoarchaeum equtans, a tiny obligate symbiont with a genome size of 0.491 Mbp (491 Kbp). This organism lacks genes required for synthesis of lipids, amino acids, nucleotides, and vitamins, and hence must grow in close association with another organism which provides these nutrients. The smallest Bacterial genome identified thus far is from Mycoplasma genitalium, an obligate intracellular pathogen with a genome size of 0.58 Mbp (580 Kbp). M. genitalium is restricted to the intracellular niche because it lacks genes encoding enzymes required for amino acid biosynthesis and the peptidoglycan cell wall, genes encoding TCA cycle enzymes, and many other biosynthetic genes. In contrast to such obligate intracellular bacteria, free-living bacteria must dedicate many genes toward the biosynthesis and transport of nutrients and building blocks. The smallest free-living organisms have a genome size over 1 Mbp . . . . prokaryotes tend to have very little junk DNA (typically less than 15% of the genome) and eukaryotes have substantial amounts of junk DNA.
kairosfocus
March 3, 2011, 7:22 AM PDT
F/N: Biological evolution, onlookers, is a neatly question-begging and ambiguous term. First, the question of the root of Darwin's tree of life is begged, but we can neatly set aside the problem of getting to a metabolising, self-replicating organism by making the datum line that we will not go there. But, without that root, the tree of chance plus necessity driven evolution has no basis to stand on. By now it is blatant that the only empirically credible explanation for the FSCO/I needed to do that in the simplest life forms is intelligent design. Once that is seen to be designed, there is no reason to try to insist that at later stages, the major body plans were products of chance variation and natural selection. Common design would at once remove all the unnecessary problems with the naturalistic theory of origins, but the problem is that there is an acting prejudice, for a designer of life inferred on the FSCI in life is seen as opening the door to an intelligent designer who may be beyond the cosmos. (Never mind that the evidence of the fine tuning of the cosmos -- also pointed out above and pointedly ignored by the willfully obtuse -- has blasted that door off its hinges long since.)

Evolution in the sense of small empirically observed changes of already functioning organisms in populations is a non-controversial fact. Indeed, Young Earth creationists accept it and see it as part of the design impressed by the Creator. Where the problem comes up is the unwarranted extrapolation to the claimed origin of novel body plans, dozens of times over across the history of life. As I excerpted and discussed yesterday, we are talking 10's to 100's of millions of bases of new functional, integrated information that has to work starting from the embryo. We are invited -- with absolutely no direct empirical evidence, and every type and degree of counter-evidence dismissed -- to take this as having happened by simple accumulation of micro changes. Why? Because of Lewontinian a priori evolutionary materialism. That is not science, it is closed-minded ideology.

When we see ev and the like, we see these are parallel to micro evo, and we are being invited to simply swallow the extrapolation, never mind the issue of getting to novel islands of function, rather than simple hill-climbing per a designed algorithm within an island of function. If the simulators could show us -- notice, this is a stated empirical test -- an ev [Sun 4 binary, 409 k bytes, a bit bigger than but comparable in storage capacity scale to a unicellular genome] that originated by chance variation and trial and error selection, and then was able to move on to optimise a functioning system by hill climbing, that would be different. But they cannot, and on solid thermodynamics analysis, they credibly never will be able to do that. GEM of TKI
kairosfocus
March 3, 2011, 7:08 AM PDT
MG: You are simply insistently repeating long since cogently answered talking points. For instance, the commonly encountered metric of functionally specific bits can be very simply assessed for protein coding DNA, at 2 bits per base; a 300 AA protein coding region in DNA then uses 3 * 300 * 2 = 1,800 bits of functionally specific information; there are hundreds or thousands of such zones in a typical genome. At the next level up, Durston et al -- as already discussed and linked above -- have given measured values in fits for 35 protein families. This was published in the peer-reviewed literature in 2007. But, since many do not know that, this talking point can still be used persuasively. Since you were already warned again above, it is plain that you are not inclined to follow the truth. A sobering conclusion to have to draw, but one that is well-warranted. When you show signs of responsiveness to the truth, then there can be progress. G'day, GEM of TKI
kairosfocus
March 3, 2011, 6:29 AM PDT
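The bookkeeping convention in the comment above reduces to a one-line function (a sketch of the stated 2-bits-per-base, 3-bases-per-amino-acid convention, with the 500 and 1,000 bit thresholds used elsewhere in the thread):

```python
def fsci_bits(num_amino_acids: int) -> int:
    """Functionally specific bits of a protein-coding DNA region,
    per the 2-bits-per-base convention stated in the comment above."""
    return num_amino_acids * 3 * 2   # 3 bases per amino acid, 2 bits per base

bits = fsci_bits(300)
print(bits)                          # 1800, matching the 3 * 300 * 2 figure
print(bits > 500, bits > 1000)       # True True: past both quoted thresholds
```

Note that this metric counts raw coding capacity; whether those bits are "functionally specific" in a way that licenses a design inference is the substance of the MathGrrl/kairosfocus exchange, not something the arithmetic itself settles.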
Collin,
If such an experiment were done, how would we know when we succeeded? I mean, with the monkey/typewriter experiment we would know when we got Shakespeare. But that refers to information that we have already assembled and assigned meaning to. How would we find the "meaning" in the CSI of the experiment?
This is exactly why I'm asking for a mathematically rigorous definition of CSI. The ID claim appears to be that CSI of more than a certain amount cannot be generated by known evolutionary mechanisms. I would like to test that claim, but thus far no one has defined CSI with sufficient rigor that I can implement a measurement of it in software. Are you able to help?
MathGrrl
March 3, 2011, 5:10 AM PDT
kairosfocus,
The precise problem with ev et al as has already been pointed out, is that they put the contained small scale random variation on an island of function and set on the task of hill climbing. Through a bait and switch, this — not disputed by anyone — is then offered as evidence that we can get to the shores of such an island of function by the same means. Not so. In short, question-begging and bait and switch.
I have made it very clear in my posts to this thread that I am discussing biological evolution, not abiogenesis. The only "bait and switch" allegation that could be made is against someone who, either deliberately or through careless reading, tries to change the topic to abiogenesis. DrBot has clarified this as well, so I'm sure that my position was stated clearly. To be sure I understand your position, please let us know if you believe that CSI can be generated by known evolutionary mechanisms or not.
MathGrrl
March 3, 2011, 5:09 AM PDT
kairosfocus,
Further glancing down the list of posts since 140, it becomes evident that you are being willfully obtuse.
What is evident is that you are being willfully uncivil to someone who is not just interested in understanding the positive evidence for ID, but who is also willing to go to the effort of testing the concepts. That's hardly the response one expects from proponents who have confidence in their theory. From this point forward I will be ignoring your rudeness and focusing on the empirical evidence. That doesn't mean I don't notice your behavior.
CSI has long since been defined, and quantified, as you can see from even the UD weak argument correctives, top right this and every UD page. FSCI, is a very simple metric, the same sort of functional bits you use every time you say this file is 169 kbytes.
If this is so, please demonstrate how to objectively calculate CSI for a real biological system and talk me through how I could make this measurement for digital organisms in evolutionary simulations. I have read all the references provided and had a long discussion with gpuccio on Mark Frank's blog without getting this simple question answered. I'm willing to test your hypothesis, but I need rigorous definitions of your terms to do so. This is not an unreasonable request. I'm frankly surprised that examples aren't readily available, given how much CSI and its variants are touted as indicators of intelligent agency.
MathGrrl
March 3, 2011, 5:07 AM PDT
Upright BiPed,
Please point to a single one of the “hundreds of thousands of peer-reviewed articles” that answers the “how” and “when” and “where” of Life’s origins.
As I thought I made clear, I'm discussing biological evolution, not abiogenesis. There is extensive literature documenting the research into what, when, where, and how of biological evolution, as anyone who participates in discussions such as this should know. Abiogenesis is also a fertile research field, albeit less mature than that of biological evolution. At least two papers on this topic have been referenced here on UD in the past few days. To find more, go to Pubmed and enter "biogenesis" in the search field. I see over fifteen thousand papers in the results. Can you provide references to any empirical evidence detailing the who, what, when, where, and how of ID?
MathGrrl
March 3, 2011, 5:05 AM PDT
kairosfocus,
In short, MG, the time for rhetorical gamesmanship through stunts like you carried out at 140 is over.
Your tone and implication go beyond uncivil to positively rude. There is no gamesmanship on my part -- I am genuinely interested in learning how to objectively measure CSI. You can read the thread on Mark Frank's blog to see how much time I've invested in attempting to do so. If you don't wish to assist me, simply say so, but please keep your baseless insults to yourself.
MathGrrl
March 3, 2011, 5:04 AM PDT
kairosfocus,
Re 140, do let me know, is it not commonly known that Ev et al are programs written by individuals? is it not known that these programs have in them digitally coded, symbolic functional info beyond 125 bytes? Does not that show that again the FSCI criterion reliably identifies such an entity as designed, per the design inference, confirmed by direct knowledge? In short, your remark just above comes across as willfully obtuse.
I would suggest that your remarks come across as projection. I was very clear in my post to distinguish between the simulator and the digital organisms that arise in the simulation. The ev, Tierra, and Steiner Problem simulators are obviously designed by programmers. The digital organisms that evolve within them are the product of simplified versions of the evolutionary mechanisms we observe in real world biological systems. Understanding the difference is essential to rationally discuss the results of these GAs. I would further note that you have completely ignored my very clear response about CSI and its variants. I'll copy it again here for your convenience: Thus far neither dFSCI nor any other CSI variant has been rigorously defined, and no demonstrations of how to calculate these metrics for real world biological systems have been provided. Given that level of definition, they are essentially meaningless terms. Would you care to rigorously define CSI (or your FSCI variant) such that I can measure it in an evolutionary simulation?
MathGrrl
March 3, 2011, 5:03 AM PDT
F/N 2: Also, please note that hill climbing algorithms are just that: intelligently designed algorithms. That would also hold for life forms: they are obviously designed to adapt within certain limits, and the controlled random search used by the immune system is a paradigm of how that can use chance as a part of the algorithm.
kairosfocus
March 3, 2011, 4:53 AM PDT
F/N: Dr Bot, please recall: logically speaking, chance can produce any contingent outcome. The problem is that logical possibility is not to be confused with empirical credibility. For, when the config spaces get large enough -- and 1,000 bits [or 1.07 * 10^301 possibilities] is a good threshold -- special configs or clusters [i.e. the FSCI ones] run into the issue that a search, even on the gamut of the whole observed cosmos [10^150 possible Planck-time states, a Planck time being 10^20 times faster than a strong nuclear force interaction], is so maximally unlikely to scan enough of the space to make a difference that the only reasonable, empirically well supported explanation of FSCI is intelligence. That is why, sight unseen, you explain this post by intelligence, not lucky noise on the Internet.
kairosfocus
March 3, 2011, 4:51 AM PDT
