
On the non-evolution of Irreducible Complexity – How Arthur Hunt Fails To Refute Behe

I do enjoy reading ID’s most vehement critics, both in formal publications (such as books and papers) and on the somewhat less formal Internet blogosphere. Part of the reason is that it is reassuring to observe the vacuous nature of many of the critics’ attempted rebuttals to the challenge ID poses to neo-Darwinism, and the way the theory’s sheer lack of explanatory power is compensated for by the religious ferocity of the associated rhetoric (to paraphrase Lynn Margulis). The prevalent pretense that the causal sufficiency of neo-Darwinism is an open-and-shut case (when no such open-and-shut case for the affirmative exists) never ceases to amuse me.

One such forum where esteemed critics lurk is the Panda’s Thumb blog, a website devoted to holding the Darwinian fort and endorsed by the National Center for Selling Evolution Science Education (NCSE). Since many of the Darwinian heavy guns blog for this website, we can conclude that, if demonstrably faulty arguments are standard fare there, the front-line Darwinism defense lobby is in deep water.

Recently, someone referred me to two articles (one, two) on the Panda’s Thumb website (from back in 2007), by Arthur Hunt (professor in the Department of Plant and Soil Sciences at the University of Kentucky). The first is entitled “On the evolution of Irreducible Complexity”; the second, “Reality 1, Behe 0” (the latter posted shortly after the publication of Behe’s second book, The Edge of Evolution).

The articles purport to refute Michael Behe’s notion of irreducible complexity. But, as I intend to show here, they do nothing of the kind!

In his first article, Hunt begins,

There has been a spate of interest in the blogosphere recently in the matter of protein evolution, and in particular the proposition that new protein function can evolve. Nick Matzke summarized a review (reference 1) on the subject here. Briefly, the various mechanisms discussed in the review include exon shuffling, gene duplication, retroposition, recruitment of mobile element sequences, lateral gene transfer, gene fusion, and de novo origination. Of all of these, the mechanism that received the least attention was the last – the de novo appearance of new protein-coding genes basically “from scratch”. A few examples are mentioned (such as antifreeze proteins, or AFGPs), and long-time followers of ev/cre discussions will recognize the players. However, what I would argue is the most impressive of such examples is not mentioned by Long et al. (1).

There is no need to discuss the cited Long et al. (2003) paper in any great detail here, as this has already been done by Casey Luskin here (see also Luskin’s further discussion of Anti-Freeze evolution here), and I wish to concern myself with the central element of Hunt’s argument.

Hunt continues,

Below the fold, I will describe an example of de novo appearance of a new protein-coding gene that should open one’s eyes as to the reach of evolutionary processes. To get readers to actually read below the fold, I’ll summarize – what we will learn of is a protein that is not merely a “simple” binding protein, or one with some novel physicochemical properties (like the AFGPs), but rather a gated ion channel. Specifically, a multimeric complex that: 1. permits passage of ions through membranes; 2. and binds a “trigger” that causes the gate to open (from what is otherwise a “closed” state). Recalling that Behe, in Darwin’s Black Box, explicitly calls gated ion channels IC systems, what the following amounts to is an example of the de novo appearance of a multifunctional, IC system.

Hunt is making big promises. But does he deliver? Let me briefly summarise the gist of Hunt’s argument, and then weigh in on it.

The cornerstone of Hunt’s argument is the gene T-urf13, which, contra Behe’s delineated ‘edge’ of evolution, is supposedly a de novo mitochondrial gene that evolved very quickly from other genes specifying rRNA, together with some non-coding DNA elements. The gene specifies a transmembrane protein that facilitates the passage of hydrophilic molecules across the mitochondrial membrane in maize, opening only when bound on the exterior by particular molecules.

The protein is specific to the mitochondria of maize with Texas male-sterile cytoplasm, and has been implicated in causing male sterility and sensitivity to T-cytoplasm-specific fungal diseases. Two parts of the T-urf13 gene are homologous to other parts of the maize genome, with a further component being of unknown origin. Hunt maintains that this proves the gene evolved by Darwinian-like means.

Hunt further maintains that T-urf13 involves at least three “CCCs” (chloroquine-complexity clusters: Behe’s benchmark in The Edge of Evolution for a protein modification with odds on the order of 1 in 10^20; recall his argument that a double “CCC” is unlikely to be feasible by a Darwinian pathway). Two of these “CCCs”, Hunt argues, come from the binding of each subunit to at minimum two other subunits in order to form the heteromeric complex in the membrane. This entails that each subunit has at minimum two protein-binding sites.

Hunt argues for the presence of yet another “CCC”:

[T]he ion channel is gated. It binds a polyketide toxin, and the consequence is an opening of the channel. This is a third binding site. This is not another protein binding site, and I rather suppose that Behe would argue that this isn’t relevant to the Edge of Evolution. But the notion of a “CCC” derives from consideration of changes in a transporter (PfCRT) that alter the interaction with chloroquine; toxin binding by T-urf13 is quite analogous to the interaction between PfCRT and chloroquine. Thus, this third function of T-urf13 is akin to yet another “CCC”.

He also notes that,

It turns out that T-urf13 is a membrane protein, and in membranes it forms oligomeric structures (I am not sure if the stoichiometries have been firmly established, but that it is oligomeric is not in question). This is the first biochemical trait I would ask readers to file away – this protein is capable of protein-protein interactions, between like subunits. This means that the T-urf13 polypeptide must possess interfaces that mediate protein-protein interactions. (Readers may recall Behe and Snoke, who argued that such interfaces are very unlikely to occur by chance.)

[Note: The Behe & Snoke (2004) paper is available here, and their response (2005) to Michael Lynch’s critique is available here.]

Hunt tells us that “the protein dubbed T-urf13 had evolved, in one fell swoop by random shuffling of the maize mitochondrial genome.” If three CCCs really evolved in “one fell swoop” by specific but random mutations, then Behe’s argument is in trouble. But does any of the research described by Hunt make any progress towards demonstrating that this is even plausible? Short answer: no.

Hunt does have a go at guesstimating the probabilistic plausibility of such an event of neo-functionalisation taking place. He tells us, “The bottom line – T-urf13 consists of at least three ‘CCCs’. Running some numbers, we can guesstimate that T-urf13 would need about 10^60 events of some sort in order to occur.”
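It is worth pausing on where that 10^60 comes from: one CCC is an event Behe pegs at odds of roughly 1 in 10^20, and treating three such requirements as independent simply multiplies the odds. A minimal sketch of the arithmetic (the 1-in-10^20 figure is Behe’s empirical estimate; the independence assumption is the naive one at issue):

```python
from math import log10

CCC_ODDS = 1e20  # Behe's empirical estimate: ~1 CCC per 10^20 organisms
N_CCCS = 3       # Hunt counts at least three CCCs in T-urf13

# Treating the three CCCs as independent, the required odds multiply
events_needed = CCC_ODDS ** N_CCCS
print(f"events needed: 10^{log10(events_needed):.0f}")  # 10^60

# Hunt's comparison figure (quoted below): the event size actually available
events_available = 1e30
print(f"shortfall: 10^{log10(events_needed / events_available):.0f}")  # 10^30
```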

Look at what Hunt concludes:

Now, recall that we are talking about, not one, but a minimum of three CCC’s. Behe says 1 in 10^60, what actually happened occurred in a total event size of less than 10^30. Obviously, Behe has badly mis-estimated the “Edge of Evolution”. Briefly stated, his “Edge of Evolution” is wrong. [Emphasis in original]

Readers trained in basic logic will take quick note of the circularity involved in this argumentation. Does Hunt offer any evidence that T-urf13 could have plausibly evolved by a Darwinian-type mechanism? No, he doesn’t. In fact, he casually dismisses the mathematics which refutes his whole argument. Here we have a system with a minimum of three CCCs, and since he presupposes as an a priori principle that it must have a Darwinian explanation, this apparently refutes Behe’s argument! This is truly astonishing argumentation. Yes, certain parts of the gene have known homologous counterparts. But, at most, that demonstrates common descent (and even that conclusion is dubious). But a demonstration of homology, or common ancestral derivation, or a progression of forms is not, in and of itself, a causal explanation. Behe himself noted in Darwin’s Black Box, “Although useful for determining lines of descent … comparing sequences cannot show how a complex biochemical system achieved its function—the question that most concerns us in this book.” Since Behe already maintains that all life is derivative of a common ancestor, a demonstration of biochemical or molecular homology is not likely to impress him greatly.

How, then, might Hunt and others successfully show Behe to be wrong about evolution? It’s very simple: show that adequate probabilistic resources existed to facilitate the plausible origin of these types of multi-component-dependent systems. If, indeed, each fitness peak is separated from the next by more than a few specific mutations, it remains difficult to envision how the Darwinian mechanism might adequately facilitate the transition from one peak to another within any reasonable time frame. Douglas Axe, of the Biologic Institute, showed in a recent paper in the journal BIO-Complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalisation. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not inclusive of the duplication itself). Another study, published in Nature in 2001 by Keefe & Szostak, documented that more than a million million random sequences had to be searched in order to stumble upon a functioning ATP-binding protein, a protein substantially smaller than the transmembrane protein specified by T-urf13. Douglas Axe has also documented (2004), in the Journal of Molecular Biology, the prohibitive rarity of functional enzymatic domains within the vast sea of combinatorial sequence space, using a 150 amino-acid domain of beta-lactamase.
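The Keefe & Szostak figure lends itself to a quick back-of-envelope check: if functional ATP-binders occur at a rate of roughly one per 10^12 random sequences (the “million million” above), the chance of finding at least one in a library of N sequences is 1 − (1 − p)^N. A minimal sketch, assuming independent draws (the one-in-10^12 rate is taken from the figure cited above):

```python
from math import expm1, log1p, log10

p = 1e-12  # assumed rate: ~1 functional ATP-binder per 10^12 random sequences

def p_at_least_one_hit(p: float, n: float) -> float:
    """P(at least one functional sequence among n independent random draws).

    1 - (1 - p)^n, computed via log1p/expm1 to avoid underflow for tiny p.
    """
    return -expm1(n * log1p(-p))

for n in (1e9, 1e12, 1e15):
    print(f"library of 10^{log10(n):.0f}: P(hit) = {p_at_least_one_hit(p, n):.3f}")
# library of 10^9:  P(hit) = 0.001
# library of 10^12: P(hit) = 0.632
# library of 10^15: P(hit) = 1.000
```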

What, then, can we conclude? Contrary to his claims, Hunt has failed to provide a detailed and rigorous account of the origin of T-urf13. Hunt also supplies no mathematical demonstration that the de novo origin of such genes is sufficiently probable that it might be justifiably attributed to an unguided or random process, nor does he demonstrate that a step-wise pathway exists in which novel utility is conferred at every step (each step separated by no more than one or two mutations) along the way to the emergence of the T-urf13 gene.

The Panda’s Thumb are really going to have to do better than this if they hope to refute Behe!

Comments
MathGrrl: “I found your calculation related to titin to be confusing, frankly. You didn’t provide a mathematically rigorous definition of CSI that I saw and you didn’t go into as much detail as did vjtorley.”

Seriously? Which part did you have trouble with, as compared to the definition of CSI that was provided by Dembski in his “Specification …” paper? Did you actually read through all the comments I provided in those links? Do you have any questions that were not brought up in those links that I didn’t provide answers to? BTW, for the sake of argument, since I haven’t had the time to go through vjtorley’s calculations, I accept his conclusion since it is perfectly consistent with what I have been trying to explain to you. Evolutionary Algorithms will indeed produce CSI, but only if CSI previously exists. But EAs will not generate CSI *where none exists.* Now what complaint do you have?

MathGrrl: “If you believe that your version of CSI is equivalent to what Dembski has published and you further believe that it is a reliable indicator of intelligent agency, please provide your rigorous definition and demonstrate how you arrive at a different answer than did vjtorley for the scenario he analyzed.”

1. I would probably arrive at the same answer as vjtorley … and if not the exact same answer, at least the same conclusion. That should have been obvious to you if you actually read through the links I provided for you, which is what you seemed to state you did do.

2. I defended, in greater depth than anyone here, my calculation of CSI as being the same as Dembski’s calculation in one of the Telic Thoughts threads that I linked to. I provided exact quotes from Dembski’s paper along with his examples, comparing them to my own, showing how I have calculated CSI in the same way. Did you or did you not read through that link? If so, what problems do you have?

MathGrrl: “Applying your definition to the other three scenarios I described would also be very helpful to others attempting to recreate your calculations.”

Ask me again this summer, if you are truly interested, and we will go through them together. At the moment, I don’t have time for further calculations. In fact, I’ve already provided at least one (of the protein titin) with detailed explanation, and it is now your turn. You are actually starting to sound like a “creationist”: no matter how many examples myself and others such as KF, vjtorley, and Dembski give, it “isn’t good enough.” I don’t have time for those games.

MathGrrl: “Let’s get right down to the math, right here in this thread”

Sure, feel free to bring up the problems you have *that I have not already responded to in those links* with my previous calculation. In the end, no one here has shown the origination of CSI without previous CSI. That is the point that I have been attempting to get you to understand. I am not arguing against anyone showing that evolution can produce a pattern that can be measured as containing CSI. I agree, and have told you on at least a few occasions, that an EA can produce a CSI pattern as an outcome. However, that CSI is only produced from a sufficiently complex program that itself, as has already been explained to you by myself and others (especially KF), also contains CSI, since its structure is at least on the same level of complex specificity as our comments on this blog.

If you disagree with me, and in order to show a flaw in my argument, you will have to actually show a situation where CSI was produced from scratch, or where an EA was generated which then in turn produced CSI, by only law+chance, absent intelligent input or any previous CSI. My aforementioned experiment will test this concept of the inability of law+chance to *generate* CSI (when none existed previously) nicely. If you really had a case either for your position or against mine, you would be pulling out the evidence of such a simulation and showing the calculations just like the rest of us have been providing calculations. The fact that you refuse to do so, after the relevant concepts have been explained to death and measurements provided by at least 3 sources, shows me that all you are interested in is simple dismissal of our arguments and the continual propagation of misinformation. I’m done here unless you can actually bring a critique to the table that you are willing to defend with calculations and the experiment that I suggested, or if you are willing to articulate an actual concern, that I haven’t already covered, with any of my explanations or calculations. In fact, it appears that you need to, for the second time, seriously read through those links I provided for you before you come back here to continue to “slough off” and simply dismiss and ignore almost everything that myself and others have explained and continue to explain.

CJYman
March 17, 2011, 10:11 AM PDT
And Jon, exactly what physical evidence has been presented to suggest that information has increased over and above what was already present in life? NONE! If you do know of any unambiguous cases of the functional complexity of an organism increasing above the two protein-protein binding site limit of Dr. Behe, please do tell. For me the proof is in the pudding, so MathGrrl can hypothesize all day long in her imaginary world of evolutionary algorithms (which were designed by humans, by the way), but that bothers me not in the least, for I know of the extreme poverty of evidence she faces in real life for actually demonstrating Darwinian precepts to be true. For me empirical evidence overrules imagination all day long, as it should for anyone. Ask yourself, Jon: if Darwinian evolution were true, why in blue blazes are we not flooded with thousands upon thousands of examples when we request proof? Please, Jon, tell me exactly why MathGrrl is reduced to arguing for extremely trivial gains in functional information within human-engineered evolutionary algorithms? Does it not strike you in the least bit odd that she would even have to argue from such a diminished position in the first place? Should she not instead be arguing from countless examples in the real world that she wishes she could produce if Darwinism were true?

bornagain77
March 17, 2011, 09:25 AM PDT
#394 UB 1) So can you confirm that, as far as you are concerned, the only part of life to contain information, and therefore CSI, is DNA? The bacterial flagellum and the immune system are not examples of CSI. 2) Assuming that is true, I assume the symbols in a string of DNA are the bases. What do they symbolise?

markf
March 17, 2011, 09:23 AM PDT
MG, 386: There you go again. You have had history [Orgel et al], you have had concepts, you have had verbal definitions, you have had quantitative metrics and calculations, you have been shown how your own posts instantiate the phenomenon, and you simply sweep them away as "hundreds of words." I am sorry, I now conclude this is a case of none being so blind as one who WILL not see. Your problem -- pardon directness -- is not want of adequate concept and models for CSI, it is the fallacy of the closed, ideologised mind. I suggest you start here to fix it.

I have a constitutional crisis brewing, an economic mess, an up-coming budget issue, and a regional sustainable energy challenge now being compounded by the implications of issues linked to the mess playing out in Japan and how that is coming across on our TV etc. screens. Dr Torley (who graciously gave up hours of his time on end to try to help you) is IN Japan. I think after nearly 400 posts, a lot of effort has been expended to try to help you, including exactly the sorts of definitions and calculations you demand again. The evidence is, you don't want to be helped; you only want to throw up selectively hyperskeptical objections to comfort yourself with the idea that CSI is ill defined and meaningless.

I notice that, after dismissing the concept of CSI as meaningless, and being confronted with Orgel's presentation of the same concepts, you have ducked the challenge of explaining to us whether or not Orgel was meaningless in his remarks, and why. That tells me all I need to know . . . and I don't need to use the T-word. Good day, madam. GEM of TKI

kairosfocus
March 17, 2011, 09:09 AM PDT
Mark, Haemoglobin is the product of information, in the same way a gear is. It is produced through information in order to serve a function.

Upright BiPed
March 17, 2011, 08:45 AM PDT
MF: Plainly, the info in a protein is a copy, through mRNA, of the info in DNA. DNA is the primary info source, and proteins are the functional, working expression of that info. They of course back-encode to the DNA code that specified them [up to a certain degree of redundancy], but that is not additional info.

kairosfocus
March 17, 2011, 08:29 AM PDT
MathGrrl, I'm no mathematician. I just wanted to clarify what's being defined. If specification is not a quantity but an either/or property, then the question is whether information can evolve from a non-specified to a specified form, correct? I don't think it can, but I wouldn't know how to show that mathematically. Speaking more philosophically, if specification is not a quantity, then information should not be able to be "kind of" or "partly" specified. That would be a big obstacle to any evolutionary model.

QuiteID
March 17, 2011, 08:17 AM PDT
bornagain, Your latest comment to me isn't any more comprehensible than your previous efforts. You have already admitted that you are not a mathematician and don't fully understand the concepts you are speaking so confidently about. Instead, you state that you rely on the expert opinion of others. Yet, as this comment thread lays bare, those experts don't even agree among themselves what CSI is (some even have completely different acronyms), how it is calculated, or even whether normal non-teleological biological processes can generate CSI. And none, save vjtorley (who seems to agree that non-teleological Darwinian processes can create CSI), have even attempted a calculation. This whole thread ought to be disconcerting for the ID supporter. It certainly is for me.

jon specter
March 17, 2011, 08:04 AM PDT
UB at 383: "The symbols I am referring to are those contained within nucleic sequencing (genetic code), I think that was fairly obvious from my comments." So does that mean a protein such as haemoglobin does not contain information? Is DNA the only part of life that contains information?

markf
March 17, 2011, 07:52 AM PDT
F/N 4: Since much of the above is wranglings about definitions, here is Wiki in the guise of admission against interest: ________________ >> A definition is a passage that explains the meaning of a term (a word, phrase or other set of symbols), or a type of thing. The term to be defined is the definiendum (plural definienda). A term may have many different senses or meanings. For each such specific sense, a definiens (plural definientia) is a cluster of words that defines that term . . . . Like other words, the term definition has subtly different meanings in different contexts. A definition may be descriptive of the general use meaning, or stipulative of the speaker's immediate intentional meaning. For example, in formal languages like mathematics, a 'stipulative' definition guides a specific discussion. A descriptive definition can be shown to be "right" or "wrong" by comparison to general usage, but a stipulative definition can only be disproved by showing a logical contradiction [3]. A precising definition extends the descriptive dictionary definition (lexical definition) of a term for a specific purpose by including additional criteria that narrow down the set of things meeting the definition . . . . An intensional definition, also called a coactive definition, specifies the necessary and sufficient conditions for a thing being a member of a specific set. Any definition that attempts to set out the essence of something, such as that by genus and differentia, is an intensional definition. An extensional definition, also called a denotative definition, of a concept or term specifies its extension. It is a list naming every object that is a member of a specific set. So, for example, an intensional definition of 'Prime Minister' might be the most senior minister of a cabinet in the executive branch of government in a parliamentary system. An extensional definition would be a list of all past, present and future prime ministers. One important form of the extensional definition is ostensive definition. This gives the meaning of a term by pointing, in the case of an individual, to the thing itself, or in the case of a class, to examples of the right kind. So you can explain who Alice (an individual) is by pointing her out to me; or what a rabbit (a class) is by pointing at several and expecting me to 'catch on' . . . . a genus of a definition provides a means by which to specify an is-a relationship, and the non-genus portions of the differentia of a definition provides a means by which to specify a has-a relationship. When a system of definitions is constructed with genera and differentiae, the definitions can be thought of as nodes forming a hierarchy or—more generally—a directed acyclic graph; a node that has no predecessors is a most general definition; each node along a directed path is more differentiated (or more derived) than its predecessors, and a node with no successors is a most differentiated (or a most derived) definition. When a definition, S, is the tail of all of its successors (that is, S has at least one successor and all of the direct successors of S are most differentiated definitions), then S is often called a species and each of its direct successors is often called an individual or an entity; the differentia of an individual is called an identity. >> ________________ In short, not all definitions are of the same order, and different types of definition have both meaningfulness and practical or analytical utility. 
As well, concept and cases come before precising definitions and descriptive models, which is where the mathematical model comes from. And, in our context, the CSI, FSC and FSCI models given above are just that: models responsive to a reality -- function + specificity + complexity in an organisation that has to meet a criterion that is observably functional and specifying -- commonly encountered in language, in technology and in the living cell. This also brings up the further factor: all of this is based on our experience of the world as active, intelligent observers and designers. So, we can begin from that base of experience in developing descriptions, models, definitions, theories etc.

kairosfocus
March 17, 2011, 07:15 AM PDT
F/N 3: To see how applicable that nodes and arcs view is, let us think about sound, speech and alphabetic writing. Sound is analogue: compressions and rarefactions of the air, or something like that, that propagate. To get to speech, we have sufficiently distinct vocal tract sounds that can be combined as clusters of phonemes. Phonemes are then represented by essentially arbitrary symbols; like what became our A, which I gather started out as a stylised ox-head or something like that. Sets of distinct symbols, chained in space as a string, then represent phonemes, which are discretised from sounds: w-o-r-d-s. Such strings of symbols can then be transformed into bits, by coding schemes such as ASCII. But here we are: digital from analogue, strung together in string structures. And we then focus analytically on the strings. Do you see why this can also be used for the wire-mesh for a 3-d object, and for the exploded diagram that shows how components are to be integrated to form a functional entity? Then, by specifying rules and symbols, we can describe the blueprint as a nodes and arcs mesh [and cluster of vectors, i.e. a matrix]. So now we have a way to use the dFSCI analysis to address complex organised functional entities.

kairosfocus
March 17, 2011, 07:04 AM PDT
KF, thank you for standing in the face of absurdity, and pointing it out. You and others have repeatedly engaged the mathematical concepts of CSI in addressing MathGrrl's questions. My problem, however, doesn't come from the mathematics of CSI, but instead comes from MathGrrl's ultimate conclusion that "evolutionary mechanisms can create CSI". I have made my case, but she refuses to address it.

Upright BiPed
March 17, 2011, 07:00 AM PDT
kairosfocus, Hundreds of words are not as valuable in this context as a single calculation. You can explain for as long as you like how wonderful CSI is as a concept, but until you go to the level of effort that vjtorley did to actually clarify the definition and show how to compute it, your claims are unfounded. All I've been asking for throughout this discussion is a mathematically rigorous definition of CSI and some detailed examples of how to calculate it. That is not an unreasonable request. As I noted in my post 238, if I asked for similar detail about a metric proposed by one of my colleagues, she'd fill whole whiteboards with far more than I requested. vjtorley has set the bar here. Are you willing to try to clear it?

MathGrrl
March 17, 2011, 06:50 AM PDT
QuiteID,
I thought “specified” is not measurable in the same way that “information” is. I thought information was either specified or non-specified. In other words, you measure the same thing in either case, you just determine first whether it’s specified or not. The same may be true with complexity, as suggested by the term CSI. After all, “complex” and “specified” are characteristics of the thing being measured — information. They’re either there or they’re not.
You've put your finger on a couple of the parts of the definition of CSI that I find least mathematically rigorous. These issues are a big part of why I raised the questions I have on this thread. If ID proponents are going to claim, as they do, that CSI is a reliable metric for detecting the intervention of intelligent agency, they must define that metric with sufficient rigor that it can be objectively measured by anyone interested in doing so. My goal, as I've explained repeatedly throughout this discussion, is to actually test the claims made about CSI. The four scenarios I described in my post 177 above are my attempt to get enough information to be able to perform that calculation. If you are willing to provide the level of detail that vjtorley did, I'm very interested in looking at your calculations.

MathGrrl
March 17, 2011, 06:41 AM PDT
F/N 2: MG, are you willing to assert that Orgel's remarks above -- which are the basis for both the broader term CSI and the more focussed one FSCI; just notice: "Organization, then, is functional[ly specific] complexity and carries information" -- are "meaningless"? Why or why not?

kairosfocus
March 17, 2011, 06:40 AM PDT
Mark at 376: The symbols I am referring to are those contained within nucleic sequencing (genetic code). I think that was fairly obvious from my comments.

Upright BiPed
March 17, 2011, 06:38 AM PDT
F/N: It is helpful to remind ourselves, again, of Orgel on CSI -- i.e. the general descriptive term [and notice how that term in its root OOL context naturally invites a focus on FSCI as key subset] is prior to the mathematical models, and antedates Dembski's involvement by 20 or more years: ____________ >> 'Organized' systems are to be carefully distinguished from 'ordered' systems. Neither kind of system is 'random,' but whereas ordered systems are generated according to simple algorithms [[i.e. "simple" force laws acting on objects starting from arbitrary and common-place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external 'wiring diagram' with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic 'order.' [["The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion," Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.] >>

kairosfocus
March 17, 2011, 06:35 AM PDT
All this, in order to make compounding protests on the alleged lack of adequate definition of CSI! I am sorry, MG, but your remarks above are patently self-referentially absurd [using what you deny in order to object to it], and reflect willful obtuseness and selective hyperskepticism. This, you have sustained for WEEKS. I don't doubt that elsewhere you are trespassing on the patience and efforts above to try to make Dr Torley and others seem to not know what they are talking about, and onwards that the core concepts of design theory are ill defined nonsense. I shall be direct: SUCH IS A SELF-REFERENTIALLY ABSURD STRAWMAN OF YOUR OWN MAKING. I think the time has more than come for you to start from basics and get your own concepts right:

a: What is a digital vs an analogue quantity?

b: What is a bit or binary digit?

c: For a cluster of n bits, how many possible states or configurations are there?

d: For 1,000 bits how many are there?

e: How does this compare to the ~10^150 Planck time states for the 10^80 atoms of our observed cosmos across a working life of 10^25 s, or about 50 million times the time said to have elapsed since the big bang?

f: What is a symbolic code, and how does it work?

g: For a complex, digital, coded, symbolic, linguistic system like this post, what would happen very rapidly to function if more and more random changes were introduced in the coded characters? For algorithmically functional coded systems [i.e. with descriptive and prescriptive information that makes an executing machine do something]?

h: Thus, does it make sense to speak of islands of function in the space of possible configs? Why or why not?

i: Once we are past 1,000 bits, is it reasonable that any undirected process on the gamut of our cosmos would generate dFSCI?

j: Has it ever been observed that a process of chance plus mechanical necessity starting from an arbitrary initial configuration has constructed an algorithmically or linguistically functional system beyond 1,000 bits storage capacity? (Systems that start within an island of function and per an algorithm hill-climb to better performance are NOT cases in point.)

k: Have we seen intelligent beings create such dFSCI-rich systems?

l: Is this the routine and empirically reliable source for such systems, once we directly know the source?

m: Taking the infinite monkeys theorem, and in light of the statistical thermodynamics principles thereby illustrated, is it analytically reasonable to expect that the pattern just outlined, that the routine reliable source of dFSCI is intelligence, will be overturned observationally?

n: Is such dFSCI then a reliable sign of design? Why or why not?

o: Other complex functional [parts brought together to achieve function] entities that do not use codes directly can be reduced to such, often by a nodes, interfaces and arcs mesh with specifications, where the structure of yes/no decisions to construct the entity gives a bit metric of the blueprint. Can these be seen as represented by such structured codes? Why or why not?

p: Is or is not this broader FSCI -- where the specificity is constrained by and relates to the function [i.e. if the specificity and the function are not coupled, the entity is not FSCI] -- a sign of design? Why or why not?

q: When therefore you see such dFSCI AND related functional, specific complex organisation in the living cell, is or is this not a sign pointing to design as the best abductive explanation? Why or why not?

GEM of TKI

kairosfocus
March 17, 2011, 06:18 AM PDT
MG: Do you understand the absurdity of:

1 --> Using a digital computer [even if a smartphone etc., that is what it is . . .] to compose and post an alphanumeric textual message in contextually evasive English

2 --> Thus producing 588 7-bit [128 state] functionally specific ASCII characters, of ~1.1 × 10^1,239 possible configs, vastly beyond the search capacity of the observed cosmos (but easily within the reach of mind, per massive observation)

3 --> Which can be described specifically, analysed, calculated upon and the like using standard, routinely used simple digital communication techniques (as has just been done)

4 --> Where, in light of say the functionality and structure of DNA and its genetic code, such dFSCI is the materially relevant subset of complex, specified information [CSI],

5 --> CSI here being used as a general description that is specified quantitatively in various models, such as the simple FSCI metric already presented above, by Durston et al in their FITS metric for FSC, and Dembski's metric from his Specification paper (and other models) as well as other adaptations [metrics in real world contexts may have different approaches and models that are good enough once fit for purpose]

6 --> Where all along the Dembski metric on CSI has been in the UD WAC's, top right, this and every UD page, no. 27:

>> A more general approach to the definition and quantification of CSI can be found in a 2005 paper by Dembski: "Specification: The Pattern That Signifies Intelligence". For instance, on pp. 17-24, he argues: define φS(T) as . . . the number of patterns for which [agent] S's semiotic description of them is at least as simple as S's semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history. [31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 logarithm of the conditional probability P(T|H) multiplied by the number of similar cases φS(T) and also by the maximum number of binary search-events in our observed universe 10^120]

χ = –log2[10^120 · φS(T) · P(T|H)]

To illustrate, consider a hand of 13 cards with all spades, which is unique. 52 cards may have 635 × 10^9 possible combinations of 13-card hands, giving odds of 1 in 635 billion as P(T|H). Also, there are four similar all-of-one-suit hands, so φS(T) = 4. Calculation yields χ = –361, i.e. < 1, so that such a hand is not improbable enough that the – rather conservative – χ metric would conclude "design beyond reasonable doubt." (If you see such a hand in the narrower scope of a card game, though, you would be very reasonable to suspect cheating.)

Debates over Dembski's models and metrics notwithstanding, the basic point of a specification is that it stipulates a relatively small target zone in so large a configuration space that the reasonably available search resources — on the assumption of a chance-based information-generating process — will have extremely low odds of hitting the target. So low, that random information generation becomes an inferior and empirically unreasonable explanation relative to the well-known, empirically observed source of CSI: design. >>

7 --> Where also, biologically relevant cases and onward adaptations have been given above in this thread and elsewhere.

[ . . . ]

kairosfocus
March 17, 2011, 06:17 AM PDT
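As a numerical cross-check of the card-hand illustration in the comment above, here is a minimal Python sketch of the quoted χ metric, using only the figures given in that quote (10^120, φS(T) = 4, and 13-card hands from a 52-card deck):

```python
from math import comb, log2

# Figures from the quoted WAC example
n_hands = comb(52, 13)        # 635,013,559,600 possible 13-card hands (~635 * 10^9)
p_T_given_H = 1 / n_hands     # P(T|H): chance of one specific hand, e.g. all spades
phi_S_T = 4                   # four equally simple "all one suit" hands
bit_ops_bound = 10**120       # Seth Lloyd's bound on bit operations in the universe

chi = -log2(bit_ops_bound * phi_S_T * p_T_given_H)
print(round(chi))  # -361: well below 1, so no design inference from the hand alone
```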
jon, though the universe is shown to be Theistic in its basis from quantum mechanics, as opposed to Deistic or materialistic, as the universe could have been found to be, that does not preclude one from seeing design that was further implemented into the universe by God. The ONLY thing that acknowledging the truth of a Theistic universe does is to show that to even ask if materialistic 'non-teleological' processes were ever involved in creating the unparalleled levels of design we find in life is completely nonsensical, since materialism is in fact falsified as an explanation for reality in the first place. Furthermore, Jon, all normal 'non-teleological' adaptations we observe in life, in which God is merely sustaining life and the universe, always come at a cost of the information that was originally encoded in the life form. If we ever did see the functional complexity of a life form increase over its original 'optimal' form, we could be sure that God intervened, since the universe is shown to be theistic in its basis!

bornagain77
March 17, 2011, 05:52 AM PDT
Kuartus:
You believe that as a result of the universe being designed, then everything IN the universe must also be designed.
Well, that was essentially the argument that bornagain was using to wave off my questions. If you are not making that argument, then perhaps we can make progress here.
I hope this clears things up.
It clears up the fact that you are not making the same argument as bornagain. However, it still does not address my original question to bornagain of how one differentiates between a teleological Darwinian process and a non-teleological process. Can you take a run at that?

jon specter
March 17, 2011, 05:11 AM PDT
MathGrrl, I think your question may be malformed. Maybe I don't understand this well enough, and please accept my apologies if I am wrong. However, I thought "specified" is not measurable in the same way that "information" is. I thought information was either specified or non-specified. In other words, you measure the same thing in either case, you just determine first whether it's specified or not. The same may be true with complexity, as suggested by the term CSI. After all, "complex" and "specified" are characteristics of the thing being measured -- information. They're either there or they're not.

QuiteID
March 17, 2011, 04:42 AM PDT
UB: Your argument seems to be that CSI (indeed all information) involves symbols, and only a designer can create symbols. I don't think there is a single neat definition of "symbol". But perhaps you can clarify this with an example. What are the symbols in the haemoglobin molecule, and what do they symbolise?

markf
March 17, 2011, 03:46 AM PDT
Mathgrrl, Once again, you post a quote of mine, then follow it with a statement that addresses absolutely nothing whatsoever of the case I have made against your position. This all stands to reason of course - the case I've presented is not something you can address, for to do so would immediately eliminate your position. In other words, the strength of the evidence against your claim is in direct proportion to the willful ignorance you've put on display. Given the situation where your claim is at the mercy of the evidence, this is not likely to change.

Upright BiPed
March 16, 2011, 11:06 PM PDT
MathGrrl, in case you are wondering if evolution has been 'tested' with mathematical rigor (you know, to avoid being one-sided): it has been tested, and failed:

Whale Evolution Vs. Population Genetics - Richard Sternberg PhD. in Evolutionary Biology - video http://www.metacafe.com/watch/4165203/

Waiting Longer for Two Mutations - Michael J. Behe Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that 'for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years' (1 quadrillion years) (Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number 'is 5 million times larger than the calculation we have just given' using their model (which nonetheless "using their model" gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model. http://www.discovery.org/a/9461

This following calculation by geneticist John Sanford for 'fixing' a beneficial mutation, or for creating a new gene, in humans, gives equally absurd numbers that once again render the Darwinian scenario of humans evolving from apes completely false: Dr. Sanford calculates it would take 12 million years to "fix" a single base pair mutation into a population. He further calculates that to create a gene with 1000 base pairs, it would take 12 million x 1000 or 12 billion years. This is obviously too slow to support the creation of the human genome containing 3 billion base pairs. http://www.detectingtruth.com/?p=66

Indeed, math is not kind to Darwinism in the least when considering the probability of humans 'randomly' evolving: In Barrow and Tipler's book The Anthropic Cosmological Principle, they list ten steps necessary in the course of human evolution, each of which is so improbable that, if left to happen by chance alone, the sun would have ceased to be a main sequence star and would have incinerated the earth. They estimate that the odds of the evolution (by chance) of the human genome is somewhere between 4 to the negative 180th power, to the 110,000th power, and 4 to the negative 360th power, to the 110,000th power. Therefore, if evolution did occur, it literally would have been a miracle and evidence for the existence of God. William Lane Craig

William Lane Craig - If Human Evolution Did Occur It Was A Miracle - video http://www.youtube.com/watch?v=GUxm8dXLRpA

Along that same line: Darwin and the Mathematicians - David Berlinski "The formation within geological time of a human body by the laws of physics (or any other laws of similar nature), starting from a random distribution of elementary particles and the field, is as unlikely as the separation by chance of the atmosphere into its components." Kurt Gödel was a preeminent mathematician who is considered one of the greatest to have ever lived. Of note: Gödel was a Theist! http://www.evolutionnews.org/2009/11/darwin_and_the_mathematicians.html

"Darwin's theory is easily the dumbest idea ever taken seriously by science." Granville Sewell - Professor Of Mathematics - University Of Texas - El Paso

bornagain77
March 16, 2011, 08:01 PM PDT
Mathgirl, I think you are hilarious. This is what I think you should do. Send Dr. Dembski an email telling him to provide a mathematically rigorous definition of CSI for you. There, problem solved.

kuartus
March 16, 2011, 07:49 PM PDT
kairosfocus, Your post suffers from the same problems as those by Upright BiPed: No math. Provide your rigorous mathematical definition of CSI, show how vjtorley's calculations are incorrect, and provide example calculations for the four scenarios I described, and we'll be able to have a rational conversation about whether or not CSI is a reasonable metric for identifying intelligent agency. Until you do that, any claims you make about CSI are quite literally meaningless. ID is supposed to be a scientific theory. Let's work together to provide the mathematical basis to make it testable.

MathGrrl
March 16, 2011, 07:08 PM PDT
Upright BiPed,
My comments to you have centered around the singular statement you repeatedly make. That being that “evolutionary mechanisms” have the ability to “create” CSI. At this point I have probably made it stupidly clear that I refute that conclusion as a matter of empirical observation. “Evolutionary mechanisms” cannot make CSI.
Since you continue to fail to provide a rigorous mathematical definition of CSI, I can only go by the one provided by Dembski in Specification: The Pattern That Signifies Intelligence. vjtorley, an ID proponent of impeccable credentials, demonstrated how Dembski's own words lead to the conclusion that CSI can be generated by known evolutionary mechanisms. If you want to refute that conclusion, you're going to need to provide a clear definition of your terms and detailed calculations. Bluster, bloviation, and incivility are not adequate substitutes.

MathGrrl
March 16, 2011, 07:02 PM PDT
Actually, anyone reviewing this thread would find me justified in saying that about you.
My comments to you have centered around the singular statement you repeatedly make. That being that "evolutionary mechanisms" have the ability to "create" CSI. At this point I have probably made it stupidly clear that I refute that conclusion as a matter of empirical observation. "Evolutionary mechanisms" cannot make CSI. I have made my argument on that point abundantly clear. You steadfastly refused to engage that argument because it's a no-winner for you. That is what readers will see.

Upright BiPed
March 16, 2011, 05:42 PM PDT
Dr Torley: Been busy elsewhere, but must pause to wish you and Japan well in the face of a devastating disaster.

Participants and onlookers: I passed by UD just now for the first time in some days, having been busy elsewhere. I see this thread still goes in circles, driven by MG's refusal to acknowledge what is in front of her. Digitally coded, functionally specific, complex information is a commonplace entity; it is the base of software, modern communications and related fields. It also happens to be what we express in written language [and in phonemes], as well as music. It is naturally measured in bits [a metric of chained, contextually effective yes/no decisions], which are of course functionally specific. To date, we have literally trillions of cases in point of such dFSCI. We know how to convert to a bit metric, and routinely do so. We buy and sell hard drives, CDs, DVDs, SD cards, and memory sticks and chips by their capacity in [functionally specific] bits. Bits at work, in ways familiar from or materially similar to the familiar technologies of C21 life and work and play. Anyone who tries to tell me that this is not an adequately defined concept and metric is immediately in utter disconnect from digital reality in C21. Indeed, I am immediately suspicious that we are seeing willful selective hyperskepticism. The only effective answer to that is to point out the absurdity of using computer technology to deny the fundamental reality of such technologies: bits at work, under intelligent direction. MG, sorry if that cap fits, but it plainly does. Now, here is a simple threshold-based metric for such digitally expressed FSCI, long since presented in the UD weak argument correctives [top right, this and every UD page], no. 28:
For practical purposes, once an aspect of a system, process or object of interest has at least 500 – 1,000 bits or the equivalent of information storing capacity, and uses that capacity to specify a function that can be disrupted by moderate perturbations, then it manifests FSCI, thus CSI. This also leads to a simple metric for FSCI, the functionally specified bit; as with those that are used to display this text on your PC screen. (For instance, where such a screen has 800 x 600 pixels of 24 bits, that requires 11.52 million functionally specified bits. This is well above the 500 – 1,000 bit threshold.) On massive evidence, such cases are reliably the product of intelligent design, once we independently know the causal story. So, we are entitled to (provisionally of course; as per usual with scientific work) induce that FSCI is a reliable, empirically observable sign of design . . .
Or, if you want that boiled down to a formula, let us do so as I do in my always linked: _______________ >> we can construct a rule of thumb functionally specific bit metric for FSCI:

a] Let contingency [C] be defined as 1/0 by comparison with a suitable exemplar [i.e. this is an operational definition on family resemblance], e.g. a tossed die that on similar tosses may come up in any one of six states: 1/ 2/ 3/ 4/ 5/ 6. That is, diverse configurations of the component parts or of outcomes under similar initial circumstances must be credibly possible.

b] Let specificity [S] be identified as 1/0 through specific functionality [FS] or by compressibility of description of the specific information [KS] or similar means that identify specific target zones in the wider configuration space. [Often we speak of this as "islands of function" in "a sea of non-function." (That is, if moderate application of random noise altering the bit patterns will beyond a certain point destroy function [notoriously common for engineered systems that require working parts mutually co-adapted at an operating point, and also for software and even text in a recognisable language] or move it out of the target zone, then the topology is similar to that of islands in a sea.)]

c] Let degree of complexity [B] be defined by the quantity of bits to store the relevant information, with 500 - 1,000 bits serving as the threshold for "probably" to "morally certainly" sufficiently complex to meet the FSCI/CSI threshold.

d] Define the vector {C, S, B} based on the above [as we would take distance travelled and time required, D and t: {D, t}], and take the element product C*S*B [as we would take the element ratio D/t to get speed].

e] Now we identify the simple FSCI metric, X: C*S*B = X, the required FSCI/CSI-metric in [functionally] specified bits. Once we are beyond 500 - 1,000 functionally specific bits, we are comfortably beyond a threshold of sufficiently complex and specific functionality that the search resources of the observed universe would by far and away most likely be fruitlessly exhausted on the sea of non-functional states if a random walk based search (or generally equivalent process) were used to try to get to shores of function on islands of such complex, specific function. >> _______________

If you want more sophisticated metrics, they have been provided aplenty above, and have all been brushed aside in the haste to hyperskeptical dismissal. Including the Durston metric, which put meat on the islands of function or hot zone or target zone concept used by Dembski, and published a metric in FITS for 35 protein families. VJT has provided straight and modified versions of a calculation on the Dembski model. But we do not need to do that. All we need to do is to challenge MG to provide a case where, beyond 1,000 bits:

1 --> Symbolic codes -- glyphs and rules for meaningful and functional combinations -- originate by undirected forces of chance and necessity.

2 --> Algorithms, program statements, data structures and correlated physical implementing machinery to cause function similarly originated by chance plus blind necessity.

3 --> Consequently, cybernetic functionality emerged without intelligent direction and control. __________

There are of course no such cases; that is why MG is resorting to selective hyperskepticism.
But we literally have billions of cases where such systems originate through intelligent direction and control (including the sort of genetic or evolutionary algorithms, so called, that she is still trying to throw up as an objection). So, on inference to best explanation [the underlying epistemological frame of origins science], it is clear that we have a best and empirically reliable explanation for cases of dFSCI. Now, simply look at a cell based organism, and observe the genes and associated regulatory networks. dFSCI at work, in a cybernetic code based system. On inference to best explanation, backed up by billions of test cases: design. QED. G'day

GEM of TKI

kairosfocus
March 16, 2011, 04:29 PM PDT
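The rule-of-thumb metric quoted in the comment above is simple enough to check mechanically. A minimal sketch, using the screen example from the quoted corrective (800 × 600 pixels at 24 bits each) and treating C and S as the 1/0 flags the quote describes:

```python
def fsci_bits(contingent: bool, specific: bool, bits: int) -> int:
    """Rule-of-thumb FSCI metric from the quote: X = C * S * B,
    with C and S as 1/0 flags and B the storage capacity in bits."""
    return int(contingent) * int(specific) * bits

THRESHOLD = 1000  # upper end of the quoted 500 - 1,000 bit threshold

# Screen example from the quoted corrective: 800 x 600 pixels of 24 bits each
x = fsci_bits(contingent=True, specific=True, bits=800 * 600 * 24)
print(x, x > THRESHOLD)  # 11520000 True -> counted as FSCI on this rule of thumb
```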