Uncommon Descent Serving The Intelligent Design Community

On The Calculation Of CSI


My thanks to Jonathan M. for passing on my suggestion for a CSI thread, and a very special thanks to Denyse O’Leary for inviting me to offer a guest post.

[This post has been advanced to enable a continued discussion on a vital issue. Other newer stories are posted below. – O’Leary ]

In the abstract of Specification: The Pattern That Signifies Intelligence, William Dembski asks “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” Many ID proponents answer this question emphatically in the affirmative, claiming that Complex Specified Information is a metric that clearly indicates intelligent agency.
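For readers without the paper to hand, the quantitative measure Dembski proposes there is the specified complexity of an observed pattern T under a chance hypothesis H:

```latex
\chi = -\log_2\!\left[\, 10^{120} \cdot \varphi_S(T) \cdot P(T \mid H) \,\right]
```

where φ_S(T) counts the patterns at least as simple (for the relevant semiotic agent S) as T, and 10^120 bounds the probabilistic resources of the observable universe; Dembski infers design when χ > 1. The questions in this post are, in effect, about how to evaluate φ_S(T) and P(T|H) for concrete cases.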

As someone with a strong interest in computational biology, evolutionary algorithms, and genetic programming, this strikes me as the most readily testable claim made by ID proponents. For some time I’ve been trying to learn enough about CSI to be able to measure it objectively and to determine whether or not known evolutionary mechanisms are capable of generating it. Unfortunately, what I’ve found is quite a bit of confusion about the details of CSI, even among its strongest advocates.

My first detailed discussion was with UD regular gpuccio, in a series of four threads hosted by Mark Frank. While we didn’t come to any resolution, we did cover a number of details that might be of interest to others following the topic.

CSI came up again in a recent thread here on UD. I asked the participants there to assist me in better understanding CSI by providing a rigorous mathematical definition and showing how to calculate it for four scenarios:

  1. A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.”
  2. Tom Schneider’s ev evolves genomes, using only simplified forms of known, observed evolutionary mechanisms, that meet the specification of “A nucleotide that binds to exactly N sites within the genome.” The genome required to meet this specification can be quite long, depending on the value of N. (ev is particularly interesting because it is based directly on Schneider’s PhD work with real biological organisms.)
  3. Tom Ray’s Tierra routinely results in digital organisms with a number of specifications. One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The length of the shortest parasite is at least 22 bytes, but takes thousands of generations to evolve.
  4. The various Steiner Problem solutions from a programming challenge a few years ago have genomes that can easily be hundreds of bits. The specification for these genomes is “Computes a close approximation to the shortest connected path between a set of points.”

vjtorley very kindly and forthrightly addressed the first scenario in detail. His conclusion is:

I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs.

In that same thread, at least one other ID proponent agrees that known evolutionary mechanisms can generate CSI. At least two others disagree.

I hope we can resolve the issues in this thread. My goal is still to understand CSI in sufficient detail to be able to objectively measure it in both biological systems and digital models of those systems. To that end, I hope some ID proponents will be willing to answer some questions and provide some information:

  1. Do you agree with vjtorley’s calculation of CSI?
  2. Do you agree with his conclusion that CSI can be generated by known evolutionary mechanisms (gene duplication, in this case)?
  3. If you disagree with either, please show an equally detailed calculation so that I can understand how you compute CSI in that scenario.
  4. If your definition of CSI is different from that used by vjtorley, please provide a mathematically rigorous definition of your version of CSI.
  5. In addition to the gene duplication example, please show how to calculate CSI using your definition for the other three scenarios I’ve described.

Discussion of the general topic of CSI is, of course, interesting, but calculations at least as detailed as those provided by vjtorley are essential to eliminating ambiguity. Please show your work supporting any claims.

Thank you in advance for helping me understand CSI. Let’s do some math!

Comments
JemimaRacktouey:

“MathGrrl has already made it quite clear that the book has insufficient information present to allow CSI to be calculated for the 4 examples in question.”

I doubt that she has read the book. Her posts and questions tell me she hasn’t.

“An interesting claim. But groundless without further explanation or justification.”

You have it backwards: MathGrrl needs to explain how/why her examples are good/valid.

“Why don’t you present a few non-bogus scenarios and then calculate, if you can, the CSI present in those scenarios.”

I have already told her how to do it and presented a paper in comment 12 that tells her how to do it.

Joseph
March 26, 2011, 05:45 AM PDT
QuiteID
But I also think that for it to be useful scientifically, it should be calculable.
Exactly so. If "CSI" objectively exists then you should be able to explain the methodology to calculate it, and then expect independent calculation of the exact same figure (within reason) from multiple sources for the same artifact. Currently I get the impression that "CSI" is simply the (file)size of the object in question, to which we add the knowledge that it was designed, and so design is claimed. E.g. the claims that the bac flag or the cell is "full" of CSI, but no actual figure can be stated. If no figure is known, how is it known for sure that the actual value is non-zero? Or KF's claims that CSI=FSCI and as such can be calculated directly from the file size, e.g.:
11 –>We can compose a simple metric that would capture the idea: Where function is f, and takes values 1 or 0 [as in pass/fail], complexity threshold is c [1 if over 1,000 bits, 0 otherwise] and number of bits used is b, we can measure FSCI in functionally specific bits, as the simple product: FX = f*c*b, in functionally specific bits 12 –> Actually, we commonly see such a measure; e.g. when we see that a document is say 197 kbits long, that means it is functional as say an Open Office Writer document, is complex and uses 197 k bits storage space.
Taken from here: https://uncommondescent.com/intelligent-design/background-on-orderly-random-and-functional-sequence-complexity/

If FSCI==CSI then CSI=Filesize*X. If it's that simple, one wonders why KF has not calculated it for the four example scenarios. Of course, this all seems to hinge on knowing the artifact in question is designed in advance, which sort of defeats the entire point of CSI in the first place. Things are designed because they have lots of CSI, and CSI is only present when things are designed. Therefore design.

JemimaRacktouey
March 26, 2011, 05:43 AM PDT
Jemima,

Whether or not CSI can be calculated, the definition is clear enough that we can at least see if there is complexity and specification. Do you deny that the bacterial flagellum has those things? Even if it cannot be calculated rigorously, I think that it still leads to a valid inference of design, just as Mount Rushmore leads to a valid inference of design, as does a knife in the back of a dead person. It's POSSIBLE that the knife was blown by the wind in just such a way as to embed in someone's back, but unlikely; therefore the design inference is valid. Complexity and specificity, I submit, work the same way.

Collin
March 26, 2011, 05:41 AM PDT
Joseph,
3- “No Free Lunch” is readily available. The concept of CSI is thoroughly discussed in it. The math he used to arrive at the 500 bits as the complexity level is all laid out in that book.
MathGrrl has already made it quite clear that the book has insufficient information present to allow CSI to be calculated for the 4 examples in question. If you disagree, why don't you perform the required calculations and show how you went about it? That is after all the purpose of this thread. If you cannot then presumably you will not make further claims regarding the general applicability of CSI to enable design detection.
5- Your scenarios are still bogus
An interesting claim. But groundless without further explanation or justification. Why don't you present a few non-bogus scenarios and then calculate, if you can, the CSI present in those scenarios. That may be enough to allow a general principle to be derived which can be used as an objective "design detector".

JemimaRacktouey
March 26, 2011, 05:32 AM PDT
JemimaRacktouey, You write:
It’s almost as if you are saying that until MathGrrl explains the origin of life you won’t help calculate CSI? Very strange. Or even that you know you can’t calculate CSI but it does not matter because the materialists cannot explain the origin of life. Equally strange.
You put your finger on something that I have noticed but have had a hard time articulating. I don't know what to make of it, though. I like the concept of CSI: it makes sense to me intuitively as a sign of intelligence. But I also think that for it to be useful scientifically, it should be calculable.

QuiteID
March 26, 2011, 05:27 AM PDT
Joseph
Have you ever watched “My Cousin Vinny”? Do you remember what Ms Mona Lisa Vito said when the prosecutor tried to test her automobile knowledge? Well, that applies to what MathGrrl is doing…
Oddly, I agree. In the final, climactic scene of the movie, "Ms Mona Lisa Vito" demonstrates an expert knowledge of automobiles superior to that of anyone else in the courtroom. It seems that you have hit the nail on the head!

JemimaRacktouey
March 26, 2011, 05:22 AM PDT
My apologies for the delay in replying. I'm at a workshop this weekend that is giving me less personal time than I expected. I'll be back online this evening.

MathGrrl
March 26, 2011, 04:43 AM PDT
Kairosfocus
Please see the just above to see how a simple CSI metric can be developed
It would be more productive if you were to simply develop the metric for the 4 scenarios outlined in the OP. If you are able to provide instructions on how to develop CSI metrics, then why are you unable to apply those instructions to MathGrrl's scenarios?
The rhetoric that tries to obfuscate the reality is just that, selectively hyperskeptical rhetoric.
Pardon me, but the rhetoric that is obfuscating reality is coming from you. For example:
Namely, FSCI and CSI — and the two cannot be conceptually separated, MG, whether we deal with Orgel-Wicken or Dembski c. 1998 on — are real, are only seen to come from intelligence, are beyond the search capacity of the observed cosmos, and lie at the heart of C-chemistry, cell-based life.
While that may or may not be true, it's irrelevant. The issue at hand is whether CSI can be computed for the 4 scenarios given, not what the ultimate origin of any such measured CSI is. It's almost as if you are saying that until MathGrrl explains the origin of life you won't help calculate CSI? Very strange. Or even that you know you can't calculate CSI, but it does not matter because the materialists cannot explain the origin of life. Equally strange.

Your attempts to cloud the issue have been noted. If you could compute the CSI for the scenarios in question, I have no doubt that doing so would require less effort than typing several very large posts one after the other, all in an attempt to explain why you can't compute CSI for the examples given but it does not matter because MathGrrl cannot explain its ultimate origin anyway. And these posts seem to be very similar to previous posts you have written, so it seems that whatever the issue at hand, you can re-use the same talking points over and over. The fact that you can't compute the CSI for the scenarios has not been masked by your attempts to throw red herrings into the mix, I'm afraid.
Absent a priori evolutionary materialism straight-jacketing science in our day, we would have long since drawn the obvious and plainly well warranted conclusion: the cell is a deliberately engineered technology. Whodunit, we do not yet know, but that tweredun is plain.
You are not forced to wear the evolutionary materialism straight-jacket. You can take it off and conduct your own research, free of any enforced rules about materialism. I guess you "know" that the cell is a deliberately engineered technology, but somehow I suspect that you cannot calculate the CSI present in "the cell". Despite this you will no doubt claim that it has CSI present anyway. And lots of it. So far that's all that seems to have happened on this thread: CSI cannot be calculated, but you instinctively know that there is "lots", which indicates design, therefore ID.

JemimaRacktouey
March 26, 2011, 04:43 AM PDT
vjtorley,

On your calculation of the CSI of the bacterial flagellum: you have done a lot of work on this and it deserves a full answer. I don't have the time this weekend, so I am going to summarise the problems (both yours and Dembski's) as a sort of marker, and try to get back to it in the week.

1) It is irrelevant whether you measure a probability in the conventional way, as a value between 0 and 1, or in bits, by taking minus one times the log to the base 2. Surprised you raised that, to be honest.

2) This highlighted something I had never realised before. When calculating this probability he makes a rather basic error. If you have n independent events and the probability of a single event having outcome x is p, then the probability of at least one event having outcome x is not np; it is 1 – (1 – p)^n. So the calculation 10^120.Phi_s(T).P(T|H) is just wrong – even if you could make sense of Phi_s(T) and P(T|H).

3) I apologise; I had forgotten that Dembski does define specificity to include not just those patterns which are simpler, but those patterns which are simpler and less probable. His only justification for this is that by doing so he ensures that “S is ruling out large targets” (top of page 19). This seems a bit ad hoc, but never mind; he can define specificity any way he wishes. To assess the specification of an outcome according to this definition we have two imponderables: (a) What other patterns are at least as simple as the one this outcome conforms to – where “simple” means Kolmogorov complexity (which, remember, is not computable)? (b) Which of these patterns are less probable than the one we observe? How on earth do we know, for a real situation? More to the point, when he comes up with the estimate of Phi_s(T) for the bacterial flagellum (pp 24–25) he forgets all about the “less probable” criterion and just looks for “simpler”.

4) T in his bacterial flagellum example is defined conceptually as “bidirectional rotary motor-driven propeller”, and it is this he uses to estimate Phi_s(T). This is not necessarily (in fact is almost certainly not) the same as the exact configuration of proteins. We have no idea what other configurations of proteins might achieve this effect. So when you insert P(T|H) into the formula it is a different T!

5) You want to argue that assuming all amino acids are equally likely and independent of each other is what Dembski means by H in his definition of CSI. I agree that in this case, as in a number of others, he is unclear. If we adopt this definition of H then there are a few problems. While the choice of the "raw chance" version of H is pretty clear in this specific case, it is not in general well defined. It uses Bernoulli’s principle of indifference, and as I am sure you know, Keynes (among others) has shown that this principle does not necessarily lead to a unique solution (we discussed this in the context of cosmic design some months ago). But if we accept the common sense interpretation for amino acids, then there are many EAs that lead to massive increases in CSI – gene duplication being an example (as discussed above).

I am sorry this is not properly explained with examples and references – but I wanted to get the overall points down before my memory and enthusiasm fade.

Cheers, Mark

markf
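markf's point 2) about n independent events is easy to check numerically. A minimal sketch (the values of p and n are arbitrary illustrative assumptions, not figures from any of the scenarios):

```python
# Probability of at least one occurrence of outcome x in n independent
# trials, versus the naive n*p estimate markf objects to.
# The values of p and n below are arbitrary illustrative assumptions.
p = 1e-3   # probability of outcome x in a single trial
n = 5000   # number of independent trials

naive = n * p                 # not even a valid probability once n*p > 1
correct = 1 - (1 - p) ** n    # P(at least one occurrence)

print(naive)    # 5.0
print(correct)  # ≈ 0.9933
```

The naive product overshoots 1 as soon as n is large relative to 1/p, which is exactly the regime Dembski's 10^120 replicational-resources factor puts us in.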
March 26, 2011, 02:46 AM PDT
M. Holcumbrink,

Please see the just above to see how a simple CSI metric can be developed, and on cases where more sophisticated ones have been developed. The rhetoric that tries to obfuscate the reality is just that: selectively hyperskeptical rhetoric.

Also, note that we can extend the above to implicit cases by observing that they fit in with a nodes, arcs and interfaces network, a Wicken "wiring diagram." You are doubtless familiar with blueprints [and the underlying drawings], exploded views, wireframe meshes and the like. These can all be reduced to network lists that give a linguistic data structure description from which the drawing can be made and the parts and the system constructed. Such a structured summary can of course be reduced to bits -- a chain of elementary yes/no decisions -- and then measured using the X-metric just suggested as the simple case. More sophisticated metrics have been developed, described and linked; just, they have been ignored or brushed aside on one excuse or another.

And, as to the notion that once one has created oodles of FSCI to set up a genetic algorithm, intelligently, one can then look at its hill climbing that uses highly controlled, limited random processes, and say voila, unaided blind chance and mechanical necessity are creating CSI -- that is self-refuting on its face.

GEM of TKI

kairosfocus
March 26, 2011, 01:37 AM PDT
As far as metrics go, I stand by my earlier point: unless one is willing to accept that there is a basic simple metric on commonly used information concepts and analysis, then one will hyperskeptically dismiss more complex metrics -- for these more complex metrics rest on the same basic analysis. (The Durston FITS metric rests on H, the average information per symbol, and the Dembski type CSI metric rests on the analysis of configuration spaces and isolated hot zones, AKA islands of function or targets.)

So, in reality, the above is an exercise in exposing circular, selectively hyperskeptical, crankish thinking. Pardon bluntness. To reject the reality of FSCI etc, one has to reject such basic and commonly accepted phenomena and metrics that it is at once revealing that something has gone wrong. Worse, to post a significant remark putting up such hyperskepticism requires one to produce examples of such FSCI, as was already analysed at 45 above, showing how such FSCI, a subset of CSI, is routinely produced by intelligence. Self-referential inconsistency, anyone?

a: We start with the Shannon-Hartley based metric of information carrying capacity, first expressed as a negative log measure [cf my summary of the more or less standard derivation here in my always linked].

b: We add the semiotic agent, AKA the intelligent observer. Such is capable of recognising linguistic or algorithmic function vs non-function. The intelligent, judging observer is of course a key -- though often implicit -- part of science, engineering and measurement generally.

c: We introduce a simple metric for FSCI (a subset of CSI relevant to the DNA in the cell), X: X = C*S*B

d: S -- specificity, as seen by isolation of functionality on islands, testable by seeing what significant random noise does to the ability to function. What would white noise mixed in do to the functionality of the text string in this post, or to the functionality of ev as a program, etc.? [Almost self-evident.] Use a simple 1/0 value for yes/no.

e: C -- complexity, and here a threshold of 1,000 bits of info carrying capacity used to store the message is good enough for government work. Cf the infinite monkeys analysis for why. Again, a simple 1/0 for yes/no.

f: B, for number of bits. A 300 AA functional protein that folds and works in a specific task in the cell uses 1,800 bits of D/mRNA storage: 3 letters per codon, 300 codons, 2 bits per 4-state base.

g: So, if something is (i) functionally specific, and (ii) complex beyond the threshold, and as well (iii) has a specific bit value, it passes the two thresholds of specified complexity, and its number of stored bits [which by the threshold for C is beyond 1,000] is a value of FSCI in functionally specific bits.

h: This is a commonplace in digital technology.

i: And, as we see here complexity and specificity incorporated, it is a subset of CSI. Thus, the set CSI is non-empty. Other more complex metrics and models build on this foundation.

G'day, GEM of TKI

kairosfocus
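For concreteness, the simple X = C*S*B metric described above can be sketched in a few lines (a minimal illustration only; note that the pass/fail function and complexity judgments are inputs supplied by the observer, which is precisely the point at issue in this thread):

```python
# Sketch of the simple FSCI metric X = C*S*B described in the comment:
# S (specificity) and C (complexity over a 1,000-bit threshold) are 1/0
# judgments, B is the number of stored bits. The observer supplies the
# function judgment; the metric itself does not detect function.
def fsci_bits(functional: bool, bits: int, threshold: int = 1000) -> int:
    s = 1 if functional else 0        # observer's pass/fail function judgment
    c = 1 if bits > threshold else 0  # complexity threshold per the comment
    return s * c * bits               # functionally specific bits, or 0

# The 300-amino-acid protein example: 300 codons * 3 bases * 2 bits/base
protein_bits = 300 * 3 * 2            # = 1800 bits of D/mRNA storage

print(fsci_bits(True, protein_bits))  # 1800
print(fsci_bits(True, 500))           # 0 -- under the 1,000-bit threshold
print(fsci_bits(False, protein_bits)) # 0 -- no observed function
```

As the thread's critics note, everything here hinges on the `functional` flag being decided in advance; the arithmetic itself is trivial.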
March 26, 2011, 01:25 AM PDT
Onlookers:

Passed back after several days, pausing for a moment from the crises in hand. (Start with a territory where recurrent budget is 80% of GDP, needing to move to self-sustaining growth, post a major natural disaster that has stripped away 2/3 of land, infrastructure and population; and dealing with an increasingly reluctant principal aid partner . . . multiply by a poorly structured new constitution with dangerous and unbalanced provisions . . . )

Back to this thread. I am astonished to see the mantra that Complex Specified Information -- itself a description of a long since OBSERVED and commented-on phenomenon as common as posts in this thread -- is "undefined" AKA "meaningless" still being tossed around. Selective hyperskepticism on steroids. In fact, the reality of FSCI and wider CSI needs to be first acknowledged, perhaps by recognising certain features of posts in this thread; then mathematical models and metrics need to address the observed realities adequately.

Let's get some basic facts straight for the record, yet once again (and I refer the interested parties to the UD weak argument correctives, especially nos 26 - 30, which have given the concept, the actual intellectual roots in the work of Orgel [and Wicken and Yockey et al], and summaries and links on specific, mathematically grounded metric models that inter alia have been used to generate FSC metrics in FITS for 35 protein families). In short, much of the above is plainly an exercise in dismissive rhetoric triumphing over unwelcome reality.

FSCI, and the wider concept CSI, are DESCRIPTIONS OF OBSERVED FACTS, not definitions that need to be turned out just so or they can be dismissed to one's heart's content. The attempt to be dismissive, therefore, shows itself for what it is: denial of patent but unwelcome reality for evolutionary materialists. Namely, FSCI and CSI -- and the two cannot be conceptually separated, MG, whether we deal with Orgel-Wicken or Dembski c. 1998 on -- are real, are only seen to come from intelligence, are beyond the search capacity of the observed cosmos, and lie at the heart of C-chemistry, cell-based life.

Absent a priori evolutionary materialism straight-jacketing science in our day, we would have long since drawn the obvious and plainly well warranted conclusion: the cell is a deliberately engineered technology. Whodunit, we do not yet know, but that tweredun is plain.

So, now, let us cite (and kindly cf the link here) those who recognised the facts and acknowledged that they need to be explained:

ORGEL, 1973:
>> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.] >>

Wicken, 1979:
>> 'Organized' systems are to be carefully distinguished from 'ordered' systems. Neither kind of system is 'random,' but whereas ordered systems are generated according to simple algorithms [i.e. "simple" force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external 'wiring diagram' with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic 'order.' ["The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion," Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.] >>

Again, for the record. [ . . . ]

kairosfocus
March 26, 2011, 01:23 AM PDT
Hi MathGrrl,

Yes, I go by Mung, but feel free to think of me as Thomas, as in Doubting Thomas. Could you please point me to any post in this thread in which you display competence in any of the following:

1. computational biology
2. evolutionary algorithms
3. genetic programming

My time is valuable, and I hate wasting it. No doubt that's due to some evolutionary adaptation of which I am completely ignorant.

Also, it seems to me that at least one corollary of the argument in your OP is that any paper which has claimed to generate CSI is false, because the authors cannot even define CSI, much less generate it. Would you agree?

Regards

Mung
March 25, 2011, 10:04 PM PDT
PaV:
The hard part is establishing the “chance hypothesis”. That requires examination of the program, its elements, how it interacts, its final outputs, etc, etc. Then, this “chance hypothesis” generates a rejection region. The details of that could be difficult.
Okay, dumb question, but this I don't get. If you don't have the time to fully flesh out the chance hypothesis, on what basis do you reject that it is capable of generating CSI?

jon specter
March 25, 2011, 09:34 PM PDT
Indium #238: “But the problems with CSI will not go away. Unless this thread is deleted, everybody can link here in the future. The failure of the UD crowd to give a working definition is there for all to see.”

I come from the engineering world, and I am very familiar with very sophisticated integrated systems. My world is all about design, and there are many facets to this world, some of which are:

1) materials: properties, tolerances, configurations, interfaces
2) controls: inputs, computations & outputs
3) data processing: storage, retrieval, compression, expression, regulation, utilization
4) automated manufacturing and assembly: jigs, fixtures, tooling
5) energy: storage, utilization, transmission
6) efficiency
7) optimization

… and the list goes on. But what is astounding to me is that all of the engineering principles I learned about in school are utilized in biological systems. All of it is there! So I guess my point would be that the ability to calculate CSI (or lack thereof) does not affect my belief that life has been designed in the slightest. If life has not been designed, how would it be any different than it is? The evidence for the design of life is exquisite, and the more we learn, the more mind-boggling it is to me that anyone would come to any other conclusion. The whole CSI calculation just seems like pointless busy work to me.

There are certain single-piece parts in the aerospace industry that have thousands of features, which means each contains an enormous amount of information. A single rectangular chunk of aluminum, on the other hand, has very little information. I haven’t the faintest clue how to go about calculating the CSI that is contained in either of them, but I know that the difference between the two is considerable. But my inability to give MathGrrl a rigorous equation by which to calculate it does not change the fact that CSI is there, and that there is a lot of it. And it doesn’t change the fact that if I found a similar piece part buried in the sands of the Sahara I would conclude unequivocally that the thing had an intelligent source.

So when I see ion-powered turbines that are controlled by sophisticated sensory inputs and outputs, and whose manufacture and assembly is regulated by the most sophisticated software known to modern man, I will likewise conclude that it has an intelligent source, CSI be damned.

M. Holcumbrink
March 25, 2011, 09:00 PM PDT
I am not a scientist, but have friends who are (in fields unrelated to biology). Some of them have spent years working on developing their ideas, testing, revising, testing some more, revising some more, etc. It probably isn’t unheard of that some scientists will spend decades going through the same process to fully develop their scientific ideas. As I put somewhat flippantly in a previous reply, that is what scientists do.
Yes, that is what they do. Except some scientists are paid to do it, and able to get grants for their research, and some are not.

tragic mishap
March 25, 2011, 08:22 PM PDT
As to Indium's claim that gene duplication does anything in the real world:

Michael Behe Hasn't Been Refuted on the Flagellum!
Excerpt: Douglas Axe of the Biologic Institute showed in one recent paper in the journal Bio-complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalization. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is up to six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not inclusive of the duplication itself).
http://www.evolutionnews.org/2011/03/michael_behe_hasnt_been_refute044801.html

The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations - Douglas D. Axe
Excerpt: In particular, I use an explicit model of a structured bacterial population, similar to the island model of Maruyama and Kimura, to examine the limits on complex adaptations during the evolution of paralogous genes—genes related by duplication of an ancestral gene. Although substantial functional innovation is thought to be possible within paralogous families, the tight limits on the value of d found here (d ≤ 2 for the maladaptive case, and d ≤ 6 for the neutral case) mean that the mutational jumps in this process cannot have been very large.
http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.4/BIO-C.2010.4

Is gene duplication a viable explanation for the origination of biological information and complexity? - December 2010
Excerpt: The totality of the evidence reveals that, although duplication can and does facilitate important adaptations by tinkering with existing compounds, molecular evolution is nonetheless constrained in each and every case. Therefore, although the process of gene duplication and subsequent random mutation has certainly contributed to the size and diversity of the genome, it is alone insufficient in explaining the origination of the highly complex information pertinent to the essential functioning of living organisms. © 2010 Wiley Periodicals, Inc. Complexity, 2011
http://onlinelibrary.wiley.com/doi/10.1002/cplx.20365/abstract

Evolution by Gene Duplication Falsified - December 2010
Excerpt: The various postduplication mechanisms entailing random mutations and recombinations considered were observed to tweak, tinker, copy, cut, divide, and shuffle existing genetic information around, but fell short of generating genuinely distinct and entirely novel functionality. Contrary to Darwin's view of the plasticity of biological features, successive modification and selection in genes does indeed appear to have real and inherent limits: it can serve to alter the sequence, size, and function of a gene to an extent, but this almost always amounts to a variation on the same theme—as with RNASE1B in colobine monkeys. The conservation of all-important motifs within gene families, such as the homeobox or the MADS-box motif, attests to the fact that gene duplication results in the copying and preservation of biological information, and not its transformation as something original.
http://www.creationsafaris.com/crev201101.htm#20110103a

bornagain77
March 25, 2011 at 7:54 PM PDT
As a follow-up to [241], let's point out what one finds at Schneider's blog. We see a graph: bits of information per nucleotide have increased. Wow. But also notice (as I've already pointed out to MathGrrl) that the increase quickly peters out. It flat-lines. And then notice that when "selection" is removed, the "information" is all lost. Well, the "selection" that Schneider alludes to comes exactly from the function that ferrets out the number of mistakes. Once this ferreting is turned off (which, though not in the form of a target sequence, comes [per Dembski in the article] from "fitness functions"), voila: no new information. MathGrrl: once again, what are your connections to Schneider? Would you like to tell us, for the sake of full disclosure?
PaV
March 25, 2011 at 6:35 PM PDT
Indium: First, are you some new rare-earth element? Second, you say the math is easy. Yes, the math in "Specification" isn't that difficult; but it isn't that easy either. And, again, that is the easy part. The hard part is establishing the "chance hypothesis". That requires examination of the program, its elements, how they interact, its final outputs, etc. Then this "chance hypothesis" generates a rejection region, and the details of that could be difficult. Crunching numbers is easy. Trying to figure out just what they mean is the difficult side of things.
PaV
March 25, 2011 at 6:27 PM PDT
I found this little article by Wm. Dembski. Interestingly, he saw exactly the problem I saw when I looked a little more closely at Tom Schneider's blog giving info on ev: "mistakes". Who figures this out? How is it figured out? The answer is: the programmer. And this is where information is smuggled in. As Dr. Dembski points out, this is no more than a more sophisticated version of Dawkins's self-correcting "Methinks it is like a weasel" version of Darwinism. Interestingly, if you try to get to Dembski's paper from Schneider's blog, you won't get access. More interesting still is the fact that he tells us just what Dembski objected to (well, he tells us half of what Dembski objected to) and then proceeds to tell us how Dembski was wrong. You see, MathGrrl, this is why I won't waste my time with your outrageous request.
PaV
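For onlookers, the "self-correcting" procedure being discussed can be sketched in a few lines of Python. This is a generic mutation-plus-selection toy, not Dawkins's original code or Schneider's ev; the point it illustrates is the one PaV makes above: the fitness function's comparison against the target is exactly where the programmer's information enters the search.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Count of positions matching TARGET: this comparison is where
    # the target information is supplied by the programmer.
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(seed=0, pop=100, mut_rate=0.05):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        offspring = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                    for c in parent)
            for _ in range(pop)
        ]
        # Keep the parent in the pool so fitness never regresses.
        parent = max(offspring + [parent], key=fitness)
        generations += 1
    return generations

print(evolve())  # typically converges in well under a thousand generations
```

Remove the `key=fitness` selection step (e.g. pick an offspring at random) and the string drifts and never converges, which is the flat-line/decay behaviour described in the comments above.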
March 25, 2011 at 6:22 PM PDT
Denyse, have you ever watched "My Cousin Vinny"? Do you remember what Ms. Mona Lisa Vito said when the prosecutor tried to test her automobile knowledge? Well, that applies to what MathGrrl is doing...
Joseph
March 25, 2011 at 5:13 PM PDT
I came here as a supporter, but am about to say something that will make me about as welcome as a skunk at a garden party. PaV says:

"Why do you think she is entitled to something that would be painstaking work to produce?"

I am not a scientist, but I have friends who are (in fields unrelated to biology). Some of them have spent years developing their ideas: testing, revising, testing some more, revising some more, and so on. It probably isn't unheard of for some scientists to spend decades going through the same process to fully develop their scientific ideas. As I put somewhat flippantly in a previous reply, that is what scientists do. Yet you are reluctant to put any effort into developing one of the key tools in the intelligent design toolchest. Why? Because MathGrrl won't believe you no matter what? Why do you require her approval? Why not do it for the silent onlookers? Why not do it solely to advance ID to the next level? Why not do it so some future ID scientist has a foundation to work from? Why not do it just for the thrill of discovery?
jon specter
March 25, 2011 at 4:10 PM PDT
PaV in comment 32: "The EASIEST part of CSI is the calculation of complexity. And certainly, as Dembski presents it in his paper on 'Specification', it is a more complicated, world-encompassing approach; but the simplified version is a simple negative log calculation of improbability. Some 8th graders could do the calculation."

PaV in comment 227: "Why do you think she is entitled to something that would be painstaking work to produce?"

Huh, what now? Painstaking 8th-grade mathematics? Also, it is interesting to see how people here get more and more defensive and even hostile. You seem to want to force MathGrrl out of here. But the problems with CSI will not go away. Unless this thread is deleted, everybody can link here in the future. The failure of the UD crowd to give a working definition is there for all to see.

So, maybe somebody can do the calculations for this simple case: 11 -> 11.11 (duplication); 11.11 -> 11.01 (divergence). So 11 -> 11.01 = an increase in information. One could argue that this does not happen in nature, or that the specification doesn't change in these cases. But Zhang et al. (and many more...) seem to disagree: http://www.ncbi.nlm.nih.gov/pubmed/11925567
Indium
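To make Indium's toy case concrete: under a uniform chance hypothesis, the negative-log improbability of an exact sequence (the "8th-grade" complexity term PaV describes, not CSI itself, since whether the specification carries over after duplication is exactly what is disputed in this thread) doubles when the sequence is duplicated. A minimal sketch:

```python
from math import log2

def naive_bits(seq, alphabet_size=2):
    # -log2 of the probability of this exact string under a uniform
    # chance hypothesis: each symbol contributes log2(alphabet) bits.
    # This is only the complexity term, not specified complexity.
    return len(seq) * log2(alphabet_size)

original   = "11"    # Indium's toy gene
duplicated = "1111"  # after duplication ("11.11")
diverged   = "1101"  # after divergence ("11.01")

print(naive_bits(original))  # 2.0
print(naive_bits(diverged))  # 4.0
```

Whether that 2-bit gain counts as new *specified* information is the open question the thread keeps circling.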
March 25, 2011 at 3:45 PM PDT
PaV, no, of course mathematicians don't "prove" definitions. Some definitions are stipulated ("Let Specification A be . . .") and some come about as the consequence of such stipulations. You also state:

"I agree that discussions amongst mathematicians would be valuable. But we've had them here before. They quibble about NFL theorems; they quibble about uniform probability distributions; and they want to say that all of this disqualifies CSI. The point being that CSI and ID are charged subjects that will not receive an impartial assessment."

Discussions "here" (or on any blog) are by nature more subjective than discussions in the mathematical literature. That's why I suggested one of the journals put out by SIAM (the Society for Industrial and Applied Mathematics). If discussions here have been unhelpful, perhaps that's because here is the wrong place to have them. But it's up to ID researchers to start that happening. Dr. Dembski doesn't publish in the mathematical field any more, and he's a busy man, so maybe somebody else should pick it up. I have a high opinion of Dr. Dembski, but I'm no mathematician. Nevertheless, it seems that a qualified person could publish the math. There's no reason it should be resisted; in fact, there's no need to publish it with reference to evolution at all.

Upright BiPed: I have a hard time figuring out what you're trying to say to me. There seems to be a lot of snark, but I can't understand what you're snarking about.
QuiteID
March 25, 2011 at 3:43 PM PDT
PaV:

"And then why can't she just simply say that she doesn't know how to apply CSI to these programs."

But that seems to me to be exactly what MathGrrl is saying. I think that if the relevant CSI were computed, and the method shown, for the examples given, that would move this on to the next level, which I'm sure would be more productive, as it would appear to promise a usable mechanism for objectively determining design!

"Do mathematicians 'prove' definitions? I don't think so."

I don't think that's what this is about. If CSI can be objectively computed for an arbitrary object, as claimed, then it can be computed for MathGrrl's examples.
JemimaRacktouey
March 25, 2011 at 3:40 PM PDT
Denyse: You've stated twice that you think "people" should address MathGrrl's question. First of all, it really isn't a question. It's a request. More of a demand. She says she wants to learn more about CSI: well, there are books, and there are online publications.

Second: what are your reasons for your statement? I'm a bit confused. Why do you think she is entitled to something that would be painstaking work to produce? What has she done to show that she has the background? What has she done to show that she understands CSI at all? What specific questions (not demands) has she offered seeking clarification? So, again, why do you think she deserves an involved response? The ONE actual question she asked had to do with what a "specification" is. Well, I know what a specification is per NFL. Why can't she read that and understand that? And then why can't she just simply say that she doesn't know how to apply CSI to these programs? Then suggestions could be given to her.

I mentioned in my penultimate post that there is a striking similarity between her need for a "rigorous mathematical description" of CSI and the comments made by Thomas Schneider. In the mind of Thomas Schneider, Shannon information, which is a simplistic logarithmic function, is real mathematics and thus a true description of information. Well, no one in ID would ever think that Shannon information is any kind of true indication of information, except when it comes to computer programs. It's too simplistic. Even mathematicians acknowledge its limitations. So this, I believe, is her ulterior motive: she wants to make CSI look mathematically naked (which is just the opposite of reality, since it is its complexity of concept that makes acceding to her request so difficult), and you're abetting her purpose. I can't help but wonder why.

QuiteID: I agree that discussions amongst mathematicians would be valuable. But we've had them here before. They quibble about NFL theorems; they quibble about uniform probability distributions; and they want to say that all of this disqualifies CSI. The point being that CSI and ID are charged subjects that will not receive an impartial assessment. It is very human to disagree with a conclusion someone has reached and then find reasons to invalidate that conclusion, instead of just absorbing the work and pointing out errors if found. So I don't have much optimism about that.

OTOH, is it possible to "mathematically" prove what CSI is? I don't know if that is possible. However, can you prove to me that the Shannon equation depicting information defines information? I don't think so. It's simply a definition. Do mathematicians "prove" definitions? I don't think so. So here we have MathGrrl, who seems perfectly willing to accept Shannon's simplistic notion of information, but now finds it troublesome that CSI isn't "rigorously" defined. It is plenty rigorously defined. Maybe she doesn't like it. That doesn't concern me. Maybe she doesn't understand it. She can look at the referenced work and email Dr. Dembski directly. Maybe she thinks it's wrong. Well, then, write a paper and show how it is wrong. Anything more doesn't seem like a good use of time.
PaV
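For onlookers, the "simplistic logarithmic function" being debated here is Shannon's measure: the entropy of a source with symbol probabilities p(x), and the surprisal of a single outcome,

```latex
H(X) = -\sum_{x} p(x)\,\log_2 p(x), \qquad I(x) = -\log_2 p(x)
```

As PaV notes, these are stipulated definitions, not theorems; nothing in them is "proved", and nothing in them refers to meaning or function, which is why the two sides keep talking past each other about what extra work "specification" is supposed to do.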
March 25, 2011 at 3:04 PM PDT
QuiteID and Joseph: I think I get it now. It's not the specified complexity, Chi, but the specificity, sigma, which is customarily expressed in bits, as Professor Dembski states on page 19 of "Specification: The Pattern That Signifies Intelligence." If the probability is below 10^-120, then sigma will be over 400 bits, and if it's below 10^-150 (Dembski's original universal probability bound), it'll be over 500 bits.

Chi differs from sigma in that the expression we are taking the negative logarithm of (to base 2) has an additional multiplier of 10^120, the maximal number of bit operations that the observable universe could have performed throughout its history. Thus it's equivalent to subtracting 400 from the number of bits corresponding to sigma. If you've still got one or more bits left over after that (i.e. if Chi > 1), then you do indeed have a pattern that warrants the design inference. Cheers, and thanks for the quote, Joseph.
vjtorley
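As a sketch of the arithmetic vjtorley describes, in the notation of Dembski's "Specification" paper (where \varphi_S(T) counts the specificational resources and P(T\mid H) is the probability of the pattern under the chance hypothesis):

```latex
\sigma = -\log_2\!\big[\varphi_S(T)\cdot P(T\mid H)\big]

\chi = -\log_2\!\big[10^{120}\cdot \varphi_S(T)\cdot P(T\mid H)\big]
     = \sigma - \log_2 10^{120} \approx \sigma - 398.6
```

So \chi > 1 exactly when \sigma exceeds roughly 400 bits, which is the 10^{-120} threshold mentioned above; the original 10^{-150} bound corresponds to roughly 500 bits.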
March 25, 2011 at 2:26 PM PDT
#221: Really? Then I must certainly keep myself in check. My question immediately dismantles the core conclusion our guest opponent wishes to imply, and for that it is being ignored. However, I wouldn't want to seem ungrateful. Actually, I first posted this question 192 posts ago, and I felt completely composed while doing it. But I am happy to let you be the judge of that:

"So the question remains: Does the output of any evolutionary algorithm being modeled establish the semiosis required for [the] information to exist, or does it take it for granted as an already existing quality? In other words, if the evolutionary algorithm, by any means available to it, should add perhaps a 'UCU' within an existing sequence, does that addition create new information outside (independent) of the semiotic convention already existing? If we lift the convention, does UCU specify anything at all? If UCU does not specify anything without reliance upon a condition which was not introduced as a matter of the genetic algorithm, then your statement that genetic algorithms can create information is either a) false, or b) over-reaching, or c) incomplete."
Upright BiPed
March 25, 2011 at 2:11 PM PDT
MathGrrl, at 193: I agree that people should directly address your questions.

People: Can someone summarize the discussion? What we learned, what we didn't, and why? I could run it as a post.

MathGrrl: Download? I'm a Canadian, hardly short of download capacity, so the problem isn't clear to me, but I'm happy to learn. (Could be because I live within walking distance of the CN Tower: http://www.cntower.ca )
O'Leary
March 25, 2011 at 2:10 PM PDT
QuiteID (#214): I agree with you that an applied mathematics journal would be a good place for the Intelligent Design movement to publish articles on CSI.

Regarding bits: after reading Joseph's comment at #212, and the relevant passage on converting probabilities into bits on page 19 of Professor Dembski's article on specification, I am wondering whether I missed something after all. But I still have to ask: if a Chi (specified complexity) value of 1 warrants a design inference, how many bits is that, and why? If Chi is measured in bits, then that would mean 1 bit warrants a design inference. Wouldn't it?
vjtorley
March 25, 2011 at 2:06 PM PDT