Uncommon Descent Serving The Intelligent Design Community

On The Calculation Of CSI

My thanks to Jonathan M. for passing on my suggestion for a CSI thread, and a very special thanks to Denyse O’Leary for inviting me to offer a guest post.

[This post has been advanced to enable a continued discussion on a vital issue. Other newer stories are posted below. – O’Leary ]

In the abstract of Specification: The Pattern That Signifies Intelligence, William Dembski asks “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” Many ID proponents answer this question emphatically in the affirmative, claiming that Complex Specified Information is a metric that clearly indicates intelligent agency.

As someone with a strong interest in computational biology, evolutionary algorithms, and genetic programming, this strikes me as the most readily testable claim made by ID proponents. For some time I’ve been trying to learn enough about CSI to be able to measure it objectively and to determine whether or not known evolutionary mechanisms are capable of generating it. Unfortunately, what I’ve found is quite a bit of confusion about the details of CSI, even among its strongest advocates.

My first detailed discussion was with UD regular gpuccio, in a series of four threads hosted by Mark Frank. While we didn’t come to any resolution, we did cover a number of details that might be of interest to others following the topic.

CSI came up again in a recent thread here on UD. I asked the participants there to assist me in better understanding CSI by providing a rigorous mathematical definition and showing how to calculate it for four scenarios:

  1. A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.”
  2. Tom Schneider’s ev evolves genomes, using only simplified forms of known, observed evolutionary mechanisms, that meet the specification of “A nucleotide that binds to exactly N sites within the genome.” The length of the genome required to meet this specification can be quite long, depending on the value of N. (ev is particularly interesting because it is based directly on Schneider’s PhD work with real biological organisms.)
  3. Tom Ray’s Tierra routinely results in digital organisms with a number of specifications. One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The shortest parasite is at least 22 bytes long, but takes thousands of generations to evolve.
  4. The various Steiner Problem solutions from a programming challenge a few years ago have genomes that can easily be hundreds of bits. The specification for these genomes is “Computes a close approximation to the shortest connected path between a set of points.”

vjtorley very kindly and forthrightly addressed the first scenario in detail. His conclusion is:

I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs.

In that same thread, at least one other ID proponent agrees that known evolutionary mechanisms can generate CSI. At least two others disagree.

I hope we can resolve the issues in this thread. My goal is still to understand CSI in sufficient detail to be able to objectively measure it in both biological systems and digital models of those systems. To that end, I hope some ID proponents will be willing to answer some questions and provide some information:

  1. Do you agree with vjtorley’s calculation of CSI?
  2. Do you agree with his conclusion that CSI can be generated by known evolutionary mechanisms (gene duplication, in this case)?
  3. If you disagree with either, please show an equally detailed calculation so that I can understand how you compute CSI in that scenario.
  4. If your definition of CSI is different from that used by vjtorley, please provide a mathematically rigorous definition of your version of CSI.
  5. In addition to the gene duplication example, please show how to calculate CSI using your definition for the other three scenarios I’ve described.

Discussion of the general topic of CSI is, of course, interesting, but calculations at least as detailed as those provided by vjtorley are essential to eliminating ambiguity. Please show your work supporting any claims.

Thank you in advance for helping me understand CSI. Let’s do some math!

Comments
Further, if it's an actual number, it's a number of something. "Greater than 1 complex specified information" makes no sense. As noted above, bits are in the expression; they can only be removed by dividing by bits somewhere else in the expression.QuiteID
March 25, 2011, 09:00 AM PDT
vjtorley, expressing information in log2 terms means expressing it in bits. That's a basic convention of information theory. I don't know why Dr. Dembski calls it "an actual number" -- maybe to stress that it's really quantifiable -- but it can't be to say that it's not a unit. As Wikipedia notes,
In 1928, Ralph Hartley observed a fundamental storage principle,[1] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm log_b(N) of the number N of possible states of that system. Changing the basis of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c(N) = (log_c b) log_b(N). Therefore, the choice of the basis b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states. When b is 2, the unit is the "bit" (a contraction of binary digit). A system with 8 possible states, for example, can store up to log2(8) = 3 bits of information.
My emphasis.QuiteID
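The quoted principle is easy to check numerically. A minimal sketch (the numbers are just the Wikipedia example from the quote, not anything from the thread):

```python
import math

# The quoted principle: a system with N possible states stores log_b(N)
# units of information; choosing base b = 2 gives the unit "bits".
N = 8                      # the Wikipedia example: 8 possible states
bits = math.log2(N)        # log2(8) = 3 bits

# Changing the base multiplies by a constant: log_c(N) = log_c(b) * log_b(N)
digits = math.log10(2) * math.log2(N)   # same information in decimal digits
assert math.isclose(digits, math.log10(N))
print(bits)  # 3.0
```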
March 25, 2011, 08:57 AM PDT
Alex73 @ 185 Indeed. In the beginning was the Word. :-)tgpeeler
March 25, 2011, 08:13 AM PDT
VJ, The paper you link to is an extension of "No Free Lunch". And in NFL Dembski requires CSI to be at least 500 bits of specified information- page 156.Joseph
March 25, 2011, 07:57 AM PDT
Mathgrrl: I just noticed a remark you made in #131, which I believe represents a profound misunderstanding of CSI on your part:
Second, Dembski's CSI has units of bits. A change must be either an increase or a decrease in the number of bits. (Emphasis mine - VJT.)
This is incorrect. CSI is just a number. It has no units, and it does not represent bits. I'm sure you'll ask me to supply "chapter and verse" from Professor Dembski in support of my contention on this point, so here goes.

Professor Dembski, in his paper, Specification: The Pattern that Signifies Intelligence, defines (on page 24) the specified complexity Chi of pattern T given chance hypothesis H, minus the tilde and context sensitivity, as:

Chi = -log2[10^120 · Phi_s(T) · P(T|H)]

Phi_s(T) is a unitless number. On page 17, Dembski defines Phi_s(T) as the number of patterns for which S's semiotic description of them is at least as simple as S's semiotic description of T. P(T|H) is defined as a probability: the probability of a pattern T with respect to the chance hypothesis H. Since it is a probability, it must be a number greater than or equal to 0 and less than or equal to 1. Thus there are no units in the specified complexity Chi. It's just a number.

That was why I gave it to you as a raw number in #173, when calculating the specified complexity of a bacterial flagellum. I didn't say: "The specified complexity of a bacterial flagellum lies somewhere between 2126 bits and 3422 bits." Instead, I said: "The specified complexity lies somewhere between 2126 and 3422." Professor Dembski himself confirms this interpretation in his paper, when he writes on page 34:
In my present treatment, specified complexity ... is now ... an actual number calculated by a precise formula (i.e., Chi = -log2[10^120 · Phi_s(T) · P(T|H)]). This number can be negative, zero, or positive. When the number is greater than 1, it indicates that we are dealing with a specification. (Emphases mine - VJT.)
I hope this clarifies matters for you.vjtorley
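Whatever one concludes about units, the formula quoted above is straightforward to put into code. A hedged sketch (the function names are mine, not Dembski's; probabilities are passed as base-10 exponents, since values like 10^-780 underflow ordinary floats):

```python
import math

def chi(log10_phi_s, log10_p):
    """Specified complexity Chi = -log2(10^120 * Phi_s(T) * P(T|H)),
    with Phi_s(T) and P(T|H) supplied as base-10 exponents."""
    return -(120 + log10_phi_s + log10_p) * math.log2(10)

def is_specification(chi_value):
    # Per the quoted passage, Chi > 1 indicates a specification.
    return chi_value > 1

# vjtorley's flagellum figures: Phi_s(T) = 10^20, P(T|H) = 10^-780
print(round(chi(20, -780)))  # 2126
```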
March 25, 2011, 07:10 AM PDT
Noesis, What probability? Evolutionary biologists haven't even demonstrated the feasibility that, for example, chance and necessity can produce a bacterial flagellum from a population or populations that never had one. There isn't any evidence of chance and necessity constructing simpler multi-protein systems. So again, what probability? Evolutionary biologists have failed to demonstrate a probability exists...Joseph
March 25, 2011, 03:45 AM PDT
vj #173 Your comment #173 is interesting, but I think it combines your misinterpretation of Dembski’s work with a logical error in the original work. 1) As Dembski says, his objective when estimating the probability of obtaining the 30 proteins in the bacterial flagellum as somewhere between 10^-780 and 10^-1170 is
…to sketch out some probabilistic techniques that could then be applied by biologists to the stochastic formation of the flagellum.
In fact these estimates are based on assuming that all amino acids are equally likely and independent of each other (with an adjustment for duplicates in one case). So they effectively represent a lower bound – the lowest the probability could reasonably be, given no other knowledge of the process by which the proteins were obtained. The two estimates are not the upper and lower bounds of the probability of obtaining the proteins. They are a high and low estimate of the lower bound. It is not legitimate to substitute these figures for P(T|H) in his formula, which is intended to be a genuine estimate of the probability of the bacterial flagellum based on an evolutionary hypothesis H. Dembski himself says that the precise calculation of P(T|H) is yet to be done (and this was written after the works you refer to). 2) But even if they were genuine estimates of P(T|H), there is a problem in Dembski’s logic. He defines the specified complexity of an outcome as the probability of that outcome fitting a pattern or any other pattern that is as simple or simpler. That is why P(T|H) is multiplied by Phi_s(T) in the formula. So he obtains the total probability of meeting that pattern or one that is as simple or simpler by multiplying the estimated probability of meeting the observed pattern (in this case “bidirectional rotary motor-driven propeller”) by the total number of patterns. But this hides the enormous assumption that the probability of matching each of the other patterns is similar to or lower than the probability of matching the observed pattern.markf
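markf's last objection can be illustrated with toy numbers (entirely hypothetical, chosen only to show the logic): multiplying Phi_s(T) by P(T|H) bounds the total probability of matching some equally simple pattern only if no other such pattern is more probable than T itself.

```python
# Hypothetical figures, not from any biological calculation:
p_T = 1e-6                      # probability of the observed pattern T
phi = 100                       # number of patterns as simple as T, Phi_s(T)
p_other = [1e-4] * (phi - 1)    # suppose the other patterns are MORE probable

total = p_T + sum(p_other)      # true probability of matching any such pattern
bound = phi * p_T               # the Phi_s(T) * P(T|H) estimate

print(total > bound)  # True: the estimate understates the total probability
```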
March 25, 2011, 02:43 AM PDT
tgpeeler quotes Dawkins: Dawkins actually says in “River Out of Eden” on page 19 that “Life is just bytes and bytes and bytes of digital information.” Isn't it sweet that the loudest anti-theist says exactly the same as one can read in the book he hates and deplores the most? Because a few pages after John the apostle identifies Jesus of Nazareth as the Word (an old fashioned expression for digital information), he also quotes him saying absolutely unambiguously: I am ... the life. Every now and then I am stunned by the absolute perfection of these words...Alex73
March 25, 2011, 02:28 AM PDT
Noesis #176
"You either have not read or have not understood Dembski’s last paper on CSI, “Specification: The Pattern that Signifies Intelligence.” The approach to design inference is one of computing quantities."
Any measure of complexity (or even any measure tout court) is always approximate in principle. In fact to measure is to map quality to quantity, and this map can never be perfectly exact, for the simple fact that quality is incommensurable with quantity by definition. Any measure is necessarily defective (Dembski’s CSI included). It is always an issue about the amount of the defectiveness. This ascertainment leads me to say that the increase of a protein is only quantitative, while the organized functional hierarchies of organisms imply quality (even if the ID measures try to reduce them to a number). The incommensurability between quality and quantity makes us understand that a mere quantitative increase of matter cannot produce qualitative organization. Analogously, the simple increase of the number of rocks doesn’t explain the whole cathedral.niwrad
March 25, 2011, 01:35 AM PDT
GUYS Stop failing the Turing test. :D Anyway, I think it will be not only possible to calculate CSI but relatively easy, once we have a bioinformatics program capable of spitting out secondary, tertiary and quaternary structures from primary structure... But until then, studies such as the one by Axe which Meyer references in Signature in the Cell will do. For a protein 150 amino acids long, beta-lactamase, the ratio of working to non-working proteins is 10^-77. 10^-77 < 10^-40, therefore we have biological CSI.tragic mishap
March 24, 2011, 08:27 PM PDT
MathGrrl says "Thank you in advance for helping me understand CSI. Let’s do some math!" I confess to only skimming through this thread as it seems to be pretty much a rehash of the pro-ID, or we might say pro-mind, and the anti-ID, or we might say the naturalist/materialist/physicalist - NMPist - view which claims (apparently) that the source of biological information (complex, functional, specified, or whatever) is time plus natural selection, that is to say, the laws of physics. In other words, what is the CAUSE of information? Most every biologist I've read, even on the pro-NDT side (Mayr, Crick, Dawkins, Coyne, etc...) has no problem with the idea that there is indeed such a thing as biological information. Dawkins actually says in "River Out of Eden" on page 19 that "Life is just bytes and bytes and bytes of digital information." I quote him not to offer a "proof" of this but merely to point out that since the discovery of the structure of DNA by Crick and Watson the idea of biological information has taken on ever increasing importance in biology and is widely recognized to exist. Laying aside for the moment whether or not it can be measured to mf's or mg's satisfaction. Just for fun, let's consider human information. The kind that is created by, well, humans. Like this post. What is the source of this information? Is it also the laws of physics as the NMPist would have us believe? Or is it mind, as I would have us believe? If we consider the prerequisites for human information I think we can identify at least 4 or 5 depending on how you count language. Let's count the symbols and rules of language as 2. Those rules operate within the laws of reason so these rules (Being, Identity, Non-contradiction, Excluded Middle, Causality) are pre-req 3. How are the symbols arranged in order to encode a message? It seems as though they must be freely chosen. Otherwise, how to account for the fact that I am typing this instead of that? 
There is no POSSIBLE explanation grounded in physical law for why I am typing this instead of that, which suggests the question: well then, if physics isn't doing it, then what is? That's for another time. The last thing that is (at least) required is intentionality or purpose. A "scientist" might say "causality." What is it that causes these letters to appear "out here" in cyberspace? It seems that whatever it is that is freely arranging these English symbols in a (one hopes) logical fashion is also intending to do this. Otherwise, obviously, it wouldn't be done. To recap, we need: symbols, rules, reason, free will, and purpose. Without these there is no human information. So the NMPist now has to explain the existence of this information in terms of the laws of physics. If he wants to be intellectually honest, that is. After all, if NMPism means that all that exists is physical, then obviously it follows that all explanations of these physical things MUST be found in the laws of physics. (Never mind for a moment the glaring - embarrassingly so - fact that these laws are also abstract and therefore beyond the reach of "science" because they cannot be sensed. I doubt that anyone has ever tasted or heard the law of gravity.) Can the laws of physics explain any of the things on my list? No. It is not even conceptually possible, since information (although encoded in a physical substrate) is not itself physical. And if it's not physical, then physics can't explain it. Let me try to illustrate with some examples. Why does "the dog" refer to Fido and "der Hund" also refer to Fido? Can this possibly be explained by reference to the laws of physics? No. It cannot. Why does "Es regnet" mean "it's raining" in German and mean absolutely nothing in English? Can this be explained by reference to physical laws? No. It cannot. If "b" is less than "c", and "a" is less than "b", then "a" is less than "c". This is necessarily true. Not even God can make it not true.
So explain that in terms of physical law. Cannot be done. Free will cannot be explained by reference to physical law. Indeed the thoroughgoing NMPist denies free will because everything must be explained by reference to physical LAW. We have the delicious irony of the fool denying that he has free will even as he exercises his free will to form the thought that he has none. Intentionality cannot be explained by physics. Indeed, this is why Dawkins and the rest rail against the idea of there being real purpose or design or intentionality in the universe. Let me offer a quick modus tollens argument to show the idiocy of this line of thinking. If I did not intend to be writing this post, I would not be writing this post. But I am writing this post. Therefore I do INTEND to be writing this post. For me, it is not too great a step to get from human information to biological information. There HAS to be a code for information of any kind. The code is not based on any laws of physics that I've ever read about. In fact, Yockey (2005), the physicist, says they are not. Oh heck, let me quote him. He says on page 5 that: "The belief of mechanist-reductionists that the chemical processes in living matter do not differ in principle from those in dead matter is incorrect. There is no trace of messages determining the results of chemical reactions in inanimate matter. If genetical processes were just complicated biochemistry, the laws of mass action and thermodynamics would govern the placement of amino acids in the protein sequences." BUT THEY DON'T. OK, that last part was me, not Yockey, but that was his point. In addition to the code there must be rules (else how did we recognize the existence of the CODE?). There must be free will (the code is not determined by the laws of physics - although - obviously - none of the chemical reactions violate the laws of physics). And purpose. Sigh.
Why would there be anything at all unless someone (or SomeOne) determined that there would be? At any rate, I do not expect this will gain any traction in the anti-ID camp but every now and again one has to try. If mg is still reading you might ask yourself what "doing math" actually means. At its essence it's manipulating symbols according to various and sundry laws. Mathematics is a language too. A universal language. So how is it that you can manipulate those symbols freely? :-)tgpeeler
March 24, 2011, 08:09 PM PDT
QuiteID (178): I'm tired, and I'm not going to look it up now. If I recall correctly, Dembski orders all of the specifications the semiotic agent is capable of emitting. To get the descriptive complexity of the specification, he takes the logarithm of the position of the specification in the ordering.Noesis
March 24, 2011, 08:08 PM PDT
Everyone needs to take a break and read this Dilbert http://www.dilbert.com/strips/comic/2009-03-16/Collin
March 24, 2011, 07:33 PM PDT
Noesis, thanks for the heads-up. Although I support Dr. Dembski's arguments intuitively, I find them difficult to follow. Frankly I'm struggling with the whole thread. Perhaps I should read more before offering my opinion. Dr. Dembski's written a lot, so could you help me out by expanding on your last comment. Where specifically does he define specificity in quantitative terms?QuiteID
March 24, 2011, 06:43 PM PDT
QuiteID:
So we can measure the information, but not the specification, which we can only say is either there or not.
Does anybody who believes in Dembski's CSI measure actually read what he wrote about it? He associates a numerical descriptive complexity with his specification of the bacterial flagellum, "bidirectional outboard rotary motor."Noesis
March 24, 2011, 06:28 PM PDT
VJ thank you for another excellent post at 173 and 174. I have a question though. Did you have to pay someone today for those papers? I mean do you have to have special permission, perhaps a decoder ring, in order to find and have these papers - both the critiques and responses? How long have these been available? Miller's Critique? And Dr Dembski's responses? It seems these have been available for quite some time now, have they not? And it also seems that anyone with access to the Internet can get to them, is that not true VJ? If someone (particularly a mathematician) was earnestly so inclined to work with these concepts, it would seem they might have had access to them all along. - - - - - - - - Guess what? It won't be sufficient.Upright BiPed
March 24, 2011, 06:09 PM PDT
niwrad,
The increase of a protein (eventually produced by gene duplication) by definition is a quantitative effect with no CSI per se. CSI implies quality and in principle quantity doesn’t entail quality.
You either have not read or have not understood Dembski's last paper on CSI, "Specification: The Pattern that Signifies Intelligence." The approach to design inference is one of computing quantities.Noesis
March 24, 2011, 06:06 PM PDT
MathGrrl, I was not challenging you with my remark about Omega. I was trying to give you a hint. I'll say outright this time that Dembski wanted (past tense, because we're talking about work he seems to have abandoned) dearly to have a probability measure on the space of possible biological forms, so he could take the negative logarithm of the probability of a form to get information. As Stuart Kauffman has pointed out, there can be no such probability measure, because none of us can know the space of possible biological forms (or phase space, as he puts it). Dembski does not know the phase space. He has often complained that evolutionary biologists won't give him the probabilities that he needs. He has indicated that evolutionary theory is deficient because it does not yield those probabilities. He seems to believe that if the theory says that there are chance contributions to biological evolution, then it should provide probabilistic models. This does not follow logically. If I see you flipping an apparently fair coin to select inputs to a "black box," then I know that there is a chance contribution to the behavior of the system. But there is no way for me to provide a detailed probabilistic model. In particular, I do not know the range of responses of the black-box system. Dembski promised long ago to produce an upper bound on the probability of evolution of the bacterial flagellum. He has yet to get back to us with that. If he should ever claim to have that bound, it will be bogus. Again, he cannot measure probability on a set he cannot hope to define. And without probability, there is no CSI.Noesis
March 24, 2011, 05:40 PM PDT
Mathgrrl: In the interests of precision, and to avoid confusion, the line where I wrote in the post above: 20^30 = (10^39)^30 = 10^1170 should read: (20^30)^30 = (10^39)^30 = 10^1170.vjtorley
March 24, 2011, 04:50 PM PDT
Mathgrrl: The specified complexity Chi of a bacterial flagellum is somewhere between 2126 and 3422, according to Professor Dembski's preliminary calculations of the probability of a bacterial flagellum arising by chance. I don't have a copy of Dembski's No Free Lunch: Why specified complexity cannot be purchased without intelligence (2002, Lanham, Maryland: Rowman & Littlefield) where he performs the original calculation. However, I found the following quote in a critical review by Professor Kenneth Miller, entitled, The Flagellum Unspun: The Collapse of 'Irreducible Complexity' :
When Dembski turns his attention to the chances of evolving the 30 proteins of the bacterial flagellum, he makes what he regards as a generous assumption. Guessing that each of the proteins of the flagellum have about 300 amino acids, one might calculate that the chances of getting just one such protein to assemble from "random" evolutionary processes would be 20^-300 , since there are 20 amino acids specified by the genetic code. Dembski, however, concedes that proteins need not get the exact amino acid sequence right in order to be functional, so he cuts the odds to just 20^-30, which he tells his readers is "on the order of 10^-39" (Dembski 2002a, 301). Since the flagellum requires 30 such proteins, he explains that 30 such probabilities "will all need to be multiplied to form the origination probability"(Dembski 2002a, 301). That would give us an origination probability for the flagellum of 10^-1170, far below the universal probability bound.
For the benefit of non-mathematical readers, I should point out that 20^30 = (10^39)^30 = 10^1170 (approximately). Miller criticized Dembski's logic in the paper I cited. Dembski replied in a paper entitled, Still Spinning Just Fine: A Response to Ken Miller . I'll just quote the relevant parts:
My point in section 5.10 [of No Free Lunch - VJT] was not to calculate every conceivable probability connected with the stochastic formation of the flagellum (note that the Darwinian mechanism is a stochastic process). My point, rather, was to sketch out some probabilistic techniques that could then be applied by biologists to the stochastic formation of the flagellum. As I emphasized in No Free Lunch (2002, 302): "There is plenty of biological work here to be done. The big challenge is to firm up these numbers and make sure they do not cheat in anybody's favor." Miller doesn't like my number 10^(-1170), which is one improbability that I calculate for the flagellum. Fine. But in pointing out that a third of the proteins in the flagellum are closely related to components of the TTSS, Miller tacitly admits that two-thirds of the proteins in the flagellum are unique. In fact they are (indeed, if they weren't, Miller would be sure to point us to where the homologues could be found). Applied to those remaining two-third of flagellar proteins, my calculation yields something like 10^(-780), which also falls well below my universal probability bound.
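The arithmetic in the quoted exchange can be checked in log space (the raw probabilities underflow ordinary floats long before 10^-1170, so base-10 exponents are used; this verification is mine, not part of the original discussion):

```python
import math

# One protein with 30 effectively "required" positions: 20^-30
log10_one_protein = -30 * math.log10(20)
print(round(log10_one_protein, 2))   # -39.03, i.e. "on the order of 10^-39"

# 30 such proteins multiplied together: (20^-30)^30
log10_flagellum = 30 * log10_one_protein
print(round(log10_flagellum))        # -1171, i.e. roughly 10^-1170
```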
Some scientists have criticized Professor Dembski's probability calculations as being too simplistic. I would advise them (and you, if you haven't already) to read Dembski's 2004 paper, Irreducible Complexity Revisited, in which he lays out the numerous hurdles that have to be overcome before an irreducibly complex biochemical system can evolve by a Darwinian mechanism:
(1) Availability. Are the parts needed to evolve an irreducibly complex biochemical system like the bacterial flagellum even available? (2) Synchronization. Are these parts available at the right time so that they can be incorporated when needed into the evolving structure? (3) Localization. Even with parts that are available at the right time for inclusion in an evolving system, can the parts break free of the systems in which they are currently integrated and be made available at the "construction site" of the evolving system? (4) Interfering Cross-Reactions. Given that the right parts can be brought together at the right time in the right place, how can the wrong parts that would otherwise gum up the works be excluded from the "construction site" of the evolving system? (5) Interface Compatibility. Are the parts that are being recruited for inclusion in an evolving system mutually compatible in the sense of meshing or interfacing tightly so that, once suitably positioned, the parts work together to form a functioning system? (6) Order of Assembly. Even with all and only the right parts reaching the right place at the right time, and even with full interface compatibility, will they be assembled in the right order to form a functioning system? (7) Configuration. Even with all the right parts slated to be assembled in the right order, will they be arranged in the right way to form a functioning system?
For the time being, then, in the absence of any better calculations, I'm going to stick with 10^-780 and 10^-1170 as the upper and lower bounds for the probability of a bacterial flagellum arising as a result of stochastic processes. Now recall that Dembski, in his paper, Specification: The Pattern that Signifies Intelligence, defines (on page 24) the specified complexity Chi of pattern T given chance hypothesis H, minus the tilde and context sensitivity, as: Chi = -log2[10^120 · Phi_s(T) · P(T|H)] and then goes on to define a specification as any pattern for which -log2[10^120 · Phi_s(T) · P(T|H)] > 1. He continues:
As an example of specification and specified complexity in their context-independent form, let us return to the bacterial flagellum. Recall the following description of the bacterial flagellum given in section 6: "bidirectional rotary motor-driven propeller." This description corresponds to a pattern T. Moreover, given a natural language (English) lexicon with 100,000 (=10^5) basic concepts (which is supremely generous given that no English speaker is known to have so extensive a basic vocabulary), we estimated the complexity of this pattern at approximately Phi_s(T) = 10^20 (for definiteness, let's say S here is me; any native English speaker with some knowledge of biology and the flagellum would do). It follows that -log2[10^120 · Phi_s(T) · P(T|H)] > 1 if and only if P(T|H) < (1/2)*(10^-140), where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event, is the evolutionary pathway that brings about the flagellar structure (for definiteness, let's say the flagellar structure in E. coli).
Time for some math. Given that log2(10) = 3.321928094887362 (approx.), that Phi_s(T) = 10^20 and that 10^(-1170) <= P(T|H) <= 10^(-780), we can calculate: -log2[10^120 · Phi_s(T) · P(T|H)] = -3.321928094887362*(140-780) = 2126 for the most optimistic scenario for the chance formation of the bacterial flagellum, and -3.321928094887362*(140-1170) = 3422 for the most pessimistic scenario. So there's your answer, in black and white: for the bacterial flagellum, the specified complexity lies somewhere between 2126 and 3422. Since this is far greater than 1, the flagellum can be described as a specification. I hope this answers your question.vjtorley
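vjtorley's two endpoint values check out; a quick verification (with log2(10) taken from the math library rather than typed in):

```python
import math

log2_10 = math.log2(10)
# Exponent sums: 120 + 20 = 140, with P(T|H) = 10^-780 or 10^-1170
optimistic = -log2_10 * (140 - 780)     # Chi for P(T|H) = 10^-780
pessimistic = -log2_10 * (140 - 1170)   # Chi for P(T|H) = 10^-1170
print(round(optimistic), round(pessimistic))  # 2126 3422
```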
March 24, 2011, 04:40 PM PDT
My understanding of CSI is that if it is present then that is a dead-on indicator of a designing agency. Meaning chance and necessity cannot account for CSI. It is a yes or no thing. And you go about determining it is present by counting bits and determining a specification is present. (I also think it is limited because of that- you need to be dealing with something readily represented as bits- but that is neither here nor there- just sayin'.) But anyway, yes or no. Function is a specification- does it exist, yes or no. We have something with a function- a bacterial flagellum- can chance and necessity account for it- yes or no. If yes, we never get to the design inference and CSI isn't looking so good as a dead-on design indicator.Joseph
March 24, 2011 at 04:40 PM PDT
QuiteID:
“500 bits of specified information.” Exactly! The CSI is measured in bits, but the thing being measured is the information, not the specification. Right? Specification is either present or not.
1. It can't be CSI without the specification.
2. Yes, the thing being measured is the information.
3. Therefore Shannon's methodology for measuring/calculating information may apply, with that caveat.
4. As Stephen C. Meyer pointed out in "Signature in the Cell," Shannon provided a way of measuring/calculating information-carrying capacity.
5. Therefore, if that information is also specified, it does not change the measurement/calculation.

Good job Q.Joseph
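Shannon's carrying-capacity measure referred to above is easy to state concretely. A small sketch (my own illustration, not from Meyer): a sequence of n independent, equiprobable symbols from an alphabet of size k can carry n * log2(k) bits, so a 250-nucleotide DNA sequence has a carrying capacity of 500 bits, the oft-cited threshold figure.

```python
import math

def carrying_capacity_bits(length, alphabet_size):
    # Shannon information-carrying capacity of a sequence of
    # independent, equiprobable symbols: length * log2(alphabet_size).
    # This measures capacity only; it says nothing about whether the
    # sequence is specified.
    return length * math.log2(alphabet_size)

dna_250 = carrying_capacity_bits(250, 4)  # 250 nucleotides -> 500.0 bits
```

As the comment notes, adding a specification does not change this number; it only changes what the number is claimed to indicate.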
March 24, 2011 at 04:23 PM PDT
Joseph, Your brain automatically supplies the missing word without telling you it's missing. Now here's the question, is that a designed process, and if so, who designed it? Mathgrrl, I'm not sure you will get the definition you want because "intelligence" and "intelligent agent" are not defined that specifically either. It's like trying to say that 2X=3Y when you know that X and Y are not quantifiable. You are trying to find the value of X (CSI) but Y seems to be the value of PI. Undefinable.Collin
March 24, 2011 at 03:43 PM PDT
Joseph, "500 bits of specified information." Exactly! The CSI is measured in bits, but the thing being measured is the information, not the specification. Right? Specification is either present or not. It's like, the speed of a washing machine is quantifiable as rpms. But the machine itself is either top-loading or front-loading.QuiteID
March 24, 2011 at 03:41 PM PDT
The NFL theorems do not apply to a situation in which there is only one fitness landscape
The notion of NFL is larger than the theorems of that name. In ID usage (as in the book No Free Lunch) it deals with the probability that an evolutionary algorithm can exist in the first place, before there is even a fitness landscape.
Can you provide a rigorous mathematical definition of CSI
With respect to a certain pattern and probability distribution, in principle, you can provide a measurement. Whether you decide that the definition is rigorous (or not) is less of a problem for ID than it is for Darwinists who claim certain structures can evolve without intelligence in the pipeline.
and example calculations for the four scenarios I described?
We can work on it if you can provide the probabilities for the landscapes existing in the first place without intelligent programming of the landscape. In the genetic algorithms I wrote, the probability space for creating that landscape is very small. An approximate measure is the amount of probability space needed to create a workable program from the language symbols. You can't just say Tierra, or EV, or Steiner will work by letting a random number generator create the fitness functions. This is like saying a printed document requires no intelligence because a computer and printer can print something out without further human intervention after the "print" command is initiated.

I've already said evolutionary algorithms can create CSI if they act as surrogates and extensions of human thought or other intelligent agencies. I see no reason to believe evolutionary algorithms capable of generating CSI can spontaneously self-create. Say the Steiner source is 1000 characters long (in Fortran); what is the number of compilable programs that can be implemented in 1000 characters versus the space of all possible 1000-character strings? Certainly it is small. Will you be dissatisfied with anything less than multi-decimal precision when the probabilities are so obviously remote?

Let's say a variable name and reference requires 10 characters and must be coordinated somewhere in the source code so the program works correctly. The probability of success would be something like 1 in 10^40 (if we include special characters). And that is only the beginning of the problems. I would hardly stake much claim in a theory having a 1 in 10^40 chance of being true. And that is a generous figure, by the way.

Even if you said biological systems implement evolutionary algorithms, you have no theoretical justification to say the biological systems self-created their capabilities any more than Tierra, EV, or Steiner self-created themselves. They all needed intelligent designers. Praise be.
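One way to reach a figure of that order can be sketched in a few lines. This is my own reconstruction: the alphabet size and the assumption of two coordinated occurrences (declaration plus one matching reference) are guesses at the reasoning, not given in the original comment.

```python
# Toy estimate: a 10-character variable name, drawn from an alphabet of
# roughly 100 symbols (letters, digits, special characters), must appear
# correctly at both its declaration and one coordinated use site for the
# program to compile and run.
alphabet_size = 100     # assumption: roughly the printable character set
name_length = 10
coordinated_sites = 2   # declaration + matching reference

p_success = (1 / alphabet_size) ** (name_length * coordinated_sites)
# p_success is on the order of 10^-40, matching the "1 in 10^40" figure
```

Under these (generous and debatable) assumptions the arithmetic does land at about 10^-40; different alphabet sizes or coordination counts would change the exponent substantially.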
By the way, Mendel's Accountant is a superior model of evolutionary algorithms in nature. Why aren't those results (versus EV, Tierra, Steiner) used in scientific discussion? I suppose because they give answers Darwinists don't like, even if they might be correct.scordova
March 24, 2011 at 03:39 PM PDT
Mathgrrl... such dedication to detail. Yet she refuses to acknowledge a key reality of what she is "seeking" to understand. Perhaps that reality stands as an impediment to the claim she wishes to make. "Does the output of any evolutionary algorithm being modeled establish the semiosis required for [the] information to exist, or does it take it for granted as an already existing quality?" To answer this question, Mathgrrl, you don't even have to agree with me that information only exists as a matter of symbols and rules. In the matter at hand, the fact that it does exist as a mapping of discrete objects is not even in dispute. So why is it that you cannot acknowledge it?Upright BiPed
March 24, 2011 at 03:39 PM PDT
"in biology CSI refers to biological function" I don't think that's true of biology generally, as the term is used by very few people in the field.QuiteID
March 24, 2011 at 03:37 PM PDT
"But anyway seeing that you ignore most of what I post dealing with you is a waste of time." Because you have failed to provide the one thing MathGrrl is asking for: a methematically rigorous definition of CSI.Grunty
March 24, 2011 at 03:36 PM PDT
Mathgrrl, Do you believe that the word "intelligence" has been defined rigorously?Collin
March 24, 2011 at 03:36 PM PDT
Collin, Strange that when I read it the missing word is there. Funny how that works.Joseph
March 24, 2011 at 03:36 PM PDT