Uncommon Descent | Serving The Intelligent Design Community

On The Calculation Of CSI


My thanks to Jonathan M. for passing on my suggestion for a CSI thread, and a very special thanks to Denyse O’Leary for inviting me to offer a guest post.

[This post has been advanced to enable a continued discussion on a vital issue. Other newer stories are posted below. – O’Leary ]

In the abstract of Specification: The Pattern That Signifies Intelligence, William Dembski asks “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” Many ID proponents answer this question emphatically in the affirmative, claiming that Complex Specified Information is a metric that clearly indicates intelligent agency.

As someone with a strong interest in computational biology, evolutionary algorithms, and genetic programming, this strikes me as the most readily testable claim made by ID proponents. For some time I’ve been trying to learn enough about CSI to be able to measure it objectively and to determine whether or not known evolutionary mechanisms are capable of generating it. Unfortunately, what I’ve found is quite a bit of confusion about the details of CSI, even among its strongest advocates.

My first detailed discussion was with UD regular gpuccio, in a series of four threads hosted by Mark Frank. While we didn’t come to any resolution, we did cover a number of details that might be of interest to others following the topic.

CSI came up again in a recent thread here on UD. I asked the participants there to assist me in better understanding CSI by providing a rigorous mathematical definition and showing how to calculate it for four scenarios:

  1. A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.”
  2. Tom Schneider’s ev evolves genomes, using only simplified forms of known, observed evolutionary mechanisms, that meet the specification of “A nucleotide that binds to exactly N sites within the genome.” The length of the genome required to meet this specification can be quite long, depending on the value of N. (ev is particularly interesting because it is based directly on Schneider’s PhD work with real biological organisms.)
  3. Tom Ray’s Tierra routinely results in digital organisms with a number of specifications. One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The shortest known parasite is at least 22 bytes long, but it takes thousands of generations to evolve.
  4. The various Steiner Problem solutions from a programming challenge a few years ago have genomes that can easily be hundreds of bits. The specification for these genomes is “Computes a close approximation to the shortest connected path between a set of points.”

vjtorley very kindly and forthrightly addressed the first scenario in detail. His conclusion is:

I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs.

In that same thread, at least one other ID proponent agrees that known evolutionary mechanisms can generate CSI. At least two others disagree.

I hope we can resolve the issues in this thread. My goal is still to understand CSI in sufficient detail to be able to objectively measure it in both biological systems and digital models of those systems. To that end, I hope some ID proponents will be willing to answer some questions and provide some information:

  1. Do you agree with vjtorley’s calculation of CSI?
  2. Do you agree with his conclusion that CSI can be generated by known evolutionary mechanisms (gene duplication, in this case)?
  3. If you disagree with either, please show an equally detailed calculation so that I can understand how you compute CSI in that scenario.
  4. If your definition of CSI is different from that used by vjtorley, please provide a mathematically rigorous definition of your version of CSI.
  5. In addition to the gene duplication example, please show how to calculate CSI using your definition for the other three scenarios I’ve described.

Discussion of the general topic of CSI is, of course, interesting, but calculations at least as detailed as those provided by vjtorley are essential to eliminating ambiguity. Please show your work supporting any claims.

Thank you in advance for helping me understand CSI. Let’s do some math!

Comments
Dembski is kind of AWOL. At his website, designinference.com, he has only one entry in 2010 and none in 2011. In 2007 he had like 15. These entries are scholarly, not "blog posts." What is he up to lately?
Collin
March 25, 2011 at 1:49 PM (PDT)
After your “big finger from the sky” comment, I mistakenly thought you were here to encourage the fraud.
Not very familiar with Monty Python, are you?
Point on the word in my question that indicates hostility toward you, and I will be more than happy to retract it.
Having observed many more threads than I have commented on, I have noticed that, when frustrated, you tend to answer questions with questions. So, your whole question came across as a fit of pique.
jon specter
March 25, 2011 at 1:40 PM (PDT)
critter at 217, Bill Dembski blogs here, but - just for clarification - this is not "his" blog. It passed into the hands of a Colorado not-for-profit some time ago, by his wish. He and I are two of five mods. I hope you find the information you seek.
O'Leary
March 25, 2011 at 1:39 PM (PDT)
Critter, That's exactly what I'm thinking. He used to comment much more frequently on this blog.
Collin
March 25, 2011 at 1:37 PM (PDT)
QID, yes yes, I am sure you are right. Implications haven't had an impact on any other discourse regarding ID, so I must have been making an unsupported assumption.
Upright BiPed
March 25, 2011 at 1:34 PM (PDT)
I have been following this thread because I am genuinely interested in the math of evolution and design. (I had a year of work towards an MS in Math.) Mathgrrl is asking for the math involved. Perhaps Dr. Dembski can supply some knowledge (this is his blog).
critter
March 25, 2011 at 1:32 PM (PDT)
#213 1) KF = kairosfocus. You might have noticed, since he has been active on the same threads as you have been. 2) My apologies as well. After your "big finger from the sky" comment, I mistakenly thought you were here to encourage the fraud. I do apologize. Please allow me to make it up to you. Point on the word in my question that indicates hostility toward you, and I will be more than happy to retract it.
Upright BiPed
March 25, 2011 at 1:32 PM (PDT)
While I probably don't understand CSI well enough, I thought that my calculations in 21, 28 and 29 were interesting. Does anybody else like my idea of calculating the tightness of fit between code and function and comparing it to the same code and function in an unrelated species to get an inference of design? It may not be CSI but I thought that the math was correct (and very simple).
Collin
March 25, 2011 at 1:24 PM (PDT)
vjtorley, do you agree with me that the CSI arguments should be presented in a form acceptable to a top-notch applied math journal? I don't think they've been presented in that way yet, which is one reason that mathematics as a whole has ignored the issue.
Upright Biped, most mathematicians have not found CSI and related ideas worth considering. I think they are worth considering, but the challenge has not (yet) been presented in ways that would force mathematicians to take them seriously.
QuiteID
March 25, 2011 at 12:32 PM (PDT)
#208 KF took care of that long ago, but nothing is good enough if the conclusion has to be protected.
Oh, sorry. I am relatively new here and don't always get who is who. Which usernames are those scientists posting under?
While you are here, do you think you can answer one of the questions ruled off-limits on this thread?
I don't think I deserve that hostility. I came here as a supporter, hoping to see CSI calculated. That is still my hope.
jon specter
March 25, 2011 at 12:30 PM (PDT)
QuiteID, The fact that the paper I linked to in comment 12 exists proves that scientists are interested in quantifying functional information, which is the equivalent of CSI. And I am not sure what VJ is referring to. In the paper he links to, Dembski specifically says:
Note that putting the logarithm to the base 2 in front of the product Phi_s(T)·P(T|H) has the effect of changing scale and directionality, turning probabilities into number of bits and thereby making the specificity a measure of information. (page 18, bold added)
Joseph
March 25, 2011 at 12:23 PM (PDT)
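A one-line numerical illustration of the point in that quote, for concreteness. The probability used (1 in 10 billion) is the value from Dembski's ten-digit-sequence example quoted further down this thread; it is purely illustrative here.

```python
import math

# -log2 turns a probability into a number of bits, as the quoted passage notes.
# The probability is illustrative: 1 in 10 billion, the figure Dembski uses for
# the ten-digit-sequence example cited elsewhere in this thread.
p = 1e-10
print(-math.log2(p))   # ~33.2 bits
```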
#208 KF took care of that long ago, but nothing is good enough if the conclusion has to be protected. While you are here, do you think you can answer one of the questions ruled off-limits on this thread? "Does the output of any evolutionary algorithm being modeled establish the semiosis required for the information to exist, or does it take it for granted as an already existing quality?"
Upright BiPed
March 25, 2011 at 12:11 PM (PDT)
Hi everyone, For people who might be interested in getting their hands on some solid probability figures relating to various molecular machines, here's a source I've just found, which I'd strongly recommend: http://theory-of-evolution.net/ . Here is a list of 40 irreducibly complex molecular machines, for those who aren't familiar with it already: http://creationbydesign.wordpress.com/2010/06/15/list-of-40-irreducibly-complex-molecular-machines-which-defy-darwinian-claims/ .
vjtorley
March 25, 2011 at 12:10 PM (PDT)
Mathgrrl: After reading your posts, I'm beginning to think that your college major wasn't mathematics, as your handle suggests, but English (you're quite an articulate person) or possibly biology (since you display familiarity with various software programs designed to mimic evolution). Why do I say that? Looking through your comments, I can see plenty of breezy, confident assertions along the lines of "Yes, I've read that paper," but so far, NOT ONE SINGLE EQUATION, and NOT ONE SINGLE PIECE OF RIGOROUS MATHEMATICAL ARGUMENTATION from you. Instead, you've let us do all the mathematical spadework, while you've done nothing but critique it on general, non-technical grounds. This is highly suspicious. I'm calling you out. How much mathematics do you really know? You have complained that you don't know how to calculate the CSI for a bacterial flagellum, despite professing to be familiar with Dembski's works. But the calculation I performed for the CSI of a bacterial flagellum could have been done by anyone who had completed Grade 10 at high school. There was nothing advanced about the mathematics. So I have to ask: who are you, really? You write (#193):
There are no calculations of CSI that provide enough detail to allow it to be objectively calculated for other systems. The only example of a calculation for a biological system is Dembski's estimate for a bacterial flagellum, but no one has managed to apply the same technique to other systems.
Now that's just mean, rude and curmudgeonly. I'm a philosopher, not a mathematician, and it's been 30 years since I studied mathematics at university. I spent hours of my valuable time looking up Professor Dembski's old papers, tracking down the probabilities and re-reading his paper on specification to see if I'd understood the math aright, before calculating a figure of between 2126 and 3422 for the CSI of a bacterial flagellum, and you never even acknowledged my calculation. A simple "Thank you" would have been nice. Instead, you complained that CSI had only been calculated for a bacterial flagellum so far, and that "no one has managed to apply the same technique to other systems." Rubbish. Actually, it's quite easy to do. If you click here, you will find a list of 40 irreducibly complex molecular machines. If you scroll down to number 8, you will find one I've written about before: ATP synthase. Here's the most concise English description: "stator joining two rotary motors." This description corresponds to a pattern T. Given a natural language (English) lexicon with 100,000 (=10^5) basic concepts, you should be able to estimate Phi_s(T). Let's see you do it. What about the probability P(T|H)? Well, I've located a scientifically respectable source which calculates the probability of ATP synthase arising as 1 in 2^884, or 1 in 1.28x10^266. I invite you to calculate the CSI, using Dembski's formula. Can you, I wonder? I'm calling your bluff.
vjtorley
March 25, 2011 at 12:01 PM (PDT)
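A minimal sketch of the exercise vjtorley poses above, under one stated assumption: the five-word description "stator joining two rotary motors" is counted as 5 basic concepts drawn from the 10^5-concept lexicon, giving Phi_s(T) on the order of (10^5)^5 = 10^25. Everything else (the 2^-884 probability, the 10^120 factor, the Chi formula) is taken from the thread.

```python
import math

# Chi = -log2[10^120 * Phi_s(T) * P(T|H)], worked in log space because
# 2^-884 underflows double-precision floating point.
log2_10 = math.log2(10)

log2_p = -884                  # P(T|H) = 2^-884, the probability cited above
log2_phi_s = 25 * log2_10      # Phi_s(T) ~ 10^25 (assumed, see lead-in)
log2_resources = 120 * log2_10 # Dembski's 10^120 factor

chi = -(log2_resources + log2_phi_s + log2_p)
print(round(chi, 1))           # ~402.3; above 1, so a design inference would
                               # follow on Dembski's criterion, given these assumptions
```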
You sure got that right. Scientists like Denton, Behe, Orgel, Abel, Durston, Kenyon, Thaxton, to name a few.
Do you know any of them? It might be helpful to invite them over to further MathGrrl's education.
jon specter
March 25, 2011 at 11:50 AM (PDT)
"Sounds an awful lot like a day in the life of a scientist" You sure got that right. Scientist like Denton, Behe, Orgel, Abel, Durston, Kenyon, Thaxton, to name a few.Upright BiPed
March 25, 2011
March
03
Mar
25
25
2011
11:39 AM
11
11
39
AM
PDT
"when those works do not present with a level of precision that would be accepted by the vast majority of mathematicians. Perhaps that’s why these concepts have gone nowhere within the mathematics community." This is an example of naiveté that ID cannot afford. If these exact calculations demonstrated that information emerges as easily as rust, then they would be held up and waived around in the hands of NCSE lawyers in courthouses across the country. They do, however, show that the opposite is true.Upright BiPed
March 25, 2011
March
03
Mar
25
25
2011
11:35 AM
11
11
35
AM
PDT
PaV, I think a solution for this dilemma -- and I do think it would be a real solution -- would be for someone to publish a description and defense of CSI in a top-flight applied mathematics journal. So far, the mathematics of CSI hasn't been developed with that degree of mathematical precision.
QuiteID
March 25, 2011 at 11:24 AM (PDT)
MathGrrl: I noticed this in Schneider's response to Wm Dembski's objection to ev.
The ev paper did not make this claim since the phrase "complex specified information" was not used. It is unclear what this means. Shannon used the term "information" in a precise mathematical sense and that is what I use. I will assume that the extra words "complex specified" are jargon that can be dispensed with. Indeed, William A. Dembski assumes that information is specified complexity, so the term is redundant and can be removed.
You've alluded to Schneider before. Given his predilection for Shannon information, and his phrase, "Shannon used the term 'information' in a precise mathematical sense . . .", I suspect that you're a graduate student of his. In the interest of full disclosure, please tell us exactly what you're doing these days. Unless there are serious reasons for keeping this undisclosed, I take your refusal to answer as sufficient reason to no longer prolong this discussion. (Although it isn't a discussion, since you keep repeating over and over the same demand, sorry, request.)
PaV
March 25, 2011 at 11:23 AM (PDT)
QuiteID (#191, 192) Thank you for your posts. I quite agree with your points that: (i) the information that can be stored in a system is proportional to the logarithm logb(N) of the number N of possible states of that system; (ii) if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states; and (iii) when b is 2, the unit is the "bit" (a contraction of binary digit).
My point, however, is that the expression [(10^120).Phi_s(T).P(T|H)] in Professor Dembski's formula for Chi doesn't refer to a number N of possible states, but to a probability: the probability (with respect to the chance hypothesis H) for the chance occurrence of an event that matches any pattern whose descriptive complexity is no more than that of pattern T and whose probability is no more than P(T|H), over the entire history of the universe (which is why the 10^120 factor is introduced). In my book, a probability is just a number, and the point of taking the log to base 2 and multiplying by -1 is to ascertain whether this probability is greater or less than 0.5. If the probability is far less than 0.5, then taking the log to base 2 of that probability and multiplying it by -1 will yield a number well in excess of 1, but it would be meaningless to equate this to bits.
Consider Professor Dembski's statement that we can infer design by an intelligent agent if the specified complexity Chi is greater than 1. If you were to equate "1" to "1 bit" and then express Dembski's statement in terms of bits, you would get the nonsensical result that one bit is enough to warrant an inference to an intelligent agent! Surely that can't be what Dembski meant. That's why I interpret Chi as a cutoff number: if it's above 1, then we can be certain beyond reasonable doubt, on mathematical grounds alone, that the pattern was designed; and if it's 1 or below, then we need other compelling grounds if we are to make a warranted design inference.
vjtorley
March 25, 2011 at 11:18 AM (PDT)
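A toy illustration of the cutoff reading vjtorley describes above, with a made-up value for the product 10^120.Phi_s(T).P(T|H):

```python
import math

# Toy illustration of the cutoff reading discussed above: Chi > 1 exactly when
# 10^120 * Phi_s(T) * P(T|H) falls below 0.5. The product value is made up.
product = 0.25
chi = -math.log2(product)
print(chi, chi > 1)   # 2.0 True -> clears Dembski's threshold of 1
```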
To provide a “rigorous definition” of CSI in the case of any of those programs would require analyzing the programs in depth so as to develop a “chance hypothesis”. This would require hours and hours of study, thought, and analysis.
Sounds an awful lot like a day in the life of a scientist.
jon specter
March 25, 2011 at 11:17 AM (PDT)
Dear MathGrrl: To provide a "rigorous definition" of CSI in the case of any of those programs would require analyzing the programs in depth so as to develop a "chance hypothesis". This would require hours and hours of study, thought, and analysis. You come here and just simply "ask" that someone do this. Why? You do it. You analyze, study and think about any one of those programs, and you think about, study and analyze Dembski's concept of CSI, and then you prove that, lo and behold, ev or Tierra, or what have you, is wrong.
What Dembski lays out in NFL is the conceptualization of a very abstract notion. His paper on Specification further elaborates and, frankly, complicates his original conceptualization. The hard labor will always be to take this conceptual framework and apply it to real-life situations. It can be done. But it would take an immense labor, and the output of all this would amount to a Ph.D. thesis in length and complexity. And, if you can't understand all of the work and effort that would be required, then I don't think you understand what's involved very well. That's your problem; not ours.
I've already posted a paper that has shown that, per his definition of complexity, neither ev nor Tierra produces any on-going complexity. And the bit output they come up with, based on the standards of NFL's description of CSI, demonstrates these programs don't measure up to CSI. Likewise, I've stated elsewhere---and you're aware---that I've looked at Tierra in some detail, but still in a cursory way, and from what I can see, its output is trivial. You have a "parasite" forming. A different "life form". What does that mean actually? That a small assembly-language program can, through generation of random changes, both lose its ability to "copy" itself, and then find a way to get another "organism" (i.e., a small program) to "copy" itself. Now I'm sure that the smallness and simplicity of the programming (let's remember the guy who wrote this "program" was a biologist, not a computer geek) might allow something like this to happen in random fashion IF YOU RUN the computer program long enough. Which is what happens. But from what I could see, both the loss and parasitism were simply the result of commands coming and going. Let's remember that the "copy" command, and its execution, have already been programmed in. So this basically amounts to what we see in so-called microevolution when an operon is turned on and off in bacteria, for example. Why should I spend another moment analyzing something as basic and simplistic as this output? Can you give me an answer, other than you would be interested in it? Well, excuse me if I don't respond to your request.
Tom Ray, the 'inventor' of Tierra, made it available to anyone on the internet. It was Network Tierra. And what were Ray's expectations? He actually expected that by sheer computer power---that is, linking to other computers to use their CPU time---he was going to generate "software programs" that we "couldn't even imagine"---just like Darwinism, you see. You can see him making this claim on an internet video. Well, he made that claim years ago. What has happened since? No "software program". Tom Ray is now doing other kinds of research. IOW, a big, fat "dead-end". The paper I cited put into mathematical language the very obvious conclusion I reached having taken a look at Tierra. As to ev, here's a link to Schneider's EV home. What do we see? Can you see the plateau-ing of the "complexity" after a thousand generations?
So, what is your point with this repeated request---I call it a demand because of this repetition? If you don't understand CSI, then write Wm. Dembski an email, and ask for clarification. It's as simple as that. Again, one could, and can, do an ANALYSIS of whether or not CSI is present within the output of those programs, using the definition of Dembski. This is a response to an entirely different question, and the one which I think you think you're asking. But I'm not about to do it. You seem interested, so why don't you do that? "Creativity", like CSI, is an abstract concept. So, please, provide me with a rigorous mathematical definition of "creativity". Can you do that? One can, however, apply the mathematical structure of CSI, as found in NFL and then refined in "Specification", to these various programs and demonstrate that they don't constitute CSI as described/defined by Dembski. This is something entirely different than "providing a rigorous mathematical definition" of CSI. Dembski has already done that. It's up to you to understand it, and then refute it. And, BTW, you would become a world-class luminary in the world of Darwinism if you could refute it. So why don't you start doing that, instead of just repeating your disproportionate demand for a "rigorous mathematical" refutation of ev and Tierra, etc.
PaV
March 25, 2011 at 11:04 AM (PDT)
Joseph, I think MathGrrl has a point. The fact is that the mathematics of CSI is somewhat confusingly presented. The latest little dispute about whether CSI is in bits or not (you and I think it is, vjtorley thinks it's not) is illustrative. It won't do to say "read No Free Lunch and the Design Inference," when those works do not present with a level of precision that would be accepted by the vast majority of mathematicians. Perhaps that's why these concepts have gone nowhere within the mathematics community. That doesn't mean they're wrong -- I hope they're not! -- but it does mean that the mathematics needs to be nailed down tightly.
QuiteID
March 25, 2011 at 11:00 AM (PDT)
markf (#186) I'd now like to address your first point. You write that Dembski's estimates for the probability of a bacterial flagellum arising by stochastic processes "are based on assuming that all amino acids are equally likely and independent of each other." That's true, but in reality they are all equally likely, and they are independent of each other. That's just a fact of chemistry. A similar point applies to the DNA molecule: the four bases are equally likely, and they are independent of each other. If they were dependent on each other, then they'd produce a boring regular pattern like AGAGAGAG - i.e. mere order, rather than complexity. Order has low Shannon information. Hence I do not agree with you that the two figures I supplied (namely, 10^(-780) and 10^(-1170)) "effectively represent... a high and low estimate of ... a lower bound – the lowest the probability could reasonably be, given no other knowledge of the process by which the proteins were obtained." I would say that they represent the upper and lower bounds of a naive estimate, which may either be too optimistic or too pessimistic, but which is currently the best we have. You correctly state that "Dembski himself says that the precise calculation of P(T|H) is yet to be done" but you then add that "this was written after the works you refer to." Actually, Dembski said the same thing several years ago, in his 2003 paper, Still Spinning Just Fine: A Response to Ken Miller:
My point in section 5.10 was not to calculate every conceivable probability connected with the stochastic formation of the flagellum (note that the Darwinian mechanism is a stochastic process). My point, rather, was to sketch out some probabilistic techniques that could then be applied by biologists to the stochastic formation of the flagellum. As I emphasized in No Free Lunch (2002, 302): "There is plenty of biological work here to be done. The big challenge is to firm up these numbers and make sure they do not cheat in anybody's favor." ... Bottom line: Calculate the probability of getting a flagellum by stochastic (and that includes Darwinian) means any way you like, but do calculate it. All such calculations to date have fallen well below my universal probability bound of 10^(-150). But for Miller all such calculations are besides the point because a Darwinian pathway, though completely unknown, most assuredly exists and, once made explicit, would produce probabilities above my universal probability bound. To be sure, if a Darwinian pathway exists, the probabilities associated with it would no longer trigger a design inference. But that's just the point, isn't it? Namely, whether such a pathway exists in the first place. Miller, it seems, wants me to calculate probabilities associated with indirect Darwinian pathways leading to the flagellum. But until such paths are made explicit, there's no way to calculate the probabilities. This is all very convenient for Darwinism and allows Darwinists to insulate their theory from critique indefinitely.
Enough on that. I'd now like to address a more fundamental point you raise, namely the interpretation of P(T|H). Let me say at the outset that my understanding of the significance of P(T|H) in Professor Dembski's paper, Specification: The Pattern that Signifies Intelligence is quite different from yours. I'm not accusing you of mis-reading Dembski's paper; what I'm suggesting is that Dembski's argument would make more sense if P(T|H) referred to the probability of a pattern T arising by pure chance. Please allow me to explain why. You write:
It is not legitimate to substitute these figures for P(T|H) in his formula which is intended to be genuine estimate of the probability of the bacterial flagellum based on an evolutionary hypothesis H. (Emphases mine - VJT.)
Certainly, there are passages in Professor Dembski's paper which support your interpretation. For instance, on page 18, Dembski writes that "H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms," and on page 25, when discussing the bacterial flagellum, Dembski writes that "H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms." Again, on page 26, Dembski talks about the example of a lopsided die, and says that even if specified complexity eliminates the chance hypothesis that all sides will appear with probability 1/6, "that still leaves alternative hypotheses H' for which the probability of the faces are not all equal." And again, on pages 27-28, Dembski rebuts an objection frequently voiced by evolutionists, that "because we can never know all the chance hypotheses responsible for a given outcome, to infer design because specified complexity eliminates a limited set of chance hypotheses constitutes an argument from ignorance." Personally, I think Professor Dembski was being too generous to his opponents here, and his more recent papers at http://www.evoinfo.org point to a better response: if Darwinian processes can produce organisms with a large amount of biological information, then these processes must themselves have been rigged with information at the start by an Intelligent Designer, thereby enabling them to achieve these spectacular results. Or as Dembski & Marks put it on page 4 of their 2009 article, Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information :
The challenge of intelligent design, and of this paper in particular, is to show that when natural systems exhibit intelligence by producing information, they have in fact not created it from scratch but merely shuffled around existing information. Nature is a matrix for expressing already existent information. But the ultimate source of that information resides in an intelligence not reducible to nature. The Law of Conservation of Information, which we explain and justify in this paper, demonstrates that this is the case. Though not denying Darwinian evolution or even limiting its role as an immediate efficient cause in the history of life, this law shows that Darwinian evolution is deeply teleological. Moreover, it shows that the teleology inherent in Darwinian evolution is scientifically ascertainable - it is not merely an article of faith. (Emphases mine - VJT.)
To return to the die example: if I found a die which, when rolled, yielded the first 100 digits of pi in base 6, I'd be quite certain that it was designed by some agent to do that. And if someone were to demonstrate to me that the laws of Nature and/or the initial conditions of the universe were sufficient to ensure (or make it highly likely) that the die would do that, I certainly wouldn't become a convert to naturalism. Instead, I'd say that some Intelligence must have designed those laws and/or initial conditions. In other words, I'd invoke the fine-tuning argument. The foregoing argument explains why I think that Dembski's argument is in fact better understood if Dembski's hypothesis H is treated as a pure chance hypothesis. And indeed, throughout most of his paper, Dembski writes as if that was what he meant. For instance, P(T|H) is defined by Professor Dembski in his paper as a probability: the probability of a pattern T with respect to the chance hypothesis H. Dembski repeatedly refers to "the chance hypothesis H" in his paper (see pages 3, 5, 6, 7, 8, 9, 12, 16, 18, 19, 20, 24 and 25) and on page 22, referring to a particular sequence of ten digits (1123581321), Dembski writes:
This sequence, if produced at random (i.e., with respect to the uniform probability distribution denoted by H), would have probability 10^(-10), or 1 in 10 billion. This is P(T|H). (Emphases mine - VJT.)
I also think it's odd to describe a loaded die as coming up with a number (say, 6's) "by chance." If I found a die that kept rolling 6's, I wouldn't come up with an alternative chance hypothesis. Instead, I'd reject the chance hypothesis in favor of the alternative non-chance hypothesis that the die was biased. So I think we should treat H as a pure chance hypothesis, with a uniform probability distribution, when endeavoring to ascertain whether a pattern was designed by an agent or not.
vjtorley
March 25, 2011 at 10:50 AM (PDT)
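A quick arithmetic check on how the two bounds quoted above feed into Dembski's formula. Phi_s(T) is taken here as roughly 10^20 for the flagellum; that value is an assumption on this page (it corresponds to a four-concept description over a 10^5-concept lexicon), but with it the formula reproduces the 2126 and 3422 figures vjtorley cites earlier in the thread.

```python
import math

# Chi = -log2[10^120 * Phi_s(T) * P(T|H)], computed via base-10 logs because
# probabilities like 10^-1170 underflow double-precision floats.
def chi(log10_phi_s, log10_p):
    return -(120 + log10_phi_s + log10_p) * math.log2(10)

print(round(chi(20, -780)))    # 2126
print(round(chi(20, -1170)))   # 3422
```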
MathGrrl:
I would very much like to understand CSI well enough to test the assertion that it cannot be generated by evolutionary mechanisms.
My apologies but I don't believe you. The reasons I don't believe you are:
1- "Evolutionary" mechanisms is meaningless. ID only argues against blind watchmaker-type processes having sole dominion over evolution.
2- That means ID doesn't attack evolutionary biology; it attacks blind watchmaker evolution.
3- "No Free Lunch" is readily available. The concept of CSI is thoroughly discussed in it. The math he used to arrive at the 500 bits as the complexity level is all laid out in that book.
4- Science isn't about "proof". But with our knowledge of cause and effect relationships, a designing agency is the only explanation for CSI.
5- Your scenarios are still bogus.
Joseph
March 25, 2011 at 10:33 AM (PDT)
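For reference, the 500-bit figure mentioned above lines up with the universal probability bound of 10^(-150) quoted elsewhere in this thread. The arithmetic sketch below is not Joseph's own derivation, just the conversion.

```python
import math

# The universal probability bound of 10^-150 (cited elsewhere in this thread),
# expressed in bits: -log2(10^-150) is roughly 500.
print(150 * math.log2(10))   # ~498.3 bits, commonly rounded to 500
```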
Mathgrrl, thanks for the sales pitch at 193. The dynamic on display here is, after all, not an uncommon occurrence in human interaction. It actually has a rather rich history (the Inquisition comes to mind). Questions may be asked in order to formulate a conclusion. This process is often the foundation of logic, justice, and discovery. Yet, as we all know, questions can also be asked not to formulate a conclusion, but as a means to demonstrate a conclusion which has already been reached. This method is very often contained in a closed environment where certain subjects are off limits, and any answers given must first fit through the arbitrary constraints imposed by the person asking the questions. Both of those elements are on rampant display on this thread (and the ones leading up to it). In this regard, what has all the appearances of an exercise to formulate a conclusion is nothing of the sort. Imagine a courtroom operating in such a way. And of course, the sales pitch you've just given is intended to present the latter process as an example of the former. Nice job.
Upright BiPed
March 25, 2011 at 10:33 AM (PDT)
Mathgrrl, Some of the confusion is probably due to my lack of understanding of what CSI is. So I hope you do not believe that there is more confusion over CSI than there actually is just because I haven't read the material.
Collin
March 25, 2011 at 9:52 AM (PDT)
Here are two quotes from the paper I cite above:
The issue of open-ended evolution can be summed up by asking under what conditions will an evolutionary system continue to produce novel forms. Artificial Life systems such as Tierra and Avida produced a rich diversity of organisms initially, yet ultimately peter out. By contrast, the Earth's biosphere appears to have continuously generated new and varied forms throughout the 4 × 10^9 years of the history of life. There is also a clear trend from simple organisms at the time of the first replicators towards immensely complicated organisms such as mammals and birds found on the Earth today. This raises the obvious question of what is missing in artificial life systems?
And:
Complexity is related to information in a direct manner, in a way to be made more precise later in this paper. Loosely speaking, available complexity is proportional to the dimension of phenotype space, and an evolutionary process that remained at low levels of complexity will quickly exhaust the possibilities for novel forms. However, intuitively, one would expect the number of novel forms to increase exponentially with available complexity, and so perhaps increasing complexity might cease to be an important factor in open-ended evolution beyond a certain point. Of course, it is far from proven that the number of possible forms increases as rapidly with complexity as that, so it may still be that complexity growth is essential for continual novelty.
PaV
March 25, 2011 at 9:40 AM (PDT)
markf (#186) Thank you for your post. To take your second point first, I believe you have misinterpreted Professor Dembski's definition of specified complexity. You write:
[T]here is a problem in Dembski’s logic. He defines the specified complexity of an outcome as the probability of that outcome fitting a pattern or any other pattern that is as simple or simpler. That is why P(T|H) is multiplied by Phi_s(T) in the formula.
This is a misunderstanding. First, the specified complexity Chi of an outcome is not a probability. A probability, by definition, has to be in the range 0 to 1, whereas Dembski's specified complexity can exceed 1. If it does, it is referred to as a specification and it is then attributed to an intelligent agent. Chi is actually minus 1 times the log to base 2 of a probability, namely [(10^120).Phi_s(T).P(T|H)]. If this probability is less than 0.5, then Chi will be greater than 1. Second, the reason why P(T|H) is multiplied by Phi_s(T) in the formula is not just in order to compute "the probability of that outcome fitting a pattern or any other pattern that is as simple or simpler." Phi_s(T).P(T|H) is indeed a probability, but it is not the one you describe. Let's go back to page 17, where Dembski defines the specificity sigma as:
sigma = -log2[Phi_s(T).P(T|H)]
Dembski continues:
What is the meaning of this number, the specificity sigma? To unpack sigma, consider first that the product Phi_s(T).P(T|H) provides an upper bound on the probability (with respect to the chance hypothesis H) for the chance occurrence of an event that matches any pattern whose descriptive complexity is no more than T and whose probability is no more than P(T|H). The intuition here is this: think of S as trying to determine whether an archer, who has just shot an arrow at a large wall, happened to hit a tiny target on that wall by chance. The arrow, let us say, is indeed sticking squarely in this tiny target. The problem, however, is that there are lots of other tiny targets on the wall. Once all those other targets are factored in, is it still unlikely that the archer could have hit any of them by chance? That's what Phi_s(T).P(T|H) computes... (Italics and emphases mine - VJT.)
Thus the reason why P(T|H) is multiplied by Phi_s(T) in the formula is in order to compute the probability of that outcome fitting a pattern or any other pattern that is as simple or simpler, and whose probability is no more than P(T|H). There is no suggestion here that the other patterns have the same probability as pattern T; rather the reverse. Subsequently, on page 24, Dembski introduces the 10^120 multiplier, to apply the probability Phi_s(T).P(T|H) over the entire history of the observable universe (where the maximum number of events = 10^120). The specified complexity Chi is minus the log to base 2 of this probability. Later on, you wrote:
But this hides the enormous assumption that the probability of matching each of the other patterns is similar to or lower than the probability of matching the observed pattern.
I do not believe that Professor Dembski is making this assumption. There may well be other patterns with an equally simple description, but which are far more probable than the bacterial flagellum described by Dembski. That in no way weakens the point that if the probability of a bacterial flagellum arising by a stochastic process were shown to be astronomically low (e.g. 10^(-1170), as Dembski calculates) then it would be rational to infer that it was designed by an intelligent agent, if, after multiplying this astronomically low probability by the number of other patterns with an equally simple verbal description, and then multiplying that number by the number of events in the history of the observable universe, we still obtained a figure of less than 0.5 (in other words, -log2[(10^120).Phi_s(T).P(T|H)] > 1). I think this answers your second point. I'll address your first and more substantive point in my next post.
vjtorley
March 25, 2011 at 9:38 AM (PDT)
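Restating, in one place, the two formulas vjtorley walks through above (nothing new is added; 398.6 is simply the log to base 2 of the 10^120 multiplier):

```latex
\sigma = -\log_2\!\bigl[\Phi_S(T)\cdot P(T\mid H)\bigr], \qquad
\chi = -\log_2\!\bigl[10^{120}\cdot\Phi_S(T)\cdot P(T\mid H)\bigr]
     = \sigma - \log_2\!\bigl(10^{120}\bigr) \approx \sigma - 398.6
```

On Dembski's criterion, as described in this thread, a design inference is drawn when Chi exceeds 1.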
Everyone, We're getting near the 200 comment mark, which is great, and I appreciate all the participation so far. I've noticed that threads here tend to load more slowly at around 300 comments and become difficult to use at 400, so this is close to the halfway point. I'll continue to respond directly to as many posts as I can, but I'd like to take a moment to summarize what I've learned up to this point and, possibly, refocus the discussion.
CSI is unique among the arguments of ID proponents in that it leads to positive, potentially testable claims. Every other ID argument I've seen is an attack on modern evolutionary theory, not explicit support for ID. Further, the claim that CSI is a reliable indicator of intelligent agency, if it could be demonstrated, would be world changing. Based on this, I would expect CSI to be the most active area of research for ID proponents, with new calculations being published frequently. Indeed, this is what I was hoping to find when I first became interested in the topic from lurking here and on other blogs.
My preliminary conclusions from this discussion differ greatly from my initial expectations. It appears to me that there are at least four major problems with CSI as used by ID proponents here:
1) There is no agreed definition of CSI. I have asked from the original post onward for a rigorous mathematical definition of CSI and have yet to see one. Worse, the comments here show that a number of ID proponents have definitions that are not consistent with each other or with Dembski's published work.
2) There is no agreement on the usefulness of CSI. This may be related to the lack of an agreed definition, but several variants that are incompatible with Dembski's description, and alternative metrics, have been proposed in this thread alone.
3) There are no calculations of CSI that provide enough detail to allow it to be objectively calculated for other systems. The only example of a calculation for a biological system is Dembski's estimate for a bacterial flagellum, but no one has managed to apply the same technique to other systems.
4) There is no proof that CSI is a reliable indicator of intelligent agency. This is not surprising, given the lack of a rigorous mathematical definition and examples of how to calculate it, but it does mean that the claims of many ID proponents are unfounded.
When I took advantage of Denyse O'Leary's kind offer to make a guest post, I fully expected a lot of tangential conversation in the comments. What I did not expect was for us to be nearly 200 comments in without anyone directly addressing the five straightforward questions I asked, without anyone providing a rigorous mathematical definition of CSI, and without anyone demonstrating how to calculate CSI for the scenarios I described. I would very much like to understand CSI well enough to test the assertion that it cannot be generated by evolutionary mechanisms. If there is any ID proponent that can provide me with the definition and examples I've requested, please do so before this thread reaches the limit of the blog software.
MathGrrl
March 25, 2011 at 9:26 AM (PDT)
