John Derbyshire: “I will not do my homework!”

[[Derbyshire continues to embarrass himself regarding ID -- see his most recent remarks in The Spectator -- so I thought I would remind readers of UD of a past post regarding his criticisms of ID. --WmAD]]

John Derbyshire has written some respectable books on the history of mathematics (e.g., his biography of Riemann). He has also been a snooty critic of ID. Given his snootiness, one might think that he could identify and speak intelligently on substantive problems with ID. But in fact, his knowledge of ID is shallow, as is his knowledge of the history of science and Darwin’s writings. This was brought home to me at a recent American Enterprise Institute symposium. On May 2, 2007, Derbyshire and Larry Arnhart faced off with ID proponents John West and George Gilder. The symposium was titled “Darwinism and Conservatism: Friends or Foes.” The audio and video of the conference can be found here: www.aei.org/…/event.

Early in Derbyshire’s presentation he made sure to identify ID with creationism (that’s par for the course). But I was taken aback that he would justify this identification not with an argument but simply by citing Judge Jones’s decision in Dover, saying “That’s good enough for me.” To appreciate the fatuity of this remark, imagine standing before feminists who regard abortion for any reason as a fundamental right of women and arguing against partial-birth abortion merely by citing some court decision that ruled against it, saying “That’s good enough for me.” Perhaps it is good enough for YOU, but it certainly won’t be good enough for your interlocutors. In particular, the issue remains what it is about the decision, whether regarding abortion or ID, that makes it worthy of acceptance. Derbyshire had no insight to offer here.

What really drove home for me what an intellectual lightweight he is in matters of ID — even though he’s written on the topic a fair amount in the press — is his refutation specifically of my work. He dismissed it as committing the fallacy of an unspecified denominator. The example he gave to illustrate this fallacy was of a golfer hitting a hole in one. Yes, it seems highly unlikely, but only because one hasn’t specified the denominator of the relevant probability. When one factors in all the other golfers playing golf, a hole in one becomes quite probable. So likewise, when one considers all the time and opportunities for life to evolve, a materialistic form of evolution is quite likely to have brought about all the complexity and diversity of life that we see (I’m not making this up — watch the video).

But a major emphasis of my work right from the start has been that to draw a design inference one must factor in all those opportunities that might render probable what would otherwise seem highly improbable. I specifically define these opportunities as probabilistic resources – indeed, I develop a whole formalism for probabilistic resources. Here is a passage from the preface of my book THE DESIGN INFERENCE (even the most casual reader of a book usually peruses the preface — apparently Derbyshire hasn’t even done this):

Although improbability is not a sufficient condition for eliminating chance, it is a necessary condition. Four heads in a row with a fair coin is sufficiently probable as not to raise an eyebrow; four hundred heads in a row is a different story. But where is the cutoff? How small a probability is small enough to eliminate chance? The answer depends on the relevant number of opportunities for patterns and events to coincide—or what I call the relevant probabilistic resources. A toy universe with only 10 elementary particles has far fewer probabilistic resources than our own universe with 10^80. What is highly improbable and not properly attributed to chance within the toy universe may be quite probable and reasonably attributed to chance within our own universe.

Here is how I put the matter in my 2004 book THE DESIGN REVOLUTION (pp. 82-83; substitute Derbyshire’s golf example for my poker example, and this passage precisely meets his objection):

Probabilistic resources refer to the number of opportunities for an event to occur or be specified. A seemingly improbable event can become quite probable once enough probabilistic resources are factored in. On the other hand, such an event may remain improbable even after all the available probabilistic resources have been factored in. Think of trying to deal yourself a royal flush. Depending on how many hands you can deal, that outcome, which by itself is quite improbable, may remain improbable or become quite probable. If you can only deal yourself a few dozen hands, then in all likelihood you won’t see a royal flush. But if you can deal yourself millions of hands, then you’ll be quite likely to see it.

Thus, whether one is entitled to eliminate or embrace chance depends on how many opportunities chance has to succeed. It’s a point I’ve made repeatedly. Yet Derbyshire not only ignores this fact, attributing to me his fallacy of the unspecified denominator, but also unthinkingly assumes that the probabilistic resources must, of course, be there for evolution to succeed. But that needs to be established as the conclusion of a scientific argument. It is not something one may simply presuppose.
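The royal-flush arithmetic in the passage quoted above can be checked in a few lines. This is a hypothetical sketch, not taken from the book; the 4-in-2,598,960 figure is standard poker combinatorics:

```python
# Probability of at least one royal flush across many deals, illustrating
# how "probabilistic resources" turn an improbable event into a likely one.
from math import comb

# 4 royal flushes among C(52, 5) = 2,598,960 possible 5-card hands
p = 4 / comb(52, 5)

def chance_of_at_least_one(num_hands):
    """Probability of seeing at least one royal flush in num_hands deals."""
    return 1 - (1 - p) ** num_hands

print(chance_of_at_least_one(50))         # a few dozen hands: well under 0.1%
print(chance_of_at_least_one(5_000_000))  # millions of hands: over 99.9%
```

With a few dozen deals the event remains improbable; with millions of deals it becomes nearly certain. That is precisely what factoring in probabilistic resources means.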

There’s a larger issue at stake here. I’ve now seen, on several occasions, critics of design give no evidence of having read anything on the topic — and they’re proud of it! I recall Everett Mendelson from Harvard speaking at a Baylor conference I organized in 2000, decrying intelligent design but spending the whole talk going after William Paley. I recall Lee Silver so embarrassing himself for lack of knowing anything about ID in a debate with me at Princeton that Wesley Elsberry chided him to “please leave debating ID advocates to the professionals” (go here for the Silver-Elsberry exchange; for the actual debate, go here). More recently, Randy Olson, of FLOCK OF DODOS fame, claimed that in making this documentary on ID he had read nothing on the topic (as a colleague at Notre Dame recently reported, privately, on a talk Randy gave there: “He then explained how he deliberately didn’t do research for his documentary, and showed some movie clips on the value of spontaneity in film making”). And then there’s Derbyshire.

These critics of ID have become so shameless that they think they can simply intuit the wrongness of ID and then criticize it based simply on those intuitions. The history of science, however, reveals that intuitions can be wrong and must themselves be held up to scrutiny. In any case, the ignorance of many of our critics is a phenomenon to be recognized and exploited. Derbyshire certainly didn’t help himself at the American Enterprise Institute by his lack of homework.


84 Responses to John Derbyshire: “I will not do my homework!”

  1. I’ve run into the same problem repeatedly. People want to argue against “ID”, then admit they’ve never read anything first-hand on it. Or they do something like call you “Dembsky,” unwittingly showing the same.

    To me, it is like a Shakespeare critic never having read any Shakespeare, only reading Cliffs Notes.

    If you’re gonna critique ID, then at the very least learn the arguments first-hand. It isn’t that hard; there are lots of ID resources online and in print.

  2. I’ve noticed that the lottery ticket rebuttal (someone has to win, right? RIGHT??!!!) is often used by credentialed persons — who one would think would know better — to rebut I.D.

    I picked up pretty quickly long ago that what you were getting at was an attempt to specifically address, and go beyond, discussions of simple probability.

    I don’t consider myself a genius. I just consider those who can’t see this to be as dumb as bricks.

    NOTE: for those of you who can’t see what Dembski is getting at and might be reading this: “dumb as bricks” is meant figuratively, not literally.

  3. I find it interesting that most materialists (evolutionists) will hardly ever address any of the conclusive evidence for design. Yet, when they bring up some suggestive similarities as evidence, ID proponents investigate the matter with rigor. It should be noted the evidence for design keeps growing as each piece of suggestive evidence is dealt with and put in the proper ID perspective. Evolutionists don’t seem to grasp the necessity for empirical validation in science.

  4. A person who accepts a court ruling as definitive in a matter of this type apparently believes in the social construction of reality.

    If Derbyshire approaches reality in that way, he would not want to actually read books by ID proponents.

    A socially constructed reality is like a theatre set. Once the stage carpenters are finished, you don’t want anyone changing it – even if some think it a poor or inadequate venue for the drama. The show, after all, must go on!

    I run into people like that frequently. One characteristic is that they start telling me about the ID controversy, get just about everything wrong, and then – when I hint that I have followed it as a major beat for about five years now – abruptly change the subject.

  5. I’m tempted to praise Dr. Dembski for such a great analysis. However, it seems awfully silly coming from me, someone with hardly any education at all. It’s not worth much. So, for whatever it’s worth, thank you Dr. Dembski.

    I listened to the AEI debate and among the silly objections offered by Derbyshire, he made what I think must be an extraordinarily embarrassing flip-flop. And actually, I was a little disappointed that neither West nor Gilder called him on it…

    During his presentation, he responded to the typical Christian testimony of how Christianity gave purpose to life, etc. by proudly challenging with the question “Yes, but is it REALLY TRUE?” And as a Christian myself, I do agree with him on the importance of that question, and yes, I do think it’s “really true.”

    Derbyshire compared Christianity with Plato’s “noble lie” and really made a big, hairy deal out of the importance of the truth of a given proposition as opposed to how it made people feel or how comfortable people were with it.

    But then he turned right around and essentially defended Darwinism on the basis that “scientists are happy with it.” Suddenly the TRUTH of the proposition didn’t seem all that important to Derbyshire. Suddenly he was more concerned with how it made scientists feel, or with how comfortable scientists were with it.

    This was a truly amazing about-face. It revealed to me that Darwinism itself can be compared to the “noble lie”… at least from a naturalist’s perspective. Obviously, as lies go, I don’t think that one’s so “noble” myself.

  6. “To me, it is like a Shakespeare critic never having read any Shakespeare, only reading Cliffs Notes.”

    How about the ‘reams of evidence’ canard? Search on “evolution” and “reams of evidence” and you get 3,940 hits. Read a few of them and laugh (or cry). Reminds one of pronouncements made at the Dover trial. Sadly, Judge Jones bought it.

  7. bornagain77: “Evolutionists don’t seem to grasp the necessity for empirical validation in science.”

    Of course not. They demand it everywhere from IDists, but are incapable of providing any themselves. Yet they all clamor constantly that “there are mountains of evidence” for NDE!! I’ve yet to see any after 30 years of debating them.

    “scientists [they always imply by this that ID scientists are not real scientists] are happy with it” !!!

    Interestingly, Michael Crichton comments: “Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had.”

  8. One thing that I think is confusing, and makes probabilities difficult to understand, (for me at least) is that saying that an event has a “one in one million” chance of happening is NOT the same (correct me if I’m wrong) as saying that it WILL happen once in one million times. Isn’t that right?

    Seems to me that this expression of chance is frequently used as some sort of “proof” that the result they want would happen eventually. While I’m the furthest thing from a mathematician, this strikes me as being quite incorrect.

    If anyone would care to elaborate, I’d be very interested.

  9. Note the similarity between this situation and the situation in Galileo’s time. The folks opposing Galileo insisted that he was wrong and yet flatly refused to look through his telescope and consider the evidence for themselves. Their worldview precluded Galileo’s argument, so there was nothing more to discuss. Today, materialism and naturalism preclude design, so there is no need to bother with the specifics of the case. It is ruled out of bounds by definition.

    The Scubaredneck

  10. TRoutMac -

    When a probability is given, it is only meaningful over a long run of trials. For example, even a 1 in 10 chance may not happen for 15 or 20 trials without arousing too much suspicion, but the longer you go, the more in doubt you are about the original estimate of probability. As you do more and more trials, the total number of successes divided by the total number of attempts is your experimental probability. Some people argue that the theoretical probability of one-time events is essentially meaningless (from a mathematical standpoint).

    Still, given enough time, all sorts of weird things will happen. If you have a goal of flipping 10 heads in a row, it will happen eventually. An estimate can be given for the chances (1 in 1024), but you can’t gather up 1023 of your closest friends and have them all flip 10 coins and be certain it will happen.

    The reverse is true, of course, as well. Even if something is a 1 out of a million chance, it might happen to one person twice in a row the first two times they try it. It is unexpected, but not impossible.
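Eric’s point can be made exact. A quick sketch (hypothetical numbers): the chance that a “1 in a million” event never occurs in a million independent trials is (1 - p)^n, which is not small at all:

```python
# A "1 in a million" event is not guaranteed to occur in a million tries:
# the chance it never happens in n independent trials is (1 - p) ** n.
p = 1e-6
n = 1_000_000

prob_never = (1 - p) ** n          # ~0.368, i.e. about 1/e
prob_at_least_once = 1 - prob_never

print(prob_never)          # ~0.37: a 37% chance it never happens at all
print(prob_at_least_once)  # ~0.63: only a 63% chance it happens at least once
```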

    “I’ve noticed that the lottery ticket rebuttal (someone has to win, right? RIGHT??!!!) is often used by credentialed persons — who one would think would know better — to rebut I.D.”

    Yet even a lottery itself has to be designed well to work (i.e., to draw people in). If you set the odds too high (for example, one winner in 1,000 years) then there will not be any winners in a reasonable time; with time, people come to see it as nothing but a scam and stop buying tickets. If the odds are too low, you don’t have any big-time winners to advertise and draw more people into the lottery.

  12. If you have a goal of flipping 10 heads in a row, it will happen eventually.

    That depends on the probabilistic resources.

    An estimate can be given for the chances (1 in 1024), but you can’t gather up 1023 of your closest friends and have them all flip 10 coins and be certain it will happen.

    No, but you can calculate the probability that it will happen ;).

    1024 people are equipped with 10 coins, with each coin having a “heads” side and a “tails” side.

    All 1024 people toss their ten coins and record how many heads and how many tails appeared among their ten coins.

    What is the probability that at least one of the 1024 people would record ten heads as the result of their tosses, assuming all are honest in recording the actual tosses and that the chance of seeing heads on each toss is equal to the chance of seeing tails?
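The question has a closed-form answer. A minimal sketch, assuming fair coins and independent tosses as stated:

```python
# Chance that at least one of 1024 people throws ten heads with fair coins.
p_ten_heads = 0.5 ** 10                    # 1/1024 for any one person
n_people = 1024

p_at_least_one = 1 - (1 - p_ten_heads) ** n_people
print(p_at_least_one)  # ~0.632: likely, but nowhere near certain
```

So even with exactly as many people as the odds against any one of them, the event is only about 63% likely, not guaranteed.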

  13. Maybe someone should point out a good refutation of the lottery example since it is a common argument used by Darwinists with the dare that “you cannot refute it.”

    Have any of the ID proponents provided a good answer to it?

  14. Eric wrote “For example, even a 1 in 10 chance may not happen for 15 or 20 trials without arousing too much suspicion”

    Well, I think we’re on the same page then. ‘Cuz I was thinking that an event that has a probability of 1 in one million doesn’t actually NEED to happen in the first million tries. I mean, even if you try 10 million times, it’s still conceivable that this event didn’t happen in any of those. And yet, it MIGHT have happened on the first try, or it might happen on the 10,000,001st try.

    But it seems like when the Darwinists talk about probabilities, they seem to be looking at that one in a million as some sort of “guarantee” that it’ll happen AT LEAST ONCE in that million tries. Now, I may be overstating this somewhat just to illustrate my point, but that’s the way it looks to me sometimes.

    Think of it this way… If you were able to make your one million tries ONE MILLION times (1 mil x 1 mil) what would be the odds that ONE of those sets of one million tries would actually match up with your probability estimate of one in a million? I’m thinking not very good… because now you’ve specified a TARGET and that changes everything. Weird.

    I know, I’m WAY outta my league here, and I’m sure it shows. I gave up on math a LONG time ago.

  15. Has anyone read “Why Intelligent Design Fails”?

    I read the chapters critiquing Dembski, but did not spend much time thinking them through.

    It seems that the arguments presented in the book are Derbyshire’s kind.

    I also have not read Dembski myself. Now I know he has factored in “probabilistic resources”.

    I am a bit scared of Probability mathematics now. The last time I touched it was 1982.

  16. Maybe someone should point out a good refutation of the lottery example since it is a common argument used by Darwinists with the dare that “you cannot refute it.”

    Try looking at it this way:

    That someone will win a lottery is a near certainty.

    Life forming from dust stirred up on your walk to the store is, well, not.

    Why is someone winning a lottery a near certainty? Because a lottery is designed that way. In fact, if a particular lottery never has a winner you will be wise to assume — heh heh — it is designed that way.

    So to correlate the winning lottery ticket to the inevitability of the existence of life one would have to presume life to be . . . . ?

    Now, here’s something else to ponder based on the hole in one probability.

    With thousands of golfers playing thousands of holes you can count on an occasional hole in one.

    BUT suppose every golfer had some muscle condition that prevented him from hitting the ball more than 10 yards? And suppose all were blind? And suppose all were facing the wrong way? Even with a trillion billion trillion golfers playing on a trillion billion trillion Sundays, could you ever expect a hole in one under those conditions?

    The existence of physical impossibility is reality.

    It is impossible for life to exist by chance. It is inevitable for life to exist with the right Designer.

  17. eric

    The lottery example is a good one.

    Take any ten lottery winners. Without specification – connection to an independently given pattern – there’s no reason to presume they aren’t just 10 lucky people out of many millions who bought tickets.

    Now let’s say we find out the ten winners are all related to each other. Now we have reason to believe it might not be just the luck of the draw at work. Say we further learn that the lottery commissioner is related to all 10 of them as well. Now we really have cause to suspect design instead of luck in the result.

    For a more stark example say you pick up a deck of playing cards and find they’re all perfectly ordered by suit and rank. Statistically that order is just as likely as any other but anyone who thinks they just happened to become ordered that way in a fair shuffle is seriously stupid. The ordering is an independently specified pattern.

    This is the difference between complexity and specified complexity. Design inferences are warranted when there is specification.

  18. Vladimir Krondan

    TRoutMac asks,

    an event has a “one in one million” chance of happening is NOT the same (correct me if I’m wrong) as saying that it WILL happen once in one million times. Isn’t that right?

    Yes, that is correct. Be aware of the following fallacy of probabilistic reasoning. What is the probability of throwing a 6 on an unloaded die? It is 1/6. If you throw the die and a 6 turns up, what is the probability of that 6? If you say 1, you have fallen for the fallacy. The probability is 1/6 no matter what the outcome. This fallacy lies at the core of many Darwinian assertions about probability and chance.

  19. “Maybe someone should point out a good refutation of the lottery example since it is a common argument used by Darwinists with the dare that “you cannot refute it.”

    I think that the fallacy of the argument can be explained by this comparison:

    1- Real lottery: an n-digit number (let’s say n=7) and 10^n = ten million tickets sold (at best). There is Pr=1 that one person will win the lottery.

    2- Difficult lottery: an N-digit number (let’s say N=16) and ten million tickets sold.
    What’s the probability that one person will win? It’s a depressing:

    Pr=10^n/10^N=10^-9= 1/1,000,000,000

    And if N would be 100?
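The comparison can be put in code. A minimal sketch using the comment’s hypothetical figures, assuming all tickets sold are distinct:

```python
# Probability that some ticket wins when tickets_sold distinct tickets are
# drawn from a space of 10**digits equally likely numbers.
def p_some_winner(digits, tickets_sold):
    return min(1.0, tickets_sold / 10 ** digits)

print(p_some_winner(7, 10 ** 7))    # real lottery: 1.0, a winner is certain
print(p_some_winner(16, 10 ** 7))   # difficult lottery: 1e-09
print(p_some_winner(100, 10 ** 7))  # N = 100: about 1e-93, effectively never
```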

    Very interesting:

    On the Lottery Argument:

    1] Context:

    Probability thinking and linked statistics are loaded with all sorts of subtleties and technicalities. They are also, very, very important — and are often of great consequence on vital matters.

    A set-up for tricksters, who commonly reap a rich harvest; thus Darrell Huff’s classic exposé, “How to Lie with Statistics.” It is also full of traps for the unwary or those who let their attention falter for a moment.

    2] Condom roulette and winnable lotteries:

    Consider a certain condom that fails in use 1 of 10 times on average. If someone uses such a condom 10 times in a high risk context, what are the odds he has been “protected” all 10 times?

    ANS: First time, 1 – .1 = 0.9. Assuming the tries are independent [and they may not be - bad technique . . .], odds of protection all ten times are therefore [0.9]^10 ~ 35%. [That's like playing so-called "Russian Roulette" with four loaded chambers out of six!]
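The arithmetic above can be verified in a couple of lines, keeping the stated independence assumption:

```python
# Ten independent uses of a device that fails 1 time in 10 on average:
p_safe_each = 1 - 0.1          # 0.9 chance of no failure on one use
p_safe_all_ten = p_safe_each ** 10
print(p_safe_all_ten)          # ~0.349, i.e. roughly a 35% chance overall
```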

    So increasing exposure does shift overall odds. The idea that if we have a million tries at a 1 in a million chance, we are “bound” to come up a winner, is strictly wrong, but is fairly close to an important point — multiplying opportunities shortens overall odds. Thus, too, we see that a lottery with a well designed number of tickets and size of winning number — number of digits — is reasonably winnable.

    But, are all “lotteries” winnable?

    No, and here is why:

    3] Dembski’s upper probability bound:

    Say a certain lottery has tickets with 150 decimal digits, and that there is just one winning ticket. Then, other things being equal, the odds of winning are 1 in 10 for the first digit, 1 in 10 for the second, etc., so overall they are 1 in 10^150. A lottery of this type, of this size or bigger, is arguably unwinnable within the scope of the observable universe. (BTW, in binary digits, this is about 500 bits.) Here’s why, as Dan Peterson summarises:

    Dembski has formulated what he calls the “universal probability bound.” This is a number beyond which, under any circumstances, the probability of an event occurring is so small that we can say it was not the result of chance, but of design. He calculates this number by multiplying the number of elementary particles in the known universe (10^80) by the maximum number of alterations in the quantum states of matter per second (10^45) by the number of seconds between creation and when the universe undergoes heat death or collapses back on itself (10^25). The universal probability bound thus equals 10^150, and represents all of the possible events that can ever occur in the history of the universe. If an event is less likely than 1 in 10^150, therefore, we are quite justified in saying it did not result from chance but from design. Invoking billions of years of evolution to explain improbable occurrences does not help Darwinism if the odds exceed the universal probability bound.
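The multiplication in the quoted summary can be checked directly. A sketch; the three inputs are the figures quoted above:

```python
import math

particles = 10 ** 80            # elementary particles in the known universe
transitions_per_sec = 10 ** 45  # max quantum-state changes per second
seconds = 10 ** 25              # upper bound on the universe's duration, seconds

total_events = particles * transitions_per_sec * seconds
print(total_events == 10 ** 150)  # True: the bound is 1 in 10^150
print(150 * math.log2(10))        # ~498.3, the "about 500 bits" figure
```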

    Of course, key informational macromolecules such as DNA, proteins, etc., far exceed this capacity in the information they store, and we need dozens to hundreds of them in the compass of a cell for viable life to exist.

    That is why it is a very good empirical observation to note that there is a functionally specified complex information content in life that so far exceeds the reasonable probability resources of the known universe, that it is well-warranted to infer that life as we see it is designed to meet a specification. For agents routinely produce systems using intelligence, which are well beyond the Dembski bound. [This post is an example.]

    4] Thermodynamics-scale odds . . .

    Can we beat odds like that or worse?

    Logically and physically, yes, but there comes a point where the odds of winning are so low that we simply do not reasonably expect a win. (Cf. my case-study thought experiment from a discussion with Pixie on that.)

    Thermodynamics provides many a case in point, as Dr Granville Sewell often points out. For instance, there is a possibility that every oxygen molecule in the room where you are will, by random chance, rush to one end, so that you could die of lack of oxygen. But the odds of that are so low relative to the chances of a more or less even distribution that we will never observe it in the gamut of the observed universe. [And to smuggle in the assumption that there is a vastly wider universe as a whole that reduces the overall odds is, of course, speculation, not science . . .]

    Okay, trust that helps

    GEM of TKI

  21. I posed the question about the refutation of the lottery because I was interested if anyone who has written about ID has a good response to it. I had not seen one so was actually looking for a cite or a series of discussions about it.

    The typical materialist response is that what you see is not the only thing you could possibly get but what just happened to happen. So while any specific rock formation is of incredibly low probability, we witness zillions of incredibly low-probability formations all over the universe. Similarly, while the actual molecules that eventually led to life are also of incredibly low probability, what we witness is just the actual formation that emerged; zillions of others were possible, but the one that we witnessed is the one that happened.

    This is supposedly what CSI is supposed to eliminate but we had a very long thread a few months ago without anyone providing an easily understandable discussion of what CSI is. I don’t want to start this again but it ran for over 200 comments and I do not believe anything was resolved.

    I doubt there is anything written that would convince John Derbyshire that the lottery example is not valid since we here were unable to find a clear understandable explanation.

    Personally, I intuitively understand it and believe the lottery example is nonsense, but what I was looking for was well-thought-out arguments against it that the typical person who reads about this topic could understand.

    You can show that sampling all the proteins of length 40 or 60 (depending upon how you do the calculations) would exhaust all the matter in the universe, so the probability of assembling any particular one by chance is vanishingly small. The answer to this is that there is an incredibly large number of functional possibilities for life, and the combinations that emerged are just one of this very, very large set. Just like the specific rock formation is of incredibly low probability, but some formation emerged.

    That is why the argument of shooting the arrow at the wall fails, because there is not just one target but zillions of targets on the wall, and it would be difficult for the archer to miss one of them.

    That is why the argument of shooting the arrow at the wall fails, because there is not just one target but zillions of targets on the wall, and it would be difficult for the archer to miss one of them.

    I think I see the point you are missing. There are never zillions of targets. There is only just one. There may be zillions of potential targets, however. How do you differentiate between hitting a target and random shooting?

    You say you have an intuitive understanding of CSI.

    Here is something to consider:

    Is it possible to function in society without making assumptions in which you discriminate between design and chance?

    What is the process by which you make these assumptions?

    Now try to articulate and quantify that process and apply it to this debate.

  23. jerry wrote: “I doubt there is anything written that would convince John Derbyshire that the lottery example is not valid since we here were unable to find a clear understandable explanation”

    Well, I think it all boils down to this, with either the lottery example OR the golfing example:

    Someone wins the lottery (or gets a hole in one) because people are TRYING to win the lottery. I would say that the Darwinist who uses the lottery example to defend chance has just defended design… in particular, the design of the participants. People buy lottery tickets because they CHOOSE to… they INTEND to. And they do so because they want to be filthy rich. If no one played the lottery, there’d be no intent, no “design” towards winning it, and your probability would fall off dramatically (as if it’s not already very low).

    Likewise, A hole-in-one is something that every golfer strives for… it’s the whole object of the game; to get the ball into the cup with the fewest strokes. That someone eventually gets a hole-in-one reflects their “design” (read: intent)

    Right now I’m not playing golf. In fact, I DON’T play golf. So what are the odds that in the course of NOT playing golf, I’ll hit a hole-in-one? He, he.

    I guess if Derbyshire wants to impress me with examples of people winning the lottery or getting a hole-in-one, he’ll have to find examples of that where the person that won the lottery hadn’t bought a ticket. Or the person that got the hole-in-one wasn’t playing golf. Otherwise, it’s just another example of design.

    Am I right?

  24. Hey jerry, others,

    Allow me to throw my thoughts into the mix.

    I have just had a week long email exchange with a Darwinist friend on this very topic.

    I posed him a question, to see if he could help me resolve it. I said, can low probabilities ever be used to rule out the “chance” occurrence of an event? We said yes, we do it in stats. We prespecify our rejection region, then reject the chance hypothesis.

    But then I asked how we could do that consistently, when unlikely events of arbitrarily low probability happen all the time?

    Take for example the quote mentioned earlier:

    If an event is less likely than 1 in 10^150, therefore, we are quite justified in saying it did not result from chance but from design.

    Now, let’s say I flip a fair coin 501 times and write down the resulting binary string. Was that specific string the result of chance? If so, we have an example of > 500 bits which is the result of chance.

    So we kept talking, and eventually came to the conclusion that we need both specification and low probability (viz. classic ID), but I couldn’t come up with a good reason why the specification should “prevent” certain outcomes. (I couldn’t say it was the low probability that prevented them, since lower probability events could happen at any time.)

    But somehow, macroscopically describable, low description length events of low probability do not happen by chance. I can’t say why not, I just see that they don’t.
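The 501-flip example from earlier in the comment can be made concrete (a hypothetical sketch):

```python
# Any specific sequence of 501 fair-coin flips has probability 2**-501,
# which is below the 10**-150 universal bound, yet some such sequence
# occurs on every run. The debated question is why specification, and not
# rarity alone, changes the verdict.
import random

flips = ''.join(random.choice('HT') for _ in range(501))
p_exact_sequence = 2.0 ** -501

print(len(flips))                 # 501
print(p_exact_sequence < 1e-150)  # True: below the universal bound
```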

  25. Atom,

    We had a very similar discussion before on a long thread in February, though the concept of a lottery was not really mentioned.

    http://www.uncommondescent.com.....me-online/

    At the end great_ape, kairosfocus, and gpuccio were trying to illuminate the problem but it stopped when the thread essentially ran off the list of threads and got too long. No one after 200 comments had defined CSI to everyone’s satisfaction.

    As far as I could see no one could answer the lottery example which is not just that someone has to win the lottery. That is really a bad choice of words and the real issue is that there could be an enormous number of possible starting points for life all of incredibly low probability and the one that happened or “won the lottery” was just one of these starting points. So this lottery was really a lottery of starting points and one of the starting points is the one that we observe. If time had marched on then some other starting point may have arisen.

    In fact several researchers propose that life had more than one origin and not all may have been DNA based but the DNA based one is the one that won out. There is no evidence for this but because life exists, they say there must be an explanation. The use of low probabilities does not eliminate all potential starting points and the one that appeared was the one that won the this “lottery.”

  26. I just placed a comment up and it did not appear. I assume it is caught in the filter, probably because of the link in it. Could a moderator check it out when they have time to see there is nothing wrong with it. And then remove this comment.

    Thank you.

  27. Since my comment is stuck in the filter, I recomposed it without the link.

    Atom,

    We had a very similar discussion before on a long thread in February though the concept of lottery was not really mentioned

    Search for Michael Egnor Responds. There was a long thread in February. The actual link seems to put a comment into the spam filter.

    At the end great_ape, kairosfocus, and gpuccio were trying to illuminate the problem but it stopped when the thread essentially ran off the list of threads and got too long. No one after almost 200 comments had defined CSI to everyone’s satisfaction.

    As far as I could see no one has ever answered the lottery example, which is not just that someone has to win the lottery. That is really a bad choice of words because it doesn’t define what the lottery is. The real issue, or lottery, is that there could be an enormous number of possible starting points for life, all of them of incredibly low probability, and the one that happened or “won the lottery” was just one of these starting points. So this lottery was really a lottery of starting points, and one of the starting points is the one that we observed 3.5 billion years ago as cells fossilized in ancient rocks. If time had marched on differently then some other starting point might have arisen. There are of course other lotteries along the way after the first lottery: the nature of multi-celled organisms, the various phyla of the Cambrian Explosion, flight, legs, 4-chambered hearts, etc.

    For the origin of life, several researchers propose that life had more than one origin and that not all may have been DNA based, but the DNA-based one is the one that won out. There is no evidence for this, but because life exists, they say there must be an explanation, or a lottery winner. The use of low probabilities does not eliminate the possibility that one of all the potential starting points happened; it just means that the one that appeared was the one that won this “lottery” of low-probability events.

  28. jerry wrote: “For the origin of life, several researchers propose that life had more than one origin and not all may have been DNA based but the DNA based one is the one that won out. There is no evidence for this but because life exists, they say there must be an explanation or a lottery winner.”

    Yes, some do take this tack. However, they’ve just fallen into a trap because, just like the multiverse theory (which holds that there really is an infinite number of universes but ours just happened to land on the magic combination), that theory is untestable.

    They’re free to assert this, of course, but they shouldn’t expect to retain any credibility since they badmouth ID as being “untestable.”

  29. TRoutMac says:

    They’re free to assert this, of course, but they shouldn’t expect to retain any credibility since they badmouth ID as being “untestable.”

    It is testable, to some degree. (No, we can’t tell what actually happened, I realize that, but hear me out here…)

    So, if the hypothesis is that there are multiple ways for life to start, we can test this by reproducing what we believe to be the initial conditions and seeing what arises. Possibly, we can do thought experiments regarding what combinations of proteins, etc., are self-replicating – but I know that many don’t consider those very good evidence.

    The point is that, if we can reproduce initial conditions and we find that self-replicating substances can form from there, we’ve shown that it is possible. Here we are not predicting / testing that it did happen, only that it could.

    Note: these experiments may be extremely difficult to do and take a long time to get results. I know that frustrates a lot of people when scientists don’t give up on a theory just because it hasn’t borne fruit yet, but there are many areas of science (particle physics, for example) that are playing the same game of experimentation lagging theory by a wide margin.

    Oh, and for what it’s worth, there are a couple of proposed tests for a multiverse concept as well – although they would be very difficult to do.

  30. Atom: So we kept talking, and eventually came to the conclusion that we need both specification and low probability (viz. classic ID), but I couldn’t come up with a good reason why the specification should “prevent” certain outcomes. (I couldn’t say it was the low probability that prevented them, since lower probability events could happen at any time.)

    It might help to think of it this way.

    Consider writing down any sequence of 100 heads or tails. It doesn’t matter if it follows a “pattern” or not. By writing it down you have specified the sequence. No matter which sequence it was, the probability that 100 coin tosses will get that sequence is exactly the same.

    OR, have someone toss the coins first, and then try to guess (i.e. specify) the sequence that was tossed without looking (also excluding any ESP or any other knowledgable “help”). The probability that the sequence tossed is the one you picked is still the same low probability.

    NOW, contrast this with tossing the coin 100 times and getting any old sequence of heads and tails. What is the probability that you will get something or other? Quite high! In fact, it is a certainty.

    If you want to say more specifically, “What is the probability that I will get some arrangement or other that is not such-and-such kind of pattern?”, then just

    1) define “such-and-such kind of pattern”,
    2) calculate the probability for getting that kind of pattern (usually very low), and
    3) subtract the answer from 100% (or from one, for zero to 1 probabilities).

    The result is that it will usually still be quite likely that you will get some sequence that is not-that-pattern, i.e. not a certainty but still quite high.

    BOTTOM LINE:

    Specification does not need to mean “regular pattern”, and specification does not “prevent” anything. However, whether a regular pattern or not, a specified sequence can be extremely unlikely.

    The mistake is to think that any-old-result is just as unlikely as the specified result. It would be if and only if that particular sequence is specified. If you are willing to take whatever turns up without specifying independently, that is not unlikely at all!

    It isn’t the orderliness or symmetry or regularity of a pattern, per se. It’s specifying vs. not specifying.

  31. ericB wrote: “It isn’t the orderliness or symmetry or regularity of a pattern, per se. It’s specifying vs. not specifying.”

    Right. For example, if you randomly grab Scrabble letters one after the other, it’s actually pretty likely that you’ll frequently generate a meaningful word like CAT or DOG. That’s not that impressive and obviously is totally explainable by reference to chance. But if you choose (specify) a sequence like “CAT” BEFORE you start grabbing the letters, then the likelihood that you will pick that particular sequence will drop significantly. And that’s just a three-letter sequence.
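    The Scrabble point above can be checked exactly with a toy model (uniform letters rather than real Scrabble tile frequencies, and a made-up eight-word dictionary, so the numbers are illustrative only):

    ```python
    from itertools import product

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    WORDS = {"CAT", "DOG", "HAT", "RUN", "SUN", "PIG", "COW", "BAT"}  # toy dictionary

    total = 26 ** 3                   # 17,576 equally likely 3-letter draws
    hits = sum("".join(t) in WORDS for t in product(ALPHABET, repeat=3))
    p_any_word = hits / total         # "some word or other": 8 ways to succeed
    p_cat = 1 / total                 # the prespecified word "CAT": 1 way

    print(hits)                       # 8
    print(p_any_word / p_cat)         # 8.0
    ```

    Specifying “CAT” in advance shrinks the success set from eight outcomes to one; with a realistic dictionary and longer words the gap grows enormously.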

  32. TRoutMac: One thing that I think is confusing, and makes probabilities difficult to understand, (for me at least) is that saying that an event has a “one in one million” chance of happening is NOT the same (correct me if I’m wrong) as saying that it WILL happen once in one million times. Isn’t that right?

    That is correct. About this question, others have already provided helpful information. I just want to point out that it is not hard to see why and to see the difference exactly.

    If you want to see the probability that some unlikely event will happen at least once in some number of independent attempts, it is actually easiest to

    1) figure the odds that it won’t happen at all over all those attempts, and then

    2) subtract this from 100%

    EXAMPLE: If you have a 10% chance of success (1 in 10) for a single try, then you have a 90% chance of failure on each try. Two attempts is 90% x 90% = 81% chance of not succeeding on any try. N attempts is (90%)^N, so ten tries is (90%)^10 = 34.9% chance of not succeeding on any of those attempts.

    So, even with a chance of 1 in 10, over ten attempts you still only have about 100% – 34.9% = 65.1% chance of success — NOT 100%.

    Likewise, trying for heads with coin flips (1 in 2 chance), there is a 50% x 50% = 25% chance that even after two flips you will still not get any heads, so only a 100%-25% = 75% chance you will get at least one head — again NOT 100%.
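    The arithmetic above checks out; as a reusable sketch (the helper function and its name are mine, not from the thread):

    ```python
    def p_at_least_once(p_success: float, n_tries: int) -> float:
        """Probability of at least one success in n independent tries."""
        return 1 - (1 - p_success) ** n_tries

    # The two worked examples from the comment above:
    print(round(p_at_least_once(0.10, 10), 3))  # 0.651 -- NOT 100%
    print(p_at_least_once(0.50, 2))             # 0.75
    ```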

  33. This is where I don’t understand how all this probability discussion applies to ID/evolution:

    For example, if you randomly grab Scrabble letters one after the other, it’s actually pretty likely that you’ll frequently generate a meaningful word like CAT or DOG. That’s not that impressive and obviously is totally explainable by reference to chance. But if you choose (specify) a sequence like “CAT” BEFORE you start grabbing the letters, then the likelihood that you will pick that particular sequence will drop significantly. And that’s just a three-letter sequence.

    Isn’t that the point of critics of ID attacks on evolution? You are specifying the “pattern” after it’s already been observed in nature. Since evolution does not have a “goal” or a specific end point, is it fair to use probabilities calculated after the fact?

    Moreover, since evolution isn’t acting all at once, it’s not a good analogy to claim it’s like throwing out letters and forming an 8-letter word. The analogy would have to account for a simple beginning and then some methodology for mimicking natural selection, et al. (or whatever term you wish to use for the evolutionary process).

  34. I understand that ericB, thanks for the discussion.

    Imagine we set up a coin flipper to spit out 501 random bits. We let it run, and it spits out a 501 bit string that doesn’t match any pattern we know of. Our coin flipper is fair, and we know the string was generated randomly. Obviously the universe and laws of nature allowed that particular string of low probability to occur.

    Now we get ready to run the flipper again, over and over until the end of the cosmos. But before we do, we write down a string of 501 bits on paper.

    Question: Will the coin flipper be able to produce this specific bit string, given from now until the end of the cosmos, by random chance?

    If yes, then we just produced a CSI string by chance, since I independently pre-specified my string. (It is meaningful to me, but to a stranger may appear as random as any other string.)

    If not, then something prevents this particular 501 bit length string from occurring. Remember, our flipper can produce many 501 bit length strings, of equally low probability. The only difference between this string and my first one was that I took the time to specify this one. So what in my specification act changes things?

  35. I guess an easier way to ask would be:

    What keeps CSI strings from forming by chance in random processes? Low-Probability?

    If Low-Probability, why doesn’t Low-Probability stop even more unlikely unspecified events from occurring all the time?

    If Specification, what about specifying changes what is allowed by nature?
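    One way to put numbers on the “from now until the end of the cosmos” question above. The flip rate and duration below are invented and deliberately over-generous; only the gap relative to 2^501 matters:

    ```python
    # Deliberately over-generous, made-up resources:
    FLIPS_PER_SECOND = 10 ** 45    # far beyond any physical coin flipper
    SECONDS_REMAINING = 10 ** 100  # a very long "end of the cosmos"

    trials = FLIPS_PER_SECOND * SECONDS_REMAINING  # 10**145 attempts in total
    expected_matches = trials / 2 ** 501           # 2**501 ~ 6.5e150 possible strings

    print(expected_matches < 1)    # True: expected hits on the prespecified string ~ 1.5e-6
    ```

    On this picture nothing “prevents” the prespecified string; it is simply that the expected number of matches over all available trials is far below one, whereas “some string or other” is guaranteed on every run.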

  36. ” For example, if you randomly grab Scrabble … But if you choose (specify) a sequence like “CAT” BEFORE you start grabbing the letters, then the likelihood that you will pick that particular sequence will drop significantly. And that’s just a three-letter sequence.”

    Or even better yet, what are the odds of throwing all the letters on a table and spelling “cat” with the C standing perfectly on top of the A, and the A on top of the T, on their ends? Here you have the law of gravity working against you.

  37. Okay . . .

    “Once more unto the breach, dear friends . . .”

    I see the issue has now “evolved” to the question of both complexity and specification. I will make a few remarks on that, but first, on the issue . . .

    1] The problem of selective hyperskepticism

    In Western thinking, we often meet those who imagine that by default the objection must prevail. For instance, we typically hear a quote from Sagan; “extraordinary claims require extraordinary evidence.”

    They are wrong, and wrong based on the issue of consistency. For, properly, claims only require ADEQUATE, not extraordinary, evidence — on pain of inconsistency between standards on what we accept and those on what we reject. And, in the empirical world of science, our evidence and arguments are provisional, so we look for the explanation that is best or most credible at accounting for the material facts, being coherent, and being explanatorily powerful but not simplistic or ad hoc.

    Otherwise, we are simply begging the question, and may be guilty of the most blatant inconsistencies.

    2] Fisherian Hyp testing and inference to design

    This immediately exposes the problem on rejecting inference to design on observing CSI.

    For, routinely, we characterise estimated distributions for events, and when they are sufficiently far out into the tails [usually at 1 in 20 or 1 in 100 levels] we accept that chance — the usual null hypothesis — is an inadequate explanation and revert to agency or natural regularity depending on the case in view. [Of course we run risks of errors: accepting chance when we should reject it, or rejecting it when we should accept it; but since when is the risk of error a new, or even an avoidable, thing?]

    As someone in that Feb thread said, as I recall, Dembski’s 1 in 10^150 is just about as conservative a rejection region as he has ever seen. In short, only the most extraordinary cases will be rejected, relative to the chance null hyp!

    So, let us see the selective hyperskepticism at work here for what it is.

    3] Now, what of [F]CSI and the explanatory filter?

    Here we look at two criteria, having first insisted on contingency so that natural regularities do not determine the outcome:

    [1] complexity, in the sense of being beyond 500 or so bits of information storing capacity,

    [2] specification, in the sense of fitting with an independently known pattern; in the cases of interest, a FUNCTIONAL pattern. (Sure, any 500-coin sequence is equi-probable, but when someone tells you he just tossed and lo, THTH .. TH appeared, no-one will believe him! For excellent reason. Just as Moshe didn’t believe Aaron when he said they just tossed the gold in the fire and the golden calf “just came out” . . . the first recorded inference to design!)

    In short, it is like having miles of a wall, with just one fly on it in a 100 yard stretch. Then, bang, a bullet hits the fly. A lucky shot, or an aimed shot? [And why do you plunk for the “lucky shot” explanation?]

    4] Relevant cases:

    In the case of bio-information, DNA ranges from 500k to in excess of 3 billion storage units, each capable of storing 2 bits of information. Even in bacteria, cutting below about 360k storage units destroys bio-function.

    But, 4^360,000 ~ 4.0 *10^216,741 possible configurations. That’s many orders of magnitude beyond the 2^500 (roughly 10^150) threshold!
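    A quick check of the figures in this paragraph, using the comment’s own numbers (note that 4^360,000 counts configurations, while the storage capacity in bits is 2 × 360,000):

    ```python
    import math

    units = 360_000                        # storage units for a minimal bacterium (figure from the comment)
    bits = 2 * units                       # 2 bits per unit -> 720,000 bits of capacity
    log10_configs = units * math.log10(4)  # log10 of the 4**360_000 configuration count

    print(bits)                # 720000
    print(int(log10_configs))  # 216741, i.e. 4**360_000 is about 4e216741
    ```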

    Then too, the functionality of the DNA chain’s stored information is quite easily observed: do you have a viable life form that can feed, move, reproduce etc in appropriate environments?

    So, how did we hit this lonely fly on the wall, given the obvious deep isolation of the functional states in the overall configuration space? [And cf here the way that statistics is deployed in thermodynamics to ground the 2nd law of thermodynamics. My nanobots and microjets example in the always linked through my name will help see this. In short I am pointing to selective hyperskepticism at work, again. Probability hurdles are as real as potential walls!]

    –> Of course, introducing the idea that we can only infer to material entities in science simply begs the question.

    –> Similarly, trying to speculate on a vastly wider unobserved cosmos than what we see is a resort to speculative metaphysics, and is ad hoc to try to rescue a hyp that is preferred but otherwise in deep trouble on accounting for observed facts.

    GEM of TKI

    PS: Onlookers, look at my always linked through my handle for updated details, esp in the appendix on thermodynamics, after a follow-up debate with Pixie in my own blog.


  41. Thread locks up on continuation . . . maybe a new spam filter will help, folks?

    GEM of TKI

  42. The real issue or lottery is that there could be an enormous number of possible starting points for life all of them of incredibly low probability and the one that happened or “won the lottery” was just one of these starting points.

    Jerry, I appreciate your posts and demands for ever clearer explanations. They only help improve our articulation.

    It really comes down to what is the most reasonable explanation. It’s theoretically possible for wind and rain to etch the alphabet on a rock, but if you found the alphabet etched on a rock would you really, really believe it was done by wind and rain?

    The materialists’ argument that life came about by known natural causes is akin to arguing that wind and rain etched not just an alphabet but the works of Shakespeare on a rock.

    It gets to the point where you have to shake your head, understand they are fools, then go on to more practical things.

  43. kairosfocus — GREAT POST.

    or half-post anyway :-)

  44. Hey kairosfocus,

    What does “GEM of TKI” mean? It sounds like a Hip-Hop crew shout out:

    “Yo, this is the G.E.M., representing TKI to the fullest. Tha Killa Instinct crew remains from 2007 until the Heat Death!”

    lol. Seriously though, is that just your initials and the shire you come from?

  45. An addition/correction to a previous post of mine when I responded to this (in part):

    But if you choose (specify) a sequence like “CAT” BEFORE you start grabbing the letters, then the likelihood that you will pick that particular sequence will drop significantly.

    I forgot to point out that this is not technically true. Picking the sequence does not change the probability of that sequence. It does change the probability of a “success” in your trials, but only because you changed what defined a success.
    In the first case, your specified set was “any three letter word” as opposed to “that” three letter word.

    Again, these probabilities are only really meaningful when the set of desired outcomes is specified BEFORE the trials. To specify them AFTER the trial has run vastly confuses the discussion. Besides, it is not an analogy to how evolution works, since evolution is not completely random. Using pure probability on the end result only is not a realistic “test” of the theory.

  46. Just as Moshe didn’t believe Aaaron when he said they just tossed the gold in the fire and the golden calf “just came out” . . . the first recorded inference to design!

    LOL!

  47. Kairosfocus,

    Thank you for your posts, I always appreciate your insights.

    Just so you know, I am not trying to undermine ID with my questions regarding low-probability and specification. They are honest questions, places I feel IDers can give clearer answers.

    The answers given so far go towards answering them. So I appreciate them.

  48. tribune7,

    You do not have to convince me that the process could not have happened by accident, chance, law or whatever non-intelligent process someone names. But I find the ID answers to the materialists’ objections/claims often vague and simplistic. While I generally understand the concept of CSI, I have yet to see a convincing explanation of it that is not picked apart.

    The materialist will argue that it all happened in steps over deep time because that is the way they handle the low probabilities. They will sometimes even assert it is a certainty given enough time. They have no proof that any such steps ever existed, but they dare you to prove it couldn’t have happened this way. ID should focus much of its attention on refuting the step approach. That is the best way, I believe, to refute the lottery cop-out, instead of just quoting absurdly low probabilities.

    Also, I once listened to a shrill student challenging ID by saying “where is your proof,” claiming someone like Darwin was a real scientist who meticulously gathered empirical data to support his hypotheses. The answer given was to look at the complexity of the cell, when the answer should have emphasized that Darwin had no empirical evidence for his hypotheses and that exactly zero new species have been confirmed to have arrived on the planet by natural selection.

    The main argument for Darwin these days is common descent when even if you accept common descent there is no evidence that it ever happened in a gradual fashion nor is any other mechanism indicated.

  49. I have yet to see a convincing explanation of it that is not picked apart.

    Just because you can’t convince someone doesn’t mean you aren’t right and haven’t made a reasonable case.

    People are self-delusional. The materialist is self-delusional. You can never convince them. Just because your opponent says you haven’t won the argument doesn’t mean you haven’t.

    A good sign is when they start resorting to ridicule.

    30 percent of the Democratic Party believes that President Bush knew beforehand of the attacks on the World Trade Center.

    You can never convince most of them otherwise.

    Debating a materialist is like debating one of those people.

  50. Hi Folks . . .

    OOPS: From “it won’t post” to multiple posts! [Pardon . . . BTW, Trib 7, how do you get those neat smileys to post at UD?]

    Next, thanks for the many kind words.

    Atom, FYI: GEM is the acrostic formed by my initials, and TKI is my organisation. I hail from, live and work in the Caribbean — currently volcano island, now with a suspiciously quiet ol’ smokie . . .

    Now on a few follow up points:

    1] Jerry & Trib 7: The real issue or lottery is that there could be an enormous number of possible starting points for life all of them of incredibly low probability and the one that happened or “won the lottery” was just one of these starting points

    The real issue is isolation of the islands of functionality.

    As John Leslie pointed out long ago now, whether or not there are many regions of the wall with flies, and even regions positively carpeted and swarming with them, when we see quite isolated regions beyond the reasonable reach of a random walk or random targeting, then the wondrousness of hitting the target emerges. [In short, LOCALLY isolated islands of function are all we need to defeat the chance hyp.]

    And, whatever other life technologies or architectures are possible — maybe even non-physical [dare I say “spiritual”?] — the fact is that in general small random disturbances of the observed bio-functional molecules typically destroy function.

    Further, the required information to get to the cluster of functional molecules for life is way, way beyond the sort of 500 or so bit limit on chance increments we have discussed. In short, even having got to life, body plan innovation by RM + NS is seriously problematic. Also, the OOL problem is so intractable that Shapiro recently panned the whole RNA world hyp in Sci Am; with issues of getting to information (which he apparently does not see apply equally to his own metabolism-first model).

    Of course, Leslie was in the main talking about the quasi-infinite array of sub-cosmi issue, and was in effect highlighting that since local changes in many, many key parameters make for a radically life-hostile cosmos, the fine-tuning issue does not go away so easily.

    2] Eric: Picking the sequence does not change the probability of that sequence. It does change the probability of a “success” in your trials, but only because you changed what defined a success.

    Go to the head of the class!

    The vital point relevant to both origin and body plan level diversification of life is that “success” is independently “functionally specified.”

    Next, it is, empirically, highly complex and integrated in such a way as to in many cases be credibly irreducibly complex, which strains the capacity of variation and co-opting of parts for other purposes.

    Third, it is so information rich [in the Shannon sense of data storing capacity] that it is utterly unlikely for something that meets the three observed constraints to happen by a chance-dominated process in the gamut of the observed cosmos. (The same extends in effect to the formation of a life-habitable cosmos. For details cf my always linked and freshly updated after the exchanges with Pixie and Dave Scott.)

    But, routinely, agents produce such entities — even this post is a case in point — i.e. we KNOW a credible source for FSCI/CSI.

    Therefore, on inference to best explanation, the cosmos we observe, the habitable planet we inhabit, and the life [including life that requires mind and morals . . . i.e. us] we see on that planet are all most credibly due to intelligent agency.

    So strong is this case that only selective hyperskepticism, linked to the institutional dominance of evolutionary materialism and associated methodological naturalism, blocks it from becoming the new paradigm.

    But, that is coming.

    GEM of TKI

  51. BTW, Trib 7, how do you get those neat smileys to post at UD?

    colon +dash + right parenthesis :-)

  52. Following up . . .

    Had to do family deliveries.

    3] Jerry: While I generally understand the concept of CSI, I have yet to see a convincing explanation of it that is not picked apart.

    Of course the first problem with that is just what Trib 7 pointed out: selective hyperskepticism can easily demand an undue degree of “proof” for an inconvenient claim, whilst on other claims that are acceptable to the agenda, a much lower standard of evidence will do.

    But also, let us note:

    a] CSI is NOT — repeat, “NOT” — an ID-originated concept. CSI predates ID and in fact is part of the set of emerging ideas on the challenge of the origin of life that helped trigger the emergence of the design school of thought in the early 1980′s. (And contra Forrest et al, ID dates to the late 70s to early 80s.)

    b] If you look up Thaxton et al’s online chapters from The Mystery of Life’s Origin, you will see the following in CH 8 [I leave off the link because of the spam filter . . .]:

    Only recently has it been appreciated that the distinguishing feature of living systems is complexity rather than order.4 This distinction has come from the observation that the essential ingredients for a replicating system—enzymes and nucleic acids—are all information-bearing molecules. In contrast, consider crystals. They are very orderly, spatially periodic arrangements of atoms (or molecules) but they carry very little information. Nylon is another example of an orderly, periodic polymer (a polyamide) which carries little information. Nucleic acids and protein are aperiodic polymers, and this aperiodicity is what makes them able to carry much more information. By definition then, a periodic structure has order. An aperiodic structure has complexity. In terms of information, periodic polymers (like nylon) and crystals are analogous to a book in which the same sentence is repeated throughout. The arrangement of “letters” in the book is highly ordered, but the book contains little information since the information presented—the single word or sentence—is highly redundant . . . . Only certain sequences of letters correspond to sentences, and only certain sequences of sentences correspond to paragraphs, etc. In the same way only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to useful biological functions. Thus, informational macro-molecules may be described as being in a specified sequence.5 Orgel notes:

    Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 . . . .

    Yockey7 and Wickens5 develop the same distinction, that “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.

    c] Thus, we see major non-ID names in OOL research coming up with the term “specified complexity” BEFORE the ID movement originated, as a natural outcome of their work at the turn of the 80s.

    d] What does the idea in essence mean? Here, TBO in TMLO ch 8, contrast three sequences:

    (i) An ordered (periodic) and therefore specified arrangement:

    THE END THE END THE END THE END

    Example: Nylon, or a crystal.

    (ii) A complex (aperiodic) unspecified arrangement:

    AGDCBFE GBCAFED ACEDFBG

    Example: Random polymers (polypeptides)

    (iii) A complex (aperiodic) specified arrangement:

    THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!

    Example: DNA, protein.

    e] Dembski therefore “only” provided a mathematical model of an observed phenomenon. He also put up a criterion: that the chain in question should store at least 500 or so bits of information for a unique specified state, i.e. the equivalent of being less than 1 in 10^150 within the relevant configuration space. Life forms by far exceed any reasonable version of this filter.

    f] IMHCO, the model and the associated inferential filter are appropriate and effective [insofar as I have followed his mathematics, and in light of my own knowledge of thermodynamics].

    g] But because of the implications, we often see the classic philosophical move of running the argument in reverse so as to deny the antecedent: P => Q, but I reject Q, so I deny P.

    h] The trap in that is the issue of selective hyperskepticism: the objectors routinely accept similar cases on far less evidence. [Cf the general acceptance and common USE of Fisherian reasoning in inferential statistics. Dembski has competently addressed debates over Bayesian inference, for those who wish to make such points.]

    In short, CSI is coherent, properly empirically anchored, and originated BEFORE the ID movement. Its denial and contentiousness as a concept today reflect selective hyperskepticism and debate tactics, not the actual state of the case on the merits.
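    As a quick arithmetic sanity check on the 500-bit criterion discussed in e] above: 500 bits of storage capacity corresponds to a configuration space of 2^500 states, which is indeed of order 10^150. A minimal sketch (working in log base 10 to keep the numbers manageable):

```python
import math

# 500 bits of information-storage capacity => 2^500 possible configurations.
# Compute the base-10 order of magnitude: log10(2^500) = 500 * log10(2).
bits = 500
log10_states = bits * math.log10(2)
print(round(log10_states, 1))  # ~150.5, i.e. 2^500 is about 3.3 * 10^150
```

This is only a restatement of the arithmetic behind the criterion, not a derivation of the bound itself.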

    Typical for ID-related issues, in my observation.

    GEM of TKI

  53. Thanks, let’s try: :-)

  54. Hi Jerry (et al.):

    Did that stab at defining and warranting CSI as a coherent and useful concept help?

    Note the points in essence:

    1] CSI emerged circa turn of the 80s, and it emerged from the general development of OOL research. Indeed, it served to help trigger the emergence of the first identifiable modern design school analysis, TBO’s TMLO [~ 1979 - 84].

    2] In particular, the concept of CSI was identified as OOL researchers sought to distinguish the sort of polymer molecular pattern seen in life forms from the simple order of repetitive crystals on one hand, and from random chains of monomers on the other. (In short, this was a further step in the discussion triggered by Prigogine’s dissipative structures and similar cases of spontaneous ordering, e.g. the formation of crystals from solution, and that of a hurricane.)

    3] In doing that, the concept of an informational macromolecule emerged, and it was seen as complex, aperiodic and informational in function, thus specific and in effect storing information through physically instantiating a code, either in stored form [DNA] or in bio-functionally expressed form [proteins].

    4] Dembski’s model and his upper probability bound estimates seek to identify whether particular cases of such functionality could credibly have formed through chance-dominated forces, as opposed to agency; deterministic natural forces being of only instrumental character here, as contingency dominates. To do that, he in effect forms a chance-origin null hypothesis, then rejects it if its probability of occurrence, under reasonable principles of calculation, falls below what could at all reasonably occur in the gamut of the observed cosmos over its reasonably estimated lifetime. [Cf on this the statistical thermodynamics use of phase spaces and the discussion of relative statistical weights of macrostates and associated probabilities, which undergirds the statistical form of the second law of thermodynamics. I discuss this in Appendix A of my always linked.]

    In short, there are two distinct levels to the issue, so to dismiss the concept as incoherent and/or irrelevant and/or factually inadequate and/or excessively ad hoc, both levels would have to be properly and cogently addressed.

    IMHCO, on long observation, that is simply not done in the attempted rebuttals I have seen and especially in those I have engaged “live.” Instead, routine resort is made to an evidentiary double-standard, which I have come to descriptively term selective hyperskepticism. To that the proper counter is that we should have a criterion of consistent adequacy on addressing the realm of facts, so that we only need adequate evidence in particular cases, not “extraordinary” evidence.

    Trust that helps

    Cheerio

    GEM of TKI

  55. kairosfocus,

    Thanks for all the long posts on CSI. I just saw them and have copied them to print out and read. Hopefully, sometime this weekend I can get a chance to see how much of it I can digest and will respond with any questions I have.

    One of the good things about this site is that there are many people who care and think out what they say. So thank you again and I will see what I can assimilate.

  56. kairosfocus,

    I read all your comments above and I have two reactions.

    First, I do not have to be convinced of the incredibly low likelihood that life could arise by chance or even law, even if the universe was designed to lead to life, which is the theistic evolutionist’s assumption. I cannot imagine any way it could happen, even given an eternal universe.

    Second, I have seen enough rabbits pulled out of a hat to know that when a gambler bets you that he can cut the ace of spades, you better not bet against him. I will get back to the ace of spades bet at the end.

    Nothing in your discussion covers the lottery example. Namely, that there could be an extremely large number of ways to construct life and we happen to be the one example that emerged. Given different initial conditions, or different accidents of nature, another form may have emerged at a later time.

    Also, I wouldn’t want to challenge God to say He could not have made life differently, or that the way He chose here was the only one, or even one of only a few. So if you believe that, then why couldn’t one of these many, many ways that God could have created life be one that could have emerged without direct agency, through law and chance?

    Now, the term “emerge” is a hot term in evolutionary biology circles because it explains everything. You just have to say this is what emerged or evolved, without giving the process or steps that took place. Everybody then knows that what you mean is that some random process by chance, with the help of the laws of physics, produced more than order at one time, and this particular thing is the result. And if it could do it once, then it could do it again. And if it could do it for a small example, then it could do it again and add to this small example to make it a slightly bigger example. And once you are there, what is to prevent it from going even bigger? So it goes on and on, and deep time is your ally. And to use a Darwinian metaphor, it could be that some of these complexities were more stable than others, and that these are the ones that survived, and that the conditions that produced them no longer exist, except that the complexities survived. It is all Darwinian just-so story telling, but how are we to know it could not have happened, or even that God did not want it to happen this way?

    For example, at the time of the Cambrian Explosion, our solar system was in the galactic spiral arm, and the amount of radiation falling on the earth was probably much higher than it is today, as it was surrounded by millions of stars much closer than today. The night sky would have looked much different then. Now, as The Privileged Planet so rightly points out, we are out on our own, off the galactic arm, with very few other stars nearby. Could this have had an effect?

    So what has to be addressed is the step approach, and I am not sure CSI does this; after reading your comments, I am not sure I yet understand a general definition of CSI.

    Where does the specificity lie? Is it in the sequence itself, or in some outside reference to the sequence? For example, in an English sentence, the specificity comes from the grammar and dictionary of the English language, not the sequence itself, or else we wouldn’t know whether it was nonsense. But in a fair coin toss of 500 heads in a row, it is in the sequence itself. In a DNA sequence, the specificity comes from the fact that the process produces functional proteins, not that a sequence of ACTG’s has meaning of itself. If the proteins were not functional, would we say the DNA was specified? Actually, it is what produces the tRNA’s and the ribosome that gives specificity to the DNA, which I also assume is somehow produced by DNA. I have only seen how proteins are assembled, not how such things as ribosomes are made. (Does anyone have a cite to explain this?)

    So your discussion of CSI should discuss what gives specificity to a sequence, and this is generally lacking in any discussion I have seen. Why are coin tosses and sentences both specified? Why is the same term used in each example? What specifies DNA or proteins? I prefer not some abstract information theory approach, but something in plain English that the average person could understand.

    The tendency is to give examples and not a definition that would help one decide if this sequence is specified or not. I tried to read Dembski’s book but gave up.

    We all can recognize unlikely events. And I understand that DNA is different from a specific rock outcropping, which is also both rare and complex. But how can you completely rule out some small specified sequence occurring within the probability limits you have set up, having that event accepted, and then another occurring, which, when combined with the earlier low-probability but acceptable event at a later time, forms an event that is outside the boundaries?

    There are a lot of questions to be answered.

    By the way, the gambler example came from an old TV show called Maverick. A gambler bet Maverick, who was also a gambler, a thousand dollars that he could cut the Ace of Spades. Maverick took the bet and examined the deck to see if it was fair. The gambler then put the deck of cards on the table, took a knife out of his pocket, and thrust the knife through the deck of cards into the table. He demanded his thousand dollars, whereupon Maverick pulled out the Ace of Spades from his pocket. He had taken it out when he examined the cards. The moral is: don’t bet a gambler or someone who can do magic tricks, because you may not be able to imagine what is going to happen.

  57. Hi Jerry:

    I do not really want to get into a long back-and-forth exchange, having just had 3/4 MB with Pixie over in my own blog, on closely related subjects. [You can follow up through my always linked . . .]

    I comment:

    1] Lottery example.

    I addressed that further back above, and others did so; IMHCO, cogently.

    In the case in view, the onward point relative to Leslie’s remarks is that LOCAL isolation is sufficiently wondrous to raise the issue of design as the best explanation, relative to what we know about the origins of FSCI systems.

    You might want to look at Denton’s remarks on the matter back in 1985 or so. It boils down to this: if islands of functionality [they don't have to be unique] are sufficiently isolated, random-search-dominated, non-purposive strategies cannot credibly access the first island or hop from one island to the next. In short, the OOL and body-plan-level evolutionary claims lack empirical foundation. And so does the origin of THIS life-habitable universe.

    2] Are there a large number of ways to construct life, apart from the DNA-and-protein design we see around us?

    First, I DID raise even spiritual forms of life as a case in point considered seriously by a great many informed people across the ages. [Indeed, some argue – IMHCO, compellingly; cf my always linked – that we are spiritual-material hybrids, and that this is the root of the credibility of mind and the compelling nature of morality.]

    Second, I explicitly noted that the DNA-protein architecture we see manifests FSCI, and on a local basis the configurations that work are sufficiently mutually isolated that random-search-dominated strategies cannot credibly originate and diversify what we see. But agent action easily explains what we do see, in light of what we do know.

    3] Gambler example:

    Of course, the point of the tricky gambler is that this is an instance of a naïve person imagining that random forces are in play when in fact clever design is at work. FSCI is again the product of design . . .

    Break

  58. Continuing . . .

    4] why couldn’t one of these many, many ways that God could have created life be one that could have emerged without direct agency through law and chance.

    Note, it is you who are introducing God into the discussion; I have spoken of intelligent agents and their capacity relative to undirected chance plus necessity. The inference that the principal agent involved in life as we observe it is God is beyond the proper current realms of science, as there is no commonly accepted base of empirical observation. In philosophy – the topic thus introduced – there are good reasons to infer to God as principal agent, but that is a different subject and not on topic. [If scientific arguments can be marshalled in a phil context when they are thought to support atheism, surely they can also be regarded as properly science even though they may now lend support to theistic worldviews!]

    5] Everybody then knows that what you mean is that some random process by chance with the help of the laws of physics produced more than order at one time and this particular thing is the result. And if it could do it once, then it could do it again. And if it could do it for small example, then it could do it again and add to this small example to make it a slightly bigger example. And once you are there, what is to prevent it from going even bigger.

    An admirable summary of a common, but fatally flawed, perception. The individual step of complexity is too big, and the islands of functionality are too isolated, by far. [Cf my always linked.]

    Unfortunately, it fails to see and understand just how fast the available probabilistic resources run out: i.e. 500 – 1,000 bits of functional information is more than enough to put such a mechanism hopelessly beyond the reach of the observed universe [~ 10^80 atoms, and 13.7 BY]. Thus, the relevance of Dembski’s upper probability bound!

    And note the relevant range: if a configuration or cluster of configs is isolated in the space of all configs of such information-carrying capacity to better than one in 10^120 – 10^150, it is beyond the reach of the observable universe. Remember, even a small DNA strand is 500k bases long, and life function breaks down if knockouts go beyond about 360k. 4^360k ~ 3.95*10^216,741. I doubt there have been/can be as many as 10^5000 examples of DNA over the years from origin to heat death of the universe – and I am being incredibly over-generous, as the Dembski number is the number of QUANTUM STATES possible across the lifetime and gamut of the cosmos. 10^5,000 is only about 1 part in 4 * 10^211,741 of the possible states for just a 360,000 base pair DNA strand. The fly is incredibly isolated on the wall!
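    The arithmetic in the paragraph above can be checked directly. A minimal sketch, working in log base 10 since 4^360,000 far overflows ordinary number types:

```python
import math

# Configuration space of a 360,000-base DNA strand: 4^360000 sequences.
# log10(4^360000) = 360000 * log10(4) gives the order of magnitude.
bases = 360_000
log10_space = bases * math.log10(4)

exponent = int(log10_space)           # 216741
mantissa = 10 ** (log10_space % 1)    # ~3.95
print(exponent, round(mantissa, 2))   # 216741 3.95, i.e. ~3.95 * 10^216741
```

Subtracting 5,000 from the exponent likewise recovers the quoted fraction: 10^5,000 is roughly 1 part in 4 * 10^211,741 of that space.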

    But, we are not dealing with Mr Spock here. So, we have to address the problem of worldview-level question-begging through selective hyperskepticism, as I have pointed out above.

    Pause . . .

  59. Concluding . . .

    [OOps, 1 in 1/4 * 10^211,741th fraction . . .]

    6] Defining CSI

    Again, we have an independent control on the “specification”: the macromolecules involved must hit the fly on the wall, i.e. they must function in a viable organism, and we have good reason to believe that the configs in question are incredibly isolated in the overall config space, even at the bottom end of the range for viable life.

    Then, in the Cambrian, for instance, we have to get dozens of new body plans in a context where a modern arthropod (oddly enough, a fruit fly!) has a DNA of order 180 million base pairs. If just 10% of that is functional, we need to account for, on an earth of ~ 6 * 10^24 kg and estimated lifespan ~ 4.6 BY [again being over-generous], the origin of some 17.5 mn base pairs of biofunctional information. The config space for that is ~ 7.05*10^10,536,049 “cells.” This swamps the 10^5000 over-generous possible organisms again.

    We see complexity here: the information storage capacity required to express the functional system. And we see specification (and associated intricate, integrated structure and life-function algorithms): an easily observable and highly specific target — either it flies or it fails. Either it hits the target or [far more likely on a random search] it hopelessly misses. And we have no need to entangle ourselves in the project of trying to get a global precising definition that every rational agent is compelled to agree with; we have cases in point enough to have to deal with the facts as observed.

    So the OOL researchers were forced, over 25 years ago, to identify and accept that there is an objective, easily observed and abundantly instantiated difference between [1] order, [2] complexity, and [3] the complex, specified information that is characteristic of life. They provided cases in point, based on linear strings of potentially information-storing elements. [From a digital perspective the difference is simply a matter of how many states the “alphabets” have: English, 27 letters and a space, in the simplest version; DNA, 4 states; protein, 20. We simply exponentiate: N^i = number of accessible states.]
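    The "exponentiate N^i" remark above is easy to illustrate. A minimal sketch, using the alphabet sizes given in the comment (English as 27 letters plus a space = 28 symbols; DNA, 4 bases; proteins, 20 amino acids) and a short string length of 10 purely for illustration:

```python
# A string of length i over an alphabet of N symbols can occupy N^i
# distinct states; config spaces grow exponentially with string length.
def states(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

print(states(4, 10))    # 1048576 possible 10-base DNA strings
print(states(20, 10))   # 10240000000000 possible 10-residue peptides
print(states(28, 10))   # 296196766695424 possible 10-character English strings
```

Even at length 10 the counts differ by orders of magnitude across alphabets, which is the point of the paragraph above.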

    The sudden rhetorical backtracking, and in some cases the pretence that it is ID thinkers who have the burden of proof beyond all rational dispute, is ill-informed at best, or even frankly dishonest.

    Speaking of which:

    7] how can you completely eliminate some small specified sequence happening, less than the probability limits you have set up and then having this event accepted and then why could there not be another.

    Provisionality is a part and parcel of scientific work, which seeks to account for the currently known and credible facts through the best explanation to date. Of course it is always conceivable that we can come up with an exception.

    For instance, we could conceivably come up with a perpetual motion machine, and so throw all of thermodynamics into a cocked hat.

    But, based on what we do know and can best explain, this is not likely, so we accept that thermodynamics is a well-warranted science. So, I refuse the improper shifting of the burden of proof through selective hyperskepticism, and so should you.

    On the evidence, the architecture of life has both the complexity and the specificity that I have noted above, which go beyond the reasonable reach of the proposed chance + necessity mechanisms. But, on inference to best explanation, this is well within the reach of agent action – and absent certain institutionally powerful worldview commitments, would immediately be seen and accepted – so I simply point out that this is selective hyperskepticism at work.

    So, if you want me to accept your speculative mechanism, you must meet a simple empirical test, commonly used in the sciences: demonstrate, on a replicable basis, the creation of functionally specified complex information beyond the Dembski-type bound through mechanisms that rely only on chance plus undirected natural forces, even incrementally, starting with simple stages and cumulating. [But TARGETTED searches based on genetic algorithms are nothing but illustrations of how agents can use reasonable random searches to find targets within the reach of probabilistic resources. “Methinks it is like a weasel,” etc., fail.]

    A simple case in point would be to take 1 billion PCs, load them with unformatted floppies and rigged drives that will spew random noise across them once every minute for a year, then test for a properly formatted disk – an exercise which can be replicated to your heart’s content. After success in that exercise, further random noise would be spewed across the surviving disks [let's be generous and say that the formatting is not to be touched], and we would look for a properly formatted document, image or program in any reasonable PC document format, with at least 1 k bits of information in it.

    How many years would we have to wait for success?

    GEM of TKI

  60. PS: I discuss the technical nature and origins of the fallacy I descriptively term selective hyperskepticism here. I of course put the link in its own little post to see if that will get through the ever watchful spam filter . . .

    PPS: Part 3 seems to have got itself swallowed by the ever lurking spam filter . . . [It includes a test case and addresses the usual Genetic Algorithm type objection . . .]

  61. kairosfocus,

    You have to understand that I support the ID position and think Darwin’s gradualistic approach is nonsense and that OOL is probably the best case for ID there is, followed by the Cambrian Explosion.

    I believe the anti-gradualism information, or the lack of it, is the killer for current biology’s preoccupation with neo-Darwinism. At the moment I am halfway through watching an hour-and-50-minute video I got on iTunes by a Stanford professor who so far has not introduced any empirical evidence for gradualism but is just extolling how wonderful a theory it is. He has spent the last 10 minutes talking about dog and pigeon breeding, which is interesting, and I am sure he is going to use the same rhetorical approach Darwin did to convince you it works the same way in the wild.

    So I subscribe very readily to the half of ID that presents the information critical of gradualism. On the other side, I understand all the arguments from small probabilities and complexity that cannot be overcome in any likely manner by chance and law. What I have failed to see is a clear, cogent discussion of CSI in plain English. The concept of specificity seems to be all over the place, and what I was looking for is a simple definition and then seeing it applied to the many examples offered.

    I also did not see any good refutation of the lottery argument other than hand waving. People generally misunderstand the lottery argument and the materialist’s objections to the ID use of small probabilities.

    It is not necessary to go into a long exchange. I do not have time to reply. Over time I will see if there is anything that I think is easy to understand that explains CSI clearly. So far I have not seen it but there is a lot more to read.

    I was a math major and had many courses in statistics and probability in graduate school, but these were quite a while ago and I never used them in work, so I am familiar with the arguments but the details have long faded into the background.

  62. tribune7: It really comes down to what is the most reasonable explanation. It’s theoretically possible for wind and rain to etch the alphabet on a rock, but if you found the alphabet etched on a rock would you really, really believe it was done by wind and rain?

    This is a good point to keep in mind. Science makes (or should be making) the best inferences possible — not proofs — based on the data it can access so far. Even long-standing perspectives (e.g. Aristotle’s understanding of the content of the heavens, Newtonian physics) may eventually be superseded as we learn more.

    About the alphabet example, there is one crucial aspect that we tend to overlook.

    The worst problem for the unguided hypothesis is not the improbability of forming symbols (though that may be unlikely). The unattainable aspect is getting mindless processes to attach associated meanings to symbols used as a code.

    It comes so easily and naturally for us, we might think only of the difficulty of forming letters (visions of grade school trauma?).

    But mindless nature has no reason or motive to construct the encoding, storage/transmission, retrieval, and decoding mechanisms necessary to associate meaning with symbols taken as a code. Any part of that system is useless without the others.

  63. The unattainable aspect is getting mindless processes to attach associated meanings to symbols used as a code.

    Good point

  64. jerry, regarding lotteries and responses to ID skeptics, it is not surprising to find ID skeptics suggesting that there might be Many paths to “life”. One path they may like is to redefine life to include other options.

    I wouldn’t attempt to deny that “life” could be redefined such that even generations of stars might be counted as life.

    But none of that would tell us that unguided processes can create the language-based life we actually observe. Redefinitions of life become a dodge.

    IOW, the issue is not whether unguided nature could make anything that might be called life (including various unseen and unthought of possibilities). The issue is whether the best inference concerning the language-based life we do observe and study is that it requires intelligent agency.

    I do not believe that lottery reasoning can defeat that inference because I do not consider the inference to be based merely on a probability argument.

    In particular, I would submit that unguided nature has zero ability to cross the Language Barrier to processing of symbol sequences as coded messages, regardless of how many possible routes it tries to run up to the Language Barrier.

  65. jerry, regarding CSI and clear explanations for the average person, although I love math, I find most people don’t and they do not trust it, even with clear explanations. (Try getting people to accept that 0.999… repeating exactly equals 1. ;-)

    This is doubly so for probability. Even professional mathematicians and university professors can be convinced and yet be dead wrong. See Marilyn vos Savant’s “The Game Show Problem.”
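    The "Game Show Problem" referenced above is the Monty Hall puzzle: switching doors wins about 2/3 of the time, a result many professionals famously resisted. A minimal simulation makes the point without any formal probability argument:

```python
import random

# Monte Carlo check of the Monty Hall problem: pick a door, the host
# opens a different door that hides no prize, and we either switch or stay.
def play(switch: bool) -> bool:
    prize = random.randrange(3)
    pick = random.randrange(3)
    # Host opens some door that is neither the player's pick nor the prize.
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

n = 100_000
print(sum(play(True) for _ in range(n)) / n)   # close to 2/3 when switching
print(sum(play(False) for _ in range(n)) / n)  # close to 1/3 when staying
```

Running the simulation is often more persuasive to a general audience than the conditional-probability derivation, which is the point being made above.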

    I believe it is important for mathematicians such as Dembski to make their case, which is a genuine contribution, but I also expect this to remain a black box for most people.

    That said, consider for a moment swapping the coin flipping machine for a prebiotic DNA base pair generating machine. Given a particular assumed genetic code, one might consider probabilities for chance generation of sequences corresponding to functional proteins, etc.

    However, probabilities calculated based just on the sequence itself are independent of and do not reflect whether decoding mechanisms exist. Without an actual decoding mechanism to give associated semantic meaning to the sequence, all sequences are equivalently meaningless as well as equally unlikely. A sequence without decoding is just noise, no different than random bits.

    Some ID skeptics think that because random noise counts as Shannon information, nature’s ability to generate random noise can solve the information problem. But analysis that only goes as far as sequence improbability has not yet touched the much deeper issue of creating associated semantic content and meaning.

    I believe that the fact that language requires intelligence can be made more accessible to the average person than pure probability analysis.

  66. Hi again:

    I see Dave Scott has started a definition-of-ID thread, now that his earlier one has slipped off the opening page.

    Now on key points:

    1] Jerry: ID etc

    There are three strands of issues on the Evo Mat view that fall generally within the ambit of science-dominated reasoning:

    i] Critique of the NDT thesis that RM + NS accounts for body-plan level bio-diversity, with a secondary [but it should be primary] issue on OOL. (Secondary arises because, conveniently, the origin of life is not viewed as part of NDT proper, though of course there is a close association.) This is far broader than ID or Creationism, and actually has quite a distinguished history.

    ii] ID, biological: that agency best explains origin and macro-level biodiversity, as opposed to chance + necessity in the NDT paradigm

    iii] Creationism [Biblical form; there is a generic Creationism that is not pinned to specific texts]: asserts that the Biblical account is an accurate record of origins by credible witnesses and the Creator, and that it leaves sufficient observable evidence that we can see good reason to take it at the appropriate level of interpretation.

    –> The first is pretty well established, though hotly contended. There is good reason to see that for the second, also. The third is more controversial.

    2] The concept of specificity seems to be all over the place and what I was looking for is a simple definition and then seeing it applied to the many examples offered.

    Part of the problem here is the concept of a definition.

    Definitions can be seen as falling under two general heads: by example, and by precising statement. The latter falls into two sub-heads: genus and difference [i.e. taxonomy], and statements of, in effect, what is necessary and sufficient to see that a putative case is really in/out of the target zone.

    The trick is to understand that we first form concepts by abstracting commonalities from experiences with examples of a pattern. (Think about how we come to understand chairs, tables, furniture, artifacts, etc.) Precising verbal definitions then apply boundaries to the concept, and depend for their credibility on their ability, in the first instance, to include recognised examples and exclude recognised non-examples. Then, we tend to give the precising statement the status of gatekeeper over whether or not something is in/out. But note what has happened: the examples and concepts come first and are logically prior to the statements.

    In many real-world cases, we are unable to come up with such precising statements that are acceptable to every rational agent, not least because worldviews and agendas are often at stake. In other cases, we just simply cannot figure out how to do so: try to define “life.” But the concept is valid, and we can identify many clear instances and non-instances, with borderline cases that challenge our ability to precisely state terms and conditions for in/out. But, family resemblance rules.
    It is fair comment that above, I have cited adequate examples to give a clear enough concept, and that Dembski’s model offers a reasonable filter for deciding at least the clear cases. And the cases in view are more than clear: they are more or less plain as day. Agency is by far and away the best EXPLANATION for OOL and for macro-level biodiversity. In the realm of facts we are dealing with inference to best explanation, not demonstrative proof to an arbitrary standard (which is often “conveniently” substituted when the best explanation does not sit well with one’s worldview; i.e. resort to selective hyperskepticism, as Simon Greenleaf pointed out long ago). It is also by far the best explanation for the fine-tuning of the observed cosmos for life like ours. It is just that this cuts across institutionally dominant worldviews.

    BTW, the config space issues and associated probabilities are tied to the issue of IBE: when the raw probability of something happening by chance that just happens to come out very conveniently to fit a target zone becomes too incredible, agency makes a lot better sense.

    Pausing . . .

  67. Continuing . . .

    3] EB: The worst problem for the unguided hypothesis is not the improbability of forming symbols (though that may be unlikely). The unattainable aspect is getting mindless processes to attach associated meanings to symbols used as a code.

    Strictly, since any one config in a set of possible outcomes is, by the Laplacian principle of indifference, just as attainable as another [absent specific reason to weight possible outcomes unevenly, which simply makes the calculation a bit more complicated], we can “just by coincidence” get to a code and an associated integrated information system; no logical or physical/force/energy barrier directly prevents it.

    Just as the oxygen molecules in the room in which you sit can all rush to one end by chance, leaving you to collapse mysteriously. But the probabilities of such happening by chance are so remote that the “lottery” to attain that target by chance is unfeasible; this is the basis for the statistical form of the second law of thermodynamics. (In short, a probabilistic-resources hurdle is as real and as effective as a direct physical barrier. Also, J, while you may indeed have studied Math, did you do statistical thermodynamics? That is the materially relevant discipline. Unfortunately, it is also one of the more abstruse and subtle provinces of physics; cf. my Appendix A in my always linked for some sketched-in thoughts on what I am getting at.)
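    To put a rough number on that hurdle: treating each molecule as independently equally likely to sit in either half of the room (a textbook idealization, not a full statistical-mechanics treatment), the probability that all of them end up in one half is (1/2)^n. A short illustrative sketch:

```python
import math

def log10_prob_all_one_side(n_molecules):
    """log10 of the chance that every molecule sits in one chosen half
    of the room, treating each as independent with probability 1/2."""
    return -n_molecules * math.log10(2)

# Roughly 2.7e19 molecules occupy a single cubic centimetre of air
# at standard conditions (the Loschmidt number).
print(log10_prob_all_one_side(2.7e19))  # about -8.1e18
```

    Even for one cubic centimetre of air, the exponent utterly dwarfs Dembski’s 1 in 10^150 universal probability bound.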

    What the proposed chance origin of a code, with its symbols and the associated algorithms and complex information-processing system, does is compound the odds utterly beyond merely astronomical, making the point that there is a probability hurdle plainer and plainer, by reduction to absurdity.

    After all, there is a simpler, even obvious explanation: agents routinely use symbolic code, target purposes, create algorithms, and use physical resources to implement technologies that express the codes and algorithms, then execute them to achieve the targets. That is what we see with OOL, macro-level biodiversity and cosmogenesis, including the multidimensional Goldilocks-zone effect that leads to us on Earth.

    So, the probability issue I have emphasised is more general, but the symbolic language and algorithms by chance issue makes the absurdity more directly obvious in a computer literate age.

    4] The real issue: Philosophy and institutional politics, NOT science and analysis

    So much is the above the case that the real device being used to block the inference to design is an attempted redefinition of science as excluding agency on relevant matters, through so-called methodological naturalism.

    This is evident in court decisions as well as statements of leading institutions and spokesmen. In short, it is only by begging major definitional and historical questions on the nature of science that the illusion that evolutionary materialism is “science” can now be sustained.

    That is why the current Gonzalez case is so blatantly political and patently unjust in character. But unless the public wises up, then rises up, the power brokers hope to get away with injustice and oppression.

    BTW, that is one of the reasons why a favourite accusation of the evo mat advocates these days is that ID thinkers are doing politics and PR not science. For, so long as they control the institutions and the media mikes, they can block hiring or promotion or tenure or publications and break careers unjustly to their heart’s content. But they know that if they face an accurately informed and justly angry public, they don’t stand a chance. So, we see the classic resort to the bodyguard of lies and the turnabout false or misleading accusations to keep such an uprising at bay for as long as possible.

    Time to wise up and rise up, folks

    Seriously . . .

    GEM of TKI

  68. PS: Based on comments in another thread, Jerry may wish to look here to see more on the limitations of our thinking and reasoning. In brief, reasoning inherently embeds faith-commitments, and science in particular is no exception. Further to this, scientific reasoning is by inference to best current explanation, not demonstrative proof. Indeed, even proofs rely on faith points. [Or we end up in infinite regresses.]

  69. kairosfocus,

    I have read the other thread and your comments and have yet to see a coherent discussion of specificity. All that is offered is low-probability examples, and the materialists have an answer for that even if it is bogus. It is the God of the Gaps argument, and it has been winning the day ever since Laplace made Newton look foolish.

    As I said, the discussion is all over the lot and people keep using low probability events as examples of specificity but offer no definition or why that word should be used. How do Mt. Rushmore, coin flips, DNA, card orders etc. justify the use of the word specificity?

    The people you have to convince are the ones doing medical research, running the government agencies and universities and those supporting them not the people on this site. You also have to convince the typical science student in the country that you have a coherent scientific explanation for what you propose.

    ID has two sides to it and one is open to a lot of criticism because it does not seem to have any lucid argument for it other than a general appeal to low probability events. I don’t think bringing up faith is useful as part of the discussion especially when ID is primarily looked at as a conversion opportunity by many.

  70. 4] The real issue: Philosophy and institutional politics, NOT science and analysis

    Dittos.

    Jerry, something for you to mull:

    How do you know there really aren’t green men on the moon? I mean to 100 percent certainty.

    As a mental exercise imagine a scenario in which green men are living on the moon unbeknownst to us.

    Then ponder this: whatever you come up with will be more likely than for life to have occurred without a designer.

  71. tribune7,

    Actually I always thought there were one-eyed, one-horned, flyin’ purple people eaters up there.

    Thank you for letting me know that I was mistaken.

    But maybe they both could be up there, since we are not 100% certain, and they could be symbiotic.

  72. Actually I always thought there were one-eyed, one-horned, flyin’ purple people eaters up there.

    :-)

  73. Hi Jerry

    I believe I have already said enough, by objective measures.

    I have shown, by discussion in light of examples tied to digital strings, what the CSI concept is, and that it emerged from the OOL research community at the turn of the 1980s as a means for understanding how life is different from the sort of ordering that, say, Prigogine investigated. [BTW, Prigogine as cited by TBO in the linked gives some interesting comments too . . .]

    I have also discussed the difference between concepts and definitions, and addressed the issue of the limitations of proof, in science and generally.

    In my judgement, I have given you and others enough, and note that your own comment is that:

    the materialists have an answer for that even if it is bogus

    Now, too, the insistent use of “bogus” — i.e. objectively fallacious — but persuasive arguments is the mark of the manipulative rhetor, or even the propagandist, not the serious thinker. (Onlookers, cf. my always linked on what I think of the notorious God-of-the-gaps fallacy.)

    The proper answer to such dishonest advocacy is not to let them get away with such selective hyperskepticism, but to expose them, and to point out how long ago this was exposed.

    That too, I have done.

    Cheerio :-)

    GEM of TKI

  74. TroutMac, “One thing that I think is confusing, and makes probabilities difficult to understand, (for me at least) is that saying that an event has a “one in one million” chance of happening is NOT the same (correct me if I’m wrong) as saying that it WILL happen once in one million times.”

    This “somebody has to win the lottery” argument is useless because it ignores relevant issues. For example, consider a Rubik’s Cube. It has a limited number of configurations, or states. To get from one state to another requires a certain minimum number of intermediate states, or steps. No amount of random activity or “luckiness” can change this fact. There are certain states that can never occur for a Rubik’s Cube, never, ever, unless, of course, you peel the stickers off and put them back on in one of these impossible states.

    If I gave a five-year-old kid one of these cubes in an impossible state, he probably would not be able to detect the fraud. But an adult scientist who is familiar with Rubik’s Cubes would most certainly be able to.

    My point is, blindwatchmaker devotees merely assume that it is possible to get configurations of proteins, like a flagellum, without demonstrating that the material path is possible without a “cheat” imposed by an intelligence with insight.

    When they can demonstrate this (Matzke’s paper doesn’t come close to giving a complete development of the assembly process), I will begin to take them seriously.

    I’m neither a religionist nor “anti-evolution”. I’m just an engineer who demands proof of concept for the claims made. If these blindwatchmaker devotees were engineers and approached reality that way, I would not hire them.
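    The “impossible states” point above can be checked in miniature with the cube’s smaller cousin, the 2x2 sliding-tile puzzle. An exhaustive breadth-first search over every legal move sequence (an illustrative Python sketch, not part of the original comment) shows that exactly half of all sticker-peeling arrangements can never be reached:

```python
from itertools import permutations
from collections import deque

# 2x2 sliding puzzle: tiles 1-3 plus a blank (0) on a 2x2 board,
# cells numbered 0 1 / 2 3. A move slides a tile orthogonally
# adjacent to the blank into the blank's cell.
ADJ = {0: (1, 2), 1: (0, 3), 2: (0, 3), 3: (1, 2)}  # grid adjacency

def reachable(start=(1, 2, 3, 0)):
    """Return every state reachable from `start` by legal moves."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        b = state.index(0)
        for n in ADJ[b]:
            s = list(state)
            s[b], s[n] = s[n], s[b]
            s = tuple(s)
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return seen

total = len(list(permutations((0, 1, 2, 3))))
print(len(reachable()), "of", total)  # 12 of 24
```

    Swapping just two tiles by hand (the sticker-peeling “cheat”) lands you in one of the 12 arrangements that no sequence of legal moves can ever produce.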

  75. O’Leary:

    A person who accepts a court ruling as definitive in a matter of this type apparently believes in the social construction of reality.

    Physics can be socially constructed too! See:

    http://www.jefflindsay.com/PCPhysics.shtml
    An excerpt:

    The prohibitive, traditional “laws” of physics must be rejected in favor of new models that foster tolerance, empowerment, and social justice. Under the old order, radical conservative forces have imposed “conservative” laws restricting the use of energy, mass, momentum, and electrical charge. Rather than conserving such forces and powers, they must be increased and made available to all people, regardless of race, gender, or sexual orientation.

    If anyone needs a smile today check out the whole article.

  76. But I was taken aback that he would justify this identification not with an argument but simply by citing Judge Jones’s decision in Dover, saying “That’s good enough for me.”

    Or imagine him saying Dred Scott was “good enough for me” or Plessy v Ferguson.

  77. [...] John Derbyshire: “I will not do my homework!” Whose knowledge of ID isn’t shallow? I will grant the ID-ists one thing: their tactics are clever. Make the public think the advanced math in Dembski’s books makes ID an esoteric subject requiring ‘lots of homework’. The air of expertise has certainly proven effective as a propaganda snow-job. But, apart from some obvious blunders that people do make in the restatement of the Dembski version of design, the question of the design argument is not complex. It seems complex because it tweaks one’s metaphysical unconscious (as does ‘natural selection’), but in fact no one has ever gained an inch of ground on the question since Kant and Hume. John Derbyshire has written some respectable books on the history of mathematics (e.g., his biography of Riemann). He has also been a snooty critic of ID. Given his snootiness, one might think that he could identify and speak intelligently on substantive problems with ID. But in fact, his knowledge of ID is shallow, as is his knowledge of the history of science and Darwin’s writings. [...]

  78. 1. If you follow social construction theory all the way through, you will find that it follows a Hegelian world view, and its special application allows for a thing to be true and false at the same time and under the same formal circumstances. That is one subtle reason why it is impossible to have a rational discussion with its advocates.

    2. Not only will Derbyshire not do his homework, he will not even respond when someone else does it for him. The difference between CS and DI was explained right in front of him. Does anyone doubt, nevertheless, that he will continue to promote the big lie by conflating CS and ID even after having been instructed? How can anyone attribute to ignorance behavior that is clearly dishonest?

  79. Sorry to join this interesting thread so late.

    I see that much has already been said (thanks kairosfocus, for always being so generous and pertinent) and we have discussed similar things before, so I will just try to sum up some aspects which are specially dear to my heart:

    1) Consensus: Jerry seems to think, if I understand correctly, that in our discussions nobody has been able to give a clear definition of specification. I disagree. Dembski has done that very well, and we have tried, in our simple way, to “explain” some aspects in our discussions. The fact that some remain unconvinced is nobody’s fault, but I am very happy with the concept of specification and with the general understanding of it on this blog.

    2) Specification: I will try anyway to sum up my personal understanding of specification, even if I am certain that the concept is still so deep that it will be clarified further in the years to come. Specification, as I see it, is any characteristic which makes some piece of information specially recognizable by conscious and intelligent beings. Specification can be given by at least three different mechanisms:
    a) Pre-specification: if a specific piece of information has been defined in advance, it can be “recognized” when it occurs again. In this case, specification is not a property of the information itself, but rather of the previous occurrence of the definition.
    b) Compressibility (order): some (rare) pieces of information are highly compressible in terms of bits, and that makes them recognizable to conscious, intelligent beings. A sequence of 1000 identical bits is a good example of that.
    c) Function: some (rare) pieces of information can “accomplish” specific tasks. In other words, they have a recognizable “meaning”, linked to what they can do or communicate. The classical sequence of prime numbers is a good example. A functional protein is another one. A computational algorithm is still another one. This is, perhaps, the most important kind of specification, and the most represented in nature. I believe that all languages (both natural and programming languages) fall in this category.

    3) Low probability: specification alone is not enough. Low probability is necessary too, so that we can have CSI. A sequence of ten identical bits is specified, but its probability is not very low (1:2^10). A sequence of 10^150 random bits has an extremely low probability, but it can easily occur (unless it is pre-specified). But specification “plus” extremely low probability cannot practically occur in reality. And if we assume a very, very generous level of low probability, like Dembski’s UPB, then you can be really sure that any CSI is due to design. It didn’t occur by chance; it was conceived and written by an intelligent being.

    4) Lottery: any discussion about lotteries is completely overcome if one really understands the previous points. No lottery in the universe could “win” a specified outcome with UPB probability (or lower). Unless, of course, you believe in the multiverse argument. But then, that’s your problem…

    5) The “life could have been built in many other ways” argument. Maybe that’s true. But the argument is completely insignificant. Let’s go back to the problem of function, and let’s consider, for simplicity, computational algorithms. Let’s consider ordering algorithms, that is, sequences of bits which, in a specific hardware environment, can order data. We know that the ordering process can be realized in many different ways. You have many different ordering algorithms, of different complexity, length, and efficiency. All of them can perform the task, even if in different ways and times. But how many sequences, up to a certain length in bits, do you think are ordering algorithms? Not one, but certainly not many. So, the probability of finding an ordering algorithm by chance in a random sequence of, say, 500 bits will probably not be 1:2^500. Let’s suppose that ten of those sequences are functional ordering algorithms. Then, if I am not wrong, the probability will be ten times higher, that is, something less than 1:2^496. Do you think that makes a big difference?

    6) So, unless you believe in the multiverse fantasy, or unless you believe that for some even more fantastic reason a lot, but really a lot, of the random combinations of information can give rise to life, the CSI argument is solid truth, and the “lottery” argument is completely bogus.

    7) Finally, just stop and consider that, according to what we can see daily in our world, all around us (living beings of all kinds and forms, just to be clear), the supposed “lottery” should have been won billions of times, each one at a level of CSI abundantly above (or below, if you prefer) the UPB limit: first of all to have a universe capable of order and life; then to get the OOL, the DNA code, the transcription and translation system; then to get eukaryotes from prokaryotes; then to get multicellular beings, sexual reproduction, the Cambrian explosion, different body plans, each single new functional protein, each single new regulation network, the evolution of each species, the development of the nervous system, of the immune system, and so on… Not to mention the flagellum, just not to repeat ourselves.
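    The ordering-algorithm arithmetic above is easy to verify: multiplying the number of functional targets by ten recovers fewer than four bits of a 500-bit improbability. A quick sketch (Python, purely illustrative):

```python
import math

# Odds of hitting one specific 500-bit sequence by blind chance:
# 1 in 2^500, i.e. log2(probability) = -500.
one_target = -500.0

# If ten distinct 500-bit sequences happen to be functional, the target
# zone is ten times larger, which barely moves the exponent:
ten_targets = math.log2(10) - 500
print(round(ten_targets, 2))  # -496.68
```

    Going from one acceptable target to ten moves the odds from 1 in 2^500 to roughly 1 in 2^497, which is no practical difference at all.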

  80. I didn’t get in on the hash and rehash re “specification” in that other thread–unless I did and have absent-mindedly forgotten–but maybe the problem is that there is no algorithm, no set of mechanistic procedures that could be programmed into a robot, that could pinpoint a specification. It’s a pretty mechanical procedure that would disqualify simple repetitive patterns, but it may take a designer to recognize the specification behind a design. And these folks who don’t recognize that there is such a thing as mind: could they admit to something that only a mind could recognize?

    By the way–O’Leary and Beauregard’s The Spiritual Brain was waiting for me when I got home.

    Y’all have a great weekend too!

  81. Rude’s:

    It takes a mind to spot a mind — paraphrased.

    Great issue . . . though in the case of functionally specified information-rich entities, rarity of function in the config space leads to derangement of function from relatively small random changes [it's hard to build up redundancy and error-correction able to take in large changes!]

    GP- Thanks for the kind words.

    GEM of TKI

  82. ericB (63): “But mindless nature has no reason or motive to construct the encoding, storage/transmission, retrieval, and decoding mechanisms necessary to associate meaning with symbols taken as a code. Any part of that system is useless without the others.”

    I agree, but of course the standard response of Darwinists would be that since this is at base a physical mechanism, “meaning” is an abstraction only existing in the conscious minds of us humans considering the matter. Since they cannot for a moment entertain teleology or intelligence in the process, they assume as a given that the genetic code and translation system arose by numerous small, successive adaptive steps. Since they can sort of imagine this (even with no specific “just so” story), then it must be how it came about (including apparently indefinite numbers of levels of alternate frame coding). Such closed-minded ideological thinking is impervious to reason.

  83. [...] last year I reported on this blog that (go here) that John Derbyshire, despite repeatedly weighing in against intelligent design online and in [...]
