
Complexity, Specification, Design Inference, and Designers

I often see misunderstandings of what ID is about. It’s about inferring design by critical analysis of a pattern and the ways that pattern could have come to exist. I find a comparison with a lottery to be the easiest way to understand this.

Suppose there is a state lottery and each month for 12 consecutive months 10 million tickets are sold and one winning ticket is drawn at random. Obviously there must be 12 winners at the end of the year. While each winner beats odds of 10 million to 1, there’s nothing unusual about that, as someone must beat the odds each time.

Now suppose that the 12 winners are all siblings in order from oldest to youngest.

This lottery result constitutes a pattern.

First of all we have complexity in the pattern. The odds of any particular ordered sequence of 12 winners are 1 in 10^84 (that’s a 1 followed by 84 zeroes). Any single pattern drawn from a space of trillions upon trillions of possible patterns is complex. But complex things like this happen all the time because the result must be one of those many sequences. A sequence of 10 coin flips, no matter the result, is not complex, as there are only 1024 possible results. This is roughly how we define complexity. Complex results happen all the time and in themselves are no indication of design.
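
For the computationally inclined, here’s a minimal Python sketch of that counting (exact integer arithmetic; it assumes the ordered draws described above):

    # Outcome counts behind the complexity comparison above.
    # Python integers are arbitrary precision, so the counts are exact.
    coin_outcomes = 2 ** 10              # 10 coin flips: 1024 possible results
    lottery_sequences = (10 ** 7) ** 12  # 12 ordered draws, 10 million tickets each

    print(coin_outcomes)       # 1024 -- far too few outcomes to count as complex
    print(lottery_sequences)   # 10^84 -- any single sequence is astronomically improbable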

Next, the pattern has specification. The pattern conforms to an independently given specification. In this case, siblings from the same family, in order from oldest to youngest, is the independently given specification.

Now we have identified the lottery result as a complex specified pattern (or complex specified information if you will). This is a reliable indicator of design. The more complex the result and the more definitive the pattern, the more reliable the design inference.

No matter how convincingly we are assured that the lottery was secure from cheating, no reasonable person will be convinced that there was no cheating involved. So we can rest assured that the result of the lottery was almost certainly not random but was the result of design (cheating; a rigged game).

However, even though we know the result was rigged, we have no clue how it was rigged (the mechanism) nor who did the rigging (the designer).

ID is the theory that certain patterns in nature exhibit specified complexity that can only reasonably be attributed to design. ID does not and cannot reveal how the design was accomplished nor what entity or entities did the designing. ID is nothing more or less than design inference based upon high improbability of independently given patterns arising by chance.

Now let’s quickly look at the flagellum. There’s no room for debate about complexity. It’s a precise arrangement of millions of protein molecules from a set of dozens of different proteins, each protein itself a complex pattern. There’s little room for debate that it conforms to an independently given pattern. It’s a propulsion device. Where there is room for debate is in what Bill Dembski calls “probabilistic resources”. These are the resources that “chance” (or unintelligent cause) has to draw upon in forming the pattern. This is why ID seems to be an attack on mutation & selection. Mutation & selection are the leading known probabilistic resource that could form the specified complexity of the flagellum.

Logically one can never prove a negative. ID proponents will never be able to prove that some unknown probabilistic resource wasn’t the source of design in the flagellum. However, this is a problem with nearly every hypothesis in science, and it’s why you often hear that all of science is tentative. Some bits are just more tentative than others. This is why most philosophers of science say a hypothesis has to be, at least in principle, falsifiable. Even if we can’t prove something is true, if we can at least in principle prove it false, then it’s science. The ID hypothesis of the flagellum is falsifiable. In principle a neoDarwinian pathway for its evolution can be plotted on paper and confirmed in a laboratory.

The greater question in my mind regarding falsifiability is whether there’s any method in principle of falsifying a hypothetical neoDarwinian pathway for the flagellum. The only real contender for falsification is a design inference! So you see, if ID didn’t exist, neoDarwinists would have to invent it just so they have a method of falsification in principle for random mutation plus natural selection in creating things like the flagellum.


45 Responses to Complexity, Specification, Design Inference, and Designers

  1. DS – nicely related to something that hits home for all of us – our pocketbooks. There is also an important idea here that I hope everyone catches – design is so easily detected that it actually takes deliberate effort and significant energy to hide the fact of design when its designers don’t want it detected. Did that make sense?

  2. Intelligent Design 101

    My new blog incorporates Mike Genne’s ID 101, plus adds other insights. All comments are welcome.

  3. Sorry- Mike Gene’s ID 101

  4. “The ID hypothesis of the flagellum is falsifiable. In principle a neoDarwinian pathway for its evolution can be plotted on paper and confirmed in a laboratory.”

    How does the existence of an alternative hypothesis (the neoDarwinian pathway) falsify the ID hypothesis? Couldn’t the designer just have made it *look* like the flagellum evolved?

    Not in a scientific explanation. That would be a metaphysical explanation. Why do neoDarwinists always resort to metaphysical arguments in order to dispute the detectability of design in patterns found in nature? It appears you can’t dispute it with science. Metaphysical arguments and court orders. How very lame. -ds

  5. -ds,
    So what is it, am I banned?

    No. If you have anything constructive to say I’ll approve it. Read the moderation policy. -ds

  6. Remember the time when the New York pick-three lotto had 9-1-1 as the 3 numbers? The amazing thing is it happened on Sept 11th, one year after the Twin Tower attack.

    Very good! There’s an independently given pattern there. The result however isn’t very complex, as on any given day the odds would be 1:1000 for that result. Dembski defines the universal probability bound where a design inference is warranted as one in ten to the 150th power (1/10^150), a vastly smaller chance than 1/10^3. But you’re thinking along the right lines! -ds
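
    A small Python sketch relating the two figures, using the roughly 500-bit equivalent of the universal probability bound that comes up later in this thread:

        from math import log2

        # Improbability measured in bits: Dembski's 1/10^150 bound works out
        # to roughly 500 bits, while a specific pick-three draw is ~10 bits.
        pick_three_bits = log2(1000)   # -log2 of 1/1000
        upb_bits = 150 * log2(10)      # -log2 of 1/10^150

        print(round(pick_three_bits, 1))  # 10.0
        print(round(upb_bits, 1))         # 498.3 -- the 9-1-1 draw falls far short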

  7. Thanks for the explanation. The only concern I have is that with the lottery, and most other examples I have seen used, it is possible to reliably estimate the probability of the event occurring by chance. Even in the case of Mount Rushmore this may be possible by studying erosion patterns. However, in the case of biological systems I don’t think our current knowledge allows accurate estimates of the probability. I may be wrong so please feel free to correct.

    In regards to the falsifiability, you may not be able to falsify a particular pathway, unless you can calculate that the pathway couldn’t have conceivably occurred. However there are several cases where it may be possible to falsify evolution/conclude design. A good example is the glow-in-the-dark pigs (http://news.bbc.co.uk/2/hi/asi.....605202.stm). Here we have an example of an entirely new gene appearing in a population; not only that, but a comparative genomic analysis will show that it does not appear in other populations of pigs or other mammals, and it is in fact a jellyfish gene. In this case we would be able to reliably infer design.

    In this case we would be able to reliably infer design.

    No Chris, you can’t. Didn’t you get the memo A New Paradigm for Biology?

    I explained that the difficulty with design inference is that there can never be proof that every probabilistic resource has been factored into the equation. I also explained that this isn’t a unique problem for ID and is why all of science is tentative – we can never know everything. ID can’t be held to a higher standard. Its opponents don’t get to have their cake and eat it too. -ds

  8. George:
    How does the existence of an alternative hypothesis (the neoDarwinian pathway) falsify the ID hypothesis? Couldn’t the designer just have made it *look* like the flagellum evolved?

    If it could be demonstrated that unintelligent, blind/ undirected processes can account for something then Occam’s Razor slices off the requirement of an intelligent designer. Remember science is not about “proof”. It is about the best explanation/ inference based on the available data.

  9. ds:

    Of course it’s not a scientific argument. In my experience, most ID supporters are born-again creationists (some present company may be excepted), and that is exactly the sort of argument they make. The only lame thing about court orders is that they come about when ID supporters try to take the fast track to scientific acceptance straight into school textbooks.

    Oh give me a major break. The Cobb county case was about a textbook sticker that didn’t mention ID, didn’t mention religion, its only crime being that it called evolution a theory that should be carefully studied and critically considered. The courts are giving evolution the fast track to become scripture is what’s happening. -ds

    Joseph:
    I know science is not about “proving” theories. I’m a scientist. It’s about hypothesis testing. The way the point was phrased above – “a neoDarwinian pathway for its evolution can be plotted on paper and confirmed in a laboratory” – is not a test of ID. It is a test of the particular evolutionary model proposed. If the test of this alternative hypothesis rejects the hypothesis, then ds implies that this supports a null hypothesis of “intelligent design”. The true null hypothesis would simply be “did not come about by this evolutionary pathway”.

    What I want to see is a test of ID that can possibly support Ha: this thing was intelligently designed, and rejects the Ho: this thing wasn’t.

    So what repeatable test was performed that said “Ha” the eukaryote nucleus was the result of random mutation and natural selection? [sound of crickets chirping] NeoDarwinian dogmatists have such a double standard. You can’t possibly fail to see these things I’ve mentioned once they’re brought to your attention. -ds

  10. DaveScot, there is one problem I see with your lottery analogy. The analogy begins with unstated foreknowledge of what kinds of things might be able to influence a lottery, human interference being one such influence. Because that influence is ruled into your argument by the existence of people at many points in the lottery system (sales, drawing, distribution, etc.), it is a justified argument. Other influences which might cause a non-random sequence could also be present, and we could expect a different specified outcome from them. Unevenly weighted lotto balls might cause certain numbers to be selected more frequently, for example. However, the pattern you suppose points us in the direction of human interference only because it is the kind of pattern we expect from our previous experiences with human sources. Your argument for a design inference from the CSI is correct, from that point.

    However, that justification does not exist for the flagellum or other IC biological systems. In order to infer design for a biological system, which came about at a time when no human agent was demonstrably present, we must first rule in the presence of a designer and know something about what kind of design would be expected. We know of the existence of naturally selective environmental pressures and we know that there are many kinds of “designs” that evolution can bring about. Therefore, evolution is very hard to rule out as a possible designer. The existence of CSI in biological forms could, logically, be pointing us towards an evolutionary pressure in many cases. However, to differentiate the presence of a designer apart from evolution, we must first establish what kinds of designs that designer would employ. But, can anyone do that?

    Did everyone just stop reading when I got down to explaining there is room for debate in probabilistic resources? -ds

  11. Disclosure: I am not a statistician, nor am I an expert on Dembski’s approach, having read only his online articles. I hope that neither my rookie status nor the length of this post will disqualify me from discussion, as I’m eager to learn.

    I submit that complexity calculations, in order to be meaningful, must be based on the aggregate of all relevant chance hypotheses, and that care should be taken to include any plausible hypotheses that posit a dependent relationship between specification and event. Furthermore, I submit that complexity calculations are meaningful only to the degree that the underlying chance hypotheses are based on knowledge rather than assumption.

    The lottery scenario is a case in which we have excellent knowledge of the processes involved, since lotteries are designed and operated by humans. We know, for instance, that a uniform distribution is presumably designed into the selection procedure. We also know that information regarding family affiliation is typically far removed from the lottery operation, so the odds of such information being inadvertently incorporated into the selection procedure are negligible. (In real life, we would try to increase our knowledge by asking questions such as, “Did the whole family go to Kwik-E-Mart and buy their tickets together?” For this example, I’ll assume that we’re restricted to the information stated in the scenario description.) Based on our knowledge, it seems reasonable to consider only one chance hypothesis – a fair lottery. As Dave explained, this hypothesis confers very low probability on the outcome, so the complexity condition is met. Since the specification condition is also met, we should infer intentional cheating.

    Now consider a case in which our knowledge is lacking. Suppose that we receive an RF signal from space that exhibits a cyclical on-off-on-off pattern, and suppose further that pulsars have not yet been discovered. It’s clear that a specified pattern is manifest, and if we consider only a uniform noise hypothesis, then the long string 10101010… easily meets the complexity condition. (Note that the universal distribution gives us the opposite result, but I’ve never seen the universal distribution used in the CSI approach.) CSI indicates design in this case as in the case above, but here the indication is unwarranted because our set of chance hypotheses has very little informational basis.

    Note that the correct but uncontemplated hypothesis in this case ties the specification to the event by a non-telic process, resulting in a high probability for the specified pattern. Processes like this are abundant in nature, but the CSI analyses that I’ve seen don’t account for them unless they’re already known to be part of the causal story. I hope that someone can clarify this issue for me. Thanks.

    The lottery example was only intended to illustrate the concepts of specified complexity and design inferences. I went on to explain probabilistic resources and the difficulties associated with fully delineating them in complex biological machinery. Given we have no direct or indirect evidence that chance mechanisms had anything to do with the creation of novel cell types, tissue types, organs, or body plans, it makes it difficult to assign them any plausible chance at all. That’s the basic problem with neoDarwinian evolution – it attempts to assign a mechanism for creative events which were not observed in the past and cannot be repeated via experiment. -ds

  12. Dave, thanks for your response. Your point regarding the epistemic limitations of science is a good one.

    I can’t speak to the biology aspect, so I’ll take your word for it that there are no known non-telic processes that result in biological systems. I assume that we also agree on the fact that there are no known _telic_ processes that result in biological systems. So in biology, as in the pulsar example above, we’re faced with the question of how to formulate meaningful chance hypotheses from a dearth of information.

    The question is vital to the reliability of CSI-based design inference. Under a uniform-noise chance hypothesis, cyclical radio signals exhibit CSI. The resulting design inference turns out to be a false positive, and not because of some freakish Gettier counterexample, but because of the unremarkable discovery of yet another non-telic pattern generator – a natural computational process, if you will. The abundance of such processes in nature, which are intractable as an aggregate, would seem to render CSI-based design theory less reliable than other scientific theories that can be negated only by extraordinary new evidence.

    As always, I hope someone will correct my misconceptions.

  13. Secondclass,

    Which natural cyclical radio signal has CSI in excess of 1/10^150?

  14. Irving, that’s the point. The complexity of the signal depends on your chance hypotheses. Under a white noise hypothesis, any signal of sufficient length has a probability of less than 1/10^150. As far as specification goes, even a signal of constant amplitude and frequency exhibits a recognizable pattern, kind of like flipping millions of heads in a row. So, under a white noise hypothesis, a constant signal implies design. The design inference of CSI is only as good as our chance hypotheses, which in some cases are pretty arbitrary.

  15. Dave Scot,

    Your lottery analogy is similar to something that had a big impact on me. I think it was Michael Behe who explained that the odds are about 1/10^125 of a random “coming together” of even a simple protein made up of just 100 amino acids [with 20 different amino acids to choose from, given that they must all have peptide bonds, and all be “left-handed” – I am sorry I don’t know the correct terminology].

    To most of us non-scientists, non-mathematicians, the number 10^125 does not convey a sense of its size. We need to ‘see’ how big that number is.

    I “translated” the number into something regular people might make sense of. First, create 57 stacks of cards, each stack containing three decks of cards, each deck a different color [red, yellow, and blue]. Thus, each stack contains 156 unique cards. Take a card from each stack. The odds of correctly drawing 57 straight cards in a certain pre-specified order, such as 57 straight blue aces of hearts {wouldn’t that essentially be ‘specified complexity’?}, are about 1/10^125.

    Now design a super computer that can play this poker game. How many computers, playing the game how many times a second, for how long, will it take to generate 10^125 hands?

    1. Make each computer really fast, able to play one trillion hands per second [10^12].
    2. Make one computer for every neutron and proton in the universe [I understand that’s about 10^85].
    3. Have these computers run continuously since the universe began [10^17 seconds].

    That gets us ‘only’ 10^114 hands total. We need more universes!! Or more time. Or faster computers. Or all three.

    Given that, who would bet their life on an accidentally-stumbled together protein molecule?
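
    A minimal Python sketch of that arithmetic, using the comment’s own figures (the 10^85 nucleon count and 10^17 seconds are the commenter’s assumptions):

        from math import log10

        stacks = 57
        cards_per_stack = 3 * 52                  # three 52-card decks = 156 unique cards
        target_odds = cards_per_stack ** stacks   # one pre-specified card from every stack

        hands_per_second = 10 ** 12               # one trillion hands per second
        computers = 10 ** 85                      # one per nucleon, per the comment's figure
        seconds = 10 ** 17                        # rough age of the universe in seconds
        total_hands = hands_per_second * computers * seconds

        print(round(log10(target_odds)))  # 125: the draw is roughly a 1-in-10^125 event
        print(round(log10(total_hands)))  # 114: still about 10^11 times too few hands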

  16. The basic notion of an inference to intelligence is not the issue. There is some question regarding the most plausible assumption, however.

    As shown by the general acceptance of SETI, this notion of an “intelligent signal” is not deeply contested. If, for example, a protracted sequence of prime numbers was broadcast to us via radio signal, most scientists would accept that it was plausibly of intelligent origin.

    Though certainly no one would look down upon those scientists (including myself) who would consider an unknown natural explanation. It is hard to fathom, but it might be possible. There were peculiar radio signals received here that, while initially suspicious, later proved to be the product of a naturally occurring astronomical phenomenon. A long sequence of primes is awfully tough to imagine as a natural occurrence though.

    And certainly if we, for example, received even a low-def broadcast of some strange green aliens introducing themselves in English, along with a quick description of themselves and their best understanding of humans based on the radio broadcasts they’ve received, I think just about everyone would accept the “intelligent design hypothesis” regarding that signal. The only issue of debate would likely be whether it was a hoax (human intelligence) or real (some distant alien intelligence).

    But for the moment, let us assume some prime numbers from space. We might reasonably assume the origin of the signal suggested an environment sustaining life not unlike ours. Further, let us assume that subsequent, detailed study of the origin star system suggested a young star and a largely dust-filled solar system (i.e., no planets). Let’s even go so far as to imagine that we send a probe there, and find no planets and no sign of an intelligent species. And even worse, we find no evidence that some intelligence had visited that system prior to our probe (no evidence of an exhaust trail, for example).

    Then, we scientists on Earth are faced with this: a sequence of prime numbers sent out to the cosmos, which we received, with no discernible evidence that an intelligence was the cause. Even then, the default assumption is that there was an intelligence, but we have not figured out how to detect its existence. But certainly, an entirely valid approach would be to find a natural explanation for the origin of the prime numbers. In fact, it is the only obvious way forward. We have no potential to understand the numbers from space except by natural, observable means. Even if it is hopeless and wrong, it’s simply the only thing we could do.

    Further, assume some scientist did find a plausible explanation for the natural origin of primes as radio signals, but not necessarily that particular signal we received (though almost, and importantly, plausibly in a way we don’t quite understand yet). Then the default assumption might be in debate. And if we find, over time, that the prime number emission is characteristic of a number of astronomical emissions — even if we didn’t understand how that initial one worked in particular — the default assumption might even move to that of a natural mechanism.

    Of course, the video transmission would blow the methodological naturalism gasket of every scientist right from the start. The idea of a natural process emitting an NTSC broadcast with coherent English-speaking apparent-aliens discussing their society in the context of observed human radio broadcasts… yeah, that’s some alien intelligence. It might be, somehow, still possible it was a natural occurrence, but I’d likely have won the lottery millions and millions of times before that occurred.

    We can tell the difference between Mount Rushmore and the “Face on Mars”, and if we had found a laptop in the fossil record — even in the 19th century before we’d made any laptops — we’d all be blown away.

    The difference is simply that the “interdependent, complex nanomachinery of the cell” is (a) not a revelation, and (b) rather unlike the known products of existing “top-down” intelligent engineering. That is, to the vast majority of professional biologists, it looks more like the “Face on Mars” than it does Mount Rushmore. I know you disagree with that, but your disagreement is largely over possible mechanism rather than pure incredulity.

    I don’t expect you to just believe me, and I completely understand the concern over believing a purely RM/NS method settles everything. All I can say is that it is a work in progress, and that I think a new “layer of science” is forming. It doesn’t replace RM/NS — it builds on it — and it provides the abstraction necessary to intuitively understand the process. At the moment, understanding complex evolution in the framework of specific genetic events is likely akin to understanding chemistry in the framework of particle physics. This issue is clearly already true of certain aspects in evolutionary theory (advanced hierarchical problem solving and endosymbiosis — generally, cooption of function — the indirect mechanisms of evolution).

    Obviously there will need to be more detailed experimental linkage and grounding of this “novel” discipline, but I think the recourse to an unexplainable statement of mechanism is a tad premature.

    An aside to those agnostic engineers who happen to read this, I suggest you read up on John Doyle’s work (an engineer with expertise in system control theory at Caltech). He basically came at this problem with a similar mindset, but developed a very intriguing and purely natural theory. Basically that complex, “organically grown” human-engineered systems show many basic properties of biological systems because they were solving the same fundamental problem: accumulation of control systems (largely of a feedback variety) to accommodate common environmental noise. His basic argument is that a 747 (and a cell) is a fairly simple system, as long as you don’t mind it crashing every few seconds. The not crashing bit requires control systems. He describes a 747 as a massive, complex, computational control system that just, almost irrelevantly, happens to fly. He also has a theory regarding the “conservation of fragility”, which is related to the oscillations in any simple feedback control system. Basically you can pick what you are robust to, but you cannot be robust to everything. The mindless patching (both in biology and, to a significant degree, engineering beyond our intuitive level — I’ve written some wacky computer programs in my day) is the same process, and results in the same issue: growing complexity.

    I honestly believe that the engineers reading this forum will find more intellectual significance in Doyle’s work than they do in Intelligent Design. It’s fascinating work, and I’d love to see more engineering experts expand on it.

    Justin

    Comparing nanomachinery in living cells to the face on Mars is a false analogy which I addressed just recently. This is like comparing the pyramids of Egypt and everything in them to a rock that resembles a stone axe. Your argument falls apart from there as it depends on that logical fallacy. -ds

    Further
    - a string of primes is a false analogy.
    - a broadcast of little green men saying hello is a false analogy.
    - complexity of the cell IS a revelation and it just keeps getting revealed as more complex every single day
    - it rather IS like the products of human engineering – a ribosome and its digital control program in DNA, basic machinery shared by every living cell, is a robotic assembler amazingly similar in form and function to human-designed robotic assemblers
    - biologists know nothing of human-engineered factory automation, so they can’t see the congruence with cellular automata
    - there’s nothing unexplainable in principle about intelligent design any more than other fundamental phenomena like what’s beyond the visible universe or what caused the visible universe; some questions may not be answerable, and that invalidates neither the question nor the answers that lead up to the question
    - Paley’s watchmaker analogy is still a good one but given what we know about cells today a better analogy is the space shuttle and all its support infrastructure at Cape Canaveral instead of a watch
    - tell Doyle to envision a 747 that can make copies of itself and uses sunflower seeds as both fuel to fly and the raw material to copy itself, then I’ll read what he thinks about it

  17. Secondclass. It’s not just a pattern, but the complexity of the independent specification of that pattern. A constant signal does not imply design. Repetition of a low-CSI pattern does not imply design. Certainly recognizing a specification and determining its independence is not a trivial matter (in some cases). A pulsar is cyclical. Its repetition is a pattern, and might be considered an independently specified binary pattern… i.e. 10101010101. However, that specification is of low complexity. But if the signal conformed to an “independently specified” pattern, say 3.1415926535 out to 500 decimal places, then it is specified, complex, and independent… as there is no conceivable reason why natural, un-guided physics would build a pattern equal to a base-10 expression of Pi.

    It’s conformance to external (independent) requirements, not local ones. Consider the rovers currently on Mars. Complex machines, built on Earth for a Martian environment. There’s no reason why RM/NS operating in a Terran environment would build a complex machine suited for a Martian environment. The Martian environment is “independent.” Now there may be similarities between the two environments, and those features of the rovers would be excluded. And, certainly, a random mutation on Earth might just, by happenstance, result in some feature better suited for Mars than Earth. BUT such a feature is mere mimicry, since there is no Terran selection pressure to build upon it.

    The question then is, what are the odds that highly complex, inter-related, Mars-specific features might be hit upon by mere chance? The likelihood decreases as the complexity increases.

  18. secondclass:
    The question is vital to the reliability of CSI-based design inference. Under a uniform-noise chance hypothesis, cyclical radio signals exhibit CSI.

    Only if you don’t know what CSI is. CSI, at the minimum, is 500 bits of information. What information do your uniform-noise, cyclical radio signals contain?

  19. Irving, my perception of CSI may be flawed. According to my understanding, it’s the event, not the specification, that should exhibit high complexity. Dembski seems to characterize CSI specifications as simple, short descriptions.

    Joseph, it depends on our definition of information. Shannon’s information measure is a function of the signal source. If we know nothing about the source and assume white noise, then all signals are information-rich, including those that exhibit a simple pattern.
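
    A small Python sketch of that point, assuming a uniform, independent (white-noise) source model:

        import math
        import random

        # Under a white-noise model, surprisal (-log2 of probability) depends
        # only on length, so a patterned signal scores the same as a random one.
        def surprisal_bits(signal, alphabet_size=2):
            return len(signal) * math.log2(alphabet_size)

        patterned = "10" * 500                                        # cyclical on-off signal
        shuffled = "".join(random.choice("01") for _ in range(1000))  # arbitrary noise

        print(surprisal_bits(patterned))  # 1000.0 bits
        print(surprisal_bits(shuffled))   # 1000.0 bits -- identical under this model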

  20. Secondclass,

    Shannon’s “information” is useless here. It is useless because it doesn’t care about content. Meaning is useless to Shannon.

    Here (ID) information is all about content and meaning is very relevant. First there is the information required to build proteins (for example). Then there is the information required to assemble the proteins in a precise sequence (as in the bacterial flagellum). And finally there is the information required for using it (the flagellum).

  21. Joseph, your point is well taken, but I don’t see how CSI requires meaningful content, unless you equate meaning with specification. My understanding is that specifications are characterized by short descriptions, so it would seem that a string of a million 1’s can easily constitute CSI (depending, as I mentioned, on our choice of chance hypotheses).

    In information theory a pattern is only as complex as the simplest way of expressing it. “One million ones” is the simplest way of representing a string composed of 1 million ones. It doesn’t take anywhere near a million characters to represent it. This is basic information theory. There’s no particular independently given specification for it, and nature produces simple repeating patterns like this in abundance, so the realm of chance hypotheses would usually be very large. -ds
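
    A crude way to see this with a general-purpose compressor, whose output size upper-bounds the length of the shortest description:

        import random
        import zlib

        ones = b"1" * 1_000_000                                       # "one million ones"
        noise = bytes(random.getrandbits(8) for _ in range(10 ** 6))  # incompressible junk

        print(len(zlib.compress(ones)))   # roughly a kilobyte: a tiny description suffices
        print(len(zlib.compress(noise)))  # roughly a megabyte: no shorter description exists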

  22. Secondclass,

    I would suggest your perception is flawed. Not unusual; I find that Specified Complexity is the most routinely misunderstood concept within ID. If it were the “event,” it would be known as CSE – Complex Specified Event! You say:

    “Dembski seems to characterize CSI specifications as simple, short descriptions.”

    From Explaining Specified Complexity, by Dembski

    “A single letter of the alphabet is specified without being complex (i.e., it conforms to an independently given pattern but is simple). A long sequence of random letters is complex without being specified (i.e., it requires a complicated instruction-set to characterize but conforms to no independently given pattern). A Shakespearean sonnet is both complex and specified.”

  23. Irving, I didn’t convey my point very well, so we’re probably talking past each other. My perception of CSI as a simple specification coupled with a complex event is taken from Dembski’s articles, like this one:

    Thus, what makes the pattern exhibited by (ψR) a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance. It’s this combination of pattern-simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) — but not (R) — a specification.

    “First there is the information required to build proteins (for example). Then there is the information required to assemble the proteins in a precise sequence (as in the bacterial flagellum). And finally there is the information required for using it (the flagellum)”

    The information for assembling the proteins (if you don’t count the translation machinery) and assembling the complex (assuming no molecular chaperones are involved) is exactly the same thing, as it is specified by the protein structure. The information for ‘using it’ is presumably the mechanism by which it is activated, which is a signalling cascade that I have not seen mentioned in these discussions (depending on which strain you are referring to). Is the “information” referring to the configuration of the proteins themselves or the genetic information required to encode them?

    Also, how do we decide that the flagellum, or any biological feature, conforms to an independently given specification? Is it by reference to human structures, or is it the molecular function, or the biological function, and if so, what particular level of function is used? Do we have examples of biological structures that do not exhibit CSI to make a comparison?

  25. Secondclass, yes, I can see how that could get confusing. As you read the quote carefully, you see that it is the “pattern” that is the specification, and not the event…

    “…that makes the pattern exhibited by (ψR) — but not (R) — a specification.”

    I believe he’s referring to the “likelihood” of something occurring by chance as “event-complexity.” In other words, the series of events required to build the pattern is complex…

    “…difficulty of reproducing the corresponding event by chance”

    Even though that pattern may be easily described… such as Pi = C/2r. I don’t have time at the moment to find the greater context within the paper you linked to place your quote in context… but I’ll take great risk in assuming that Dembski’s defining Specification in “building up” to a definition of Specified Complexity? Given that one must first define “specification” before one describes a complex “specification.”

  26. “First there is the information required to build proteins (for example). Then there is the information required to assemble the proteins in a precise sequence (as in the bacterial flagellum). And finally there is the information required for using it (the flagellum)”

    Chris Hyland:
    The information for assembling the proteins (if you don’t count the translation machinery) and assembling the complex (assuming no molecular chaperones are involved) is exactly the same thing, as it is specified by the protein structure.

    That is false. Chapter XIII, “What Teaches Proteins Their Shapes?”, of geneticist Giuseppe Sermonti’s book Why is a Fly Not a Horse? paints a different picture than what you just posted.

    The problem is that in order to get proteins to function one needs not only an orderly and correct sequence of amino acids, but also a spatial configuration that folds them into the proper association with each other and enables them to interact with the molecules on which they are supposed to work.

    He goes on to say:

    The spatial information necessary for specifying the three-dimensional structure of a protein is vastly greater than the information contained in the sequence.

    And it still doesn’t say where the information came from.

    Chris Hyland:
    The information for ‘using it’ is presumably the mechanism by which it is activated, which is a signalling cascade that I have not seen mentioned in these discussions (depending on which strain you are referring to). Is the “information” referring to the configuration of the proteins themselves or the genetic information required to encode them?

    Something controls the bacterial flagellum. It can rotate clockwise at varying speeds, stop, and then rotate counter-clockwise. Something is telling it to do so.
    IOW the bacterium has to know how to use the structure once it is in place. That means a communication link must also exist.

  27. Yes I know; this is why I included my remarks in brackets. The protein structure also depends on the translational machinery, possibly molecular chaperones, and the proteins involved in flagellar assembly. These are then of course encoded on the genome themselves, and so on. The flagellum is controlled, in the standard case, by a signalling cascade triggered by sensors on the front of the bacterium, which cause the flagellum to rotate in response to one or various external stimuli. My question is how do you include all of this in a calculation of the complexity of just the flagellum.

  28. Chris Hyland:
    The flagellum is controlled, in the standard case, by a signalling cascade triggered by sensors on the front of the bacterium, which cause the flagellum to rotate in response to one or various external stimuli. My question is how do you include all of this in a calculation of the complexity of just the flagellum.

    Alrighty then. IMHO, IDists generally don’t include it because the issue is difficult enough to contend with just given the flagellum. And anything “deeper” than that would be the origin of life itself.

    However it does go to show that the issue is much deeper than just the physical aspect of the flagellum, which is something IDists have been saying for years.

  29. Dave, regarding your comment in #21, I fully agree that simple, repeating patterns are abundant in nature. Furthermore, I see them as specified, as Dembski associates high compressibility with specification. For a string of a million 1’s, a chance hypothesis of uniform noise would lead to a verdict of CSI, which would be problematic in many cases.

    My point is that our choice of chance hypotheses is crucial to the reliability of a CSI-based design inference. I make this point only because most of the CSI examples that I’ve seen do not explicitly state or justify their set of chance hypotheses, which reduces my assurance that their conclusions are correct.

    It’s usually chance hypotheses (plural not singular, as there’s usually more than one way to skin a cat). Our knowledge of chance hypotheses isn’t just crucial to a design inference, it’s everything. The whole shootin’ match is wound up in probabilistic resources, which I prefer to “chance hypotheses”. CSI is a highly improbable pattern that conforms to an independently given specification. Probabilistic resources are the set of processes that might hypothetically produce the pattern. If those processes are well understood and the set is believed to be complete, and none are reasonably capable of producing the pattern, a design inference is warranted. In patterns exhibited by cellular nanoscale machinery there is really only one chance hypothesis with empirical support and that is RM+NS. Of course there are other possibilities in the set, such as undiscovered laws of nature (the structuralists I believe they are called) which cause self-organization, but I tend to dismiss claims of undiscovered laws of nature until they are discovered. That’s just wool gathering. RM+NS is the only real contender for chance hypotheses IMO. The problem is that under observation RM+NS is quite limited and certainly can’t be seen to cause evolution of new genera, nor can it be seen creating new cell types, tissue types, organs, or body plans. This capability of RM+NS is pure extrapolation of the observed capabilities. Its creative power beyond what’s been observed is purely an argument from ignorance, i.e. “if not RM+NS then what?”. This illustrates that the set of chance hypotheses for biological evolution is a set with really only one member. So if we admit the possibility that intelligent design is among the set of all things that can fully explain evolution, the question then becomes “if not RM+NS and not ID then what?”. It’s still an argument from ignorance because there might be something other than RM+NS and ID, or it might turn out that with better understanding an RM+NS pathway is possible for such things as flagella and ribosomes and eukaryotic nuclei. What bugs me is why the default and only acceptable explanation for evolution is a chance hypothesis when the appearance of design is universally acknowledged even by NeoDarwinian dogmatists. It seems to me that ID should be the null hypothesis in any objective analysis, i.e. rather than saying the appearance of design is an illusion why not say the appearance of chance is an illusion? -ds

  30. Irving, perhaps we should step back and reframe the issue. It is my impression that Dembski sees specification in simple, rather than complex, patterns. An example of the former would be a highly compressible bit string. Do you share this view?

  31. I think that Chris and I are driving at the same point, namely that complexity assessments are often poorly justified. This problem is aggravated by the frequent conflation of Dembski’s definition of complexity with the more common definition, which leads some to attribute self-evident complexity to events with potentially complicated causal histories.

    (Note: This is one of 4 main issues that I have with CSI, this one being the least problematic. I can state the other 3 if anyone’s interested.)

  32. Secondclass, my impression is that Dembski sees specification in BOTH simple and complex patterns.

    From No Free Lunch:

    “What is specified complexity? An object, event, or structure exhibits specified complexity if it is both complex (i.e., one of many live possibilities) and specified (i.e., displays an independently given pattern). A long sequence of randomly strewn scrabble pieces is complex without being specified. A short sequence spelling the word “the” is specified without being complex.”

    So “the” is specified without being complex…

    And from ISCID.ORG

    “The second component in the notion of specified complexity is the criterion of specificity. The idea behind specificity is that not only must an event be unlikely (complex), it must also conform to an independently given, detachable pattern.”

    One cannot work with Specified Complexity, absent a full appreciation of Independence.

  33. Irving, you make a good point. I suppose a complex pattern could constitute a specification if it’s independently given, although recognition of such a pattern may, in some cases, be difficult or even intractable. This raises a question of whether CSI requires that the specification be smaller than the event.

    This brings up another question I have regarding specifications. Must a specification contain a complete description of an event? In other words, must a spec contain enough information to reconstruct an event exactly? I see problems attached to both a yes and a no answer.

    You’ve totally lost the plot here. A family of blood relatives is the specification in the lottery example. A propulsion device is the specification for a flagellum. These are not intractable or difficult. I suspect you’re just being argumentative. In any case your comments are not constructive. Take a break from this thread. -ds

  34. Dave, I’m sorry if I came across as argumentative. Irving said that specifications could be complex. You seem to be saying that specifications are simple. I’m in your camp, but I’m trying to be agreeable to everyone here.

    My question as to whether a specification must contain a complete description of the event is a sincere one.

    Specification is some distinguishing characteristic that gives meaning to a pattern. Independence means the meaning isn’t a tautology – the pattern must mean something independent of itself. Simple and complex, complete and incomplete description, are irrelevant since meaning can be construed independent of those terms. -ds

  35. Dave, I think if we’d both properly read each other’s posts, you would realize that a “Face on Mars” was not my concern. A discussion regarding the nature of (wholly valid) pro-design vs. (will you at least consider considering it) natural “apparent design” is a worthwhile, reasonable position, no?

    I added more commentary to yours. I think you misunderstand my position. I don’t discount material mechanisms. However, science is about demonstration and no material mechanism has been demonstrated capable of producing novel cell types, tissue types, organs, organelles, or body plans. Extrapolation of a mechanism demonstrably able to produce small changes is a reasonable position as long as one doesn’t lose sight of the fact that it is extrapolation, might not be the correct answer, and has not even in theory been shown to have a plausible way of creating the observed complexity in living systems in the time and space and environment available. Intelligent design is the only other option on the table that many people see as a viable alternative and people have recognized it as an option for millennia. Furthermore there’s no reason in principle why the intelligent agency can’t be material in origin. In fact as a materialist I believe that the source of intelligence, when and if it is discovered and characterized, will be comprehensible in material terms. It’s difficult for me to grasp why any objective, rational person would exclude design as a live possibility for the origin of complexity in life on earth. I can only conclude that resistance is driven by irrational and/or subjective motives such as fear, ignorance, hubris, financial, and philosophical concerns. -ds

  36. Dave, is independent meaning always a requirement? For instance, does the following sequence have a pattern with independent meaning? DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

    If incomplete descriptions are allowed, then is “string of a million bits” a valid specification? How about “10 ton rock”?

    Independent meaning is always a requirement for a design inference. The sequence you give has no meaning to me so I can’t begin to make a design inference. However, it may have meaning I don’t know about. Maybe it’s the password someone used for their Swiss bank account. String of a million bits has no independently given meaning and neither does 10 ton rock. -ds

  37. Dave, Dembski attributes CSI to the above sequence. (See here) What independent meaning does he see that you and I don’t see?

    DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

    “D” represents “Democrat first on the ballot” and R is “Republican first on the ballot”. There is an advantage in being first on the ballot. That advantage is an independently given specification. A string of 41 Democrat advantages and 1 Republican advantage on a ballot is an approximate 1 in 1 trillion possibility and is therefore fairly complex. Thus we have complexity that conforms to an independently given pattern. The only reason I didn’t see it when presented with it before is the definition and significance of “D” and “R” were withheld. What now follows in a design inference is analysis of chance hypotheses (also called probabilistic resources) that could compose this CSI without intelligent agency involved. -ds
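
    A minimal sketch of the odds in question, assuming each of the 42 drawings described (41 D’s and 1 R) is an independent 50/50 choice:

        from math import comb

        n = 42                    # length of the ballot sequence shown above
        total = 2 ** n            # equally likely D/R sequences under fair drawings

        p_exact = 1 / total                              # this exact sequence
        p_41_plus = (comb(n, 41) + comb(n, 42)) / total  # any sequence with 41+ D's

        print(f"1 in {total:.3g}")          # 1 in 4.4e+12, the trillion-scale figure above
        print(f"1 in {1 / p_41_plus:.3g}")  # 1 in 1.02e+11 for the relaxed pattern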

  38. “No matter how convincingly we are assured that the lottery was secure from cheating, no reasonable person will be convinced that there was no cheating involved. So we can rest assured that the result of the lottery was almost certainly not random but was the result of design (cheating; a rigged game).”

    I don’t see this. All we can reliably infer from your lottery example is that the outcome is almost certainly not due to random chance. But cheating suggests intention, and a mechanism used in the service of that intention.

    Perhaps the family works at the ticket factory and managed a not-so-clever swindle? Then our pattern would have been generated by cheating. But perhaps there was a printing error at the ticket factory and 12 identical tickets were all sent to the same store in sequence. If all the family members went to the store together or (more likely) one or two of them bought twelve tickets for the others, then we’d get the outcome you describe, but without cheating.

    If you want to infer intentional design in the lottery case, you’re going to have to look at more than outcomes and likelihoods: you’ll have to dig further, finding out whether there were machine errors, and if not, then investigating the family members in question, and probably other lottery employees. We know the outcome is too improbable to result from simple chance, but until we figure out the mechanism generating the distribution we observe, that’s all we can reliably say about the outcome.

    “However, even though we know the result was rigged, we have no clue how it was rigged (the mechanism) nor who did the rigging (the designer).”

    Again, we don’t know that the result was rigged: all we know is that it almost certainly wasn’t due to random chance. Something systematic, non-random is at work here, but we don’t know whether this is a machine error and a lucky shopping day for one family, or a not-so-clever inside job designed to cheat the lottery commission.

    “ID is the theory that certain patterns in nature exhibit specified complexity that can only reasonably be attributed to design.”

    Then it isn’t a very interesting theory. We study intelligent design in the historical and social sciences all the time, and if we stopped at simply saying “that education policy definitely didn’t arise from chance” then we wouldn’t be taken very seriously by anyone. It isn’t until historians and archeologists, for instance, tell plausible, evidence-based, and independently verifiable stories about designers and mechanisms that they do interesting descriptive and explanatory work.

    “ID does not and cannot reveal how the design was accomplished nor what entity or entities did the designing.”

    Then it’s not a very interesting scientific approach. Science isn’t just the business of speculating about patterns (although a lot of good science starts that way … of course, plenty of bad science starts that way as well). Science is about understanding causal mechanisms. If you think important biological systems are designed, then start figuring out ways to identify and study designers and their mechanisms.

    You completely overlooked the following:

    Suppose there is a state lottery and each month for 12 consecutive months 10 million tickets are sold and one winning ticket is drawn at random.

    Tickets are sold monthly, 10 million of them for each monthly lottery, and one winning ticket is drawn from that number. The scenario you outline, 12 winning tickets sold at one time, was not possible in that circumstance. -ds

  39. Dave, it seems I’m completely confused regarding specificity. I see that the advantage of being first on the ballot is a motive for cheating, but I don’t understand how it constitutes a specification. My understanding of an independently given specification would entail a description of or pattern in the sequence that is recognizable even if the source of the sequence is unknown. Am I way off track?

    The source of DDDRDDD need not be known. In fact I don’t know the source. As far as I know it could be a faulty random number generator or a 1-in-1-trillion happenstance. The point is that further investigation of chance hypotheses is warranted. -ds

  40. Dave, now I’m more confused than ever.

    1) My understanding of the detachability requirement is that the specification should be recognizable independent of any facts behind the occurrence of the event, which is why I presented the sequence without mentioning the Caputo story.

    You didn’t present the specification. You presented the sequence.

    2) My understanding is that specifications are descriptions or patterns, not motivating factors.

    Specifications can be anything that sets the pattern in question apart.

    3) I don’t understand what chance hypotheses have to do with specifications.

    Chance hypotheses are a step in inferring design.

    Sorry to keep bugging you, but I need some help understanding this.

  41. Well Dave, it looks like you and I have very different definitions of specificity, detachment, etc. My guess is that we won’t be able to find enough common ground for a discussion, so I’ll let it go. Keep up the good work!

  42. Secondclass, there is a difference between a specification, and specified complexity.

    “My understanding of the detachability requirement is that the specification should be recognizable independent of any facts behind the occurrence of the event…”

    Perhaps… “independent of the factors behind the occurrence…”

  43. Actually, the odds “of any particular set of 12 people winning the lottery” are not “1 in 10,000,000,000,000,000,000”, but rather 1 in 2.1 times 10^75. That’s roughly 55 orders of magnitude off the mark. I know, I know, it’s pedantic, but typical of certain probabilistic calculations in the context of ID.

    Actually it’s 1 in 10^84: 1/((10^7)^12). Looks like your math is worse than mine. At least I knew that the correct answer had to be a power of ten. I did it in my head and added 12 zeroes to 10 million when I should have raised it to the 12th power. I’m not sure what your excuse is but I’m dying to hear it. -ds

  44. “Actually it’s 1 in 10^84: 1/((10^7)^12). Looks like your math is worse than mine. At least I knew that the correct answer had to be a power of ten. I did it in my head and added 12 zeroes to 10 million when I should have raised it to the 12th power. I’m not sure what your excuse is but I’m dying to hear it. -ds”

    Still off by 9 orders of magnitude, but getting closer. You overlook that there are many ways to draw the same 12 people from 10^7. In fact there are n!/(k!(n-k)!) [where n! = n·(n-1)·(n-2)···1] ways to draw k people from a group of size n. Maple 9.5 tells me that’s roughly 2.1 × 10^75. The reason is as follows: you can order n people in n! ways. There are k! times (n-k)! ways those orderings split into subgroups of sizes k and n-k, so you have to divide n! by that to get the result (aka the binomial coefficient).

    You’re right. I didn’t explicitly say the winners were an ordered set. It’s now explicitly an ordered set. Thanks for pointing out the ambiguity. -ds
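
    Both figures in this exchange can be checked exactly; a short Python sketch:

        from math import comb, log10

        pool = 10 ** 7   # tickets sold per monthly drawing
        draws = 12

        ordered = pool ** draws        # winners as an ordered sequence (the -ds reading)
        unordered = comb(pool, draws)  # the same 12 winners in any order (comment 43's reading)

        print(round(log10(ordered)))   # 84
        print(f"{unordered:.3g}")      # 2.09e+75, i.e. roughly ordered / 12!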

  45. I was just wondering, has anyone come up with a good layman’s explanation of the Design Inference of Dembski? I have read “Intelligent Design” and to be honest the chapter on the Design Inference lost me. It seems that Dembski has come up with a way to determine if something is specified, but I have been unable to grasp his mathematical arguments. Can anyone help me?
