
Michael Egnor Responds to Michael Lemonick at Time Online

In a piece at Time Online, “More Spin from the Anti-Evolutionists,” senior writer Michael Lemonick attacks ID, the Discovery Institute, the signatories of the Dissent From Darwin list, and Michael Egnor in particular.

Dr. Michael Egnor (a professor of neurosurgery and pediatrics at State University of New York, Stony Brook, and an award-winning brain surgeon named one of New York’s best doctors by New York Magazine) is quoted: “Darwinism is a trivial idea that has been elevated to the status of the scientific theory that governs modern biology.” You can imagine the ire this comment would provoke from a Time science journalist.

The comments section is very illuminating as Dr. Egnor replies to and challenges Lemonick.

Egnor comments:

Can random heritable variation and natural selection generate a code, a language, with letters (nucleotide bases), words (codons), punctuation (stop codons), and syntax? There is even new evidence that DNA can encode parallel information, readable in different reading frames.

I ask this question as a scientific question, not a theological or philosophical question. The only codes or languages we observe in the natural world, aside from biology, are codes generated by minds. In 150 years, Darwinists have failed to provide even rudimentary evidence that significant new information, such as a code or language, can emerge without intelligent agency.

I am asking a simple question: show me the evidence (journal, date, page) that new information, measured in bits or any appropriate units, can emerge from random variation and natural selection, without intelligent agency.

Egnor repeats this request for evidence several times in his comments. Incredibly, Lemonick not only never provides an answer, he retorts: “[One possibility is that] your question isn’t a legitimate one in the first place, and thus doesn’t even interest actual scientists.”

Lemonick goes on to comment: “Invoking a mysterious ‘intelligent designer’ is tantamount to saying ‘it’s magic.’”

Egnor replies:

Your assertion that ID is “magic,” however, is ironic. You are asserting that life, in its astonishing complexity, arose spontaneously from the mud, by chance. Even the UFO nuts would balk at that.

It gets worse. Your assertion that the question, “How much biological information can natural selection actually generate?” might not be of interest to Darwinists staggers me. The question is the heart of Darwinism’s central claim: the claim that, to paraphrase Richard Dawkins, “biology is the study of complex things that appear to be designed, but aren’t.” It’s the hinge on which the argument about Darwinism turns. And you tell me that the reason that Darwinists have no answer is that they don’t care about the question (!).

More comments from Egnor:

There are two reasons that people you trust might not find arguments like mine very persuasive:

They’re right about the science, and they understand that I’m wrong.
or
They’re wrong about the science, and they’re evading questions that would reveal that they’re wrong.

My “argument” is just a question: How much new information can Darwinian mechanisms generate? It’s a quantitative question, and it needs more than an ad hominem answer. If I ask a physicist, “How much energy can fission of uranium generate?” he can tell me the answer, without much difficulty, in ergs per mass of uranium per unit time. He can provide references in scientific journals (journal, issue, page) detailing the experiments that generated the number. Valid scientific theories are transparent, in this sense.

So if “people you trust” are right about the science, they should have no difficulty answering my question, with checkable references and reproducible experiments, which would get to the heart of Darwinists’ claims: that the appearance of design in living things is illusory.

[...]

One of the things that has flipped me to the ID side, besides the science, is the incivility of the Darwinists. Their collective behavior is a scandal to science. Look at what happened to Richard Sternberg at the Smithsonian, or at the sneering denunciations of ID folks who ask fairly obvious questions that Darwinists can’t answer.

The most distressing thing about Darwinists’ behavior has been their almost unanimous support for censorship of criticism of Darwinism in public schools. It’s sobering to reflect on this: this very discussion we’re having now, were it to be presented to school children in a Dover, Pennsylvania public school, would violate a federal court order and thus be a federal crime.

There’s lots more interesting stuff in the comments section referenced above. I encourage you to check it out. I was pleasantly surprised at the number of commenters who stood up for ID and challenged Darwinian theory along with Dr. Egnor.

[HT: Evolution News & Views]


198 Responses to Michael Egnor Responds to Michael Lemonick at Time Online

  1. NS + RV might be a better way of describing what Darwinists claim than NS + RM.

    It would stop the sidetracking arguments that HGT and genetic drift are not mutations.

  2. From the comments section:
    “As for Dr. Egnor, his quote suggests that he is not as familiar with evolutionary theory as one would hope/expect.”

    Yeah, great response – what a surprise. It seems to be the default, knee-jerk reaction of Internet Darwinists. It usually goes something like this: 1. Raise a valid question about NDE. 2. You are accused of not understanding “evolution.” 3. Ask the question again. 4. Hordes of Darwinists attack you, yet never answer the original question. Par for the course. Gee, how difficult is it to understand? “RM + NS = everything.” Yeah, takes a real “brain surgeon” to understand that one.

  3. tribune7, I appreciate your frustration with the darwinian sidetracking. Interestingly, genetic drift is nothing more than an accumulation of mutations. HGT is a slightly different story. In truth I wonder if the ubiquitous HGT isn’t just evidence of intelligent genetic engineering. However, the neo-Darwinian community clearly assumes that the genes that transfer do so for purely “randomness + selection” reasons.

    There is an argument from the neo-Darwinian community that there actually is more to the theory than random mutation plus natural selection, but I haven’t found it. (BFast gets drowned out by the drone of “YOU OBVIOUSLY DON’T KNOW THE FIRST THING ABOUT EVOLUTION!”) However, I am happy to challenge any darwinist out there, Ph.D. or not, to show me any understood mechanism that is not merely a natural extension of the simple Random Mutation + Natural Selection tenet, or a mechanism supposedly developed by the former. I believe that there is none. RM+NS explains it ALL!

  4. “YOU OBVIOUSLY DON’T KNOW THE FIRST THING ABOUT EVOLUTION!“

    That happens to you too? :-)

  5. sagebrush gardener:

    “Invoking a mysterious ‘intelligent designer’ is tantamount to saying ‘it’s magic.’”
    –Michael Lemonick

    “Any sufficiently advanced technology is indistinguishable from magic.”

    – Arthur C. Clarke

  6. Just a clarification: genetic drift has nothing to do with mutations, and it is thought to be more powerful than natural selection in influencing allele distribution in populations. Natural selection also operates on the current set of alleles and does not necessarily need random mutations to change the allele frequency of a population.
    The definition of genetic drift, from Wikipedia:

    “In population genetics, genetic drift is the statistical effect that results from the influence that chance has on the success of alleles (variants of a gene). The effect may cause an allele and the biological trait that it confers to become more common or more rare over successive generations. Ultimately, the drift may either remove the allele from the gene pool or remove all other alleles. Whereas natural selection is the tendency of beneficial alleles to become more common over time (and detrimental ones less common), genetic drift is the fundamental tendency of any allele to vary randomly in frequency over time due to statistical variation alone, so long as it does not comprise all or none of the distribution.”

    http://en.wikipedia.org/wiki/Genetic_drift

    I heard someplace that brown eyes will become fixed in the human population over time by genetic drift. Though I do not have the actual reference for this.

    Changes in alleles can arise from several sources, of which random mutation is just one, though it is the one made famous by Darwin (I think he called it spontaneous changes) and by the original formulation of NDE in the 1920s and 1930s based on Morgan’s work with fruit flies. Most evolutionary biologists today don’t restrict themselves to RM + NS. What is taught in evolutionary biology courses as a source of new alleles includes much more than just random mutations. Natural selection has nothing to do with the origin of new alleles, only with how they might get fixed in a population.

    What the evolutionary biology courses do not include is any intelligent input to the appearance of new alleles, and they openly disparage the need to even consider it. Of course, evolutionary biology really has no answer to the creation of new complex systems of alleles either, by random mutation, HGT, gene duplication or any other natural mechanism, which is why you get the nonsense by Lemonick in response to Egnor’s questions.
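The “fixation by chance alone” behavior described in the comment above is easy to see in a toy Wright-Fisher model. The sketch below is illustrative only (the function name, population size and seed are my own choices, not from the thread): each generation resamples the gene pool at random, with no selection and no mutation, and the allele still ends up fixed or lost.

```python
import random

def wright_fisher(p0=0.5, pop_size=100, max_gens=100_000, seed=1):
    """Track one allele's frequency under pure drift: no selection, no mutation.

    Each generation, every one of pop_size gene copies is drawn at random
    from the previous generation, so the new allele count is a binomial
    sample around the old frequency. Chance alone eventually fixes the
    allele (frequency 1.0) or removes it (frequency 0.0).
    """
    rng = random.Random(seed)
    count = int(p0 * pop_size)
    for gen in range(1, max_gens + 1):
        if count in (0, pop_size):
            return gen, count / pop_size  # absorbed: lost or fixed
        p = count / pop_size
        count = sum(rng.random() < p for _ in range(pop_size))
    return max_gens, count / pop_size

gen, freq = wright_fisher()
print(f"allele {'fixed' if freq == 1.0 else 'lost'} after {gen} generations")
```

Running this with different seeds, the allele drifts to 0.0 or 1.0 purely by sampling noise, which is exactly the random fixation described in the Wikipedia definition quoted in the comment.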

  7. The bottom line is that Neo-Darwinian Evolution proponents have no plausible theory of the generative, only a theory of garbage disposal. This is not hard. The essentials of NDE theory are remarkably simple, remarkably easy to understand, and remarkably deficient. Whether it’s random mutation, genetic drift, or other mechanisms, they are all stochastic in nature. Trial and error can be a useful tool in a limited domain in which the search space is sufficiently small, but probabilistic resources run out very quickly for difficult (and even somewhat simple) problems when combinatorics rears its ugly head.

    Any engineer knows this, and knows when to use trial and error and when to resort to intelligent design when the limits of trial and error have been reached.

    NDE dudes don’t have a clue about this, so they must resort to really boring and vacuous arguments like those of Lemonick.
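The point about probabilistic resources can be made concrete with a quick back-of-the-envelope calculation. This is a sketch; the protein length and the 10^40 trial budget are assumed, illustrative numbers of my own, not figures from the comment.

```python
import math

# A chain of n positions, each with k possible states, has k**n variants.
def search_space_size(k: int, n: int) -> int:
    return k ** n

# Illustrative case: 20 amino acids, chain of length 100.
space = search_space_size(20, 100)
digits = math.floor(math.log10(space))           # space ~ 10**130

# Assumed trial budget for blind search (illustrative only).
trials = 10 ** 40
frac_exp = math.floor(math.log10(trials)) - digits

print(f"search space ~ 10^{digits} sequences")
print(f"fraction sampled by 10^40 trials ~ 10^{frac_exp}")
```

Even a very generous trial budget samples a vanishing fraction of such a space, which is the sense in which trial and error stops being useful once combinatorics takes over.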

  8. “The bottom line is that Neo-Darwinian Evolution proponents have no plausible theory of the generative, only a theory of garbage disposal. This is not hard.”

    I remember when I used to hear how easy “evolution” was to understand, and how it didn’t make sense that more people didn’t accept it because it’s so easy to grasp. Then once ID became a real threat, the buzz phrase was “it’s hard to understand” which was why there was so much “confusion” about it among “common folk” (read, everyone who doesn’t buy into the creative power of the blind watchmaker). Egnor’s simple question is enough to illustrate the vacuity of Darwinism.

  9. jerry,

    Genetic drift has nothing to do with mutations and is thought to be more powerful than natural selection for influencing allele distribution in populations.

    Sorry, I confused genetic drift with molecular clocks. In any case, genetic drift is nothing more than natural selection when natural selection doesn’t find anything in particular to select for. The Modern Evolutionary Theory is NOTHING more THAN RM+NS!

  10. The comments section is very illuminating as Dr. Egnor replies to and challenges Lemonick.

    This reminds me of the Meyer-Ward debate. Egnor makes thoughtful comments on substantive issues while Lemonick invariably responds with non sequiturs and childish put-downs. Even if I weren’t already ID-friendly, I know whom I would favor based on behavior alone.

  11. bFast,

    I believe geneticists think that natural selection and genetic drift are very different processes. In genetics there are calculation schemes for genetic drift, based solely on random differences between generations in which alleles get expressed, showing that a random favoring of one allele can lead to its fixation in the population. It has nothing to do with an allele’s effect on the survival of offspring.

    About a month ago when Larry Moran was featured for some of his more stupid comments, I went to his site and he said he was a “drifter” and favored it over natural selection. So they obviously think there are distinct differences.

    I do not agree with your comment that “Modern Evolutionary Theory is NOTHING more THAN RM+NS!” It is not what is taught in university evolutionary biology courses, though this is a common perception here. In beginning biology courses they frequently do not go too far beyond Darwin’s original theory. But even advanced placement courses will include genetics, and thus genetic drift as the main cause of fixing alleles.

    Here is a comment by Behe on the process of evolutionary biology that Joseph provided:

    “Intelligent design is a good explanation for a number of biochemical systems, but I should insert a word of caution. Intelligent design theory has to be seen in context: it does not try to explain everything. We live in a complex world where lots of different things can happen. When deciding how various rocks came to be shaped the way they are a geologist might consider a whole range of factors: rain, wind, the movement of glaciers, the activity of moss and lichens, volcanic action, nuclear explosions, asteroid impact, or the hand of a sculptor. The shape of one rock might have been determined primarily by one mechanism, the shape of another rock by another mechanism.

    Similarly, evolutionary biologists have recognized that a number of factors might have affected the development of life: common descent, natural selection, migration, population size, founder effects (effects that may be due to the limited number of organisms that begin a new species), genetic drift (spread of “neutral,” nonselective mutations), gene flow (the incorporation of genes into a population from a separate population), linkage (occurrence of two genes on the same chromosome), and much more. The fact that some biochemical systems were designed by an intelligent agent does not mean that any of the other factors are not operative, common, or important.–Dr. Behe”

    So evolutionary biology is a lot more than RM, which is just one source of new alleles, and NS, which is just one process that affects changes in allele frequency in a population.

    At present evolutionary biology does not include anything from ID which it should because as Dr. Egnor says, all these mechanisms of allele creation cannot explain the formation of the incredible information in biological systems.

    This last point is the sole point of ID’s dispute with evolutionary biology. We do not dispute natural selection, just that natural selection rarely ever has anything but trivial stuff on which it can have an effect.

  12. Wow. That is one angry journalist.

    Michael Egnor managed to keep himself calm and reasonable the entire time, and Lemonick was reduced to talking points. I always get a bit weirded out when people get that passionate over simply questioning what’s supposed to be a popular, but nevertheless purely scientific theory. It’s not as if Lemonick is on the side that’s being censored in the, as he puts it, tiny minority.

  13. bFast:

    “The Modern Evolutionary Theory is NOTHING more THAN RM+NS!”

    You are absolutely right! I understand what jerry is saying, but still bFast’s statement is perfectly true. Here’s why:

    1) Maybe RM is not the only source of variation, if you restrict the term to single nucleotide polymorphisms. But if we accept tribune7’s correction, just to avoid ambiguity, and speak of Random Variation, that can include everything: SNPs, deletions, inversions, duplications, genetic drift (HGT is another story). The key word here is “random.” Any known variation of genetic material, except possible intelligent interventions (such as those that genetic engineers daily accomplish), is by definition random.
    2) What is the matter with genetic drift? It is a process which certainly exists, and so? It can only fix some existing allele in a random way! It cannot generate new information (such as a new sequence or protein), and it cannot select anything on a functional basis. Therefore, it cannot add anything to random chance, and therefore anything more specifically complex than the Dembski limit of 1 in 10^150 can never come into existence by genetic drift, or by any combination of random variation. If there is something which I can’t see in the genetic drift theory, please someone explain it to me!
    3) Deletions, inversions, and duplications are just variant forms of random mutation. They cannot do anything different than random mutation in the form of SNPs. I agree, the “variation” created by a deletion or duplication is different in form from the variation created by a SNP, but they are in any case random events. Do you think that the chance of a monkey randomly producing Shakespeare’s works on a keyboard would be increased if you added a key which, instead of typing a new character, just “moves” a set of already written characters randomly to another point? I think that the equivalence of all random manipulations should be easy to demonstrate mathematically, although I cannot do that.
    4) HGT is another story. That is a powerful instrument which allows existing code to be reused in different organisms, and it is the main cause of the only well documented adaptive variation in nature: antibiotic resistance in bacteria. Well, but it is a reutilization of “existing” code. It has no ability to create new information, only to use what is already there, for the purpose for which it has always been there, only in different bacteria. And where it is a cause of variation (for instance in the different possible integrations of a new gene into the genetic material of the host), there its variation effect is, again, random.
    5) So, what are we left with, to “explain” information? Natural selection, I am afraid. What a pity that it does not work! But pretending that there are other explanations, which have never existed, will not improve the chances…
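For concreteness, the 1-in-10^150 bound cited in the comment above corresponds to roughly 500 bits. The short calculation below only restates the bound as quoted (nothing more), showing at what sequence lengths a single uniformly random draw crosses it:

```python
import math

# The quoted bound, 1 in 10^150, expressed in bits (~498.3).
BITS = 150 * math.log2(10)

# Length at which one specific uniformly random sequence becomes less
# likely than the bound: each nucleotide carries 2 bits (4 bases), each
# amino acid ~4.32 bits (20 residues).
nucleotides = math.ceil(BITS / math.log2(4))
amino_acids = math.ceil(BITS / math.log2(20))

print(f"~{BITS:.0f} bits; crossed at {nucleotides} nt or {amino_acids} aa")
```

This is only the arithmetic of the bound itself; whether such a bound applies to cumulative processes like selection is, of course, exactly what the two sides of the thread dispute.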

  14. gpuccio,

    Thank you for your explanation. It will help us separate out the various arguments made by the evolutionary biologists and focus on what is right and what is wrong.

  15. He hasn’t published my comment yet.

    I basically said that ID is NOT anti-evolution. Rather if anything ID could be considered to be anti-the blind watchmaker having sole dominion over the evolutionary process(es).

    Then I talked about sheer dumb luck and its implications for science. (sheer dumb luck being the materialistic anti-ID position)

    I also said that we exist and there is only ONE reality behind that existence. And to disallow ID just because is an injustice to science and mankind.

    Regarding this funny guy, Michael Lemonick, I think he has found the perfect way to answer all questions without risking being found wrong. Here are a few examples, from his “comments” on that blog:

    “Maybe so, maybe not.”

    “As for your other points, I’ll give you credit for one thing: you take them very seriously”

    “So you and a tiny band of others keep saying.”

    “Or so you believe.”

    Wonderful! No wonder darwinists are not falsifiable! I am afraid all of these statements are (not very brilliant) variations of the classical semantic way to be always right: just answer, to anybody, whatever he is saying: “That’s what you are saying”.

    But, seriously, my most heartfelt admiration goes to Michael Egnor for his infinite patience, fairness, precision, and scientific openness. If that is the difference between an IDer and a darwinist, we have strong moral reasons, beyond the scientific ones, to stay where we are in this specific “culture war”.

  17. Jerry:

    I do not agree with your comment that “Modern Evolutionary Theory is NOTHING more THAN RM+NS!”. It is not what is taught in the university evolutionary biology courses though this is a common perception here.

    I agree with you that this is not what is taught in biology courses. I also agree that it is a common perception here. Further, this “common perception here” is at the heart of the “your problem is that you don’t understand the theory of evolution” tirade.

    It is for this reason that I dare to prove that RM+NS=MET is valid and complete.

    Consider, for instance, genetic drift. I contend that genetic drift is not a mechanism at all, but a phenomenon. Let me distinguish the two. In physics we have a number of mechanisms, two of which are inertia and gravity. If all other forces, such as friction, are ignored, and a small object passes close to a very large object, the small object — due to these two forces alone — enters into orbit around the large one. Orbit is a phenomenon. The elliptical shape of the orbit is a phenomenon. It is not itself acting on the objects; it is only observed and described.

    Genetic drift is a phenomenon. When selective pressure is reduced to zero, genetic drift must, ipso facto, happen. Now, that said, there are clearly currents within the genetic ocean upon which an allele frequency is drifting. For instance, brown eyes are genetically associated with darker skinned people. If the human population is selecting for a darker complexion by natural selection, then the current of this selection will migrate the color of the human eye towards brown. As such, we see that the current that underpins genetic drift is none other than our good friend natural selection.

    All of the other phenomena that you mention are intriguing discoveries of the intricacies of RM+NS at work. It is not that I am unaware of these phenomena (though I am not professionally versed) nor that I do not find them intriguing. It is merely that they are the natural, though not always obvious, effects of RM+NS.

    Even HGT, if I understand correctly, is explained within MET on purely RM+NS terms. I understand that there may be some component of a secondary mechanism — it seems that viruses may play an active role in HGT. In any case, according to MET, any such mechanism is a mechanism that came about as a product of RM+NS.

    Joseph

    ID is NOT anti-evolution. Rather if anything ID could be considered to be anti-the blind watchmaker having sole dominion over the evolutionary process(es).

    I wholeheartedly agree. I fully support common descent. I recognize that HGT puts a wrinkle in “common descent”, but HGT notwithstanding, common descent is the best explanation I can find for how life developed on earth.

    As for where natural selection fits into the puzzle of life, I think that most IDers would agree with me that NS has a potent preserving power. Natural Selection is clearly a primary mechanism for sustaining and balancing the ecosystem. I think that most IDers would agree with me also, however, that RM is pretty much just destructive; that assuming that RM did it (filtered through NS) is really, well, asinine.

  18. “assuming that RM did it (filtered through NS) is really, well, asinine.”

    This statement on my part is a bit strong. Let me just clarify it a bit: if it is true that RM, filtered through NS, is actually capable of doing what is claimed, the burden of proof is squarely on the shoulders of science to prove it — at least to prove that RM+NS is a sufficiently potent combination that it could have pulled it off. So far the best case I have seen for RM+NS is, well, that it’s the only theory that fits the philosophy.

  19. bFast,

    I do not disagree with a lot of what you are saying. My objection is that we are doing a very poor job of communicating ID to the world. And one of the things contributing to this poor job is that there is not a consistent and scientifically oriented message.

    We often make broad and vague attacks on evolutionary science and I am suggesting that we not do that and concentrate on what is actually vulnerable. One way to do this is to accept what is valid in evolutionary science, which is a fair amount, and focus on the actual differences.

    I maintain the only actual difference between ID and current evolutionary science is how new alleles are/were created and this is it. Nothing else.

    We do not disagree with natural selection, genetic drift, random mutations or as been suggested random variations from whatever source. We do not disagree that traditional Darwinism works in some cases. What we do disagree with, is that there is any source of naturally occurring phenomena that can explain the complexity of allele generation over time. It is here and here alone we should make our case.

    Many evolutionary biologists think traditional Darwinism is passé. For example, the papers of Woese and Schwartz and Alan MacNeil’s admission a few months ago all say that traditional Darwinism is dead. Read Larry Moran’s blog and he makes the same point over and over.

    So when we celebrate here the papers of Woese, Schwartz or MacNeil’s admission we are being foolish because we are really no closer to our objectives than before and actually look dumb because these papers/admissions are just examples of more naturalistic ideology. All these people have contempt for ID and one of the reasons why is that we fail to understand the valid science that is out there. And they are right.

    So persisting in demonizing RM + NS looks silly, because it is a correct process. Nearly everyone agrees that micro evolution works in a lot of cases, and part of micro evolution is RM + NS. But it is only correct for limited examples, and this is the case we should make. Also, since NS can always apply, whenever we denigrate NS we are actually wrong and look foolish. The issue is not that NS doesn’t work but that it rarely, or maybe never, has anything but trivial positive allele changes to work with.

    So call it a tirade. I find the process of thinking this out a great learning process and look forward to critiques. But I fail to see how when we are mistaken we are advancing anything.

  20. It is encouraging to see that several of you are becoming more familiar with population genetics. That should elevate the level of discussion a bit. I only wish I were knowledgeable enough in information theory to address what is your core claim: that RM+NS cannot increase the information in a system in a nontrivial way. I have heard others, more knowledgeable than myself, say that information (at least as Shannon defined it) can readily be increased via noise. I do not, however, recall what formal concept of information/complexity you all are operating under, so I’ll leave it at that.

    Biological evidence arguing against this basic premise (i.e. no complexity via RM+NS) includes cases of gene duplication followed by diversification, yielding new functionality. Also there are cases where modest modifications in enzymes yielded new functionality (e.g. leaf digestion in leaf-eater monkeys). These seem easily enough achievable through RM+NS. But note how in each case it is only possible because that novel function was “there” close by in the potential fitness landscape, just a few mutational “clicks” away.

    I’ve been around long enough to know that none of this is going to convince anyone, particularly without a strong theoretical model demonstrating information/complexity accumulation under (RM + drift) + NS. In my mind, as I’ve indicated here before, the main sticking point for such a model is the shape of the adaptive landscape itself. It is no doubt a high-dimensional space whose form we have only the vaguest hints of (physics, for instance, sets some basic constraints on terrestrial organisms’ possible weights). If that landscape shape is such that viable “solutions” are ubiquitous, then the plausibility of evolution via RM+NS would be much higher. In other words, it wouldn’t matter which direction you’re wandering via mutation; there would very frequently be a selective or neutral avenue to a viable fitness peak within this rich adaptive landscape.

    While the formation of complexity via RM+NS appears to be a very tall order, even to myself, I don’t know how to evaluate just how tall it is without understanding that fitness landscape more thoroughly. Until then, given that all the basic naturalistic/materialistic components are there (mutation, drift, selection) to create, however terribly unlikely, the complexity of life without intervention, scientists will not “unnecessarily propagate the number of entities” by invoking another entity, the designer, as having intervened in the process. That is why we feel the burden of demonstration falls on those who claim biological complexity is impossible without intervention. We see that all the “pieces” and “mechanisms” are present naturally, and just what the likelihood of these parts/mechanisms actually *accomplishing* this complex feat in an undirected manner is, in my mind, altogether unknown given our limited understanding of the fitness landscape.
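The aside in the comment above, that Shannon information “can readily be increased via noise,” is easy to check numerically. The sketch below is my own illustration; note that raising Shannon entropy says nothing by itself about the “specified” information the two sides are arguing over.

```python
import math
import random
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per symbol."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

rng = random.Random(0)
orig = "A" * 1000                                  # maximally ordered: 0 bits
noisy = "".join(rng.choice("ACGT") if rng.random() < 0.5 else ch
                for ch in orig)                    # randomize half the sites

print(shannon_entropy(orig), shannon_entropy(noisy))
```

The noisy string has markedly higher entropy (around 1.5 bits per symbol versus 0), so in the narrow Shannon sense noise does add information; whether that bears on functional or specified information is the disputed question.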

  21. jerry:

    I appreciate your good intentions, but I would like to clarify a few points.
    As far as I know, ID sources have always been extremely clear in specifying that RM and NS may work in fixing some traits or explaining microevolution. If you read all the ID material (especially Behe and Dembski, who are the pillars of ID thought) you will find no ambiguity there. The criticisms from ID are aimed only at the theory which attributes macroevolution (and therefore the generation of new information, of complex specified information and irreducibly complex information) to blind evolution, that is, to RM + NS. But that’s exactly the important point. That’s the point about which darwinists have started, in the last few years, a real witch hunt, an unbearable intellectual persecution which is absolutely intolerant and irrational, whose purpose is explicitly to discredit all the good work of ID and to prevent anybody from thinking that ID thought is scientific thought. A denigration campaign made of ad hominem attacks, of insinuations, of constant intellectual cowardice (because even confronting ID thought would be admitting that it exists, and maybe also because they know that they are irrevocably destined to lose).
    Obviously, there are exceptions, but they are very rare.
    So, it is true, the ID vs. evolution debate now takes place in great part in the blogosphere (but not only there), and we know that blogs are also places of gossip (but also of very good things). So I admit, we can find a lot of gossip about the subject, on both sides, but believe me, can’t you see whose fault it is? Have you ever tried to read even a few paragraphs at Panda’s Thumb? Have you read the comments of Michael Lemonick which gave rise to this thread?
    So, it would be beautiful to discuss the differences between darwinism and ID in a serene way, in a collaborative spirit of search for knowledge and truth, and in full respect of those who think differently from us. But with whom would you do that? PZ Myers? Dawkins? Michael Lemonick?
    Darwinian scientists are very good people when they do their good job: researching, objectively collecting new data. I have already said that, in my opinion, one of the strongest weapons of ID is the constant flux of new knowledge, especially new biological knowledge, which day after day contradicts the present paradigm and prepares the final scientific revolution which will put a stop to all the lies we have been obliged to bear for half a century and more. And such new information is often discovered by good “darwinian” researchers who are just doing their job, and doing it well.
    But darwinian scientists become true "devils" when they desecrate their duty towards knowledge and truth by compulsively forcing an ideological, irrational and contradictory interpretation of their data, when they end each scientific paper with false declarations of the evolutionary political correctness of what they are saying. I have seen that happen in almost any new biological article which has something new and interesting to say, where the authors are very, very quick to declare, at the end of the paper, that those results are a new wonderful confirmation of how evolution can perform miracles, even completely unexpected ones. Conformist thought, you know, is a very bad thing.
    Jerry, it’s macroevolution they are speaking of. Nobody is really interested in microevolution, that’s just smoke in the eyes. Macroevolution is the big deal. Naturalism or teleologism is the big deal. Determinism or free will. Blind laws or intelligence. Chance or meaning. It’s no small “culture war”, and the other side is very much aware of that. That’s why they are so “nervous”, that’s why they fight so hard.

  22. Great_Ape,

    You use the terms "fitness landscape" and "adaptive landscape" a couple of times each. Is there anything you know of that is accessible (easily understandable) that discusses these concepts?

  23. OK I posted another comment asking him to substantiate the claim that vision systems and the bacterial flagellum have been “solved” by science. (he says both ID icons have been refuted)

    To BFast:

    I support (alien) colonization over Common Descent from some unknown population(s) of single-celled organisms. But that’s just me…

  24. great_ape:

    I have just read your interesting and very correct post, and I am very happy to immediately acknowledge that, with someone, it is perhaps possible to have a constructive discussion. So, let's try it.

    “I only wish I were knowledgeable enough in information theory to address what is your core claim, that RM+NS can not increase the information in a system in a nontrivial way”

    Well, if we want to be very sincere, that’s only the “strong” point of view of ID, that of Dembski. Behe, in my opinion, restricts the discussion to “irreducible complexity”. But I personally agree with Dembski on that point, so let’s try to discuss it, although I am not a mathematician.
    I think there are two kinds of arguments against the generation of Complex Specified Information by random events, even if "selected" by some specific environment. The first is that the mathematical and statistical model, even if RM + NS worked in some way, cannot really work. You see, when Dembski gives the Universal Probability Bound as 1 in 10^150, he is really being very generous. Just to be sure, he is pushing the limit to the final limit, the number of bits in the universe. That's a really heavy limit! After all, we should only consider the possible number of single variations in 5 billion years (again, generous!), which is much lower.
    But let's accept Dembski's limit. Well, any single protein of about one hundred amino acids is about that complex: 20^100. Do the math…
    Darwinists are constantly trying to avoid the true consideration of the complexity they are trying to explain. And, again, even if RM + NS worked (and I don’t believe it works), the times and spaces are billions of times insufficient to generate that complexity.
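    Taking up the "do the math" invitation above, here is a quick Python sketch of the comparison (the 1-in-10^150 bound is Dembski's figure as quoted in the comment; note that 20^100 actually works out to about 10^130, so on these numbers the sequence space only exceeds the bound at roughly 116 residues):

```python
import math

# Size of the sequence space for an n-residue protein: 20^n
# (20 possible amino acids at each position).
def sequence_space_exponent(n):
    """Return log10 of the number of possible n-residue sequences."""
    return n * math.log10(20)

print(round(sequence_space_exponent(100)))  # 130: 20^100 is about 10^130

# Dembski's Universal Probability Bound, as quoted above, is 1 in 10^150;
# 20^n crosses 10^150 at roughly n = 150 / log10(20) residues:
print(math.ceil(150 / math.log10(20)))      # 116
```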
    Understand me, I am not trying to say that ID is only affirming that biological beings are too complex. ID is affirming that RM and NS cannot add significant information (Dembski), that anyway they cannot create irreducibly complex information (Behe), that they cannot create information from inert matter, where there is no reproduction and therefore no selection (anybody who thinks seriously about abiogenesis), and that they cannot generate anything in beings who are few, live long, and have only a few million years to evolve incredible new characteristics (humans). And, just to be complete, that anyway biological information is far too complex, various, rich, differentiated and so on to be created by an undefined blind force such as selection.
    You say: "That is why we feel the burden of demonstration falls on those who claim biological complexity is impossible without intervention". You can feel as you please, but I cannot agree with that kind of affirmation.
    The burden of demonstration falls on anyone who proposes a theory. You have a theory (RM + NS), and so you have the burden to demonstrate:
    a) That it is possible
    b) That it is true
    ID can certainly say many things about b, but it is not even necessary, because there is absolutely no evidence that that theory is true (all the so-called evidences of evolution are, in the best case, evidence of common ancestry, period). But ID has also a lot to say about a. Darwinian evolution is not only not true, it is also not possible.
    So, the burden of proof of darwinian evolution, of its being possible and true, falls on darwinian evolutionists.
    On the other hand, the burden of proof of ID falls on IDers. Well, that's fair. But ID is, in reality, a complex movement of thought, made of many parts.
    a) A set of very strong theoretical and practical objections to darwinian evolution. This part of ID is exactly the part which should convince the evolutionists to demonstrate what they have never demonstrated, and to give rational answers to those objections. This part of ID is not, in itself, an alternative theory. And please, don't affirm, like many, that we can't discard a scientific theory unless we have a better one available! We can and we must! If a theory is repeatedly proven false, we must discard it, even if we think we do not have a better explanation. After all, there are so many things that we still cannot explain (see dark energy for a good example), and human thought has prospered for centuries without necessarily explaining everything.
    But, luckily, we have an alternative theory, and that is ID, or rather the "constructive" part of ID. Well, that part is not, as many want to believe, a religious faith or a dogmatic belief. It is a very simple theory. It is a theory which acknowledges the existence of intelligent beings (me, you, my cat, and so on) who can make intelligent choices and create information where it was not present before. A very specific kind of information. Recognizable information. Useful information. Sometimes, very complex information. Not necessarily perfect. Not necessarily good.
    Well, try for once to read the definition of ID which IDers give (Dembski for example). We know there are intelligent beings. We know that intelligently designed things are very often recognizable. We know that biological beings very, very strongly look like intelligently designed things (even Dawkins agrees on that).
    So, the alternative theory is there, and is very simple: biological beings may have been designed by somebody who has the characteristics of intelligent beings as we know them. Very simple, isn’t it?
    This theory is:
    a) Certainly possible: we know that CSI can be created by intelligent beings. We see that happen every day.
    b) Possibly true: the burden of proof falls on IDers, obviously. But, at present, it is a perfectly correct scientific hypothesis, and certainly can be preferred to the only alternative, which is instead absolutely impossible.
    So, where is the problem? You may ask: who designed biological beings? Dembski and others say: that's not the field of ID. And I agree. But there are obvious, realistic and reasonable possibilities; just to cite a couple of them:
    a) aliens
    b) a God
    Ah, but I can hear them asking: aliens we can bear, but God? That is not science!
    But why? Because you don’t believe a God exists. It is not science that does not admit a God. It is you who don’t believe it can exist.
    Perfectly fair. You can have your religion or non-religion, but I can have mine. So, let's say that I believe that a God may exist (I am not alone, you know: a vast majority of human beings today, and an even greater majority in the previous centuries of human thought, have believed that). Why can't I? I am not saying that science makes me believe that. I am just saying that, if I believe that, there is no problem with my scientific theory of intelligent design. Maybe a God designed them. Or maybe aliens.
    But my theory is anyway valid. It is anyway possible. It is, anyway, scientific and possibly true. And perfectly falsifiable, if you are a Popper fan (just proving evolution would be enough, for me; I am not like Ken Miller, I don’t believe that evolution in the strict sense is compatible with any spiritual perspective).
    So, you see, I am giving you a very strong weapon: just prove darwinian evolution true, and you will have falsified, at least for me, all ID, and even all spiritual approaches to reality. I agree with Dawkins on that.
    So, just prove it. Prove it possible. Prove it true.

  25. “You use the term “fitness landscape” and “adaptive landscape” a couple times each. Is there anything you know of that is accessible (easily understandable) that discusses these concepts.” =Jerry

    Hi Jerry,
    As far as the web goes, the basics are covered decently
    in the wikipedia and answers.com entries for “fitness landscape” or “adaptive landscape.” There have, in addition, been numerous theoretical articles employing these concepts, but invariably they (necessarily) a) have to drastically simplify the landscape to make the questions they are addressing tractable OR b) simply don’t know the empirical values of parameters for the lower n-dimensional landscapes, let alone the high n-dimensional landscapes that represent real biological systems.

    Here's an unfortunately poor-quality pdf scan of S. Wright:
    http://www.blackwellpublishing.....wright.pdf

    I briefly looked for a relevant review that was freely available, but turned up nothing… The basic idea is fairly straightforward, though, and can be gathered from the wikipedia article. It's the details about what the space really "looks like" for a given organism, and how various discrete mutational events move you around that space, that represent the difficult part to grasp. And in that regard, we're all about in the same boat b/c no one seems to have a good handle on it either. When you read the simplistic explanations with 2-D mountain peaks, etc, representing "overall fitness", just keep in mind that that single axis representing parameter space needs ultimately to be expanded in dimensions to cover a whole range of parameters (biophysical constraints, ecological constraints, etc) to accurately represent the "fitness" of a given genotype. That, in turn, dictates whether point (B) is feasibly reachable from point (A) on the landscape given (RM+drift+NS). Remember also that "getting to point B" isn't the most relevant question. The real question is: from any arbitrary viable point on the landscape, does an organism have *somewhere* to go along a viable path (i.e. could it evolve via anagenesis) to a different type of organism, or could the organismal population even take *more than one* viable path and speciate (cladogenesis)? I like to think of the population more like a viscous fluid flowing through this space, occasionally pouring down multiple outlets.
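    The "moving around the landscape" picture above can be sketched as a toy 1-D walk (a deliberately simplistic Python illustration; the single-peak fitness function and unit mutation step are invented for this sketch, and real landscapes are high-dimensional, exactly as the comment stresses):

```python
import random

random.seed(1)  # fixed seed so the walk is reproducible

# Toy 1-D "fitness landscape": a single smooth peak at genotype value 50.
def fitness(g):
    return -(g - 50) ** 2

# Random mutation plus selection: a mutant replaces the current
# genotype only if it is at least as fit.
g = 0
for _ in range(1000):
    mutant = g + random.choice([-1, 1])  # small random mutational step
    if fitness(mutant) >= fitness(g):    # selection
        g = mutant

print(g)  # 50: the walk has climbed to the single peak
```

    On a smooth single-peak landscape like this one, hill-climbing trivially succeeds; the hard question raised above is what happens on rugged, high-dimensional landscapes with many peaks and valleys.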

  26. If Dr Egnor wants to know if evolution can generate information, isn’t the first thing we need to ask him what he means by ‘information’ in this context?

  27. Good post gpuccio. You responded on all the right points.

  28. gpuccio,

    Since I have been reading about evolution, it seems that both sides have been talking past each other, and it is obvious both sides are to blame. Each will state their respective positions which are obvious to themselves. Just look at the debate between the two Michaels, Egnor and Lemonick. We all know that Lemonick is wrong and being ignorant. But does he think he is? I believe that a lot of the problem is that Egnor and Lemonick are talking about different things. I find that those here at UD often do the same thing, which is why I have been pushing for common definitions and a correct understanding of what NDE is about.

    Another way to illustrate this is the following analysis of evolutionary theory:

    Evolution is a 4 tier theory.
    The first tier is the origin of life, or how a cell with DNA, RNA and proteins arose. Quite a sticky issue with no sensible answer from science. Lots of speculation and wishful thinking, but nothing that makes sense. A high percentage of ID concerns are in this tier, and zero concerns by NDE. The recent thread on Shapiro's article in Scientific American is about this, and it is interesting that he invokes Darwinian processes to bolster his claims. Usually, evolutionary biology stays away from OOL.

    The second tier is how a one-cell organism formed multi-cell organisms, and this includes how such complex organs as the eye arose as these multi-cell organisms arose. How did brains, limbs, digestive systems, neurological systems arise? These are immensely complicated but get little discussion except that it all happened over time. We have all seen the "it must have evolved" comment. This is also an important area for ID but not as much so for Darwinists. Irreducible complexity operates in this tier. Also, most of these systems must have developed before the Cambrian Explosion, so there is relatively little geological time for these complexities to have developed.

    The third tier is the one that gets the most debate in the popular press, and that is how one species arose from another species when there are substantial functional differences between them. This is macro evolution. How did birds and bats get wings to fly, how did land creatures develop oxygen-breathing systems, how did man get opposable thumbs or such a big brain, and why such a long time for children to develop? How did four-chamber hearts and warm- vs. cold-bloodedness arise? There is lots of speculation but no hard evidence. An occasional fossil is brought up to show the progression, ignoring the fact that there had to be tens of thousands of other steps for these progressions, of which only a handful have been found. I believe the forest animal to whale is now NDE's best example here. In this tier ID and the Darwinists are sometimes on common ground fighting it out. But ID is relatively less interested in the issues here.

    There is another part of the third tier which I call macro-evolution light. This is how a lot of the orders and families developed. For example, within Carnivora, how did all the families arise? ID seldom cares about this area but evolutionary biology does. I don't think ID would care much if someone showed how all of the family Canidae arose, yet the evolutionary biologists would claim that would be a major verification of their theory. This area is a bridge between the third tier and the fourth tier.

    The fourth tier is what Darwin observed on his trip on the Beagle and what most evolutionists are talking about when they think of evolution, namely micro-evolution, which can be explained by basic genetics, occasional mutations, environmental pressures and, of course, natural selection. Few disagree on this fourth tier, including those who call themselves Intelligent Design proponents, yet this is where all the evidence is that is used to persuade everyone that Darwinism is a valid theory. The evidence in this tier is used to justify the first three tiers because the materialist needs all four tiers to justify their philosophy of life, but the relevance of the evidence in tier 4 for the other tiers is scant at best.
    So to sum up, my experience is that ID concentrates on tiers 1 and 2, a little bit on tier 3, and is not concerned at all with tier 4. And until all here, and those who are confirmed backers of Darwinism, realize the differences, there will be no joint intelligent conversation.

  29. Jerry, nice summation!

  30. Nice summation indeed. It is interesting that anti-IDers usually defend things that ID theory doesn’t challenge or place much importance on (that “evolution” has occurred, common ancestry, antibiotic resistance, etc.) in an attempt to refute ID. As Lemonick reveals, they often don’t defend what ID does challenge: the origin of complex biological information by Darwinian mechanisms. Of course, the reason for this is obvious: Darwinian mechanisms can’t account for it.

    Jerry, I remember you posting that a while ago. I appreciated it then and I appreciate it now. And I think you are right about the sword fighting that goes on. Ninety percent of it could be eliminated if some of the misunderstandings were cleared up. But that's life. There are hard feelings on both sides that have been festering for years, and unfortunately people (being people) can't help but use the debate to pick at each other, often forgetting what they were debating in the first place. (Of course, I am above all of that).

    I have a question. A post awhile back from "scheesman" illustrated how the history of science has come full circle. In very ancient times the design we see in living and non-living things was thought of as designed by the direct hand of God. Then the scientific method came along and told us that many things appear designed but are simply the result of natural causes, and some of this was proven by experiment. This pretty much ushered in the era of naturalism run amok (i.e. the notion that all of life can probably be attributed to natural forces of some kind). Now, today as we sit in front of our computer screens, some biologists are telling us that the constituent parts of a protein could not possibly have been the product of natural causes (at least the natural workings of RM + NS). We are again left with either an unknown natural cause or a supernatural agency. I have heard on this thread that this is especially evident when we look at the genetic code. Scheesman invoked the word "abstract" in his history lesson post to describe what we know about the genetic code. And in the Time Magazine debate linked above, an important pro-ID person invoked the word "information." I'm still quite confused about what it is about genetics that invokes these descriptive words. Do you see genetics or the genetic code as "abstract" and "information"? And why? Thank you in advance.

  32. Barrett1,

    Yes, I posted it before, maybe more than once I think. I added a couple things to it this time.

    I was not able to find the comment by Scheesman that you referred to. I have some of the discussions on UD saved on my computer and the Mac I own lets one search the computer for any phrase. He is a scientific software developer from Canada and maybe he is following some of the discussions so he will comment again.

    Anyone notice "P.Z."'s comment near the bottom? I'm always amused when I hear about a-telic means for generating CSI. I fail to understand why it's so difficult for such people to grasp that mind always precedes the generation of novel specified information. In every realm of experience.

  34. Jerry,

    I liked your very clear summation too. It can help us to give more order to our discussions.
    Although I agree with you on almost everything, I must again (for the last time, I hope) point out that any ambiguity about these "tiers" cannot be attributed to IDers. For instance, IDers have never tried, to my knowledge, to deny microevolution (tier 4).
    But even at that level, there is great ambiguity and bad reasoning on the part of darwinists. Take, for instance, the case of antibiotic resistance. If you read evolutionists' sites, you can find that it is offered as a definitive example of observed "speciation", as macroevolution in action, before our own eyes. Well, as we know, that is an explicit lie. Whoever is interested can read the very good article linked here at UD about the subject. But just to sum up, antibiotic resistance is usually due to HGT, that is, to transference between bacteria of already existing genes (such as enzymes which can degrade the antibiotic). As we have discussed, HGT is a powerful tool, but it does not create any new information. The other cases of resistance are due to "negative mutations", that is, to mutations which imply a loss of function, where the resistance to the antibiotic is a byproduct of the loss of function itself (for instance, because the protein which was the point of attack of the antibiotic has changed). Here the mutation is a negative one, and the acquisition of the resistance just a random side-effect.
    Is that different from information building? Yes, it is. Definitely. In the case of a specific enzyme, let's say penicillinase, we have a gene and a protein which have a very specific and unlikely function: degrading penicillin (which, obviously, is another bacterial product). We are inside an information network, a refined information network which rules the interaction between bacteria in a sophisticated, intelligent way. Any enzyme is a wonderful product of biochemical engineering, and no evolutionist has ever demonstrated how the information in a single enzymatic protein could have been created by RM + NS (see also the very pertinent work by Behe). Genes are often activated or inhibited by environmental constraints, but never created by environmental constraints. The wonderful network of transcription regulation can give billions of different intelligent and functional patterns from the same DNA, and nobody really knows how. That mysterious complexity of response and adaptation is often misinterpreted as "evolutionary change", but that is only a gross deformation of the truth.
    But let's speak briefly of the other tiers. I would add a couple of new ones, if you pardon my audacity. One is the transition from prokaryotes to eukaryotes, which is almost as problematic as OOL (OK, I said "almost"). Another one is the appearance of sexual reproduction, which is very difficult to explain in a "step by step" way.
    And, finally, I would not forget the generation of the incredible complexity of the human nervous system: about 10^11 neurons, a much greater number of ordered connections, a structure and working mechanisms which still defy any human understanding, and a processing capacity, and mathematical and informational functionality, which dwarfs any computer and confounds any information theory.
    And that last "tier" has apparently evolved in a few million years, the wink of an eye in evolutionary time, in a species (chimps or similar) whose population size was certainly ridiculously low (compared to bacteria), whose reproductive time was impossibly long (again, compared to bacteria), and whose complexity was already so high that any simple, "step by step" ladder of modification is virtually inconceivable.
    So, let’s leave to darwinists their “tier 4″ of microevolution. It really doesn’t matter.
    But for tier 1 (OOL) there is no game. And neither for tier 1-bis (prokaryote to eukaryote) nor for tier 2 (unicellular to multicellular). Yes, we can discuss tier 3 (subsequent speciation), but just for the fun.
    And again, for the last tier (human nervous system and its functions), I am sorry, but really there is no game at all (By the way, I am looking forward to reading Denyse’s new book about science and mind…)

  35. Barrett1:

    Yes, the genetic code (at least the part we understand) is "abstract" and is "information". That is obvious when you look at its role. It is, indeed, a code, and a redundant one. Each DNA triplet has a specific "meaning", corresponding to one of the twenty amino acids, and that meaning is almost universal in living beings. And there are punctuation triplets. So, it is a "language", that is, an abstract form of communication between the nucleus, depositary of the information for the proteins (and of who knows how many other things), and the translation mechanism. The translation is, indeed, a "translation", that is: the abstract information in the DNA code is recognized and transformed into a real sequence of amino acids by the ribosomes.
    In other words, there is no known reason why, for instance, the amino acid leucine is coded by four different triplets: CUU, CUC, CUA, CUG, and not by others. In some way, the translational mechanism is "fine tuned" to the same language, and the couplings work.
    In the same way, any biochemical network whose purpose is to transfer information is, in essence, abstract and informational. Such are, for instance, the cytokine network, and the pathways which transmit signals from the cell membrane to the nucleus. Each single protein is not important in itself, for what it does (as could be the case, for instance, for a final effector protein whose purpose is to synthesize some important product). Informational proteins are important because they transmit a signal and help regulate and integrate it. Their final target is, again, the incredibly complex and little-known world of the network of transcription factors in the nucleus, the real "master" of all cell life. Again, information in its purest, most abstract form.
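    The redundancy described above can be seen in a small fragment of the standard codon table (a minimal Python sketch; only a handful of codons are included, with the standard-code assignments mentioned in the comment):

```python
# A fragment of the standard genetic code: several RNA codons map to the
# same amino acid (redundancy), and some act as "stop" punctuation.
codon_table = {
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",  # four codons, one amino acid
    "AUG": "Met",                                            # methionine (also "start")
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",             # stop codons
}

def translate(rna):
    """Translate an RNA string codon-by-codon until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = codon_table[rna[i:i + 3]]
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

print(translate("AUGCUUCUGUAA"))  # ['Met', 'Leu', 'Leu']
```

    Note that CUU and CUG are different "spellings" of the same "word", which is the redundancy the comment points to.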

    Note that P.Z. refers to five billion years, with the implication that in such a long period of time almost anything ought to be possible. Five billion years is about 1.6 x 10^17 seconds, and a single average protein of about 200 amino acids represents about 1.6 x 10^260 possible sequences (20^200).

    There are two sides to the equation: probabilistic resources and improbabilities that must be overcome. Darwinists always cite the resources but never the obstacles.
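    Gil's two quantities can be checked with a line of arithmetic (a Python sketch; the 200-residue "average protein" length is the assumption behind the 10^260 figure):

```python
import math

# Five billion years, expressed in seconds:
seconds = 5e9 * 365.25 * 24 * 3600
print(f"{seconds:.1e}")   # ~1.6e+17 seconds of probabilistic resources

# Sequence space of a 200-residue protein: 20^200 possible sequences.
exponent = 200 * math.log10(20)
print(round(exponent))    # 260: 20^200 is about 10^260
```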

  37. Great ape wrote:
    I have heard others, more knowledgeable than myself, say that information–at least as Shannon defined it–can readily be increased via noise.

    That is true. One can take a source of noise, digitize it, and fill a 40-gig disk drive with it. Such a noisy process is arguably increasing some form of information. The file size will show that: as the noise is input into the computer, the file size increases, hence an information increase.

    But information coming from noise generators cannot be Complex Specified Information by definition. What Darwinists unwittingly try to explain is the presence of specified information. The word “information” in ID literature is referring to specified information, which is a special subset of Shannon information, not Shannon information in general.

    Why is it, for example, that you can readily recognize music? Music is a form of specified complexity. Noise is unspecified. Music fits a pattern. Surprisingly, you can recognize music as music even if you've never heard it before or had the pattern explicitly in your brain beforehand. Why is that? The answer as to why you can recognize patterns you've never seen before is in Dembski's latest work on specification, which you can read for free at designinference.com

    10 megs of music and 10 megs of noise are both Shannon information measures of bytewise content on your disk drive, but hopefully you can see that 10 megs of music is specified information and 10 megs of noise is not (or at least not demonstrably specified).

    The question is why does biology (like music) give us recognizable patterns rather than noise? Noise can not be the answer (by definition), but design can be.

    Sal
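    Sal's distinction can be illustrated numerically: random noise maximizes byte-level Shannon entropy, while a structured signal does not; entropy alone says nothing about specification. (A Python sketch; the repetitive byte pattern below is only a crude stand-in for "music".)

```python
import math
import random

def shannon_entropy(data):
    """Empirical Shannon entropy of a byte sequence, in bits per byte."""
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
noise = bytes(random.randrange(256) for _ in range(100_000))
pattern = bytes([1, 2, 3, 4] * 25_000)  # highly structured stand-in for "music"

print(round(shannon_entropy(noise), 2))    # ~8.0 bits/byte: maximal Shannon information
print(round(shannon_entropy(pattern), 2))  # 2.0 bits/byte: far less, yet patterned
```

    The noise scores higher on the Shannon measure, which is exactly Sal's point: "more Shannon information" is not the same thing as "specified".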

  38. Gil,

    The time is much less than 5 billion. Somewhere about 600-650 mya the first multi-celled animals showed up in the fossil record, and by 525 mya there was the Cambrian Explosion, with such things as all the eyes that ever developed already "evolved." I am sure there were numerous other systems that had to be present in these primitive Cambrian animals for them to survive, and all this happened in roughly 100 million years.

    It is amazing how 5 billion turns into 100 million in a minute of reflection. Now 100 million is a long time but it is not 5 billion.

    Also, the first cells turned up almost as soon as water formed on the planet, about 3.6 to 3.8 billion years ago, which was absolutely rapid. So the time frame for the formation of the first proteins is much smaller than you gave it, or than PZ wants to admit.

  39. Jerry,

    Even assuming the most optimistic speculations of Darwinists, the origin of life and its subsequent evolution by purely materialistic means is utterly ludicrous in light of what is now known from modern science.

    It’s really a no-brainer, but some people just won’t give up and admit the obvious.

  40. Gil,

    My favorite delusional Darwinian fundie, PZ, attempts to respond to Egnor on information, over at his groupthink:

    http://scienceblogs.com/pharyn.....ges_ev.php

    The errors and canards about information theory and CSI that he makes probably deserve a post of their own here.

  41. As far as I’m concerned, PZ could be categorized near (or at) the bottom of the lowest class of ID critics. He’s little more than a vicious mocker who deserves nothing more than contemptuous laughter.

  42. Then again, maybe I should just feel sorry for him. He doesn’t seem to be very bright.

  43. “they often don’t defend what ID does challenge: the origin of complex biological information by Darwinian mechanisms.” ==Gil

    The problem with the big ID tent, though–at least as I see it–is that far more than these core ID concerns are raised by those waving the ID banner. There is the frequent jab at common descent, old earth, etc. If the criticism directed at NDE were not a sort of disorganized barrage from people with numerous agendas, the more core and interesting criticisms would get more attention. Otherwise, the low-hanging fruit for evolution proponents will always be to take on the fellow who insists God hid the bones in the ground to test our faith.

  44. “The word “information” in ID literature is referring to specified information, which is a special subset of Shannon information, not Shannon information in general.” ==Sal

    Hi Sal,

    Thanks for trying to explain the distinction concerning information vs. specified information. Information theory is well outside of my domain, but I have been trying to familiarize myself enough to evaluate some of these arguments for myself. I am by no means there yet. I would, however, be curious to know how you would respond to the criticism that I’ve heard made that “specified complex information” does not exist as a formal I.T. concept outside of ID writings? Are there other (non-ID) contexts in which these concepts are formally used in academia, etc? (Not that establishment in academia is everything, but it does suggest more credibility b/c more brains have mulled over and criticized possible weaknesses)

    Thanks,
    Ape

  45. great_ape

    I agree with your take on the big tent. To be fair, ID doesn't address the age of the universe or the nature of the designer or the methods employed to implement the designs or common ancestry. Strictly speaking, even atheists can be on board with ID, as the intelligence capable of creating the organic life available for us to directly examine could be material in origin. Only someone who absolutely rejects the possibility of intelligence predating life on earth must necessarily reject ID as well.

    That said, I can sympathize with people opposed to ID who link it with religion, as there are certainly plenty of people who support ID and link it with religion. If they were a minority of ID supporters it would be harder to do, but there are a LOT of ID supporters who are only embracing it for the religious implications.

    I don’t know how to work around this. Given that ID is simply a declaration that certain complex structures in the universe can be best explained by intelligent causes how can that be modified to incorporate an assertion that common descent is true? That’s purely beyond the scope of ID. There’s nothing at all about ID that either disputes or supports common descent. This is like asking Darwinists to modify orthodox evolution theory so that it explicitly supports the big bang theory.

  46. There is the frequent jab at common descent, old earth,

    ID does not address common descent or an old earth. Now, threads on these boards sometimes go far afield, but there seems to be a common theme, namely that the dogma of the scientific establishment makes it blind to evidence and/or (much worse) isn’t blind to evidence but marginalizes it for a political agenda.

  47. great_ape,

    One of the peculiarities of the discussions of CSI, to me, has been the lack of a good definition of the “S” part of the designation. What does it mean to be specified?

    Now there are lots of good examples of specified information but I have not seen any good definition of it. For example, a typical English sentence is specified but what definition leads to this conclusion?

    I have seen people use the analogy of a thunder storm as complex specified information because it is obviously complex: the distribution of all the various molecules at different temperatures contains a huge amount of information, and there are systematic elements to it that differ from normal atmospheric combinations and give it its unique attributes. Now a thunder storm, which is a combination of natural forces, is obviously different from a composed English sentence. But what is the essential difference that could be put in a definition that would automatically distinguish the two?

    Maybe someone here could help with this and put it into layman’s language.

    The obvious extension to biology is DNA and its ability to govern complex systems. Now we know that thunder storms can arise from the forces of physics playing out and we know that an English sentence can never arise from any random event. So is DNA like the thunder storm or is it like the English sentence?

    To me it seems obvious but what is the definitional distinction that would lead to saying no to a thunder storm as specified and yes to an English sentence or DNA as specified.

    Maybe someone here who has read more on this can help. I can follow the probabilistic arguments fairly easily having been a mathematics major and had several courses in statistics and probability. But what has been interesting is that in all the discussions here I have never seen a good definition of what it means to be specified. Is it like “obscene” that you find it impossible to define but recognize it when you see it?

  48. Stephen Jay Gould was on to something with NOMA — the problem is that the scientific establishment did not/does not believe religion should have its “magisteria,” i.e. if the scientific establishment declares something, it’s right regardless of how it steps on a religious tradition.

    One observation is if we (the U.S.) had the educational standards we had from 1776 to 1962 (prayer and Bible readings allowed in school) this debate would not be an iota as shrill.

    The Dover case would never have happened for instance.

  49. One of the peculiarities of the discussions of CSI to me has been the lack of a good definition of the “S” part of the designation. What does it mean to be specified?

    Jerry, Dembski defines it as: The actualization of a possibility (i.e., information) . . . if independently of the possibility’s actualization, the possibility is identifiable by means of a pattern.

  50. One of my comments made in on that Time blog. I just posted the following:

    Anyone who uses gene duplications to refute ID does NOT understand the debate. That is because, as far as we know, gene duplications are a built-in response to environmental cues (see Dr Spetner’s “Not By Chance”).

    IOW to call gene duplications the work of the blind watchmaker is pure wishful thinking.

    ID is NOT anti-evolution.

    If anything I could be considered anti-the blind watchmaker as having sole dominion over the evolutionary process(es).

    Computer simulations fail because in order to be used one must first fully understand that which is being simulated. We are not yet at that stage with the evolution of biological organisms.

    For those who advocate bacterial resistance as evidence for anything, please read the following:

    http://www.trueorigin.org/bacteria01.asp

    There are SEVERAL criteria that must be met BEFORE one infers design. One is:

    “Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.” Dr Behe

    1) X looks designed
    2) We have NEVER observed nature, operating freely produce X or anything X-like
    3) We have observed intelligent agencies producing things that are X-like
    4) We know that any scenario requires something either beyond nature or metaphysical, regardless of whether or not that is openly admitted to.
    5) Therefore we should at least be able to investigate the possibility of intentional design as opposed to just saying “the design is illusory”.

    Ya see, we know from experience that it matters a great deal to any investigation whether or not that which is being investigated arose by intent or by chance.

    So perhaps this is a good place to ask:

    What are the criteria for determining that the observed design is illusory?

    We exist. And seeing that the materialistic anti-ID position reduces to nothing more than sheer dumb luck, including the laws that govern nature, why would anyone cling to that position?

    Sheer dumb luck that the proto-earth got hit by a giant impactor which started our rotation and gave us a large stabilizing moon. Then came another cosmic accident which wiped out the dinosaurs, thereby allowing the mammals to evolve. Sheer dumb luck.

  51. tribune7,

    After reading the definition of CSI you provided, I think the definition of obscene may be clearer.

    Now I know why no one has ever provided a clear definition and how it would select an English sentence and not a thunder storm.

    If you want to take on that problem (thunder storm vs. English sentence) maybe we could find a layman’s definition.

  52. great_ape asked:

    Are there other (non-ID) contexts in which these concepts are formally used in academia, etc? (Not that establishment in academia is everything, but it does suggest more credibility b/c more brains have mulled over and criticized possible weaknesses)

    If you asked an ID proponent the answer would be yes, and if you asked an ID opponent the answer would be no. Let me illustrate why.

    If Joe ID-Proponent said:

    f(x) = 2.146514159 x^2

    therefore calculus shows the derivative, f’(x), is described by

    f’(x) = 4.293028318x

    Joe ID-Proponent can claim calculus demonstrates his idea is true, whereas PZ ID-Opponent will claim:

    “I see nowhere in the mathematical literature where f(x) is defined as
    2.146514159 x^2. ID proponents are liars and con artists. I dare them to cite a peer-reviewed paper where f(x) is defined this way.”

    We have a similar situation with the idea of specified complexity. Its definition makes it a subset of the body of informational constructs studied in information science. Thus all the ideas applicable to the field as a whole are applicable to specified complexity.

    I’m not aware that “specified complexity” is explicitly a term used in information science, but neither am I aware that f(x) = 2.146514159 x^2 is in any peer-reviewed math journal. It does not mean specified complexity is outside of information science any more than the idea f(x) = 2.146514159 x^2 and its derivative are outside of mathematics.

    When I say “information science shows specified complexity is destroyed by noise”, people like Mark Chu-Carroll will jump all over the statement in the manner that PZ ID-Opponent does.

    I’m thinking of posting on specified complexity sometime soon to help educate our readers. The basics are not difficult, only tedious. Most of the hard work in information science and theory is in building computers and communication systems. We don’t need that level of sophistication for the sake of most discussions. A college sophomore, after reading the essay I’m preparing, will hopefully be able to understand the basics.

    But if a communication engineer said, “noise destroys information” (i.e. noise destroys a musical recording), most would colloquially understand what was meant, even though, in one sense, as I pointed out, you can demonstrate “noise increases information”. But when we carefully look at the intended meaning, the paradoxes evaporate.

    Salvador

  53. My 2 cents:

    What is complex specified information (CSI)?

    CSI & specified complexity are basically the same thing. CSI can be understood as the convergence of physical information, for example the hardware of a computer and conceptual information, for example the software that allows the computer to perform a function, such as an operating system with application programs. In biology the physical information would be the components that make up an organism (arms, legs, body, head, internal organs and systems) as well as the organism itself. The conceptual information is what allows that organism to use its components and to be alive. After all a dead organism still has the same components. However it can no longer control them. And blind people may still have eyes but their vision system is incomplete or damaged.

    The bacterial flagellum*- It is a physical part. The physical information is the specific arrangement of amino acid sequences required, as well as their configuration- the “propeller” filament is comprised of more than 20,000 subunits of the flagellin protein FliC; the three ring proteins (FlgH, I, and F) are present in about 26 subunits each; the proximal rod requires 6 subunits, FliE 9 subunits, and FliP about 5 subunits; the distal rod consists of about 25 subunits; the hook (or U-joint) consists of about 130 subunits of FlgE. The conceptual information is that which allowed for its assembly, i.e. the assembly instructions, as well as for the operation, i.e. the speed and direction of rotation.

    To summarize:

    Parts, assembly instructions plus command & control = CSI.

    *numbers from NFL

  54. Has anyone read From The Origin of Species to the origin of bacterial flagella
    by Mark J. Pallen and Nicholas J. Matzke?

    From the blurbs I have read, it is about assigning homologs, and that just by doing so the bacterial flagellum could have “evolved” by culled genetic accidents.

    So there must be something that I haven’t read that actually demonstrates something- right?

    I ask because this article allegedly lays to rest one ID icon. And I have found it curious that the parts that would be presented in blogs do not even come close to doing so. (so what am I missing?)

  55. After reading the definition of CSI you provided, I think the definition of obscene may be clearer.

    OK, how about this: specified information is information conforming to a pattern, where the pattern can be identified independently of the information’s actualization.

    Think of this: the odds of getting a Royal Flush are the same as getting any other particular five cards in a poker hand. Which has more information? The pattern is independent of the actualization.

    Dembski describes it a little differently (and probably better) at his link. He also gives an excellent analogy (much better than a thunderstorm) regarding an archer shooting at a blank wall and an archer shooting at a bullseye.

    And it’s not obscenity that’s hard to define. It’s pornography :-)

  56. Jerry:

    “Now there are lots of good examples of specified information but I have not seen any good definition of it. For example, a typical English sentence is specified but what definition leads to this conclusion?”

    I think you raise a good question, Jerry. The notion of “specified information” is, intuitively, somewhat clear to me. I know the difference between specified and not, but perhaps not in an easily explained fashion.

    Here’s what Dembski wrote in the link that’s given in an above post:

    “In that scenario, by first painting the target and then shooting the arrow, the pattern is given independently of the information. On the other hand, in this, the third scenario, by first shooting the arrow and then painting the target around it, the pattern is merely read off the information.”

    I’m going to just try and think myself through it. As I mull it over, it seems to me that maybe this definition of specification might need to be rephrased. Dembski speaks of “independence” between the ‘pattern’ and the ‘information’, but instead of speaking of the independence of the ‘information’ and the ‘pattern’, I think we need to introduce the notion of the independence of ‘rules’ and ‘laws’ which give rise to “patterns”. My sense is that for “specification” to occur, there has to be ‘independence’ between the ‘rules’ governing the pattern, and laws that are at play in the ‘actualization’ of information.

    Getting back to your distinction between a thunderstorm and an English sentence, the difference seems to be that in the case of the thunderstorm, any pattern that is observed can be explained strictly in terms of the laws of nature– whereas, for example, in writing this sentence, I’m observing the “rules” of English grammar; hence, this sentence is a pattern that is textured by those rules, but is completely independent of the underlying ‘laws’ which make writing this sentence possible: e.g., electromagnetics, along with all the forces of Nature, the Krebs cycle, chemical potentials in my nerve cells, etc.

    ‘Rules’ delimit the ‘range’ of possibilities. ‘Laws’ determine the ‘range’ of possibilities. (We should probably also include, in addition to the physical laws of Nature, the “Laws of Life” itself.) When no ‘rules’ are operative, then the ‘range’ of possibilities that exist cannot be delimited; hence, the emerging ‘pattern’ is not significant, but pre-ordained—as in snowflakes.

    Information emerges from the conjunction of ‘rules’ and ‘possibilities’. But, remembering that complexity and information are directly proportional to each other in Dembski’s definition, the word ‘information’ seems almost redundant. So, perhaps, it might be better to say that ‘complexity’ plus ‘specificity’ = ‘significant information’ (to distinguish it from mathematical understandings of it such as Shannon information). With this said, it now becomes clear that ‘rules’ themselves are the direct product of ‘intelligence’, implying that ‘significant information’ can only be produced by ‘intelligence’. This, in turn, has direct application to the so-called “Anthropic Principle”, where, within the ‘infinite’ possibilities for each of the physical constants of the world, the ones we have are the only ones operative. IOW, the electrical charge of the electron is not a ‘law’; it is more a ‘rule’ of the universe, with the implication that some ‘Intelligence’ has chosen it.

    To go back, now, to the analogy that Dembski gives of the archer, the arrow and the pattern, in the first scenario the arrow simply follows the ‘laws’ of physics and flies where it will given its initial conditions (tied into the “Laws of Life”; i.e., the archer’s physiology). In the second scenario, the pattern that is drawn on the wall is completely independent of those same laws of physics, and, thus, represents a ‘rule’ being superimposed upon all the ‘possibilities’ that the ‘laws of physics’ permit. Thus the ‘actualization’ of the arrow hitting the “bulls’ eye” is “significant information’ (versus simple ‘complexity’, or, Shannon information). It allows us to make some conclusion about the archer’s prowess. In the third scenario, the arrow flies according to the laws of physics, but the pattern that is drawn, is drawn in a manner that is completely dependent on the arrow’s location; so the ‘actualization’ of the arrow is insignificant because as its ‘information’ is being ‘actualized’ (basically, Shannon information, i.e., a consequence of probabilities), there is no ‘rule’ delimiting the range of all ‘possible actualizations’.

    Sorry this is so long-winded, but it isn’t a simple question to answer.

    P.S. One can also use this approach to explain the Mt. Rushmore example, and the ‘prime numbers in binary code’ example.

  57. gpuccio, Thanks for taking the time to respond. Great stuff.

  58. I rest my case, there is no definition of CSI that the common person could understand. I often wondered why it didn’t appear in any of the discussions. There seem to be examples but not a definition.

    Maybe I will try to define pornography and obscenity.

  59. I like Granville Sewell’s definition of a specification (paraphrased):

    Something that can be fully described in 1000 English words or less.

    From his essay:

    If we toss a billion coins, it is true that any sequence is as improbable as any other, but most of us would still be surprised, and suspect that something other than chance is going on, if the result were ”all heads”, or ”alternating heads and tails”, or even ”all tails except for coins 3i + 5, for integer i”. When we produce simply describable results like these, we have done something ”macroscopically” describable which is extremely improbable. There are so many simply describable results possible that it is tempting to think that all or most outcomes could be simply described in some way, but in fact, there are only about 2^30000 different 1000-word paragraphs, so the odds are about 2^999970000 to 1 that a given result will not be that highly ordered—so our surprise would be quite justified. And if it can’t be described in 1000 English words and symbols, it isn’t very simply describable.
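    Sewell’s point can be illustrated, very roughly, with a general-purpose compressor standing in for “description length”: the simply describable outcomes compress to almost nothing, while a typical random outcome does not. This is only a sketch, using Python’s zlib and 10,000 coins instead of a billion:

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    """Length in bytes of the zlib-compressed coin sequence."""
    return len(zlib.compress(bits.encode()))

n = 10_000  # a small stand-in for Sewell's billion coins

all_heads = "H" * n
alternating = "HT" * (n // 2)
random.seed(0)
typical = "".join(random.choice("HT") for _ in range(n))

# Simply describable outcomes compress far below a typical random one.
print(compressed_size(all_heads))    # tiny
print(compressed_size(alternating))  # tiny
print(compressed_size(typical))      # near the ~1250-byte entropy floor
```

    zlib is of course not Kolmogorov complexity, but the gap between the compressible and incompressible sequences makes Sewell’s “highly ordered vs. typical” distinction concrete.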

  60. There seem to be examples but not a definition.

    Jerry, there is a definition. It is complicated — the one I tried to simplify was probably unworthy of the paper from which I took it — but one exists.

    I guess you pretty much have to read the paper, in which other terms are defined, to get the gist of it.

    Obscenity is easy to define. Pornography becomes a bit more difficult.

  61. tribune7,

    I asked if there were a definition that a common person could understand. I read the Dembski definition and then the example of the archers, and it reminded me of the Abbott and Costello skit “Who’s on First.” I got lost real quick. It was not the examples, whose implications I understood, but the explanation afterwards.

    Maybe if we try here, we can come up with definitions and examples where someone as dumb as I doesn’t get lost immediately.

    So far all I see are examples, and not simple ones at that. A goal should be to conclusively point to DNA and say that it is specified, point to something like a thunder storm and say it is not, and explain why.

    “Its definition makes it a subset of the body of informational constructs studied in information science. Thus all the ideas applicable to the field as a whole are applicable to specified complexity.” == Sal

    I understand your point; there are a lot of concepts that are potentially derivable from the body of theory that have not necessarily been addressed or formally discussed in academia. I do find it odd that no one has addressed this previously/elsewhere. The ability to recognize “intelligent agency” within a given physical substrate without a known a priori pattern would seem like a fairly interesting line of research. I tried reading through Dembski’s article on specification and intelligence. I’m fairly familiar with Fisher’s approach to hypothesis testing and have used it both in traditional contexts and cases where I had to generate/simulate my own null distributions. Where he loses me is in the formalization of specified complex information and putting it in a hypothesis testing framework. I have trouble understanding how this can be done without some sort of symbolic context that gives significance to one subset of patterns vs. others.

    Specification deals with algorithmic complexity theory and compression. Those patterns which are independent, complex and highly compressible are suitable specifications.

    Again, Sewell defines these as patterns which can be fully described in 1000 words or less. Dembski uses the idea of a “primitive concept” instead of a word, but it is the same thing.

    Specification patterns “tap into” our background knowledge, making use of ideas we already have. Thus they have a very short description length compared to their actualization. Take 8000 bits, all 1s. I just described a pattern you can independently reproduce from my specification, 8000 bits long, which has a description of only four words or so. (the four words = roughly 17 characters total = 8 bits per character * 17 = 136 bits.) As you can see 8000 bits >> 136 bits.
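    As a trivial sanity check on the arithmetic above (a sketch, nothing more):

```python
# Atom's numbers: the specification "8000 bits, all 1s" is a 17-character
# English description, i.e. about 136 bits at 8 bits per ASCII character,
# while the event it picks out is 8000 bits long.
description = "8000 bits, all 1s"
event_bits = 8000

description_bits = 8 * len(description)
print(description_bits)               # 136
print(event_bits > description_bits)  # True: the event dwarfs its description
```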

    Read the paragraph I quoted in post 59, I think Sewell explains it wonderfully.

  64. Correction: the specification pattern is actually simple (I said it needed to be independent, complex, and…), but what it describes should be sufficiently complex.

  65. “Obscenity is easy to define. Pornography becomes a bit more difficult.” ==tribune7

    [tangent:] I thought the line between art and pornography was drawn in US law precisely when the content in question became “obscene.” Since what is “obscene” is, by legal definition, based on community standards that vary from place to place, “obscenity” is what I thought to be the problematic concept.

    In any case, I think Jerry’s analogy is a good one. I’m always uncomfortable discussing a scientific concept whose definition I can’t grasp. It may well be that I’m too dense to get it, or that articulated in another way I would understand, but there is always the lingering doubt that the definition itself is problematic.

  66. Complex Specified Information:

    The coincidence of conceptual and physical information where the conceptual information is both identifiable independently of the physical information and also complex. page 141 of NFL

    BTW thunder storms only occur under specific conditions. DNA, by contrast, not only needs to be replicated but also specifies other molecules required for a living organism to remain alive. IOW DNA is also complex.

  67. “Specification patterns “tap into” our background knowledge, making use of ideas we already have.” ==Atom

    Atom,
    This touches on one of my main questions about CSI: how does one formalize and enumerate the notion of “ideas we already have”? That seems like an inherently hazy endeavor. I can see how you might apply it to a restricted set of ideas (i.e. a known grammar), but how does one apply it to test a “physically actuated” set of information for which this “restricted pattern set” of potential “descriptions” (in Sewell’s terminology) is unknown?

  68. Great_Ape,

    Dembski uses the English language as his restricted pattern set, describing the BacFlag as a bi-directional rotary motor (or some similar phrase). For me, I don’t see why the English language itself cannot serve as the proper backdrop.

    (This may seem strange. But since language functionally describes patterns in the real world, it is not surprising that real-world patterns can be differentiated by using language as our “restricted pattern set”.)

    In slightly more detail, Sewell actually does enumerate the set you’re seeking, defining it (implicitly) as grammatically correct English descriptions of 1000 words or less. There is a fixed number of possible English paragraphs of that length; Sewell gives the number as “2^30000 different 1000-word paragraphs.”

    So the set can be formalized. I believe Dembski does so, using his idea of “primitive concepts.”

  70. “So the set can be formalized. I believe Dembski does so, using his idea of “primitive concepts.” ” ==Atom

    I’ll have to think more about natural-language descriptions as the pattern set, but let me see if I’m following so far.

    1. Something has CSI if it conforms to a pattern that is “compressible,” in the sense that it can be described by a smallish program and/or rule set.

    2. Such patterns, as opposed to non-compressible patterns, are exceptionally rare (assuming, of course, that the system generating the patterns generates them uniformly, i.e. is not biased towards compressible patterns).

    3. The observation of such a pattern is exceedingly unlikely to be due to “chance” so, having observed it, one can reasonably infer design.

    Practical concerns aside, I can now begin to see how one might fit CSI to a hypothesis-testing framework.

    One more question. I do know a few basic tidbits about compression via computers. One can, for example, much more readily compress an image if it has large patches of solid color. That is, the simpler the image, the more amenable it is to compression. I suspect this idea can be extended more broadly. Does this not suggest that compressibility (i.e. simplistic description) and complexity are at odds with each other? Yet aren’t these concepts joined in CSI?

    Thanks, by the way, your descriptions and Sewell’s on CSI have been the most digestible I’ve come across thus far.

    Great_ape

  71. Since what is “obscene” is, by legal definition, based on community standards that vary from place to place, “obscenity” is what I thought to be the problematic concept.

    Well, obscene is basically what the law says, so the definition is (or should be) obvious, i.e. if the law says you can’t walk down the street with everything flapping in the wind, then what is obscene is certain.

    Now, whether the law is Constitutional is a different question, and that concerns whether it’s an expression of an idea or “hard-core pornography,” as per the famous quote by Potter Stewart:

    I have reached the conclusion, which I think is confirmed at least by negative implication in the Court’s decisions since Roth and Alberts, that under the First and Fourteenth Amendments criminal laws in this area are constitutionally limited to hard-core pornography. I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.

    The dictionary definition of porn is: the depiction of erotic behavior (as in pictures or writing) intended to cause sexual excitement.

    So, in writing or depicting something, how can one determine whether one is trying to show a realistic aspect of the human experience or to cause sexual excitement? That’s where the difficulty in defining it as per law comes in.

  72. I’m always uncomfortable discussing a scientific concept whose definition I can’t grasp.

    This one’s a tough nut.

    What is it for a possibility to be identifiable by means of an independently given pattern? A full exposition of specification requires a detailed answer to this question. Unfortunately, such an exposition is beyond the scope of this paper. The key conceptual difficulty here is to characterize the independence condition between patterns and information. This independence condition breaks into two subsidiary conditions: (1) a condition of stochastic conditional independence between the information in question and certain relevant background knowledge; and (2) a tractability condition whereby the pattern in question can be constructed from the aforementioned background knowledge. Although these conditions make good intuitive sense, they are not easily formalized. For the details refer to my monograph The Design Inference.

  73. Great Ape,

    I think my answer concerning obscenity got caught in the filter.

    Probably because I used the famous Potter Stewart quote :-)

  74. Great_ape,

    That’s my (rough) understanding of it.

    I had to grapple with the same issue when trying to understand Dembski’s work. Like others here, I stumbled over the “specification” aspect of CSI. My rough explanation is the result of my reading and subsequent trying to dumb it down to understand it myself.

    For your other question:

    Does this not suggest that compressibility (i.e. simplistic description) and complexity are at odds with each other? Yet aren’t these concepts joined in CSI?

    Yes and yes. They are at odds, but they refer to different “things” if you will. The specification pattern (the description) is itself simple while the actualized pattern that it describes (the event) is complex. Like in my example, the specification pattern “8000 bits, all 1s” is simple, but what it describes is a string of 8000 bits, with a probability of 1/(2^8000) of being randomly selected.

    I visualize it as hitting two targets simultaneously. When an event hits both a “complex actualized pattern” target (with a very low probability of occurring) and a “simple description” target (which also has a low probability of occurring), it is the result of intelligence.

    I don’t know if that helps or confuses, but that’s my simple understanding of it.

  75. OK Jerry, I’ve been re-reading the paper.

    Does this help?

    In general, to recognize intelligent causation we must establish that one from a range of competing possibilities was actualized, determine which possibilities were excluded, and then specify the possibility that was actualized. What’s more, the competing possibilities that were excluded must be live possibilities, sufficiently numerous so that specifying the possibility that was actualized cannot be attributed to chance. In terms of probability, this means that the possibility that was specified is highly improbable. In terms of complexity, this means that the possibility that was specified is highly complex.

  76. Great_ape asks:

    Does this not suggest that compressibility (i.e. simplistic description) and complexity are at odds with each other? Yet aren’t these concepts joined in CSI?

    The question is understandable, but one must be careful, and the answer is no.

    Compressibility refers to a Kolmogorov-Chaitin conception of information and a different meaning of “complexity” than the meaning of “complexity” in the CSI sense. The CSI sense refers to the Shannon conception of information.

    Here is an example of a system that is complex in the Shannon sense but non-complex in the Kolmogorov sense:

    “500 coins all heads”

    The physical outcome of 500 coins all heads represents 500 bits of information in the Shannon sense. It does not represent much information in the Kolmogorov (K-complexity) sense.

    Salvador
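    Sal’s distinction can be sketched numerically: encode the 500-coin outcome one character per coin, then use zlib’s output length as a crude upper bound on its Kolmogorov complexity. (zlib is only an illustrative stand-in; true K-complexity is uncomputable, so this is just an estimate.)

```python
import zlib

# "500 coins all heads": 500 bits of Shannon information (one outcome out of
# 2^500 equiprobable sequences), yet highly compressible, so its estimated
# Kolmogorov complexity is far smaller.
outcome = "1" * 500   # '1' = heads, one character per coin

shannon_bits = len(outcome)                            # 500
k_upper_bound_bits = 8 * len(zlib.compress(outcome.encode()))

print(shannon_bits)                       # 500
print(k_upper_bound_bits < shannon_bits)  # True: all-heads is K-simple
```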

  77. By the way, 500 coins all heads would be considered evidence of CSI.

  78. tribune7,

    I will try to parse out each part of your definition to see if I understand it or if I don’t maybe someone can help.

    “In general, to recognize intelligent causation we must establish that one from a range of competing possibilities was actualized”

    Does this mean that one of the possible options happened?

    “determine which possibilities were excluded”

    Does this mean that we have to know which possible options did not happen? Or could not have happened? If the latter, does this mean that these possibilities were not actually possible?

    “then specify the possibility that was actualized.”

    You will have to help me here. I know what the ordinary use of the word specify is. Does this mean that we have to not only know that one of the possible options happened but to know which one?

    “What’s more, the competing possibilities that were excluded must be live possibilities”

    Does this mean the possible options that did not happen were actual possible options? If something is possible, isn’t it possible, or is there such a thing as an impossible possibility?

    “sufficiently numerous so that specifying the possibility that was actualized cannot be attributed to chance.”

    You got me. I know of no possible English sentence I could use to try to explain this.

    “In terms of probability, this means that the possibility that was specified is highly improbable.”

    I have no idea how this follows. Why is something highly improbable, given what has preceded this? Of course I do not know what specified means in this context.

    “In terms of complexity, this means that the possibility that was specified is highly complex.”

    I have no idea how this follows either.

    Aren’t people getting the idea that no one here can actually explain the concept of CSI?

  79. Great_ape asks:

    Does this not suggest that compressibility (i.e. simplistic description) and complexity are at odds with each other? Yet aren’t these concepts joined in CSI?

    In contrast to the “500 coins all heads” there is also CSI which is likely K-complex and complex in the shannon sense:

    1. Mp3 file

    2. 500 coins arranged to specifically duplicate a previous random roll of 500 coins

    3. A zipped file of Shakespeare’s Hamlet

    When I say likely K-complex: it is actually prohibitively hard to know for sure that something is K-complex; we can only make an estimate from a practical standpoint. For example, a compression algorithm tries to make the compressed file as K-complex (incompressible) as possible. There is a remote chance it could be compressed further, but that is hard to actually know.

    All of the above are estimated to be K-complex, or approximately so.
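    This "estimate, don't prove" point can be seen with any ordinary compressor. A sketch (Python's zlib; the sample text is just an arbitrary repetitive string): compressing once squeezes out the visible redundancy, and compressing the result again gains essentially nothing, because the first pass already made the output close to incompressible.

```python
import zlib

# True K-complexity is uncomputable; a compressor only gives an
# upper-bound estimate. Highly redundant text shrinks dramatically...
text = ("To be, or not to be, that is the question. " * 200).encode()
once = zlib.compress(text)

# ...but the compressed output is already near-incompressible, so a
# second pass gains essentially nothing (it may even grow slightly).
twice = zlib.compress(once)

print(len(text), len(once), len(twice))
```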

    Sal

  80. I think we need to break CSI down into its three components to understand it.

    Great_ape, you have presented a rather Shannonish definition of information, but your definition of CSI seems to fully ignore the complex and specified bits.

    Firstly, we need to understand Shannon a bit. Claude Shannon worked for Bell Labs, the research arm of AT&T. The purpose of his paper was to determine a way of distinguishing between the “information” part and the “noise” part of an electronic signal modulated by a voice, or a stream of computer-generated data. For that purpose, the approach is quite effective. However, Shannon was never attempting to define information; he was attempting to detect information by establishing a detectable characteristic of it.

    To understand what information is, we must simply look at its root, to “inform”. Dictionary.com: “to give or impart knowledge…” Information, therefore, is a set of details which gives or imparts knowledge. It has a quantifiable characteristic, which is that it is somewhat compressible — but compressibility is not, in itself, information.

    Secondly, let’s consider the “complex” of CSI. Complex simply means not simple. In a Shannon world, it is what is left after the compression has happened. If you take the ASCII code of this diatribe and compress it, such as with a zip program, you will get a size. That is the complexity of this discussion. Dembski defines the threshold of “complex” v. “not complex” with the UPB (universal probability bound). If information can be compressed so small that it could be reasonably derived by random chance, then Dembski says that it is not complex.

    Lastly, “specified”. This is the root of the word “specification”. Dictionary.com: “a detailed description or assessment of requirements, dimensions, materials, etc.” CSI specifies something — it describes something external to itself to the point where knowing the information is sufficient to make the resultant thing. DNA specifies the amino sequences in proteins. (It surely specifies a bunch of other stuff too.)

    Now, CSI is complex information that specifies something. It’s that simple.

  81. Does this mean that one of the possible options happened?

    You have to understand that something happened out of several possibilities.

    Does this mean that we have to know which possible options did not happen?

    See above.

    “What’s more, the competing possibilities that were excluded must be live possibilities” . . .Does this mean the possible options

    It means they had to be legitimate, and numerous. It would have been less confusing if you hadn’t broken up the sentence.

    Why is something highly improbable from what has preceded this . . .

    You’re taking it a little out of context, and it’s explained in the article. If the event is the necessary result of nature, then the outcome is basically certain (probability = 1), so whether it’s specified or not it’s going to happen, hence you cannot determine design.

    There is no information in certainty, as the article notes. Hence, if there are the many realistic possibilities required for information, and the one that is specified is the one that happens, then you can be sure an intelligence is at work.

    Specified means stated explicitly.

  82. Salvador,

    In your upcoming CSI post, would it be fruitful to cross-pollinate definitions of Complexity with Trevors and Abel’s Three Subsets of Biopolymeric Information?

    I keep trying to get people to read their paper because it does simplify the issues of complex information, and it utilizes the common construct of falsification via a null-hypothesis challenge.

    CSI = FSC would be the cross-platform introduction for the point of self-organization.

    The other two categories, random and ordered, are trivial. What they show is that rules and law alone do not equal CSI. The boundary is set between those lower two, and the upper one reflects intelligent exchange across boundaries. SETI is a perfect example of searching for CSI/FSC, whether they want to admit it or not. What is amazing is that this exists in our bodies, in all life forms. What SETI searches for outside earth’s blue body is inside our own.

    Another reason I’d tie the Dembski papers to Trevors and Abel is that they cite Dembski and Behe.

    I love Dembski’s detailed work. He is far ahead of most people’s understanding in his treatment of information theory as related to life. Trevors and Abel’s simple constructs serve to open the door to his detailed analysis.

    They, in turn, along with other leading scientists, now recognize that the quandary for evolution by material processes is exposed to its severest test by Dembski’s work. Thus the Origin-of-Life Prize recognizes him, if in opposition.

    He detailed CSI before them. What they have done is build upon his work. They have helped me to appreciate Dembski’s insights.

  83. tribune7,

    I am sorry, but I do not understand one thing you said. Maybe if you expressed it in a practical example. The archer example is not something I can relate to because it has no reality. No one has ever done this. I don’t believe examples that have never happened and never will happen are very useful. If an idea cannot be expressed in situations that are likely to happen, then maybe the idea is not clearly understood.

    Let’s take an example that is relevant to evolution. Can you explain in plain English why DNA is CSI and a thunderstorm is not? Both are complex. Both contain large amounts of information. Both are organized systems. One, the thunderstorm, is the result of basic physical forces operating randomly so that basic molecules eventually form an organized pattern to cause such things as lightning, loud noises, rain, hail and high winds. There are zillions of possible combinations of molecules but only a small subset are thunderstorms. Why isn’t DNA similar? I have no idea how the phrase “stated explicitly” has relevance here.

    I use this example because someone used it a year ago to question the usefulness of CSI to imply there was intelligence behind DNA and no one could answer the person. So I am curious.

    I intuitively know that DNA is on some plane way above a thunderstorm, and I like bFast’s explanation of specificity. The DNA specifies something else outside itself just as an audio signal specifies something else outside itself. Noise doesn’t. A thunderstorm doesn’t. That I can relate to.

    I am not sure what Sal’s 500 heads specifies, though I know it is an extremely rare event (smaller than the limits used to accept chance) and as such is not likely to happen unless it was rigged. I have no trouble understanding that, but what about it is CSI?

    I like bFast’s explanation because it leads me to differentiate between DNA and a thunderstorm, but is that what Meyer and Dembski mean by CSI?

    Let me give an illustration of calculating an approximate level of complexity for CSI in an English sentence of 100 characters:
    Deniable Darwin

    Linguists in the 1950’s, most notably Noam Chomsky and George Miller, asked dramatically how many grammatical English sentences could be constructed with 100 letters. Approximately 10 to the 25th power, they answered. This is a very large number. But a sentence is one thing; a sequence, another. A sentence obeys the laws of English grammar; a sequence is lawless and comprises any concatenation of those 100 letters. If there are roughly (10^25) sentences at hand, the number of sequences 100 letters in length is, by way of contrast, 26 to the 100th power. This is an inconceivably greater number. The space of possibilities has blown up, the explosive process being one of combinatorial inflation.

    Each 100-letter sequence, taken in isolation, has a Shannon information content of about

    log2(26^100) = 470 bits

    CSI with respect to the English Grammar Specification is

    log2(26^100 / 10^25) ≈ 387 bits

    We all have the experience of reading an English sentence we’ve never read before, but still recognizing a grammatical pattern and knowing it is not gibberish. This gives an idea of how to measure CSI for a given alphabet and grammar, for estimating the improbability of even sentences we have never seen before.
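    The arithmetic behind those two figures is short enough to check directly (a sketch; the 10^25 figure is the Chomsky/Miller estimate quoted above):

```python
import math

# 26**100 possible 100-letter sequences; roughly 10**25 of them are
# grammatical English sentences (the Chomsky/Miller estimate).
total_bits = 100 * math.log2(26)   # ≈ 470: surprisal of any one sequence
spec_bits = 25 * math.log2(10)     # ≈ 83: size of the grammatical "target"
csi_bits = total_bits - spec_bits  # ≈ 387: improbability of hitting it by chance

print(round(total_bits), round(spec_bits), round(csi_bits))  # 470 83 387
```

    In other words, the specification (grammaticality) carves a target of about 83 bits out of a 470-bit space, leaving roughly 387 bits of improbability.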

    Extending the idea to biology, we have an alphabet of molecules and then we have acceptable grammatical constructs. But English is a human language; how, then, can we find grammar in a non-human language, especially one we did not invent? Suffice to say, it is actually doable if one can build a correspondence to known and used systems in human engineering. That is actually a fairly bold statement!!! ID proponents claim biology is optimized to correspond to human-engineered artifacts like computers and robots and manufacturing control systems, sensing, navigation, error correction, etc.!!

    Consider the fact that, before hieroglyphics and cuneiform were deciphered, we knew these inscriptions still had a syntax and grammar: in other words, design! Even before we knew the meaning of such designs, we knew they were designed.

    Without going into the messy details there is a parallel situation with living systems in characterizing grammatical systems in languages we may not have yet completely deciphered. But a similar CSI calculation can be carried out even for such cases if the Intelligent Designer was willing for his works to be discovered one day!

    One linguistic construct (LOOSELY SPEAKING) is the notion of a computer, or Turing Machine. One can take a computer and then write software which will function as a Turing Machine. That’s right: a computer is a Turing Machine, and one can further write a piece of software which will itself be another Turing Machine.

    One can take this Turing Machine and then write yet more software that will create yet another Turing Machine, etc….

    One can take the laws and materials that are in our universe and build Turing Machines (like a Dell computer). Think of these machines (Dell computers, Apple computers, SGI computers, etc.) as analogous to a grammatically correct sentence. All of these, though constructed of different materials, still obey a basic form.

    One can see that in principle a calculation similar to what I did above for English sentences might be carried out to get an estimate of CSI of the computers and Turing Machines found in life.

    What Trevors and Abel have shown is that the construct of such a Turing machine, especially a self-replicating Turing machine, is rare.

    I personally have been curious to get an idea of how many bits are involved, but suffice to say, even modest estimates by others are astronomical.

    More on Trevors and Abel’s peer-reviewed paper is here:
    Perfect architectures which scream design

  85. Can you explain in plain English why DNA is CSI and a thunder storm is not? Both are complex. Both contain large amounts of information.

    Why would a thunderstorm contain a large amount of info? As per Dembski’s paper, if something is certain, there is no information. As far as I can tell, if you have the right humidity and the right temperature, a thunderstorm is inevitable.
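    That “certainty carries no information” point is just Shannon self-information, -log2(p). A minimal sketch:

```python
import math

def surprisal_bits(p):
    """Shannon self-information, in bits, of an event with probability p."""
    return -math.log2(p)

print(surprisal_bits(1.0))        # 0 bits: a certain event tells you nothing
print(surprisal_bits(0.5))        # 1 bit: a fair coin flip
print(surprisal_bits(2 ** -500))  # 500 bits: e.g. 500 specified coin flips
```

    The smaller the probability of the outcome, the more bits its occurrence carries; a probability-1 outcome carries none.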

    Now, what are the conditions for life to occur?

  86. Can you explain in plain English why DNA is CSI and a thunder storm is not?

    DNA (and the accompanying machinery to process it) conforms to a linguistic processing system. Language processors scream design. The specification which DNA systems conform to is the pattern of a linguistic system.

    Thunderstorms are complex, but do not conform or correspond to a highly specific pattern within the repertoire of engineering.

  87. scordova:

    thanks for all your very competent and detailed explanations about CSI. Although I certainly lack the technical knowledge to discuss these subjects in detail, I would like to express my limited understanding of their general meaning, and if possible to deepen it with your or everybody else’s help.
    First of all, I must say that I have always found Dembski’s work about CSI really exciting and stimulating, even for a non-mathematician like me. I cannot always understand all the details, but my feeling is that Dembski has addressed a problem of fundamental importance, even beyond the framework of the ID debate. Maybe Dembski’s theories don’t explain everything, maybe they are sometimes incomplete or evolving, but the objection that many critics express, that his views are rather isolated in the field of information theory, is in my opinion just a demonstration of his greatness, in the sense that he is practically the only one (as far as I know) who has systematically addressed a fundamental problem which others have chosen to ignore, probably because at present our theoretical tools are insufficient to completely understand or even define it.
    What is that problem? Dembski calls it CSI, but in my personal, not technical language, I would call it the problem of meaning.
    Obviously, we have a lot of cultural discussions about meaning in philosophy, in semantics, and so on, but who has tried to define meaning in a scientific, mathematical way?
    I see the question in these terms. We have a lot of work about information, but information, I believe, is always defined (I may be wrong) in terms of complexity (number of bits) or, more specifically, of non compressibility (minimum number of bits to transmit the information). OK, that’s all very interesting, and I am fascinated by the implications of that all (I was really fascinated, for instance, by the very good paper by Chaitin linked, I think, from this blog).
    But still, something important is lacking. And that is the difference between meaningful and not meaningful information. Which, in other words, is the field of the CSI theory.
    Now, I understand that in Dembski’s thought specification is the whole magic to distinguish between meaning and non-meaning. I also understand that rigorously defining specification is the most difficult part, and I think that even Dembski’s approach has been evolving on that point, although always with great consistency.
    My impression is that meaning, or specification, is a rather deep subject, which at present is beyond our full comprehension. That does not mean that we cannot try to understand it better. Above all, that does not mean that we cannot use it in our theoretical frames. I think that meaning is a concept of the same class as causation and probability. Neither of these two concepts, as far as I know, can be completely and universally defined and understood (see all the epistemological and philosophical debates about them), but that has never prevented us from building a whole scientific paradigm on them.
    My feeling is that meaning is strictly linked to the concept of consciousness, and that’s why it is so elusive. A very general way to describe meaning is that it is any information which, in the appropriate circumstances, can be recognized as “different” from general, random information by a conscious observer.
    It is interesting that even the conscious observer usually (but not always) needs a previously known specification pattern to recognize meaning. That’s the case for language. In the very good example of the 100-letter English phrase, knowledge of the English language is the mechanical algorithm necessary to recognize whether, in a single sequence of letters, there is any meaning encoded by that language, and to understand that meaning.
    But the mechanical algorithm is not the same thing as the recognition of meaning. A conscious observer is always necessary to recognize meaning. So, we can say that a mechanical algorithm never “recognizes” a meaning, but can, if previously programmed by a conscious observer, “identify” or “select” it for a conscious observer (or for some further process). Searle’s “Chinese room” argument remains a very good exemplification of this difference. Penrose’s interpretation of the non-algorithmic nature of (some) conscious processes on the basis of Gödel’s theorem is another one.
    But in the end, what makes some kind of information “recognizable” to a conscious observer? That’s the real problem, and it is very similar to the problem of specification.
    One answer could be: pre-specification. If a conscious observer knows a pattern in advance, he can identify it when it occurs, even if it is completely random. But that answer is not very good, because a programmed algorithm can do the same, and in this case the conscious observer only acts algorithmically, “comparing” the input data with a reference (the pre-specified random information), let’s say from a notebook. This is exactly the same situation as the Chinese room setting, where a conscious observer “answers” Chinese sentences algorithmically, without understanding their meaning.
    But conscious recognition is another thing. In language recognition, it is not only a problem of knowing the code (the language rules), but mainly of understanding that what is written in a sentence by that code has a meaning, in other words that it is a translation of a conscious content generated by another conscious mind. The “translation” could have been done by another code (another language), but the meaning remains the same.
    In a sense, I think compressibility has something to do with meaning, because conscious beings regularly utilize compressible information to express conscious patterns. Real randomness is really difficult to conceive, remember and identify consciously. Specific, compressible patterns are often easily recognized. Scientific laws are, in a sense, a very specific way to “compress” the description of regular phenomena in nature, if I understand Chaitin well. In that sense, no algorithm can recognize a law, although any algorithm can identify it if it has been intelligently programmed to do so.
    Another parameter which can help to recognize meaning is the observation of function. That’s, in a nutshell, the classic watchmaker argument. If we find a machine, we think it was designed by intelligent agents (unless we are part of that strange folk, Darwinists…). But the recognition that a machine is a machine does not depend only on the observation of a complex, even specially ordered structure; it is usually made obvious by the observation of a specific function performed by the machine. And here we have another problem: how can we recognize function? I don’t know, but I know that this is another concept that conscious beings can easily understand and apply, even without defining it (Ah, the wonders of conscious, non-algorithmic processes!).
    The interesting aspect of function as a form of meaning is that it can well be used as a specification “a posteriori”, and so it is very useful in the ID framework. (Obviously, if Dembski or someone else can give a mathematical definition of it, that would help). And who can deny that “functional” CSI is extremely (and I mean extremely!) abundant in the living world? (I know, I know… darwinists).

  88. tribune7,

    The placement of every molecule and its movement in a thunderstorm requires a fantastic amount of information to “specify”. I do not know how big the file would be, but it would be immense. No one would probably ever want to do this, but maybe someone would want summary information on all the parts of a thunderstorm to try to understand it, and even this summary could contain an immense amount of information.

    Similarly a rock outcropping would contain a large amount of information to specify exactly the outcropping. I am not sure if Mt. Rushmore would actually require more information or less. My guess is less. So nature produces some very complex things that require a lot of information to specify. They are obviously the result of random physical forces.

    If something is certain, that is a very important piece of information that enables me to eliminate thousands of alternatives. So to say that it contains no information is not true in the normal meaning of the word.

    After reading all that has been presented here, I am not sure we are close to a definition of CSI that the average person can comprehend. Certainly after reading the comment from gpuccio, who is a very bright and knowledgeable person, I am sure of it. We can point to many examples but not to a definition that would lead someone to know the difference.

    As for your last comment

    “Now, what are the conditions for life to occur?”

    That would be a meaningless comment to the Darwinists, because all they would say is that it will be solved some day, and just because we can specify the causes of a thunderstorm does not mean we will not someday be able to specify the causes of DNA. I would also bet we are far from fully understanding thunderstorms.

  89. Salvador,

    Thanks for the quick note and tie in to Trevors and Abel.

    Am I right to say CSI=FSC?

    Also, you state,

    “That is actually a fairly bold statement!!! ID proponents claim biology is optimized to correspond to human-engineered artifacts like computers and robots and manufacturing control systems, sensing, navigation, error correction, etc.!!”

    This is true – very bold! But, not just ID Proponents. People like Bill Gates recognize genetic code as optimized technology for computational algorithms and storage mechanisms!

    So, too, have IBM and other major hardware/software players. This is what evolutionary biologists have failed to recognize. The paradigm has gone from random to designed inquiry into life’s Turing machine.

  90. Gpuccio,

    Thanks for your expansion on the subject. I too enjoyed the paper regarding Chaitin, et al, from Progetta?

    You hit on a point I was going to make as well and I think deserving of repeating in this forum.

    Dembski is ahead of the curve. It’s not all there yet, but he never said it was. He is leading a new path, however, that is causing entire fields to stand up and take notice; indeed, world-leading scientists have recognized that the gauntlet has been thrown down and are openly acknowledging this fact with the Origin-of-Life Prize.

    No Free Lunch and The Design Inference are a much-needed breakthrough: ideas built upon others from the past, brought into a modern mathematical synthesis.

    I do think fundamentals are coming into view regarding information, and that Dembski leads this charge. How close are we to fundamental laws? I do not know. But leading scientists now recognize the importance of ID to this debate.

    Despite all the naysayers to the contrary, we are in wonderful new territory of science and I like it.

  91. If something is certain, that is a very important piece of information that enables me to eliminate thousands of alternatives. So to say that it contains no information is not true in the normal meaning of the word.

    Jerry, your objection is addressed specifically (no pun intended) in Dembski’s paper.

  92. tribune7,

    “your objection is addressed specifically (no pun intended) in Dembski’s paper.”

    The specific addressing of my comment was a paper that had close to 8000 words, and uses variations of the word information 177 times, specific 68 times, possibilities 67 times, actualized 46 times, etc.

    I love “actualized.” Does that mean something happened?

    A term (CSI), that no one seems to be able to define clearly, was used 57 times.

    Let me say that I will pay a whole lot of money for certainty, which supposedly has no information value, because of all the information I would possess. For example: who is going to win the 4th race at Gulfstream today? That is certainty that is worth a lot of money. A couple pieces of certainty like that and we could bankroll the whole ID movement with so-called zero information.

    I love the way the term “specific” gets used. Let me count the ways.

    Now I do not say that I will not be more knowledgeable and possess more information once I digest Dembski’s paper but to say my comment was addressed specifically….

    This whole process sounds like the “overwhelming evidence” that supports Darwinism. Somehow you ask for simple answers and you get treatises that are hard to evaluate.

  93. Barrett1: “Scheesman invoked the word “abstract” in his history lesson post to describe what we know about the genetic code. And in the Time Magazine debate linked above, an important pro-ID person invoked the word “information.” I’m still quite confused about what it is about genetics that invokes these descriptive words. Do you see the genetics or the genetic code as “abstract” and “information.” And why? Thank you in advance. ”

    I located the original comment using the useful “Search” feature up top:

    http://www.uncommondescent.com.....oach-wasp/

    [27] “…Today things are stood on their head. We have complex proteins, intinctive behaviours that must somehow be encoded into DNA, which is itself a code, an abstraction, …”

    I used the term “abstraction” in the simple sense that something concrete is expressed as something intangible; without a direct link to the original. A chance-worshipper is left with the task of explaining not only the code, but the machinery for transcribing it into what it encodes for. This is made doubly difficult when the instructions for the machinery itself are inside the same code!

  94. The specific addressing of my comment was a paper that had close to 8000 words, uses variation of the word information 177 times, specific 68 times, possibilities 67 times, actualized 46 times etc.

    The objection you raised along with a specific (great word hee hee) definition of information was one of the fundamental points of the paper. Would you like me to cut and paste?

  95. tribune7,

    Only if you can do it in 100 words or less and it makes sense.

    I’ve seen the Darwinists challenge the CSI concept in several places and I saw no intelligent rebuttal from anyone which I thought was strange. But now I am starting to understand why. When no one can explain it easily, it is hard to defend it.

    I understand all the examples and intuitively see the differences between what is called specified and what is not, but find it curious that no one has been able to put an easily explainable layman’s definition to the examples.

    So far I like bFast’s take the best.

  96. Jerry:

    Here’s a definition of “specification” in plain English: “The right-ordering of a complex, highly improbable object.”

  97. Again, Sewell’s definition: something is specified if it is macroscopically describable, in 1000 words or less. That doesn’t work for you, Jerry?

  98. PaV,

    What does “right-ordering” mean? Is a specific outcropping of a rock formation highly improbable? I think so, and it is also complex. So I guess right-ordering is what makes the difference, but I do not know what you mean by the term.

    Atom,

    What does “macroscopically describable” mean? How does that work for Mt. Rushmore? What is the implication of 1000 words? Could you describe a thunderstorm in 1000 words or less? Could you describe the entire DNA molecule in 1000 words or less?

    Thank you for your help.

  99. “Great_ape, you have presented a rather Shannonish definition of information, but your definition of CSI seems to fully ignore the complex and specified bits.” ==bFast

    And here I actually thought my definition was more based on K-C information…

    “To understand what information is, we must simply look at its root, to “inform”. Dictionary.com: “to give or impart knowledge…” Information, therefore, is a set of details which gives or imparts knowledge. It has a quatifiable characteristic which is that it is somewhat compressable — but compressability is not, in itself information.” ==bFast

    That reminds me of an old Jack Handey quote: “Maybe in order to understand mankind, we have to look at the word itself: ‘Mankind’. Basically, it’s made up of two separate words – ‘mank’ and ‘ind’. What do these words mean? It’s a mystery, and that’s why so is mankind.”

    I’m going to go way out on a limb here and guess you’re not a mathematician or engineer?

    “Now, CSI is complex information that specifies something. Its that simple.” ==bFast

    Sigh. If only it were so. I don’t mean to be harsh, but I have a strong suspicion you don’t have any more of a clue about CSI than I do. Then again, not having a clue myself, I’m not really in a good position to judge that.

    Ok guys. I’m just a biologist (i.e. caveman), but I know enough to know that no two folks here have offered the same definition of CSI. I still need to read through again and try to digest some of the more recent posts, particularly Sal’s. In my scientific experience, though, I can’t recall running across a working *concept* previously that seemed so difficult to verbalize.

  100. “The DNA specifies something else outside itself just as an audio signal specifies something else outside itself. Noise doesn’t. A thunder storm doesn’t. That I can relate to.” ==Jerry

    Jerry,

    While DNA does specify numerous things beyond itself, I don’t think this is quite what Dembski/Sal have in mind. What they are saying is that the dna/protein/cell machine *itself* conforms to or resembles certain human engineering structures and, in that sense, it is specified by a pattern. Specified by a pattern(s) already known to exist.

    This much, at least, I think I understand, but someone please correct me if I’m wrong. I don’t want to contribute further to the confusion.

  101. great_ape,

    Thank you for your admission. Now I don’t feel like the “Lone Ranger” on the CSI definition.

    I didn’t think that my take on bFast’s explanation of CSI was what Dembski and Sal mean by it, but I thought it was insightful. It didn’t seem to be what they meant, especially when they keep talking about coin tosses, and coin tosses don’t specify anything except who kicks off. As you said, somehow DNA must be specified by something to fit their definition. But what does it mean to be specified without being able to identify what is doing the specifying?

    To use street language, DNA must be the specifiee as opposed to just a specifier. So what characteristic of a thing would make it have to be a specifiee?

    I don’t know if you can follow what I am talking about, especially since you think there is no specifier for DNA. What is it about something that could rule out chance or law as an explanation for its existence? What makes DNA different from other self-organizing physical phenomena? Does any other self-organizing physical phenomenon specify something else?

    Anyway I am still confused.

  102. Hey Jerry,

    Macroscopically describable means (IMHO) that you can reproduce the event in question from scratch based on the description (specification.) Kinda like the specifications I get at work (I’m a software engineer.) Two programmers can come up with the same functional design based on the detailed spec. So the specification contains all the “info” necessary, even if it doesn’t spell it out bit-for-bit.

    How does it do this?

    By relying on my background knowledge as a programmer. They know I know what Dijkstra’s algorithm is, so they can use that concept in their specification without writing it out explicitly. And the final result is an event (my source code) which is much larger than the specification that defines it. (Again, a simple description specifies a complex event.)

    Back to Mt Rushmore, I can tell you “Carve four presidents into the rock, starting at this altitude, at this scale, at these angles to one another…” and in a few paragraphs fully specify a very complex final product. How do we know our low-bit specification contains all the necessary info (combined with our background knowledge)? Easy, give two separate sculptors the same spec, and they’ll come up with the same final product.
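Atom’s two-sculptors test can be run literally in code: hand two independently written implementations the same compact spec and check that they produce the same final product. A minimal Python sketch; the spec (“the first 20 primes, ascending”) and both implementations are illustrative choices, not anything from the thread:

```python
# Spec: "the first 20 prime numbers, ascending" -- compact, and it relies on
# shared background knowledge of what 'prime' means, just like the Dijkstra example.

def primes_trial_division(n):
    """Sculptor 1: naive trial division against previously found primes."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def primes_sieve(n):
    """Sculptor 2: a simple sieve of Eratosthenes, written independently."""
    limit = 100  # large enough to contain the first 20 primes
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p][:n]

# Two "sculptors", one spec, one final product:
print(primes_trial_division(20) == primes_sieve(20))  # True
```

If the spec (plus background knowledge) really contains all the necessary information, the two outputs must agree, which is exactly the check the comment proposes.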

    To reproduce a given thunderstorm, you’d have to specify each lightning arc. Thus, your specification would be equivalent to the event in question in bits.

    Unless your specification is intentionally vague, so that your specification target is large. (“Any old thunderstorm arrangement will do.”) In that case your specification target does not sufficiently constrain the possibilities, so there is a large probability that you will hit that “target” by luck, rendering it useless as a specification.

    Again, it is like hitting two extremely small targets simultaneously.

    That is a plain English explanation. For the relevant maths, see Dembski’s paper.

  103. Only if you can do it in 100 words or less and it makes sense.

    OK, how about this: “Content requires contingency”
    Here’s another one: “For there to be information, there must be a multiplicity of distinct possibilities any one of which might happen.”

  104. Atom,

    Imagine a sunny day, with a beautiful blue cloudless sky, and a rising barometer.

    Now imagine a fellow in a dirty hairshirt and a long white beard walking through campus to the administration building. He comes to it and shouts YOU HAVE SINNED!

    He points his staff at it, and out of the clear blue sky a bolt of lightning crashes into it followed by a loud peal of thunder. All the while not a cloud in sight and the barometer rising. Then huge hailstones fall on it destroying what was left by the lightning bolt.

    I would say that would be a thunderstorm with some serious CSI.

  105. “So what characteristic is it about something that would make it so it had to be a specifiee.” ==Jerry

    I think they are suggesting that the fact that DNA or similar systems can _be described_ in a relatively compact fashion, using the vocabulary/grammar of the English language and/or human engineering concepts, is an indicator of “specification”. Of all the possible random patterns generated by a theoretical random pattern generator, very few things should have this characteristic of compressibility or “compact describability”.

    We could argue about just how random the “theoretical random generator” is for organic life in this scenario, but that’s a whole other discussion.

    Many things are subject to simplistic description. Dembski uses the single letter ‘A’ as an example. It is specified. What supposedly sets CSI systems apart is their complexity. He suggests the complete works of Shakespeare are both specified and complex.

    I’m still confused, of course. I think I have sort of a handle on their usage of “specification” and why it is statistically rare among all conceivable patterns. And I think I have a rough handle on complexity/information in the Shannon sense. And I agree in principle that some patterns possess both of these attributes.

    I remain confused as to why the possession of high Shannon information by any given specified pattern would be considered rare. Sal’s example, “500 coins, all heads” suggests that *any* specified pattern given for something within a largish space (whatever the technical term is for 500 coin flips) would yield a high Shannon information content, however simple the pattern was in KC-complexity. So while I see the rarity of the “specifiable” part, I don’t see the rarity of the complex part (Atom’s second target (above)). As such, I so far can’t see how the two concepts (specification and complexity) can be joined to indicate some extra-extra-unlikely event.

    (I suppose one could argue the observance of such a large flip space (500 coin flips) is itself indicative of complexity, but that can’t be, because nature is ripe with large spaces that could be construed as samples (1 billion raindrops, 10,000 meteor strikes, untold grains of sand) that have high Shannon information.) So again I’m back to “specification” as the true rarity.
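great_ape’s distinction between Shannon information and KC-complexity can be made concrete: under a fair-coin model every specific 500-flip outcome carries the same 500 bits of surprisal, while only a few outcomes are simply describable. A quick Python sketch, using compressed length as a rough stand-in for KC-complexity (an editor’s illustration, not anything proposed in the thread):

```python
import random
import zlib

# Two 500-flip sequences encoded as 'H'/'T' strings.
all_heads = "H" * 500                                        # ordered, simply describable
random.seed(42)                                              # fixed seed for reproducibility
typical = "".join(random.choice("HT") for _ in range(500))   # a typical random outcome

# Under a fair-coin model, each *specific* sequence has probability 2**-500,
# so both carry exactly 500 bits of Shannon information.
shannon_bits = 500

# Compressed length, a crude proxy for Kolmogorov (KC) complexity:
ordered_len = len(zlib.compress(all_heads.encode()))
typical_len = len(zlib.compress(typical.encode()))

print(ordered_len < typical_len)  # True: only the ordered sequence compresses well
```

Both sequences are equally improbable (equally “complex” in the Shannon sense); what separates them is the compact describability that the thread is calling specification.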

  106. Specified complexity is when you coat marbles with contact cement, blindfold yourself and throw 10,000 of them a handful at a time against a wall from a distance of thirty feet and they stick together into a statue of Elvis Presley. There’s exactly the same probability of that particular arrangement of marbles as any other arrangement.

    Does this require subjective knowledge on the part of the observer? You bet. So does the Copenhagen Interpretation of Quantum Mechanics. If needing an observer for probability waves to collapse into certain events is okay for the most widely accepted fundamental theory of physics I don’t see why needing an observer to find specification should be a problem for ID.

  107. Great_ape,

    I like your description of it. The rarity of the second target was addressed by Sewell. (This is something I had trouble with initially as well.) He says, again:

    There are so many simply describable results possible that it is tempting to think that all or most outcomes could be simply described in some way, but in fact, there are only about 2^30000 different 1000-word paragraphs, so the odds are about 2^999970000 to 1 that a given result will not be that highly ordered—so our surprise would be quite justified.

    When we limit the number of bits (words) in our description, we get a limited number of possible descriptions. We can then compare how many of the event patterns match one of those descriptions. It is this ratio that defines the “rarity” of the specification (second target.)

    Now sure we can begin to nitpick at this informal definition I just gave, because it is informal. This is why Dembski formalizes and quantifies the idea.
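The counting behind Sewell’s quoted figures checks out arithmetically. A small Python sketch; note that the outcome-space size of 2^1,000,000,000 is inferred here from his odds figure, not stated explicitly in the quote:

```python
# Sewell's quoted numbers: about 2**30_000 distinct 1000-word paragraphs
# (descriptions), and odds of 2**999_970_000 to 1 against a simply
# describable result. The outcome space of 2**1_000_000_000 equally likely
# results is inferred from those two figures (an assumption, not a quote).
description_bits = 30_000
outcome_bits = 1_000_000_000

# At most one outcome per description can match each short description, so
# the fraction of outcomes matching *some* description is bounded by
# 2**description_bits / 2**outcome_bits = 2**-(outcome_bits - description_bits).
odds_exponent = outcome_bits - description_bits
print(odds_exponent)  # 999970000, matching the quoted odds of 2^999970000 to 1
```

This is the ratio described just above: limited descriptions over a vastly larger space of raw outcomes.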

    Tribune7, now the entire scenario may (or may not) have CSI, since it seems to match a pattern (Prophet performs miracle in judgment), but the individual pieces like the thunderstorm in isolation do not necessarily.

  108. Hi All:

    You have made me interested enough to unlurk.

    First, on definition: precise descriptive statements that give necessary and sufficient conditions for an entity are quite hard to make; same, for definition by genus/difference [cf taxonomy in biology]. But, as the above shows, recognition of a pattern by pointing out examples and recognition of “family resemblance” is much easier.

    Indeed, one can argue that precising definitions are logically subsequent to that intuitive recognition/identification by example and/or counter-example. (We usually argue over definitions by asking whether or not they include all and only instances of the recognised entity, and exclude all and only non-instances.)

    For instance, kindly supply a generally accepted precise definition of LIFE that meets this criterion. (Of course, that is to show that the subject matter of the overarching discipline for biological ID is itself subject to the same issues of definition, so we should not be selectively hyperskeptical.)

    Be that as it may, we should distinguish the ability to identify/distinguish intuitively, from the specifications [!] by formal definitional wording, hopefully within Sewell’s 1,000 word limit. The classic distinction between: [1] “fffffffff . . .” [here, assumed non-contingent, and obviously not complex], [2] “nfgrduywornfgfkdyre . . .” [assumed contingent and complex but random] & [3] “this is a functionally specified, complex statement” should not be forgotten. (Cf discussion in TBO’s TMLO, 1984, Ch 8, etc.) Similarly, 500 coins neatly lined up, all heads or all tails or alternating h,t, etc are specified and complex, and function in the context of recognisable patterns.

    [I emphasise "functional" as well as specified, as I have found that this helps us eliminate a major set of issues: first, let the alleged information actually function in a communicative context (i.e. fit in with signal sources, encoders, transmitters, channels, receivers, decoders and sinks, physical and/or abstract), then discuss its specification and complexity. A rock slide or erosional feature is indeed complex, but is non-functional in communicative situations, absent someone’s analysis of it that derives, say, a bit pattern from observations of it. (Such patterns start with, say, retinal patches of light/dark and colour, and/or real-time frequency patterns in our cochlear sensing hairs.)]

    On the other side of the issue, going to an example the late great Sir Fred Hoyle used to discuss, it is logically and physically possible that a tornado passing through an aero industry junkyard could assemble a fully functioning 747, but that is so overwhelmingly improbable that it exhausts the probabilistic resources of the observed cosmos, say, 10^80 atoms and 13.7 BY. Oddly, Mr Dawkins cites the same example and notes that such functional outcomes are sparse indeed in the available configuration space for such a random shuffling, but then insists that the appearance of complex design can be deceiving, due to that bare possibility. The problem is that it is a routine principle of statistical mechanics that we look at the issue of microstates [here, shufflings of aero-parts, i.e. we are looking at giant “molecules”] compatible with a given macrostate [here, a functional aircraft], and infer relative likelihood from the proportions of the so-called statistical weights of the relevant macrostates.

    This is in fact the basis for pointing out why though it is logically and physically possible for the molecules of oxygen in a room to all rush to one end, without intelligent intervention etc, it is so maximally improbable that the relevant fluctuations on that scale are simply not observed.

    Similarly, TBO’s analysis in Ch 8 of TMLO turns on this same basic principle, captured in the Boltzmann expression s = k ln w, w being the statistical weight of the macrostate. Apply the concept of Brillouin on the link between entropy and information [there is still a school in physics that speaks of such, following Jaynes; cf Harry Robertson's Statistical Thermophysics], use the resulting measure of information in a biofunctional molecule and the relevant Gibbs free energy to deduce the equilibrium concentration in a generous pre-biotic soup, and we see that it is vanishingly small [10^-338 molar for a 101-monomer protein]. More modern arguments, such as Trevors and Abel’s, use probabilistic and related thinking and arrive at the same basic result.

    No wonder Honest Shapiro has recently re-stirred the OOL pot! (I think Meyer has a serious point on the similar challenge of getting to step-changes in complex biofunctional information through “lucky noise” in life forms, required by NDT to drive, say, the Cambrian life revolution, i.e. the challenge of body-plan level macroevolution, as his now famous paper argued.)

    That is, the functionally specified outcomes are so maximally improbable that they exhaust the available probabilistic resources, relative to an assumption of chance [and necessity] only. If we see a room in which all the oxygen molecules are at one end, we infer intelligent agency. If we see a jumbo jet, we do the same. If we see an intelligible post in a blog thread, we do the same. Why then do many, absent worldview-level question-begging [often labelled here, methodological naturalism], infer from the even more complex functionally specified information in the nanotechnology of life at the cellular level that it is explicable in terms of chance plus necessity, so that we can rule out agency, even ahead of time? Is this not grossly inconsistent?

    Then, having thought a bit about that underlying context, let us look at ongoing mathematical attempts to define what we observe in nature and recognise intuitively, e.g. as Mr Dembski has done, as models, not the reality that the models seek to capture.

    That way, we can be objective about the success/failure of the models [I view Mr Dembski's work as work in progress, with great promise and interesting potential applications], without losing sight of the underlying reality.

    (NB: I find that evolutionary materialism advocates are often guilty of using that confusion to dismiss the underlying intuitive point, and then gleefully pounce on debates over the matter to assert that the concept is “hopelessly confused” and can be brushed aside. But, in modern educational psychology, I long ago learned from the pioneer cognitivist Richard Skemp that a CONCEPT and its verbal expression are quite distinct. Mathematical descriptions are of course an extension of such verbal descriptions.)

    So, let us keep this issue in due proportion.

    Sorry on length.

    Cheers

    GEM of TKI

  109. great_ape:

    “I remain confused as to why the possession of high Shannon information by any given specified pattern would be considered rare. ”

    As I have already said, certainly the concept of specification is in itself deep and elusive. But it is difficult for me to understand why there is such great confusion about the meaning of Dembski’s approach to it, in other words about his general definition of CSI. It may not be perfect, it may not solve all problems, but at least, in my opinion, it is clear and operationally usable.
    Great_ape, let’s get back to the 500 coin flips example, and let’s see it in another way: we have a sequence of 0s and 1s, 500 bits long, which could have been generated randomly by a computer program giving equal probability (0.5) to 0 and 1, or which, alternatively, could have been written by someone. Let’s suppose that we observe only one sequence, and that it is composed of 500 ones. Then:
    1) Obviously, the sequence we observe has the same probability as any other sequence: 1 to 2^500, that is about 1 to 10^150, that is Dembski’s UPB. Therefore, the sequence is complex, but not necessarily rare.
    2) For once, let’s be a little Bayesian: if you were asked to choose between the two hypotheses, random computer sequence or human agent, what would you do? (suppose you have to bet real money…) The sequence is obviously specified, and therefore rare (only a small subset of all possible sequences is specified, however we define specification). That’s why nobody would have any doubt if betting real money.
    3) If you ask me why the sequence is specified, I would prefer to leave the answer to Dembski, or to Salvador. But I am sure that the answer exists. Probably there is more than one answer, but anyway the subset of specified sequences is always a very small subset of all possible sequences (that is, most sequences are really random, and you cannot specify them in any way). For me, the very simple explanation of this particular specification is, very simply, that all 500 values are 1. I think that means that this particular sequence is highly compressible, and my impression is that conscious intelligent beings can easily “recognize” highly compressible sequences, or at least some of them. But the simple fact, beyond all mathematical definitions, is: an intelligent being could very easily write that specific sequence; indeed, it would probably be one of the first sequences one would write if one’s intention were to write down something “recognizable” by another intelligent being (500 zeros are obviously another good candidate). The probability that the same sequence may be randomly generated is, on the contrary, so low that we can easily refuse it for all practical and scientific purposes (if we are dealing with real-world science, and not only with mathematics).
    4) So, it is that simple. The sequence is complex and rare. It is not rare because it is complex, but because it is specified. The complexity is necessary because, for instance, a sequence of 5 ones is certainly specified, but it is not complex enough to refuse the chance of it being generated randomly. So, complexity is necessary together with specification to practically refuse chance.
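The arithmetic in point 1, that 2^500 is about 10^150, Dembski’s universal probability bound, is easy to confirm with Python’s arbitrary-precision integers (an editor’s check, not part of the original comment):

```python
# Number of distinct 500-bit sequences.
sequences = 2 ** 500

# 2**500 has 151 decimal digits, i.e. it is on the order of 10**150,
# which is the figure cited as Dembski's universal probability bound (UPB).
print(len(str(sequences)))    # 151
print(sequences > 10 ** 150)  # True
```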

  110. Tribune7, now the entire scenario may (or may not) have CSI, since it seems to match a pattern (Prophet performs miracle in judgment),

    And is extraordinarily improbable.

    but the individual pieces like the thunderstorm in isolation do not necessarily.

    That is true.

  111. Jerry wrote:

    PaV,

    What does “right-ordering” mean? Is a specific outcropping of a rock formation, highly improbable. I think so and it is also complex. So I guess right ordering is what makes the difference but I do not know what you mean by the term.

    “Right-ordering” means that the relationship among the various parts form the correct pattern.

    By way of illustration, Dembski’s example of the series of prime numbers (in binary code) emerging from what looks like a random string of 1′s and 0′s represents ‘right-ordering’.

  112. How do we know that DNA is CSI? As far as we know it is a random pattern of nucleotide sequences? It is only because it specifies something that we know it is unusual. Is there anything about the sequence itself disregarding the implications of what this sequence specifies that makes it CSI.

    Coin flips with regular outcomes, random numbers, painted figures, arrows in bulls eyes all are conforming to a pattern so it is the pattern that determines whether it is CSI or not. But what pattern is DNA conforming to. It is just a sequence of 4 molecules. What is it about any organism’s DNA that makes it different from a random arrangement of the nucleotides and makes it CSI?

    Certain sequences are repetitive which is certainly not random but this could be explained by natural forces, which create many self-ordering phenomena.

    So what is CSI about DNA itself? Or why is it specified? Or is it only called specified because of its functional outcomes? If we randomly replace 10,000 nucleotides in a genome, would anyone know the difference if they did not have anything else to compare it to?

    But then the DNA would probably be useless. So it seems that its unusualness is not in any inherent pattern in the sequences itself such as are in coin flips giving only heads or Mt. Rushmore but in the effects of the sequences.

  113. jerry:

    “How do we know that DNA is CSI?”

    That’s a very good question. In my opinion, there are many different answers, each one with a definite value.
    I have already written in the above discussion that, in my opinion, specification can be defined in different ways, provided that the information which is specified be characterized by something which makes it “recognizable” to a conscious observer.
    In DNA, there are various levels which allow us to recognize specification; that is to say, it is “multi-specified”.
    a) DNA, at least for the protein-coding part (about 1.5-2% of the total), is a well understood language. A language is one of the foremost indications of specification. In that sense, DNA may well be compared to long phrases: an ordered list of triplets with punctuation signs and various other functional “words”, together with the fundamental words which codify the 20 amino acids. The DNA code is almost universal in nature (but not completely), and it is redundant (different triplets codify the same amino acid). The DNA code is read, with perfect order, by another linguistic organelle, the ribosome.
    Even the choice of using sequences of three nucleotides to code an amino acid is absolutely intelligent and mathematically specified: indeed, to code for 20 amino acids, a two-nucleotide code would not be enough (4^2 combinations, that is 16), and so a three-nucleotide code is the smallest usable alphabet (4^3 combinations, that is 64, allowing for redundancy, which is probably a specifically useful tool, and for functional “signs”).
    Protein-coding DNA is a specified language also for another reason: its products are highly specific and functional proteins, about 20,000 of them, each with a mean complexity of hundreds of amino acids, therefore each of them well above Dembski’s UPB.
    But there is more. Beyond protein coding, DNA has a lot of further functions, many of them still poorly understood. Its spatial structure, with repeated foldings, is incredibly complex and elegant. In the 98% of non-coding DNA are certainly hidden precious regulatory functions, but nobody yet understands how, given that a great part of those sequences are apparently repetitive and useless, if judged by our present understanding. Only recently has it been recognized that specific sequences, intelligently interspersed throughout the whole DNA molecule (coding and non-coding parts), have a key role in determining the 3D structure of DNA and therefore also its transcription status. And so on, and so on…
    Specified? DNA is ultra-specified; it is in itself a living world with incredibly stratified levels of meaning and function, and practically every day new wonders are discovered about it.
    Just to get an idea, please look at the animations recently posted on this blog, and ask yourself if what you see seems specified or not…
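The codon-length arithmetic in point (a), that two nucleotides per “word” are too few for 20 amino acids while three suffice with room left over, reduces to a one-line check (a sketch added for illustration):

```python
AMINO_ACIDS = 20   # the standard amino-acid alphabet
BASES = 4          # A, C, G, T

two_letter_words = BASES ** 2    # 16: not enough distinct codons
three_letter_words = BASES ** 3  # 64: enough, with room to spare

print(two_letter_words < AMINO_ACIDS <= three_letter_words)  # True
print(three_letter_words - AMINO_ACIDS)  # 44 codons left over for redundancy and punctuation
```

(Of those 44 surplus codons, the actual genetic code spends 3 on stop signals and the rest on redundancy.)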

  114. Jerry Re: 112

    Is a long string of encrypted text CSI?

    (If decoded, it is the full text of Merriam-Webster’s Dictionary.)

    Answering this question will answer your own. The explanation for both is the same.

  115. Actually, it is even simpler than that. Can any string of text be CSI?

    DNA is just text, with nucleotides as the letters and triplets as the words, AA chains equivalent to sentences and protein complexes as paragraphs.

    Words by themselves are useless (functionless) without a translation mechanism, i.e. background knowledge. If we were as familiar with DNA as we are with English letters and words, we wouldn’t ask “What is CSI about DNA itself?”

    In turn I can ask: “What is CSI about any string of text itself?” (DNA just being an example of a string in a different language.)

    Words and sentences describe things, and it is in this description that their function resides.

  116. kairosfocus, great post (108)

  117. gpuccio,

    I understand most everything DNA can do or is now speculated to do. But the discussion is over a definition of CSI and it seems that something is CSI not by what it specifies but by some pattern within itself. It is CSI if a recognizable pattern exists within it. So pointing to its effects is not appropriate.

    Currently, DNA is only recognized as something special because of its effects, which are amazing. But what is it about the DNA itself that would lead someone to say it is specified? Even repeating sequences could be the result of some random function of natural forces.

    I assumed the “S” in CSI was supposed to take care that it could not have arisen by natural permutations and therefore had to have an intelligent input.

    So what is it about the sequences themselves that says they cannot have been generated by random physical forces? We all point to the remarkable things it specifies, and I agree it is conclusive. But what evidence in DNA itself says it was specified?

    Until then, Darwinists will say the DNA arose by random events which we do not understand yet. They will say it arose in bits and pieces. The popular current term is “emerged.” They will, however, say it was remarkable luck.

  118. jerry:

    I don’t agree with your concept that a sequence is specified only if it contains a pattern “in itself”. That is one possibility. But a sequence can be specified also, and even more, because it contains information about something external to the sequence itself, information which cannot be there by chance. That is the case with all languages.
    To give another, non-linguistic example, you can have a long sequence of 0s and 1s which is apparently random if you look at it as digits, but if you know how to decode it, passing it to a computer graphics program, you can find that it is a digital picture of a map of the USA, with all the pertinent details. Would you say that the sequence is not specified? It certainly is. It cannot come out as a random succession of bits where each bit has the same probability of being 0 or 1 (or rather, it could mathematically, but it really can’t in the real world). So, unless you have a plausible explanation of how that image came into being, you are perfectly right to assume that an intelligent agent, in some way, produced it.
    So, to say that specification must be restricted to the case of an inner pattern of the sequence is not correct: that is only one possibility. And if you read carefully all the ID literature, and especially Dembski (and yes, I think you should do that: specification is a difficult subject, and you can’t expect that it can be “stereotyped” in a few words covering all the aspects of the problem), you will find that the most common exemplification of specification is relative to this second meaning.
    Moreover, I would say that the genetic code, the fact that the nucleotides are arranged according to a three-letter code corresponding to the amino acids, is in itself a pattern of the sequence: indeed, although you need to know the code to decipher the meaning (and don’t forget that we do know the code), we can often understand that we are observing a language even if we don’t understand the code and the content: there are linguistic instruments to recognize that, and that means that a language sequence has, anyway, some intrinsic recognizable structure (recurrences, some order, and so on). Indeed, as you may have read, scientists from the field of informatics are now studying the known sequences of DNA, especially the non-coding part, to understand how it is structured and to derive its meaning from the structure. And some results have already been obtained (see the recent paper from IBM scientists about a “second code” in DNA which would work by determining the 3D structure of the molecule at critical points, through non-random occurrences of specific sequences even in the middle of protein-coding genes: an informational structure overlapped with another informational structure!).
    So, it seems to me that DNA is at least twice specified: by internal, linguistic structures recognizable in the sequence itself, and even more for the information, engineering and regulation content clearly written in the sequence by means of a definite language.

  119. Jerry — As far as we know it is a random pattern of nucleotide sequences?

    Jerry, I don’t think even Dawkins and PZ believe that.

  120. tribune7,

    I don’t think anyone believes DNA is a random pattern of nucleotide sequences given what they can do. But if you were given the DNA sequences as sequences of the numbers 1, 2, 3 and 4 and not knowing where they came from, would you know they were not produced by random forces? I have no idea of the answer nor do I know if anyone actually knows the answer.

    When you are writing in a hurry, you do not always say things precisely.

  121. gpuccio,

    You support my conclusion that no one knows an easily understandable definition of CSI. It may be an important concept, but until it can be defined so that others can understand what it means, it may be useless in convincing anyone that life is designed.

  122. Jerry,

    Your objection applies to plain English as well:

    But if you were given the [english sentences] as sequences of the numbers 1, 2, 3,…, 26 and not knowing where they came from, would you know they were not produced by random forces?

    Good question, would you?
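Atom’s question has a concrete answer: even with its letters renamed 1..26, English betrays itself through skewed symbol frequencies, which pull its per-symbol entropy well below that of a uniform random source. A minimal Python sketch (the sample text and the comparison are illustrative choices, not from the thread):

```python
import math
import random
from collections import Counter

def entropy_bits(seq):
    """Shannon entropy in bits/symbol of the observed symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# English letters renamed to 1..26, as in the comment above.
text = "it was the best of times it was the worst of times " * 20
english = [ord(ch) - 96 for ch in text if ch.isalpha()]

random.seed(0)  # fixed seed for reproducibility
uniform = [random.randint(1, 26) for _ in range(len(english))]

# A uniform random source sits near log2(26) ~ 4.70 bits/symbol; English
# letter frequencies are skewed, so its empirical entropy is much lower.
print(entropy_bits(english) < entropy_bits(uniform))  # True
```

So yes: given only the number sequence, a frequency test already distinguishes the English-derived sequence from a random one, with no knowledge of where either came from.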

  123. “DNA is just text, with nucleotides as the letters and triplets as the words, AA chains equivalent to sentences and protein complexes as paragraphs.” ==Atom

    I’d just like to point out that it wasn’t always the case that the linguistic/engineering analogy of DNA was evident. A lot of work went into first establishing that these nucleic acids were even the vehicle of information within cells and only then into discerning how the information is read and actualized. In many respects, the latter is still being worked out. My point is that it didn’t jump out initially as an obvious specified pattern. The considerable effort that went into working all that out attests to the fact that, although analogies can retrospectively be drawn to linguistics and engineering structures, the details of implementation were quite alien to anything we’d have thunk up ourselves. What strikes me about those cell movies you guys posted is not only the sophistication, which one cannot help but appreciate, but also a certain “otherness” that seems rather alien to our human mode of intelligence and way of doing things.

    So I guess my question, though a bit esoteric, is this: If you have to put tremendous time/energy/effort into discovering the specified nature of a found pattern, is it really specified with regard to one of *your* established patterns? Or did you just work hard enough until you explained or “translated” it into the best structures you had at your disposal? Because the more I think about it, it seems that any complex phenomenon that you can offer an *explanation* for has, at some level, been translated into your species’ subjective repertoire of existing patterns and, as such, could be labeled a “specified” pattern. Otherwise it would be unintelligible. If this is indeed the case, then the concept of CSI would suggest that we could not offer a satisfactory explanation for *any* complex phenomenon that was not intelligently designed.

  124. “It is not rare because it is complex, but because it is specified.” ==gpuccio

    Then we are in agreement as far as this goes. I was having issues with the idea that the complexity was supposed to add significantly more rarity to the observed pattern *above and beyond* the “specified” attribute. The way certain things were phrased led me to that erroneous impression. Really, it should be paraphrased: “complex phenomena, not so hard to find; complex phenomena that are specified, exceptionally rare finds.” With that elementary-level confusion hopefully behind me, I must ponder whether one can objectively treat the admittedly partially subjective notion of “specificity” without tangling oneself into philosophical knots.

  125. But if you were given the DNA sequences as sequences of the numbers 1, 2, 3 and 4 and not knowing where they came from, would you know they were not produced by random forces?

    I think you may be starting to catch on just a little.

    Now, suppose that sequence of numbers spiraled around another sequence of those same numbers. And suppose the numbers on the one spiral always matched only certain specific numbers on the other spiral. And suppose those spirals always ran in the same direction. And suppose the sequences of numbers were the same except for running in opposite directions.

    And suppose we found there was another device that would scan these sets of numbers and send them to another device which would make working machines.

    Would you still believe that it was possible for all this to happen at random?

  126. tribune 7:

    Re 116 on 108. Thanks, welcome.

    great_ape:

    Re 123 – 4: You have aptly shown the significance of FUNCTIONALLY specified complex information. (And, BTW, redundancy in analysis and communication is often an asset, so even if there is an overlap between specificity and complexity, that may well be functional. For instance, classically, in Newtonian Dynamics, the First Law, strictly is a special case of the second: F = 0 implies that a = 0, where F = ma. But, understanding explicitly that when there is no net force there will be no net acceleration so bodies tend to remain at rest or to move at steady speed in a constant direction absent such forces, is vital. A good example is in understanding why circular motion is accelerated.)

    My “algorithm” on inference to agency:

    1] First, show that there is functionality in a context that entails specification and information, then

    2] Address contingency. (Does the case show that contingency is at work? If so then chance or agency dominates — once necessary and sufficient deterministic conditions are present, the result will be present directly, at a rate governed by the dynamics of natural regularities at work: fuel + air + heat –> fire.)

    3] Finally, address complexity: If the chance option would credibly exhaust the available probabilistic resources, then agency is a better explanation.

    At that point you are entitled to state, on an inference to best explanation basis, what is the best answer: chance, necessity, or agency, or what blend of the three major causal forces. We routinely do this intuitively in many contexts, and through Fisherian or similar inference explicitly in statistics and science in many situations. (So the issue of selective hyperskepticism when key worldview level assumptions and related outlooks, agendas and attitudes are at stake becomes an issue. Indeed, I think this best explains the hostility we so often see and which is so often adverted to in this blog’s threads.)

    As an instance of “blending,” in my linked:

    * unconstrained heavy objects tend to fall under that NR we call gravity.

    * if the object is a die, the up-face is essentially chosen at random after tumbling, from the set {1,2,3,4,5,6}, thanks to the kinetic energy, centre of gravity, and eight corners plus twelve edges leading to complex rotations and frictional losses that eventually damp out the motion.

    * If the die is tossed as a part of a game, then its outcomes are as much a product of agency as of chance and necessity.

    Trust that helps.

    Cheers

    GEM of TKI

  127. tribune 7:

    Re 125: brilliant!

    (Can I borrow and use it?)

    GEM of TKI

  128. Kairosfocus,

    Re 127; Thank you and yes! :-)

  129. “Would you still believe that it was possible for all this to happen at random?” ==tribune7

    If one wanted to be formal, I think Jerry would still be warranted in believing it was *possible* for all this to happen at random. ;)

  130. If one wanted to be formal, I think Jerry would still be warranted in believing it was *possible* for all this to happen at random.

    Fair enough. BUT what would you be willing to bet that it did?

  131. To qualify 129: I can’t see how any amount of “exhaustion of probabilistic resources” necessarily/logically excludes any individual outcome that exists in the sample space. However vanishingly small the outcome’s probability is, it remains possible; if it couldn’t happen in your universe, it wouldn’t be in the sample space at all. If an outcome simply can’t happen, then you’re no longer dealing with statistics or probability.

  132. “Fair enough. BUT what would you be willing to bet that it did?” ==tribune7

    I might be willing to bet. A very tiny sum of money. Consider the odds ratio: in the ridiculously unlikely event that the outcome happened, I’d make a killing. ;)

  133. I might be willing to bet. A very tiny sum of money.

    I hate to even waste a penny :-)

  134. great_ape: “If this is indeed the case, then the concept of CSI would suggest that we could not offer a satisfactory explanation for *any* complex phenomenon that was not intelligently designed. “

    I think your thought, here, corresponds to what I wrote in post# 56:

    “With this said, it now becomes clear that ‘rules’ themselves are the direct product of ‘intelligence’, implying that ‘significant information’ [a term I use to distinguish it from mathematical definitions of information] can only be produced by ‘intelligence’. This, in turn, has direct application to the so-called “Anthropic Principle”, where, within the ‘infinite’ possibilities for each of the physical constants of the world, the ones we have are the only ones operative. IOW, the electrical charge of the electron is not a ‘law’; it is more a ‘rule’ of the universe, with the implication that some ‘Intelligence’ has chosen it.”

    Bottom line: ‘rules’ are the functioning of intelligence. God’s ‘rules’ we know as ‘laws of nature’. Then there are our ‘rules’, as in language. It’s interesting to note in this context that the account in Genesis has God bringing all the animals that existed to Adam so that he could give them a name, the implication being that language is man producing rules rather than God.

    So, getting to the question of DNA, the only question to ask is: does it follow any rules? If the answer is, yes, then it has been specified (in this case, by the Designer).

  135. Hi again:

    Tribune 7:

    Thanks. I will use it, maybe with slight mods for the correlations, say we see twirl 2 has a 1-1 match: 1-3, 2-4, so that if 1 is in twirl A or B, the corresponding digit is in the other twirl.

    [Comment: Changing terminology seems to help clarify matters when we have a lot at stake that tends to cloud the issue under more familiar terms. Here, it seems that there is a lot at stake in recognising that DNA is a four-state, digitally coded string -- so n elements imply a configuration space of 4^n, reaching 10^150 at about 250 elements -- that embeds a bill of materials, with sequencing and procedural elements, such as start/stop. I have recently actually been called a “liar” for trying to point that out! The objector majored on the subsidiary point that the start codon in DNA codes for a protein monomer, so how dare I put them together with stop codons under the same rubric, demanding an explanation of how that comes to be anything but unintelligent. I have just got back to him that sometimes stop codons also code for oddball monomers, and also pointed out earlier that once we see a complex code at work, that puts agency seriously on the table as what in our experience is the most likely explanation for such a phenomenon. But, with a lot riding on the line, that can be hard to see, I think.]
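    The configuration-space arithmetic in that comment can be checked in a few lines; a minimal Python sketch, assuming only the four-state (A, C, G, T) alphabet described above:

```python
import math

def configs_exponent(n: int) -> float:
    """Return x such that 4**n == 10**x for an n-element, 4-state string."""
    return n * math.log10(4)

# The space of 4-state strings first passes 10^150 at about 250 elements:
# 4^249 is roughly 10^149.9, while 4^250 is roughly 10^150.5.
```

    So 250 elements is indeed about where the configuration space crosses the 10^150 figure quoted above.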

    Great_Ape:

    Re 129: I can’t see how any amount of “exhaustion of probabilistic resources” necessarily/logically excludes any individual outcome that exists in the sample space

    First, by sharp contrast with recent experience, I note how the tone here is ever so refreshing and serious. You have made no small contribution to that. Kudos to you.

    Notice what tribune 7 keeps referring to: on what would you be willing to bet? [When the odds are sufficiently against you, and the stakes are higher than you are willing to, or can afford to, lose . . . there lurks a prudence argument here, a la Pascal’s Wager . . .]

    But there is also a consistency argument, as in my “selective hyperskepticism” case. That is, to function in this world, on a common sense basis or even in scientific contexts, we routinely make inferences to best explanation across the triad necessity/ chance/ agency, and we in fact bet a lot on this, even in the face of the possibility of errors in such inferences.

    So, if/when we suddenly back off into skepticism (i.e., impose an unreasonable degree of “proof” requirement on matters of empirical fact and explanation), when there is similar probabilistic-resource exhaustion in, say, the inference to agency for the text of this message [e.g. I could be a Turing Machine mimicking a human . . . or this could strictly be lucky noise that by random chance, however improbably, happens to come out as functional text in this thread], then some of us will infer that the best explanation may well be worldview-level question begging.

    A further factor in this, is that we can see that there is in fact a well-known attempt to materialistically redefine “science” in the teeth of demarcation issues and the history of its founding and praxis in key quarters even today, through so-called “methodological naturalism.”

    So, applying the criterion of functionality, then looking at contingency and the issue of exhaustion of probabilistic resources to decide if agency is a best explanation, looks a lot more reasonable from where I sit. But then, I can easily enough see that if one is in doubt on the possibility of agency in the case in question, that may shift one’s estimation of the probabilities. (So, we come right back to the issue as to whether there are reasonably credible signs of agency at work, which can be empirically detected, and whether we are willing to trust the results of such inference when we have a lot on the line, scientifically, institutionally, educationally, policywise, and even at worldviews level. Thus the heat that too often overwhelms the light.)

    Hey, it’s time for me to try to get back to sleep. [Or, is that a lazy Turing Machine cleverly programmed, or are the lucky noise bits getting real smart for what is just designoid blind watchmaker stuff . . .]

    Cheerio

    GEM of TKI

  136. Errata: ….It’s interesting to note in this context that the account in Genesis has God bringing all the animals….

  137. kairosfocus, I am glad you have stopped lurking.

    Welcome to UD.

  138. tribune7,

    “I think you may be starting to catch on just a little.”

    This is an ignorant remark and indicates you have no idea what the debate is all about. Is this an attempted put down?

    The discussion has been about the concept of CSI and whether there is a useful definition of it. I was never denying that DNA had an intelligent origin or that it was amazingly designed.

    The term CSI is thrown about here like it means something. Well if it means something then define it and use it to show how it leads to something.

    I have seen debates about the “Theory of Intelligent Design” which ended in a morass because no one could understand the terms being discussed. The discipline of Intelligent Design is under constant attack because it has no content, or haven’t you noticed that?

    If you want to point to the obvious complexity of the process and the improbability of the results I have no problem. The existence of the various proteins themselves is enough proof for me that the whole thing could not have happened by chance.

    Go through all my comments here and see if there is one that questions the intelligent origin of DNA or its ability to prescribe life’s processes. What I question and still say is that no one has shown that there is an easily understandable definition of CSI.

    My example of the numbers was only to ask if there is a pattern in the numbers or nucleotide sequences themselves that is comparable to the coin tosses, the Rushmore example, or anything else that is used to illustrate CSI. Your reply in 125 is an example of preaching to the choir, and that is not what the discussion is about.

    I still rest my case that no one has presented an understandable definition of CSI, and until then the term should not be invoked by anyone to demonstrate the intelligent origin of anything. Use all the examples one wants to. They seem obvious to me and many others, but I doubt they convert one Darwinist, who will also use the confusion over CSI to tie up the arguments with the inadequacy of Intelligent Design as a formal discipline to explain anything.

  139. Jerry, the problem is you are not trying to understand the explanations and definitions.
    I give you a link to a paper — which, btw, is the simplified version of Dembski’s peer reviewed work, The Design Inference — and you complain it’s too long.
    What more do you want me to do? Would you demand a 100 word explanation for the Theory of Relativity? Quantum Mechanics?
    You have to put an effort into this.

  140. I ask anyone again. What is it about DNA that makes it CSI?

    Some have said it must lie in the DNA itself, some have said it has to do with what DNA specifies. Some have said both.

    Well, what does the definition of CSI, whatever it is, say? I shouldn’t have to read a paper, heavily laden with mathematical symbols and specialized meanings of words, to get an answer. Does it point to the phenomenon itself or to the fact that it specifies something else? These are simple questions.

    The example of the coin tosses points to nothing but the phenomenon itself and the improbability of the event. A series of random numbers is a similar example. By the way, if we found a sequence of a billion “01” combinations in a row in the natural world, no one would say it had to be designed; we would look for some force that alternates automatically.

    Mt. Rushmore is a hybrid because we know there is a relationship to something external, even if we didn’t know who the faces were meant to be. We recognize human faces. But is there anything in the shapes themselves that is CSI? If one points to the obvious curves and smoothness of the outcroppings, is that enough? Suppose a so-called modern artist decided to sculpt an expression of his own into the mountainside. Would we be able to say it was CSI when it had no relationship to anything in the known world except his own mental image? Could someone come along, show numerous other smooth rock formations carved out by water and wind, and ask what the difference is?

    So to me the term “specify,” as it relates to DNA, rests on the observation that it specifies not only one but thousands of things that could not have arisen by chance. So I am back to bFast’s interpretation, which I think is the best so far, though it was pointed out that it is not what Dembski means. Maybe Bill Dembski could comment.

    It shouldn’t require one to read 8000 words of specialized terms and symbols, however well they are done, to get to an answer. It may require a full understanding of the 8000 words, or of the book, to precisely apply the concept. I am not getting the confidence that anyone here is applying the concept correctly.

  141. CSI is defined as being greater than 500 bits of information.

    Stephen C. Meyer tells us the following:

    Rather, the coding regions of DNA function in much the same way as a software program or machine code, directing operations within a complex material system via highly complex yet specified sequences of characters. As Richard Dawkins has noted, “The machine code of the genes is uncannily computer-like.” Or as software developer Bill Gates noted, “DNA is like a computer program, but far, far more advanced than any software we’ve created.” Just as the specific arrangement of two symbols (0 and 1) in a software program can perform a function within a machine environment, so, too, can the precise sequencing of the four nucleotide bases in DNA perform a function within the cell.

    Specified, because not any ole sequence will do the trick. Complex, because the DNA does many things, and the information is what coordinates all of those different things.

  142. Jerry

    I will try again.

    The dictionary definition for specified is something stated explicitly.

    Let’s go back to the archer analogy you didn’t like.

    The archer shoots at a wall. He hits it.

    The wall is blank. How do we know he hit the spot for which he was aiming?

    It’s not specified.

    Now let’s say there is a bullseye on the wall. He shoots and hits it. We can make a reasonable assumption the shot was specified.

    Now let’s say there are strands of DNA that make up genes that contain a code for blue eyes. This code is transcribed by RNA to “machines” that cause the cells of the iris to have a particular melanin content — hence blue eyes.

    Pretty specific.

  143. So the challenge is a thousand words or less? I’ll take a crack at that.

    The information contained within DNA is not inherent to the properties of the chemicals, just as the information in writing on a paper is not inherent to the ink. DNA is a conveyance for information.

    In order to be considered CSI the information must be

    (a) Complex
    (b) Specified to an independent pattern which is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance.
    (c) Contains more than 500 informational bits

    These 500 informational bits are derived from Dembski’s Universal Probability Bound of 10^-150 using:

    Information(Event) = -log2(Probability(Event)), or I(E) = -log2 P(E)

    This UPB is based upon the maximum possible physical reactions in the universe (# of particles, duration of the universe, etc.). A probability event that exceeds the UPB is considered by statisticians to be one in which chance is precluded. The mathematician Borel actually suggested a UPB of 10^-50, so Dembski is even “nicer” to Darwinists, since his UPB of 10^-150 gives them more wiggle room.
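    The conversion from the UPB to bits can be sanity-checked in a couple of lines; a minimal Python sketch, assuming only the I(E) = -log2 P(E) formula above:

```python
import math

UPB = 1e-150  # Dembski's Universal Probability Bound

def information_bits(p: float) -> float:
    """Information content I(E) = -log2 P(E), in bits."""
    return -math.log2(p)

bits = information_bits(UPB)  # about 498.3 bits, conventionally rounded up to 500
```

    So the 500-bit threshold is simply the bit-equivalent of the 10^-150 bound, rounded up.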

    This definition is also designed to preclude false positives, where a design inference is declared positive when it should not be. As such, geometric primitives such as the curves in modern art may be rejected, since it’s known that non-intelligent events may account for these features.

    In the event that a non-intelligent mechanism is shown to be capable of producing CSI there are two possibilities:

    (a) There are a limited set of special conditions under which this may occur.
    (b) CSI cannot be used as an indicator for detecting intelligence.

    As of yet this has not been shown to be the case so in the meantime CSI is the best scientific explanation for inferring design from an intelligence.

    286 words. I think that covers the basics although I don’t even get into an example of CSI being calculated.

  144. RE Great_ape 124:

    Because the more I think about it, it seems that any complex phenomenon that you can offer an *explanation* for, at some level, has been translated into your species’ subjective repertoire of existing patterns and, as such, could be labeled as a “specified” pattern.

    Good point. But now we ask, “How specific is the specification?” Just because I can fit a complex phenomenon into a pattern doesn’t mean I have nailed it down with my specification: maybe millions of other events also fit that specification, or maybe this is the only one.

    Now we are talking about how small the second target is. Does it properly constrain, making the chance inclusion in this set extremely unlikely?

  145. tribune7,

    I understand perfectly the example of the eyes. It is what I have been saying all along since bFast in post #80 brought it up. Read my previous comments which are approaching 8000 words.

    But we have to consider what the reply of the Darwinists will be, and here I am not too sanguine that the definition will convince any of them. They will agree 100% with the statement that DNA is complex, contains information, and specifies something else, and then say “So what?” They will say it arose by natural forces through an emergent process that took hundreds of millions of years of trial and error, and then admit that it was remarkably lucky. It is standard Dawkins, origin-of-life researchers, etc. Read the Shapiro article discussed on a recent thread. He discusses how Darwinian processes could work on simple chemical reactions.

    As far as the archer example is concerned, the Darwinist will say that the archer hit one target when there were thousands of targets on the wall and the archer just hit one of them. They may say there were so many targets on the wall that it would be improbable that the archer did not hit one of them.

    The standard argument is that, yes, the specifically indicated sequence of nucleotides prescribes a form of life, but there are maybe millions or billions more possible combinations that could have also produced other types of life forms. So the fact that chance led to the picking of this one combination does not preclude that there may be zillions of other possible combinations that would also lead to a self-replicating, energy-using, self-contained, complex set of chemical reactions, etc. So therefore the argument from low probabilities is not appropriate. In other words, there are a large number of possible targets on the wall for the archer to hit.

    That is what ID is fighting. Maybe someone else can express it better, but we cannot use the specific combination that exists as a rationale for design just because this combination is so improbable. The reply by the Darwinists is that there could be zillions of other apparently designed combinations that comprise the universe of apparently designed entities, and sheer chance just hit on a particular one of them. If you argue that this is the only possible combination, the probabilities are so astronomical that no one could say it happened by chance. But Darwinists will say the urn is full of a large number of potential life-forming processes, all of very low probability, and the one we see is just the one that emerged. Somebody eventually has to win the Powerball lottery, so just like the lottery, one of these possible combinations of potential life luckily emerged. This is not something I agree with, but this is the counter argument.

    I was given to understand that CSI was an answer to that objection. Maybe it is, but no one is expressing it in simple terms that would convince anyone, and Dembski’s book has not yet done so with more than a small part of the scientific community.

  146. By the way, the word count in this thread is approximately 30,000 and counting.

  147. DNA is complex contains information and specifies something else

    Name one thing that has complex, specific information that is not known to be designed.

  148. Name one thing that has complex, specific information that is not known to be designed.

    DNA!

    Haha I slay me. :D

  149. “Now we are talking about how small the second target is. Does it properly constrain, making the chance inclusion in this set extremely unlikely?” ==Atom

    Aye, there’s the rub. For while specified patterns are rare vis-a-vis a given grammar, some grammars are more versatile and more encompassing than others and, perhaps more importantly, some appear to grow. If a pygmy tribe is able to integrate the concept of a jet plane into its language system, does this, alone, warrant the belief that the jet plane was designed? What if we had cobbled a bunch of metallic junk randomly together and plopped it down at the center of their village? I suspect they would also come up with a narrative/descriptive structure for it. One that applied to just that peculiar amalgamation of junk. It is complex, certainly. But is it now specified b/c the tribe has a narrative for it?

  150. Great_ape,

    Can you reproduce the amalgamation fully from the description given? (cf. my comment 102)

    This is why it is much easier to describe examples dealing with bit strings, coin flips, and cards than with vastly more complex structures. With bit strings we can apply it perfectly, showing our concept sound. When we get to the hyper-complex (a jet), we have trouble computing the problem. But that doesn’t mean the maths are bad; it only means our skills at applying them are.

  151. jerry:

    this discussion has been very interesting and stimulating, but sometimes it seems that we have difficulty sticking to some fundamental points.
    Regarding CSI, I think that Patrick’s summary (#143) is really very good and contains all the basics of Dembski’s thought.
    To your repeated question whether specification has to be inherent to the sequence, or depends on outer patterns and meanings, I would again answer: both things specify (I may be interpreting Dembski’s thought wrongly, I don’t know. I would like to know the opinion of somebody else). To me, specification need not be defined once and forever; understanding specification is one of the great challenges of ID thought. For now, I am more than satisfied with Dembski’s definitions, well summed up in Patrick’s post. My feeling is that any filter which allows us to recognize a small subset of complex patterns, and to differentiate it from the many random ones, is specification.
    Another point: the presence of CSI implies the inference of an agent only if no known natural law can explain that pattern. In other words, we have to be able to exclude necessity, to the best of our knowledge. CSI only excludes chance. So, if there is a known force causing 1000 zeros, or a 01 sequence, that’s an OK explanation. Otherwise, invoking probability is of no help if the sequence is complex and specified.
    I have no hope of convincing Darwinists. Most of them don’t even want to hear or to honestly think. It is a very sad thing to say of so many intelligent people, but unfortunately it’s the truth. But those who are open, and who want to really understand, will convince themselves. We have only to state the truth, and time and the progress of knowledge will do the rest.

  152. Patrick,

    Thank you for your definition. Could you clear up a couple things about part of it.

    “b) Specified to an independent pattern which is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance.”

    We have been given a couple examples, DNA and a coin toss scenario of 500 straight heads. If you have time could you discuss each one in terms of your definition especially what you mean by “Specified to an independent pattern which is easily described”

    I can describe the coin toss example fairly easily but I am not sure how I would describe DNA as a pattern.
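    The coin-toss half of that question, at least, has concrete arithmetic behind it; a short Python sketch, assuming a fair coin and the I(E) = -log2 P(E) convention from 143:

```python
import math

# The independent pattern is easily described: "all 500 flips are heads".
n_flips = 500
p_all_heads = 0.5 ** n_flips   # = 2^-500, roughly 3e-151, below the 1e-150 UPB

# Information content: exactly 500 bits, right at the CSI threshold.
bits = -math.log2(p_all_heads)
```

    The coin example is easy precisely because its specification (“all heads”) is one short sentence; stating an equally crisp independent pattern for DNA is the hard part of the question.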

  153. “Can you reproduce the amalgamation fully from the description given? (cf. my comment 102)” ==Atom

    In essence, yes. The precise materials, for instance, might be different, but the major shapes and sizes would be recapitulated. Aluminum in place of iron, for instance. But then the materials used for organic life were never specified by our human engineering concepts either. So “full reproduction” in the strict sense cannot be a requirement for specification. The problem with bit strings and coin flips, etc., is that they sidestep the analogical/substitutive component argument that would likely have to be made if ID argues life fits an existing specification of ours.

  154. “That is, to function in this world, on a common sense basis or even in scientific contexts, we routinely make inferences to best explanation across the triad necessity/ chance/ agency, and we in fact bet a lot on this, even in the face of the possibility of errors in such inferences.” ==kairosfocus

    I agree. I wish only to emphasize the difference that nevertheless remains between “what is possible” and “what is not possible.” Often, in the context of exceedingly small probability, this distinction is forgotten. And along with this distinction comes the philosophical “problem of inference,” which, though we appear to deal with it effectively from a pragmatic standpoint in our day-to-day lives, has never been solved formally. We also have, with organic life on earth, the “sample of one” problem, which is a whole other can of worms.
    We don’t know what can statistically be inferred from a (presumed) single event. A design inference process employing CSI, as it pertains to organic life (the big picture, here, after all), has to ultimately deal with both of these thorny issues formally (among many others). In reality, the task is far more daunting than bits-and-coins examples might lead some to believe.

  155. “What is that problem? Dembski calls it CSI, but in my personal, not technical language, I would call it the problem of meaning.
    Obviously, we have a lot of cultural discussions about meaning in philosophy, in semantics, and so on, but who has tried to define meaning in a scientific, mathematical way?” ==gpuccio

    I had intended to comment on this previously. I suspect your question was rhetorical, but it is my understanding that many bold souls have tried to formalize meaning and failed in frustration. I think academics who seriously look into Dembski’s work recognize that he is indeed trying to tackle the question of meaning or “meaningful information” formally. And hence the source of much skepticism, particularly when the definitions (e.g. CSI) in his approach are so difficult to articulate. The center of it all seems to be “specification.” That complex conceptual notion holds either a novel solution to “meaningful information” or is the place where major flaws in the approach reside.

  156. great_ape:

    thank you for both your comments (#154 and 155), which I find very stimulating and pertinent. But you impel me to some brief comment:

    #154: You are perfectly right formally, but I don’t agree with the conclusions. It is perfectly true that very small probabilities remain possible, that inference works on probabilities, and that, as you very correctly say, our discussions depend on “the problem of inference, which though we appear to deal with it effectively from a pragmatic standpoint in our day to day lives, has never been solved formally”.
    But we must say that we don’t deal with it only in our “everyday lives” (which is perfectly true, because it is easy to demonstrate that most human decisions are made by inferences, often unwarranted, which in some way seem to work well enough). We successfully deal with the problem also in our “everyday science”: most sciences are based on inference and probabilities, maybe all. I work in the field of medicine, and I can assure you that practically everything in medicine is probabilistic inference, and to be more specific Fisherian probabilistic inference, and that the confidence level for medical inference is usually conventionally set at 0.05 to reject the null hypothesis. We are, I believe, a little bit distant from Dembski’s UPB.
    And yet, nobody tries to deny that medicine is a science, I believe (well, almost nobody…), or that the other probabilistic sciences should be discarded as a whole. That would be the easiest way to refuse all science, and not only evolution.
    So, you are right, the problem of inference has never been solved, and maybe it will never be solved, but that remains an epistemological problem, and has no real consequence in our discussion. Dembski is using, in his theory, a traditional Fisherian approach to inference, like anybody else in most biological sciences. And he has chosen by far the safest confidence level I have ever seen (1 in 10^150) to reject the “null hypothesis” of random generation of biological information. So, in my opinion, he is (almost) infinitely right from a scientific point of view.
    Only one thing, in my opinion, is not correct in your post. You speak of the “sample of one” problem regarding biological CSI. But that’s not correct. We are not speaking here of the “fine tuning” argument, which, although correct, could suffer from the “sample of one” objection. We are speaking, indeed, of biological information, and we are trying to understand the causal mechanisms behind it, trying to choose between a few possible options (necessity, chance with the “help” of hypothetical random selective mechanisms, design). Well, complex biological information has not been generated once, on this planet, but billions of times. Every different species is a very different instance of complex biological information. Every single functional protein is a different instance of CSI. Every regulation network in nature is a different instance of CSI. So, our population is very, very big, and we can take as large a sample as we like. No “sample of one” problem here.

    #155: You are right again, and I perfectly agree with you about the importance and depth of the concept of specification. It is certainly possible, in principle, that meaning does not exist at all, that everything is deterministic and senseless, and in that case, and only in that case, Dembski is working at an impossible task. But if something that can be called “meaning” exists (and I believe that by far most people on earth believe it does), then Dembski (and anyone else who has tried and failed, or who is trying now) is addressing a very fundamental, but probably very difficult, problem of nature. That is one of his greatest merits. Most other scientists, who refuse even to recognize, much less to address, the problem, are certainly not helping to solve it.

  157. Short on time…but even if there is a problem with deriving Specificity, wouldn’t that just result in a false negative if the object is indeed designed? So the limitations of the observer might be a valid limitation of ID (I’d have to put more thought into it), but it’s not like it’s resulting in a false positive. Also, Specificity wouldn’t need to be exact on all fronts. For example, if SETI received an alien signal we’d recognize that it’s a signal, but since we wouldn’t know the exact format it’s possible we couldn’t know exactly what’s contained within the signal (alien soap opera?). Another example is an encrypted data stream designed to resemble noise. In that event Specification might not be recognized at all.

  158. “…but even if there is a problem with deriving Specificity wouldn’t that just result in a false negative if the object is indeed designed?” ==Patrick

    short on time as well, but over-extension of the concept “specification” would lead one to conclude that the state/phenomenon in question is highly unlikely (b/c specified states are rare) and thus you would be more likely to infer design. Therefore the risk of a false positive is very real IMO.

  159. “Dembski is using, in his theory, a traditional Fisherian approach to inference, like anybody else in most biological sciences. And he has chosen by far the safest confidence level I have ever seen (1 in 10^150) to reject the “null hypothesis” of random generation of biological information.”==gpuccio

    I agree that in science as well as everyday life we make important and warranted inferences, some of which are made in a formal fisherian framework. And while it is formally possible for all the air molecules in this room to [exit stage left], it is not something I’m generally concerned with. Nor should I be. (I hope.) I haven’t time to defend myself properly, but I’d argue that when we take the discussion to cosmological/astronomical scales of both space and time, which is the ultimate domain of the design inference as it regards organic life IMO, then the “possible” vs. “probable” issue arises again. Particularly when, under the evolutionist/materialistic scenario, the only way you’re actually here to pose the question is b/c one of those unlikely scenarios/states occurred. You’re correct that I didn’t properly connect my concerns with possibility and “sample of one” with our current discussion. Later this evening I hope to answer your objections more adequately.

  160. gpuccio,

    Considering it further, you’re correct. These latest issues I’ve raised pertain more to OOL than to the “information increase” via evolution issue we’ve been discussing. As far as the latter issue goes, I’m not sure how the debate proceeds. While I think I have a decent handle now on what CSI may entail, just how does the concept map to a real-life organism, and just what is the appropriate fitness landscape within which to even begin to estimate the relevant probabilities? Clearly none of us has the answers. ID-ists would infer that, even without the details known, the rational inference is that it wouldn’t happen by chance. Darwinists, with the same information missing, infer that it likely happened by a process/algorithm that has chance as a core (though not its only) component.

  161. H’mm:

    Seems I inadvertently went over a limit yesterday. Okay, let’s set two contexts within which I think we can address the issue in a more balanced fashion:

    1] The issue of defaults and reasonable degree of proof:

    Jerry, 145: They will say it arose by natural forces through an emergent process that took hundre[d]s of million years of trial and error and then admit that it was remarkably lucky.

    This bears on a certain view of the subtlety in Darwin’s remark that “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.”

    Namely, some would say that so long as there is a logical and physical possibility, NDT reigns by default. This is in fact most unreasonable and selectively hyperskeptical, as the Darwinists, just like the rest of us, routinely accept that many 500+ bit strings that are functional in communicative contexts [and contexts reducible to such] are best explained by agency, not chance. Thus, my remarks above to Great_Ape.

    When we therefore see a sudden ratcheting up of the demanded degree of proof, on a question where a major worldview and life-agenda-linked issue is at stake, to a level that, say, the second law of thermodynamics could not pass, then we should call “foul.” [For, it is logically and physically entirely possible for all the oxygen molecules in the room in which you sit to rush to one end, as one of the many microstates compatible with a roomful of air. But such an improbable and specified outcome is so utterly beyond the available probabilistic resources that we ignore the prospect in our daily lives. And, much more . . .]

    2] Quasi-infinite arrays:

    Jerry, 145: the fact that chance led to the picking of this one combination does not preclude that there may be zillions of other possible combinations that would also lead to a self replicating, energy using, self contained, complex set of chemical reactions etc entity. So therefore the argument from low probabilities is not appropriate. In other words there are a large number of possible targets on the wall for the archer to hit.

    First, let us go back to John Leslie on cosmology and the art of hitting flies on a wall, which exposes the underlying hole in the assertion:

    One striking thing about the [cosmological] fine tuning is that a force strength or a particle mass often appears to require accurate tuning for several reasons at once. Look at electromagnetism. Electromagnetism seems to require tuning for there to be any clear-cut distinction between matter and radiation; for stars to burn neither too fast nor too slowly for life’s requirements; for protons to be stable; for complex chemistry to be possible; for chemical changes not to be extremely sluggish; and for carbon synthesis inside stars (carbon being quite probably crucial to life). Universes all obeying the same fundamental laws could still differ in the strengths of their physical forces, as was explained earlier, and random variations in electromagnetism from universe to universe might then ensure that it took on any particular strength sooner or later. Yet how could they possibly account for the fact that the same one strength satisfied many potentially conflicting requirements, each of them a requirement for impressively accurate tuning? . . . . the need for such explanations does not depend on any estimate of how many universes would be observer-permitting, out of the entire field of possible universes. Claiming that our universe is ‘fine tuned for observers’, we base our claim on how life’s evolution would apparently have been rendered utterly impossible by comparatively minor alterations in physical force strengths, elementary particle masses and so forth. There is no need for us to ask whether very great alterations in these affairs would have rendered it fully possible once more, let alone whether physical worlds conforming to very different laws could have been observer-permitting without being in any way fine tuned. Here it can be useful to think of a fly on a wall, surrounded by an empty region. A bullet hits the fly. Two explanations suggest themselves. Perhaps many bullets are hitting the wall or perhaps a marksman fired the bullet. There is no need to ask whether distant areas of the wall, or other quite different walls, are covered with flies so that more or less any bullet striking there would have hit one. The important point is that the local area contains just the one fly.

    In short, when we see so fine-tuned a system as the lock-and-key protein machinery of the cell, driven by a code long since past the 500-bit mark, with stabilising error correction, plus the known situation that it can in many cases be destabilised by a single amino-acid error, we should pause before postulating unobserved possible other life forms to dismiss the force of the observed facts. So, too, by bringing to bear the other “half” of ID, so to speak, we gain an interesting synergy, as we observe that a locally fine-tuned outcome is just as impressive to the open mind as a globally fine-tuned one. (It is also just as effective, in the end, in subjecting the closed-minded to such public scrutiny that they are forced to back down through fear of the consequences of being so obviously unreasonable and/or abusive; a useful but rather slow substitute for want of intellectual virtues. Let us also just note that Kuhn observed of paradigms that they usually triumph by generational change. At that rate ID is about 1/3 to 1/2 of the way home.)

    I’ll pause here . . .

    GEM of TKI

  162. Also:

    I think it would make a difference if we go back 25 or so years and see the roots of the concept of complex, specified information.

    For, it turns out that CSI is a general OOL concept driven by the discovery of the intricacy of life at molecular level, not a specifically ID conception.

    To see this, I first note that in my first comment [# 108], having been lured to unlurk, I pointed out the depth and ubiquity of the problems of definition. In so doing, I pointed out that the concept has a long-standing empirical context in the history of OOL research. Indeed, it actually originated with men like Yockey, Polanyi, Orgel and the like, and was then taken up by Thaxton, Bradley & Olsen in their 1984 work, TMLO, chapter 8.

    In short, contrary to popular opinion, this concept is NOT a novelty introduced by design thinkers, but the product of the organic development of the OOL field as it contemplated the nature of DNA and proteins.

    WD et al kindly forgive me the bandwidth for posting in extenso, but this is really central:

    Only recently has it been appreciated that the distinguishing feature of living systems is complexity rather than order.4 This distinction has come from the observation that the essential ingredients for a replicating system—enzymes and nucleic acids—are all information-bearing molecules. In contrast, consider crystals. They are very orderly, spatially periodic arrangements of atoms (or molecules) but they carry very little information. Nylon is another example of an orderly, periodic polymer (a polyamide) which carries little information. Nucleic acids and protein are aperiodic polymers, and this aperiodicity is what makes them able to carry much more information. By definition then, a periodic structure has order. An aperiodic structure has complexity. In terms of information, periodic polymers (like nylon) and crystals are analogous to a book in which the same sentence is repeated throughout. The arrangement of “letters” in the book is highly ordered, but the book contains little information since the information presented—the single word or sentence—is highly redundant.

    It should be noted that aperiodic polypeptides or polynucleotides do not necessarily represent meaningful information or biologically useful functions. A random arrangement of letters in a book is aperiodic but contains little if any useful information since it is devoid of meaning.

    [NOTE: H.P. Yockey, personal communication, 9/29/82. Meaning is extraneous to the sequence, arbitrary, and depends on some symbol convention. For example, the word "gift," which in English means a present and in German poison, in French is meaningless].
    Only certain sequences of letters correspond to sentences, and only certain sequences of sentences correspond to paragraphs, etc. In the same way only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to useful biological functions. Thus, informational macro-molecules may be described as being aperiodic and in a specified sequence.5 Orgel notes:
    Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6
    Three sets of letter arrangements show nicely the difference between order and complexity in relation to information:
    1. An ordered (periodic) and therefore specified arrangement:
    THE END THE END THE END THE END
    Example: Nylon, or a crystal.
    [NOTE: Here we use "THE END" even though there is no reason to suspect that nylon or a crystal would carry even this much information. Our point, of course, is that even if they did, the bit of information would be drowned in a sea of redundancy].

    2. A complex (aperiodic) unspecified arrangement:
    AGDCBFE GBCAFED ACEDFBG
    Example: Random polymers (polypeptides).

    3. A complex (aperiodic) specified arrangement:
    THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!
    Example: DNA, protein.
    Yockey7 and Wickens5 develop the same distinction, that “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future. [BTW, this also implies that TMLO’s three online chapters on the thermodynamics of OOL have greater current relevance than is often recognised.]

    Thus, we should see Mr Dembski’s work as an attempt to give mathematical flesh to the above concepts, through developing an explanatory filter relative to contingency, complexity and functional specificity, across the three long-acknowledged main causal forces: chance, necessity and agency.

    Therefore, we should dial back a lot of the force of the skepticism that commonly greets the concept today, now that Design Theorists make a lot of use of it, and now that Mr Dembski has set out to model it mathematically.

    Okay, pause again . . .

    GEM of TKI
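    The three letter arrangements in the TMLO excerpt above can be given a rough quantitative gloss. A minimal Python sketch (my illustration, not from the thread): compressed size serves as a crude proxy for algorithmic information. Note the limit of the proxy, which is exactly Yockey’s point that meaning is extraneous to the sequence: compression separates order from complexity, but it is blind to specification.

```python
import random
import zlib

def csize(s: str) -> int:
    """Compressed size in bytes at zlib's highest level."""
    return len(zlib.compress(s.encode(), 9))

ordered = "THE END " * 100                                  # periodic, like a crystal
specified = "THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!"  # aperiodic, specified

# Same letters, scrambled: aperiodic but unspecified.
random.seed(1)
shuffled = "".join(random.sample(specified, len(specified)))

print(csize(ordered), "bytes for", len(ordered), "ordered chars")  # compresses heavily
print(csize(specified), "vs", csize(shuffled))  # nearly equal: the proxy misses meaning
```

    The periodic string collapses to a small fraction of its length, while the specified sentence and its meaningless shuffle compress to nearly identical sizes: the proxy captures the order/complexity axis but says nothing about specification.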

  163. Finally:

    Oops, sorry Dr Dembski on your name just now . . .

    I also think that we need to look at the clouding context set by worldviews considerations and the links to cultural agendas. As has often been observed, much of the venom against ID is driven by a perception that it is a Trojan Horse for “theocracy” rather than a legitimate scientific movement – a charge often (though not always) made by what we could term “Atheo-crats” and their fellow travellers.

    But in fact, we should recognise that Plato in his C5 or so BC work, The Laws, Book 10, discussed the troika of causal forces thusly:

    Ath. . . . we have . . . lighted on a strange doctrine.
    Cle. What doctrine do you mean?
    Ath. The wisest of all doctrines, in the opinion of many.
    Cle. I wish that you would speak plainer.
    Ath. The doctrine that all things do become, have become, and will become, some by nature, some by art, and some by chance.
    Cle. Is not that true?
    Ath. Well, philosophers are probably right; at any rate we may as well follow in their track, and examine what is the meaning of them and their disciples.
    Cle. By all means.
    Ath. They say that the greatest and fairest things are the work of nature and of chance, the lesser of art, which, receiving from nature the greater and primeval creations, moulds and fashions all those lesser works which are generally termed artificial . . . . . fire and water, and earth and air, all exist by nature and chance . . . The elements are severally moved by chance and some inherent force according to certain affinities among them . . . After this fashion and in this manner the whole heaven has been created, and all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only . . . . Nearly all of them, my friends, seem to be ignorant of the nature and power of the soul [i.e. mind], especially in what relates to her origin: they do not know that she is among the first of things, and before all bodies, and is the chief author of their changes and transpositions. And if this is true, and if the soul is older than the body, must not the things which are of the soul’s kindred be of necessity prior to those which appertain to the body? . . . . if the soul turn out to be the primeval element, and not fire or air, then in the truest sense and beyond other things the soul may be said to exist by nature; and this would be true if you proved that the soul is older than the body, but not otherwise.

    It is worth noting that Plato was here trying to ground the common core framework of morality that undergirds the legitimacy of law through establishing justice, and that in the context of his polytheistic worldview. In short, the problem of the worldview linkages of believing or disbelieving in agency as a context for origins has been a longstanding issue. [Atheistic and/or materialistic systems, from C5 BC on, have been seen as seriously challenged to ground morality. Indeed, to this day, much of the heat in the discussions over ID stems from this. On that I simply point to Rom 1 and 2, especially 2:5 – 11, in parallel with John 3:19 – 21. In short, once motive mongering is introduced, there is more than one side to the story. So, let us not go there . . .]

    Cicero presciently picked the scientific issues up in C1 BC, in a remark also tied to worldview-level discussions, by identifying the difficulty of getting, by chance, to a sufficiently long and meaningful string of digital characters. [Notice his intuitive, common-sense use of a filter resting on contingency, sufficient complexity and specific function in a communicative context that independently specifies the string – not just any random string will do]:

    Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them. How, therefore, can these people assert that the world was made by the fortuitous concourse of atoms, which have no color, no quality—which the Greeks call [poiotes], no sense? [Cicero, THE NATURE OF THE GODS BK II Ch XXXVII, C1 BC, as trans Yonge (Harper & Bros., 1877), pp. 289 - 90.]

    Thus, this is both a scientific and worldview-connected issue. We therefore need to lay to one side in the first instance worldview level assumptions and assertions on matters that can be addressed empirically: a 500-bit or greater string of digital characters that functions aptly in a communicative context and system [source/sink, encoder/decoder, transmitter/receiver, channel, in the face of noise and its potential to corrupt signals] is sufficiently complex and specific that we normally and naturally infer to agency not chance as its best explanation. We should therefore have the courage to be consistent on this. [Cf my discussion here, which is also linked through my name.]

    Okay, pause finally . . .

    GEM of TKI
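    Cicero’s “one-and-twenty letters” image above can be given rough modern numbers. A small sketch (the verse length of 34 letters is my assumption for a hexameter line, letters only):

```python
import math

# Rough numbers for Cicero's image: the chance that letters thrown at
# random reproduce one given verse, assuming a 21-letter alphabet
# (per Cicero) and an assumed verse length of 34 letters.

ALPHABET = 21
VERSE_LEN = 34

p = ALPHABET ** -VERSE_LEN              # probability of hitting the verse by chance
bits = VERSE_LEN * math.log2(ALPHABET)  # equivalent specified-sequence size in bits

print(f"p = {ALPHABET}^-{VERSE_LEN} ≈ 10^{math.log10(p):.0f}")  # about 10^-45
print(f"≈ {bits:.0f} bits of specified sequence")               # about 149 bits
```

    So even a single verse sits near 150 bits; a 500-bit string corresponds to only a few such verses in a row.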

  164. Oh yes:

    Back on the original topic, HT Evo News & Views, citing Egnor over at Pharyngula:

    I am not an evolutionary biologist, and my research (on cerebrospinal fluid dynamics and cerebral blood flow) is certainly not closely related to evolutionary biology. There isn’t any area of medicine that makes much routine use of evolutionary biology, except perhaps microbiology, and most of microbiology is molecular and cellular biology. Doctors don’t deal much with evolutionary biology, since eugenics went out of fashion. So I’m not an expert. My questions shouldn’t present much of a challenge to you.

    How much new specified information can random variation and natural selection generate? Please note that my question starts with ‘how much’ – it’s quantitative, and it’s quantitative about information, not literature citations. I didn’t ask ‘how many papers can I generate when I go to PubMed and type ‘gene duplication evolution’’. I asked for a measurement of new specified information, empirically determined, the reference(s) in which this measurement was reported, and a thoughtful analysis as to whether this ‘rate of acquisition’ of new specified information by random heritable variation and natural selection can account for the net information measured in individuals of the species in which the measurement was made. Mike Lemonick was wrong that this isn’t an important question in evolutionary biology. This is the central question . . . . . Duplication of information isn’t the generation of new information. No one doubts that living things can copy parts of themselves. You have presented no evidence that the process of (slightly imperfect) copying is the source of all that can be copied and the source of what actually does the copying . . . .

    There is obviously a threshold of the information-generating power of RM + NS . . . . So what’s the threshold, quantitatively?

    Methinks, an excellent question . . .

    GEM of TKI

  165. kairosfocus pointed to an article in Evolution News & Views in which Mike Egnor poses questions to the Darwinists:

    http://www.evolutionnews.org/2......html#more

    Here are the last couple of paragraphs of his answer:

    “I did a PubMed search just now. I searched for ‘measurement’, and ‘information’ and ‘random’ and ‘e coli’. There were only three articles, none of which have any bearing on my question. The first article, by Bettelheim et al, was entitled ‘The diversity of Escherichia coli serotypes and biotypes in cattle faeces’.

    I searched for an actual measurement of the amount of new information that a Darwinian process can generate, and I got an article on ‘cattle faeces’. I love little ironies.

    Mike Egnor”

  166. “short on time as well, but over-extension of the concept “specification” would lead one to conclude that the state/phenomenon in question is highly unlikely (b/c specified states are rare) and thus you would be more likely to infer design. Therefore the risk of a false positive is very real IMO.”

    Before even calculating CSI we first have to go through the explanatory filter, which would eliminate most cases. So unless you’re positing unknown mechanisms or special cases that produce false Specifications (like a snowflake; except that we know what produces those) I’m not sure how there would be an “over-extension”?

    Although I should note that I’m open to the possibility that there may be special cases: “when a certain threshold of complexity is reached via design it may be possible for additional ‘emergent complexity’ to be generated depending on how the system was designed (plasticity in the language and the formulation of base classes of information).” So if these special cases (and unknown mechanisms) are real then ID would be adjusted to account for them. Unless, of course, these mechanisms in turn confounded design detection at every turn…but that seems doubtful at this point.

  167. “There is obviously a threshold of the information-generating power of RM + NS . . . . So what’s the threshold, quantitatively?” ==Egnor

    While it may be a fascinating question, it is one that no one can legitimately be expected to answer in any way other than what Myers did; that is, by examples of evolutionary feats accomplished. Why? Because no one has proposed a workable method for quantifying such information in a real living system of replicating organisms. We’re barely even able to define it (see above), and the definition of that biological information (in its formalized state) is itself not accepted outside the ID community. So the challenge Egnor poses, paraphrased, is this:

    “Using a definition of biological information that is not widely accepted or understood, and employing a methodology that has yet to be worked out, operating on a system for which many relevant parameters are unknown, please estimate how much evolution can increase this enigmatic quantity of specified information—and, by the way, I’d like that in the form of a number, or I won’t take it seriously. Thanks.”

  168. great_ape

    “While it may be a fascinating question, it is one that no one can legitimately be expected to answer in any way other than what Myers did; that is, by examples of evolutionary feats accomplished.”

    It is a fascinating question, and anybody who wants to be honest could answer more or less as you did: I recognize it is a fascinating question, and I recognize that we have no answer. Besides, it is perfectly all right for one to have doubts about the concept of CSI, but then one should honestly be open to a respectful discussion about it, more or less as you have done in this thread.
    Look, instead, at PZ Myers’ position and actions. He has:
    1) “Answered” the question by giving “examples of evolutionary feats accomplished”. That is completely incorrect and unfair. The examples he gives are only examples of the indiscriminate use of force and authority to affirm what is not true, and the kind of argument is always the same: the new genes are there, so “evidently” our theory, which is necessarily true, has created them. And it must be true, because my friends have repeated the same concept in this journal of ours, and that journal of ours, and so on. That is not an answer; it is the sad parody of an answer.
    2) Continued, together with his friends, to denigrate, insult, and criticize Egnor in every possible way, never acknowledging that his question could have any value, and never trying to explain why, in his opinion, the question could not be answered (as, for instance, you have tried to do in your post).

    So, please, don’t try to defend PZ. After all, you are here, discussing with us in a perfectly civil, and I would add satisfying, way. And I think we can recognize that all that has been said in this thread is interesting and stimulating.
    PZ, instead, is there on his blog, where he is continuously trying to affirm, always in disrespectful and offensive tones, that all of us who are discussing these questions here (including Egnor, me, and perhaps even you) are a band of stupid criminals, of ignorant fanatics whose only purpose is to destabilize science and create a theocracy, and whose only weapons are deceit and a bundle of false arguments without any value.
    So, please, don’t defend PZ and his acolytes. Even if Darwinism were true (and believe me, it is not), their behaviour is completely inexcusable, intolerant, fanatical.

    kairosfocus:

    I have very much enjoyed your generous posts. I am very happy that you have decided to “unlurk”, and I hope you will go on this way.

    Finally, in case Dr. Egnor is reading this thread, I would like again to express my sincere admiration for him, and solidarity with him, not only for his lucidity of thought, but also for his moral integrity: posting at Pharyngula with intelligence and respect, in a thread which is attacking him in such a unilateral, unfair and offensive way, is really a great act of courage and coherence.

  169. great_ape,

    Let’s see if we can get to a starting point that we agree upon. The main contention of ID is of course that what we see in biology was designed in some fashion. But let’s assume for a second that biology is a special case like I noted above and that this contention is incorrect.

    Obviously most people would lose interest in ID if that occurred. But do you think that the design detection methods of ID are useful in general for other things?

  170. gpuccio,

    Let me clarify one point. I’m not defending P.Z.’s general approach to these discussions. I’m not a fan. There are, however, just as many folks on the ID side of the aisle who are given to ranting, rhetoric, and recourse to authority. They just use different authorities. IMO no one has the moral high ground here.
    I haven’t even read all of PZ’s responses to Egnor, aside from his post: “Dr Michael Egnor challenges evolution!”. That post, in P.Z. terms, is downright tactful. I just happen to agree with him that numerous instances of gene duplication followed by specialization represent legitimate cases of information increase of the most relevant kind to this discussion. (This against a backdrop of gene duplications that *don’t* specialize, atrophy, and are ultimately lost.)

    You may question those data and whether they are true evolutionary accomplishments, but you’d have a lot of data for which to come up with an alternative (and valid) explanation. The legitimate question IMO is whether this type of information increase is trivial in comparison to some other necessary type/quantity you think is meaningful.

    Of course, Dembski has claimed, if I’m not mistaken, that CSI can only *decrease* or at most be preserved. So even conceding this minor issue of gene duplication with specialization is a big deal to some. The citing of articles is an appeal to authority, but it is the authority of peer-reviewed data and interpretation (i.e. the sort of authority we use in scientific discourse on a regular basis; and when you opt to buck authority, you had better have an arsenal of data and analysis at your disposal to overturn the established view).
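    One way to see what is and is not at stake in the duplication-plus-specialization point above: a toy Python sketch (mine, not from either side of the thread), again using compressed size as a crude stand-in for information content. Bare duplication barely moves the measure; duplication followed by divergence moves it considerably. Whether that divergence counts as *specified* information is, of course, the very question under dispute.

```python
import random
import zlib

def info(s: str) -> int:
    """Compressed size in bytes: a crude proxy for information content."""
    return len(zlib.compress(s.encode(), 9))

random.seed(42)
BASES = "ACGT"
gene = "".join(random.choices(BASES, k=2000))  # toy ancestral gene

# Case 1: exact duplication of the gene.
duplicated = gene + gene

# Case 2: duplication followed by divergence. Redraw a random base at
# 10% of the copy's positions (some redraws will match the original).
copy = list(gene)
for i in random.sample(range(len(copy)), k=200):
    copy[i] = random.choice(BASES)
diverged = gene + "".join(copy)

print(info(gene))        # baseline
print(info(duplicated))  # barely larger: the copy is almost free to describe
print(info(diverged))    # noticeably larger: divergence adds to the measure
```

    The proxy is deliberately crude, but it matches Egnor’s narrow point that copying per se adds almost nothing, while leaving open great_ape’s point that duplication followed by change can add to the measure.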

  171. “But do you think that the design detection methods of ID are useful in general for other things?”
    ==patrick

    In principle, assuming the method can accomplish what it intends to do, I could see contexts where this ability could be useful (SETI, cryptography maybe?). I am not in the best position to judge, though. I leave it to my colleagues who are mathematicians, computer scientists, etc. So far there does not seem to be much excitement in academia. (This despite the fact that in some circles (e.g. the Chomsky crowd) Darwinism is not politically correct enough to be acceptable.)

    Theoretically, for me, if “design detection” research makes any headway into better formalizing the concept of “meaning,” then I find that worthwhile and interesting.

  172. Hi All:

    A few more remarks are in order (and again, thanks for the kind words and generally civil tone on both sides):

    1] On Egnor’s [still unmet] challenge:

    I think it is unfortunate that he made a veiled reference to bovine scatology [as was promptly picked up . . .], but note that his core challenge still stands:

    How much new specified information can random variation and natural selection generate? Please note that my question starts with ‘how much’- it’s quantitative, and it’s quantitative about information, not literature citations . . . . Duplication of information isn’t the generation of new information. No one doubts that living things can copy parts of themselves. You have presented no evidence that the process of (slightly imperfect) copying is the source of all that can be copied and the source of what actually does the copying . . . . There is obviously a threshold of the information-generating power of RM + NS . . . . So what’s the threshold, quantitatively?

    It is telling that, for all the literature bluffing on gene duplication etc., there is still no direct response on the point from the advocates of evolutionary materialism. [In the Time Mag thread, he notes that if he asked similar questions about physical parameters, he would get a prompt response. He would, too.]

    2] What is the threshold:

    I will take a brief stab at why the contrast exists.

    500 bits is a smallish quantum of digital storage, hardly a blink in today’s Windows world of 1 – 2 Gigabyte RAMs! (As one still hankering after Macs and Amigas and looking to Linux to change the world, I cannot resist this one: Windows, even in Vista, is “living proof” that design and optimality are two very different questions! But, good enough can be very successful, as the success of Mr Gates’ software amply demonstrates.)

    And yet, 2^500 ~ 3.27*10^150.

    In short, once we are at about 500 bits or more of storage, if we are looking for a unique state in the configuration space, it is credible that a random search or a search reducible to such a search will not reach that state. If we deal with islands and archipelagos of such states [which are, for argument, viewed just now as sufficiently tightly spaced that once we reach the first island, we can freely walk around] once the islands are sufficiently sparse by a similar criterion, we end up in the same position: we cannot get to the first island from an arbitrary start-point.
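    The arithmetic above is easy to check in a few lines of Python. The trial budget below is a made-up illustration, not a figure from the comment:

```python
# Checking the arithmetic of the 500-bit threshold quoted above.
config_space = 2 ** 500
print(f"2^500 ~ {config_space:.3e}")   # ~3.27e+150, as stated

# A purely illustrative trial budget (an assumption, not a quoted figure):
trials = 10 ** 120
p_hit = trials / config_space          # chance a blind search finds one target state
print(f"P(hit) <= {p_hit:.1e}")
```

    Even with that generous budget of trials, the probability of hitting a unique target state stays negligible, which is the point being made.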

    Now, existing life forms have ~ 300 – 500k up to 3 – 4 Bn DNA elements, where each G/C/A/T 4-state monomer therefore stores up to two bits; and where there is good reason to infer that the lower end is, if anything, a bit too small for a lifeform that is independent of other forms providing key nutrients. This is three or more orders of magnitude up from the 500-bit threshold; i.e., there is strong evidence here that OOL studies are pursuing a task that is in fact beyond the probabilistic resources available in plausible pre-biotic environments.
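    For concreteness, the conversion from base counts to bits can be sketched as follows (the genome sizes are the round figures quoted above):

```python
# Converting the genome sizes quoted above into bits, at 2 bits per 4-state base.
def genome_bits(bases: int) -> int:
    return 2 * bases

THRESHOLD = 500  # the 500-bit threshold used in the argument
for label, bases in [("~300k bases", 300_000),
                     ("~500k bases", 500_000),
                     ("~3 Bn bases", 3_000_000_000)]:
    bits = genome_bits(bases)
    print(f"{label}: {bits:,} bits (~{bits // THRESHOLD:,}x the threshold)")
```

    Even the smallest quoted genome (~300k bases) works out to 600,000 bits, over a thousand times the 500-bit threshold.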

    Further to this, once we move to, say, the Cambrian life revolution, we are looking at a need to get to dozens of body plans, requiring dozens of new cell types, with associated epigenetic structures etc. As Meyer et al have pointed out, that credibly means looking at moving up to the 180 million storage unit DNA code found in a typical modern arthropod or the like, several times over, within a fairly short compass of time and space – even if we accept the claims that invisibly the DNA was producing the required variety for a billion years or so ahead of the revolution.
    Such increments in information, a fortiori, are well beyond the 500-bit threshold. And, they do not even begin to address the issues of creating the DNA’s code, the associated integrated mechanisms that implement it, and the required algorithms to do so. In short, both the OOL and macroevolution scenarios proposed by the evolutionary materialists are well beyond the probabilistic resources of the observed universe. [And, if they resort to the infinite array of sub-universes scenarios to expand the resources, they have shifted subject to metaphysics, and have no right to suppress discussion of the alternative, design.]

    3] Above: IMO no one has the moral high ground here.

    This is a disappointing resort to the ethical fallacy of [im]moral equivalency. On my long observation, the leading Design Theorists [and even the leading YECs, for that matter], and most of those who follow them, do not routinely resort to attacking the man or the strawman as their basic and first resort. By sharpest contrast, in my experience in other, less regulated blogs, including major ones I could name, such is unfortunately the routine rhetorical resort of evo mat advocates. When I have followed up links to major Evo Mat sites, I have seen that in this they are following the policy of a great many of the leading advocates of NDT, of which PZM, Dawkins, NCSE, and Forrest are unfortunately all too typically representative.

    Further to this, as the Sternberg case shows, this uncivil attitude also spills over into career busting behaviour and unjustified trashing of professional reputation. [Indeed, in part my unlurking here is in the context of a pattern in a major blog over in the UK, in which Mr Bradley, Mr Behe, Mr Dembski, Mr Minnich, etc were all attacked in this vein instead of dealing fairly with their case on the merits; I have linked to this thread from there to show the contrast. Cf here Johnson's recent paper.]

    That is not a pattern of moral equivalency . . .
    ___________

    Okay, can we speak to the merits of the matter raised by Mr Egnor; perhaps showing me where my issues above [and earlier expressions such as in Denton's 1985 book, ch 13 on Evo a theory in crisis] miss the mark, if they do?

    GEM of TKI

  173. Kairosfocus — thanks for the link to Johnson’s paper!

  174. H’mm:

    SNOWFLAKES:

    Having just blown my big sister J a Hershey’s kiss-tinged smack from 1,000 miles away for a particularly nice email [a real tweetie . . .], let me be short & sweet here.

    Snowflakes form under boundary conditions that lend themselves to complexity, but are constrained by the bonding properties of the H2O molecule, and indeed there is a saying that no two snowflakes are alike. But, that is just the point: it would be hard indeed to set up an experiment to replicate the precise shape and size of a given snowflake, whether by random search or by sophisticated experimental manipulation.

    In short, once the configuration space gets big enough, specification becomes highly elusive to random searches. That directly makes the design inference point: once we see a functionally specific configuration, in a context of sparseness of such configurations in a large enough configuration space, then agency is – absent compelling reasons to think otherwise – the logical explanation. And, worldview level question-begging is plainly not sufficient to be compelling.

    Tweet tweet . . .

    GEM of TKI

    Of course, Dembski has claimed, if I’m not mistaken, that CSI can only *decrease* or at most be preserved. So even conceding this minor issue of gene duplication with specialization is a big deal to some.

    He has three categories that account for modification of CSI inherent in biological systems: (1) Inheritance with modification (2) Selection (3) Infusion

    Brief quote for 1:

    Inheritance is thus merely a conduit for already existing information.
    …….
    By modification I mean all the instances where chance enters an organism’s developmental pathway and modifies its CSI. Modification includes–to name but a few–point mutations, base deletions, genetic crossover, transpositions and recombination generally.

    Intelligent Design; page 177. I wish there were eBook versions of ID literature…it would make finding info so much easier.

  176. H’m

    Re: MEANING

    Great_Ape has stimulated me to think and dig in a bit:

    if “design detection” research makes any headway into better formalizing the concept of “meaning,” then I find that worthwhile and interesting.

    That is, first, one of the advantages of serious dialogue over manipulative debate: mutual stimulation to clarification and development of ideas.

    Now, first, one of my favourite classical authors has spoken to this issue aptly:

    Even in the case of lifeless things that make sounds, such as the flute or harp, how will anyone know what tune is being played unless there is a distinction in the notes? Again, if the trumpet does not sound a clear call, who will get ready for battle? So it is with you. Unless you speak intelligible words with your tongue, how will anyone know what you are saying? You will just be speaking into the air. Undoubtedly there are all sorts of languages in the world, yet none of them is without meaning. If then I do not grasp the meaning of what someone is saying, I am a foreigner to the speaker, and he is a foreigner to me . . . [Paulo, Apostolo, Mart, 1 Cor 14:7 – 11, c. 55 AD]

    In short, meaning is bound up with:

    1] A distinct set of symbols from the vocabulary of a code, each of which in context makes a possible difference, and which collectively are common to the source and the receiver of the message involved. (NB: this highlights one difference between a general communicative and a specifically educational situation – in the latter, one has to find common ground for communication, but the primary goal is then to teach/learn.)

    2] A characteristic pattern of sources, encoding, messages [using symbols from that common set of distinct possibilities], channels, interfering noise, receivers, decoding, sinks, and responsiveness – this last including feedback.

    3] This brings to the fore the issue of the inherent inference to design involved in reception and decoding of a putative, complex message. In the face of the possibility of confusion caused by noise, the acceptance that a noise-influenced signal is a message is an inference to design [and one we routinely make], precisely because we see that the message is functional relative to the communicative context, and is sufficiently complex that we are inclined to accept it as message not “lucky noise.”

    4] Such functionality, brings to the fore another feature of messages:

    [In the context of information processing systems] information is data — i.e. digital representations of raw events, facts, numbers and letters, values of variables, etc. — that have been put together in ways suitable for storing in special data structures [strings of characters, lists, tables, "trees" etc], and for processing and output in ways that are useful [i.e. functional]. . . . Information is distinguished from [a] data: raw events, signals, states etc represented digitally, and [b] knowledge: information that has been so verified that we can reasonably be warranted, in believing it to be true. [GEM, UWI FD12A Sci Med and Tech in Society Tutorial Note 7a, Nov 2005.]

    5] Note here that we distinguish between the difference-making functionality of messages, and the fundamentally epistemological issues of warrant and credibility, thence “wisdom” — the art of proper and successful use of credible, well-warranted knowledge and insights. [But note that in ICTs and in the natural world, we see many cases of input, processing and output based on essentially algorithmic patterns, which make a survival/thriving difference, raising the issue of their common origin in intelligent agency, as Trevors and Abel etc. discuss.]

    6] Thence, we further see that we here introduce into the concept, information, the meaningfulness, functionality (and indeed, perhaps even purposefulness) of messages — the fact that they make a difference to the operation and/or structure of systems using such messages, thus to outcomes; thence, to relative or absolute success or failure of information-using systems in given environments. (And, such outcome-affecting functionality is of course the underlying reason/explanation for the use of information in systems.)

    Can this “meaningfulness” qualitative feature of messages be formalised mathematically? Can it be quantified [not quite the same thing . . .]? Most relevantly: Is that a proper criterion of success for design theory?

    Shannon, of course, explicitly disavowed intent to attempt such quantification, successfully holding that the quantification of information-carrying capacity [in bits etc] was enough for the technological purposes he had in mind. Likewise, in today’s I[C]T-rich world that has built on his work, we are able to use the functionality of information to manage quite well while leaving the issues of warrant, meaningfulness and wisdom to the intelligent decision-makers who use the information we process and communicate. But also, we observe that the mere observed functionality within information communication and processing systems of complex messages in the face of the possibilities of noise, is enough to credibly mark such signals as artifacts of agency.
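    To illustrate the Shannon point in code: carrying capacity depends only on length and alphabet size, not on whether the string means anything. The sequences below are arbitrary illustrative strings, not real gene fragments:

```python
import math

def capacity_bits(seq: str, alphabet_size: int = 4) -> float:
    """Shannon/Hartley carrying capacity: log2(alphabet) bits per symbol."""
    return len(seq) * math.log2(alphabet_size)

functional = "ATGGCCAAA"   # hypothetical "meaningful" fragment (illustration only)
scrambled  = "GTCAAAGCG"   # the same letters, shuffled
print(capacity_bits(functional), capacity_bits(scrambled))   # both 18.0 bits
```

    Both strings carry the same 18 bits of capacity; capacity alone says nothing about function or meaning, which is exactly Shannon’s disavowal.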

    And, that is in fact the declared – and crucially difference-making — objective of design theory. In Dembski’s phrasing:

    intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? . . . Proponents of intelligent design, known as design theorists, purport to study such signs formally, rigorously, and scientifically. Intelligent design may therefore be defined as the science that studies signs of intelligence.

    So, while it would be nice to develop such a formalism for meaningfulness, that is not the major current purpose of design theory, nor is it a necessary step to its current declared objectives [and its utility]. For, from the mere functionality of recognised complex information, we may freely infer on a best explanation basis, to agency as its source. Then, we may set out to reverse engineer the systems viewed as information systems, making advantageous use of our findings.

    Cheerio

    GEM of TKI

  177. Oops:

    I should have used b’s, not a’s, on the word “distinction” above. Forgive my error, and the occasional missed typos. GEM of TKI

  178. PS: Is the “surrender with dignity” beginning?

    Cf this latest thread and the onward linked.

    James Shapiro’s Abstract:

    ABSTRACT: 40 years experience as a bacterial geneticist have taught me that bacteria possess many cognitive, computational and evolutionary capabilities unimaginable in the first six decades of the 20th Century. Analysis of cellular processes such as metabolism, regulation of protein synthesis, and DNA repair established that bacteria continually monitor their external and internal environments and compute functional outputs based on information provided by their sensory apparatus. Studies of genetic recombination, lysogeny, antibiotic resistance and my own work on transposable elements revealed multiple widespread bacterial systems for mobilizing and engineering DNA molecules. Examination of colony development and organization led me to appreciate how extensive multicellular collaboration is among the majority of bacterial species. Contemporary research in many laboratories on cell-cell signaling, symbiosis and pathogenesis show that bacteria utilize sophisticated mechanisms for intercellular communication and even have the ability to commandeer the basic cell biology of “higher” plants and animals to meet their own needs. This remarkable series of observations requires us to revise basic ideas about biological information processing and recognize that even the smallest cells are sentient beings.

    Key excerpt:

    . . . My own view is that we are witnessing a major paradigm shift in the life sciences in the sense that Kuhn (1962) described that process. Matter, the focus of classical molecular biology, is giving way to information as the essential feature used to understand how living systems work. Informatics rather than mechanics is now the key to explaining cell biology and cell activities. Bacteria are full participants in this paradigm shift, and the recognition of sophisticated information processing capacities in prokaryotic cells represents another step away from the anthropocentric view of the universe that dominated pre-scientific thinking . . .

    Of course, I think SC is right to suggest that there is an open question on the conscious agency of bacteria [which JS suggests . . . !], but they certainly show sophisticated information processing that in other contexts we would not hesitate to term at least Weak Form AI.

    Have a read, and have a think, as Sal suggests.

    GEM of TKI

  179. Oh, well, I must admit that, on reading PZ’s rants about Egnor more carefully, I have discovered that he has indeed given a specific answer to Egnor, and a very brilliant one! In the words of PZ:

    “In addition to showing that PubMed lists over 2800 papers relevant to his question, I singled out one: an analysis that showed that insecticide resistance in mosquitos was generated by a mutation of an acetylcholinesterase gene, and that they also had a duplication of the gene—this is a classic example of how to generate new information. Duplicate one gene into two, and subsequent mutations in one copy can introduce useful variants, such as resistance to insecticide, while the original function is still left intact in the other copy”

    OK, now we know! All our debates about CSI are stupid, and not only because “over 2800 papers” tell us that it is that way out of sheer authority, but because PZ Myers has demonstrated it with a single case of irrefutable evidence. You may say that a single case is not much, but I don’t agree. A fact is a fact, after all, and I have always stated, even in this blog, that a single fact can well falsify a whole theory.

    But… Let’s look more closely at the fact (after all, it is only one; we can take the time to verify it). The idea is that mosquitos become resistant to insecticides by a mutation in a gene which is also duplicated to keep the original one (excuse me, PZ, for my pseudo-teleological language).
    Well, I checked. The gene in question is the AchE1, one of the genes in the mosquito which code for an acetylcholinesterase, exactly the acetylcholinesterase which is the target of organophosphorus (OP) insecticides. Well, without discussing the problem of gene duplication, whose role is only to reduce the fitness cost of the mutation, let’s see what the mutation is which, in PZ Myers’ words, is “a classic example of how to generate new information”. I have checked again (thanks to the internet) and here it is:

    a single base-pair alteration, G119S, within the mosquito’s version of the AchE1 gene confers high levels of resistance to these insecticides.

    A single base-pair alteration? Is that the new information which, in PZ Myers’ opinion, discredits all our debates about CSI? Is that the brilliant answer to Dr. Egnor?
    Yes, it is. PZ Myers’ ignorance of the problem we have been discussing here is simply astonishing, matched only by his arrogance.
    Just to be clear to those who may not be familiar with the problem, we are speaking of a single nucleotide mutation with partial loss of a pre-existing function (the AchE1 gene), which happens to be the target of OP insecticides. In the mutated gene (less functional than the original one) by sheer luck (sometimes it happens) the insecticide cannot act. It is exactly the same model as antibiotic resistance by single mutation with loss of function. See the very good article about antibiotic resistance linked from this blog, if you are interested in knowing more.
    In other words, what is the complexity (not the specification!) of this “new information”? I am not a mathematician, but it should be something like this: for a single specified nucleotide substitution, the probability is about 1 in (3 times the length of the mosquito’s genome), since each site admits three possible substitutions (or three times higher, if any substitution at that site applies).
    It is not a low probability. After all, we are talking of a single nucleotide substitution: It may happen. It happens. It is not complex. It is not even specified, because it does not build any new information; it just partially destroys the information which is already there. The advantage is indirect, depending on the loss of an interaction between a very specified and complex molecule (acetylcholinesterase) and a very specified (but not too complex) molecule (the OP insecticide), which, by the way, has been designed by an intelligent agent (man) to get rid of mosquitos through knowledge of the mosquito’s intelligently designed information. Random mutation merely interrupts that intelligently designed interaction. That’s what random mutation always does in complex systems.
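    That back-of-envelope estimate can be made explicit. The sketch below assumes a placeholder genome length of 2.7e8 bases purely for illustration; the point is the order of magnitude, not the exact figure:

```python
import math

# Placeholder genome length, for illustration only (not a measured figure):
genome_length = 2.7e8

# Probability that one random point mutation is the one specific substitution
# (each site admits 3 possible substitutions):
p_specific_sub = 1 / (3 * genome_length)

# Expressed as information content, in bits:
bits = -math.log2(p_specific_sub)
print(f"~{bits:.0f} bits")   # roughly 30 bits, far below the 500-bit threshold
```

    On this assumption, the specified single substitution amounts to a few dozen bits at most, which is the sense in which the example is "not complex".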
    By the way, here are the links to some material about that:

    http://bpi.sophia.inra.fr/topi.....et2004.pdf

    http://www.beyondpesticides.or....._06_04.htm

  180. great_ape (#170):

    ” The legitimate question IMO is whether this type of information increase is trivial in comparison to some other necessary type/quantity you think is meaningful.”

    I think you can find some answer to your legitimate question in my previous post about PZ’s “answer” to Egnor. In it I analyze the specific example cited by PZ out of the “thousands”, and I think it should be obvious that the “new information” PZ and friends are speaking of is not CSI, and not even near. So, if all the thousands of examples cited by PZ are of that kind, Dembski’s affirmation that CSI can never increase by random mechanisms remains unchallenged.
    I don’t understand all the enthusiasm of darwinists about gene duplications. Gene duplication, if it is random (which remains to be demonstrated), is anyway only a mechanism which does not create new information; in the best scenario it could help by allowing the old information to be retained while new “attempts” are made, or by reducing the cost of the loss of information in cases like the mosquito gene discussed in my previous post.
    But the answer which should be given and nobody gives, the answer which Egnor has repeatedly requested without success, the answer which is not contained in the mosquito example, and I bet in none of the thousands of articles cited by PZ, is the answer to the following question: “how is CSI supposed to be created by random forces, including natural selection?” Gene duplication is no answer. HGT is no answer. Somebody has to show a model of how a sequence of, let’s say, 200 amino acids, which has a specific enzymatic function, may have been generated randomly from some condition where that information was not present in any form. If we have a step by step model, we can calculate its plausibility, in terms of probability, resources, intermediate function of each step mutation which should be selected by a reproductive advantage, and so on.
    No wonder darwinists have never produced such a hypothetical detailed model, not even for one protein, because otherwise the impossibility of their theory would immediately be evident to everybody.
    One last note about authority. I am not a big fan of authority, especially in science. Authority is good only to the extent that it can be challenged. And we, in the ID movement, are challenging it, indeed! The authority you speak of, besides, is the authority of a majority of scientists who, guided by questionable leaders like Dawkins and similar, have constantly refused, in the last ten years, even to admit that there is a challenge, much less a problem. It is the authority of those who, knowing they are the many, can denigrate the few who have different views, without even trying to understand their reasons. The authority of those who believe in the use of force and not in the confrontation of ideas. That kind of authority speaks for itself. It’s a bad authority, the worst we can imagine.

  181. gpuccio,

    Thanks for the excellent answer. You, GEM and great_ape are great assets for anyone trying to understand these issues.

  182. gpuccio,

    GREAT POSTS!!!!!!

  183. gpuccio,

    This blog post has gotten buried for a while. Would you mind me turning your recent comments into a blog post?

    In the mutated gene (less functional than the original one) by sheer luck (sometimes it happens) the insecticide cannot act.

    Sounds a lot like the nylon bug always being touted. In the case of the nylon bug, information was lost and the new enzyme was many times less efficient than its precursor, making the minor advantage null.

    1. The bug went from 100% efficiency to 2% efficiency in metabolizing.
    2. The bug lost genetic info as a result of a frameshift.
    3. The bug has a lower reproductive rate and efficiency.
    4. The bug cannot survive amongst the parent species.
    5. The bug acquired no functional divergence.

    An increase of information requires functional divergence without information loss. Going from metabolic function to metabolic function is not considered functional divergence. Going from, say, a sequence that codes for a metabolic function to a sequence that codes for oxygen transport would be considered “functional divergence.”

    Short on time…but what is the loss in functionality for this mosquito example?

  184. Patrick:

    Of course, if you want to turn my recent comments into a blog post, I am happy of that.

    Regarding your question about the fitness cost in the mosquito example, an answer can be found in the abstract of the first article linked in my previous post:

    “Resistance ace-1 alleles coding for a modified AChE1 were associated with a longer development time and shorter wing length.”

  185. “…guided by questionable leaders like Dawkins and similar, have constantly refused, in the last ten years, even to admit that there is a challenge, much less a problem.” ==gpuccio

    Hi guys,

    I thought this thread had gone dead. Would look forward to beginning afresh in another top-level post. I just wanted to point out that we have no leaders in science. We have the occasional mouthpiece. The closest thing to leaders are those influential old-timers in various subdisciplines who can influence how money gets distributed. That’s the real power behind what gets pursued.

    I consider gpuccio’s post an understandable objection to Myers, but not one that is unanswerable. I have a reply regarding the apparent simplicity of the mosquito example. But again it delves into definitions of complexity and information and it’s probably best we begin a new thread.

  186. Hi folks:

    Great to see a little life left in the thread. Gpuccio has given us a great cluster of posts above. I look forward to any continuation . . .

    I cannot but observe that a single mutation that causes loss of information and is apparently associated with diminished functionality of the mosquito, seems to be two orders of magnitude below the level that begins to count as complex in the relevant sense:

    * 1 four-state element: ~ 2 bits of information carrying capacity. [That is, I am adverting to Shannon and Hartley etc.]

    * 250 such elements ~ 500 bits, with a config space ~ 10^150.

    * 1/250 = 0.004 < 0.01, i.e. we are two orders of magnitude down on information carrying capacity here.

    * further to this, the biofunctional result is premised on in effect damaging a gene and causing information loss, i.e. not relevant to the creation of novel, incremental biofunctional information that leads to emergence of new capacities

    * this supports the contention in the Meyer paper that random changes in DNA linked to core body functions are more likely to be destructive than creative.
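    The bullet-point arithmetic above can be verified directly:

```python
import math

# Capacity of one 4-state G/C/A/T element, per Shannon/Hartley:
bits_per_element = math.log2(4)            # 2 bits

elements = 250
capacity = elements * bits_per_element     # 500 bits
exponent = math.floor(capacity * math.log10(2))
print(f"capacity = {capacity:.0f} bits, config space ~ 10^{exponent}")

# One mutated element vs the 250-element threshold:
print(f"ratio = {1 / elements}")           # 0.004, two orders of magnitude down
```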

    It is worth excerpting and highlighting that peer-reviewed paper:

    In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types [nb perhaps 50 or more], but also for the origin of new body plans [nb on the reported order of dozens at phylum and sub-phylum levels] . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.6

    PZM, in short, has evidently given us a strawman counter-example; about par for the course in light of my observations on this point over the past few years; perhaps inadvertently — I think there is a MAJOR communication gap here. (I note too that Mr Egnor aptly complained that a literature count is not an answer to his actual question, which is on the information-generating capacity of RM + NS.)

    Okay, looking forward to onward discussion.

    GEM of TKI

  187. kairosfocus:

    “looking forward to onward discussion”

    Me too! You have introduced, in your last post, a very important subject, speaking of the problem of body plans. Of course the Cambrian explosion remains, in spite of all the meager attempts of darwinists to bypass it, one of the biggest problems for those who believe in a gradual, step by step unguided evolution.
    Anyway, the concept itself of body plan in multicellular beings is almost inconceivable if not in terms of design. One of the interesting aspects of the body plan (indeed, plan!), meaning not only the general form of the body, but also the detailed spatial and functional organization of parts at many levels and sublevels, of segments in the body, of organs in the body parts, of subparts in each organ, and so on…, is that nobody can say where it is written.
    The recent trend is to explain the body plan in terms of a few genes, usually the homeobox group or similar, because we know, essentially from experiments in drosophila, that mutations in those genes change the order of body segments. The usual, quick conclusion is that we have found the few genes which control the body plan.
    And, again, the quick answer is wrong. The tendency to explain complex engineering in terms of single genes and proteins is perhaps the most irritating assumption of all darwinist thought. Proteins are evidently the final effectors of a complex process, and it is obvious that if we modify the final effector, the outcome of the whole process is significantly modified. But that does not mean that the whole process is determined by the final effector.
    So, mutations in the homeobox genes certainly can macroscopically derange the normal order of gross body parts, but that does not mean that homeobox genes are the repository of the body plan. It seems obvious that the realization of a complex macroscopic body plan, realized by coordinating the growth, differentiation and spatial placement of billions of individual cells, cannot be realized unless a tremendous work of regulation, control, error management, continuous transcription fine tuning, and so on, is accomplished under the guide of precise information about the final result to be obtained. And that process is very likely controlled not only by proteins, but by RNA itself, and it seems almost certain that non-coding DNA has a key role in that. Most of these questions are at present completely unanswered, and while the enthusiasm for the evo-devo approach may be justified, the general triumphalism about homeobox genes explaining everything is really laughable.

    great_ape:

    “I consider gpuccio’s post an understandable objection to Myers, but not one that is unanswerable”

    Looking forward to your answer… It is a pleasure to discuss with you.

  188. Hi Gpuccio:

    Let’s wait for that onward thread . . . and let’s note where it is here whenever it appears.

    You make an interesting note:

    the concept itself of body plan in multicellular beings is almost inconceivable if not in terms of design. One of the interesting aspects of the body plan (indeed, plan!), meaning not only the general form of the body, but also the detailed spatial and functional organization of parts at many levels and sublevels, of segments in the body, of organs in the body parts, of subparts in each organ, and so on…, is that nobody can say where it is written . . . . mutations in the homeobox genes certainly can macroscopically derange the normal order of gross body parts, but that does not mean that homeobox genes are the repository of the body plan. [I]t seems obvious that the realization of a complex macroscopic body plan, realized by coordinating the growth, differentiation and spatial placement of billions of individual cells, cannot be realized unless a tremendous work of regulation, control, error management, continuous transcription fine-tuning, and so on, is accomplished under the guide of precise information about the final result to be obtained.

    This is indeed a major issue, especially in light of the point made by Meyer through citing McDonald:

    McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur. [6]

    We are dealing with highly complex, tightly integrated, functional processes that are seriously information-driven and information-controlled [error detection and correction . . .]. The architecture of the information system we see in outline before us, the languages used to code the software side, the nanotechnology of the genetic and related functional molecules, the fact that this constitutes a self-replicating automaton, etc., all point to vast informational complexity and finely balanced coupling [fine-tuning . . .]

    When I work with telecommunication or information-processing devices, systems and networks, and see similar coupling, integration and complexity (in some cases they are species within the same genus [two-state digital bit strings in RAM or on hard drives, four-state digital GCAT strings in DNA, etc.], save that the life systems are vastly more sophisticated), I cannot consistently infer to design in the one case and dismiss design in the other just because it is not in line with the views and agendas of certain key individuals and institutions. I refuse to swallow a camel whilst straining out gnats.

    So, let us see what the onward discussion brings out. And, it is indeed a pleasure to discuss with Great_Ape, as he has been both civil and serious. Kudos to him. (I would indeed like to see his answer . . .)

    All the best

    GEM of TKI

  189. Hi guys; I was waiting for the future thread, but it does not seem to be forthcoming. I’ll go ahead and post my response to gpuccio here this evening after $work if new thread does not arise by then…

  190. The mosquito example seems trivial on the surface. A single amino acid change. Not hard to come by; perhaps a wayward gamma ray passed in the vicinity of a germ-line DNA strand? The free-radical chemicals produced along the gamma ray’s wake attacked the DNA, resulting in a double-stranded break. Or maybe, far more likely, the free radicals were the byproducts of Mr. Mosquito’s own metabolism. In any case, the break was repaired, but repaired incorrectly, as happens from time to time. The repair system is not perfect by any stretch of the imagination. So you have your new amino acid. As it happens, it confers resistance to this insecticide. This all happens in the context of a gene duplication (duplications, too, are often the result of repair mishaps), allowing a copy of the original gene to exist intact. (But more on that later, maybe.) The germline cell, probably a male’s, went on to fertilize an egg. The egg hatches, and we have a pesticide-resistant mosquito.

    How much new information does it contain that has been generated by evolution? I don’t have the foggiest idea. But it’s more than you think it is, at least if you think that biological information is something above and beyond Shannon information. (You have to think that, by the way; otherwise Egnor’s question/challenge is nonsensical.) Because the information differential between mosquitoVersion1 and mosquitoVersion2 is more than just an amino acid change. It now *does* something different. A process is born, one involving several components. There is that gene–the one that’s been altered–but it will also have a support system of transcription factors, etc, that regulate its expression. They are all oriented now within the context of a new process, which is to provide insecticide resistance. To accurately assess the information attained by that single amino acid change, we’ll have to weigh in the added information value of the specification of “mechanism of insecticide resistance” and however that breaks down into primitive components. We haven’t really been told how to do that exactly.
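
    On a bare Shannon accounting (which, as the paragraph above notes, is not the whole story), the capacities involved in a single substitution are small and easy to compute. A sketch; the figures are standard textbook values, the framing is mine:

```python
import math

# Capacity of one position in a sequence over an alphabet of size k,
# assuming all symbols are equally likely: log2(k) bits.
print(math.log2(4))             # 2.0  bits per nucleotide (G, C, A, T)
print(math.log2(64))            # 6.0  bits per codon (4^3 possible triplets)
print(round(math.log2(20), 2))  # 4.32 bits per amino acid position
```

    On this crude measure a single amino acid substitution moves at most a few bits; the whole dispute is over whether the functional “interpretation” of that change carries information beyond such a count.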

    Most evolutionary changes are probably similar in kind to this one. The material substrate change is modest, but the “interpretation” of the substrate change by nature may or may not be more dramatic. It’s like that optical illusion where two faces are staring at each other. Or maybe it’s a single vase, depending on what you saw first. Sometimes you see it one way, sometimes another. You might add a little splotch for an eyespot, though, and suddenly the faces image is impossible not to see. It’s the context **in your mind** that matters. The interpretation. And here the weird thing is that the interpreter is nature itself. Mutations in the organism are (I’m trying to flesh out this analogy more in the near future, so bear with me) a sort of Rorschach test for nature itself. What patterns are in the mind of nature that determine what it sees and how it interprets the splotches? The patterns consist of the complex fitness landscape that I am always ranting about here. That is the “mind,” if you will, of nature. It is the landscape of possibilities it can “imagine” as being viable. We don’t understand the landscape well enough to even begin to calculate something like CSI, even if such a thing could be calculated. That is why Egnor’s question can’t be answered in the fashion that he demands.

    But we know when a new feature or process has been generated. And evolution conferring a resistance to a pesticide is a positive instance of some kind of increase of *something.* I can’t tell you the magnitude of that something, but I’m confident that you can’t either. So, in light of the fact that formal calculations are impossible to make, and in light of the fact that the timescales involved for things potentially more interesting can’t be observed–and you know if he tried to argue by reconstructing historical data, everyone’d cry foul b/c maybe agency was involved back then or something–I think Myers’ answer is among the best possible under these constraints.

  191. great_ape:

    thank you for your answer, which is rich in very interesting insights and allows a lot of pro-ID discussion! The only thing it cannot do, alas, is help Myers’ position. Let’s discuss why.

    First of all, you are elaborating a lot of points which, I think, Myers and those who think like him are completely unaware of. I think that Myers’ answer was just what it seemed: the infamous attempt to pass off a single amino acid substitution as the miraculous acquisition of CSI, hoping that nobody would check the biological nature of that mutation (or, perhaps, unaware of it himself). The choice is always the same: lie or ignorance? Fascinating options, both.

    But let’s leave Myers alone, and discuss what is much more interesting, that is your points. They are very thoughtful but, again, I cannot agree on the conclusions.

    First of all, I would like to complete the “negative” part of my post and point to the only part of your argumentation which is, in my opinion, frankly wrong, in a technical sense. You say:

    “But we know when a new feature or process has been generated. And evolution conferring a resistance to a pesticide is a positive instance of some kind of increase of *something.* ”

    I don’t agree. In this specific case, which is anyway representative of most, if not all, of the darwinists’ “evidence”, including antibiotic resistance, no positive instance has been created. We are only witnessing a cellular “disease” which, by mere chance, happens to make a particular “threat” ineffective for the owner of the disease. The only thing which increases in the mutated mosquito is entropy, or if you want Shannon information, or anyway disorder. Indeed, a protein which was in a way designed to have a specific function (ACE1) loses a small quantity of its designed structure and function because of a random event. That’s all.
    Let’s look in more detail at the example of antibiotic resistance, which is equivalent but which gives us an interesting further element. For all the details, please read this very good article:

    Is Bacterial Resistance to Antibiotics an Appropriate Example of Evolutionary Change? http://www.trueorigin.org/bacteria01.asp

    The facts are as follows: in bacteria we have two different mechanisms by which antibiotic resistance is conferred:

    a) Mutation of some structure of the bacterium, which was the target of some antibiotic action. This case is the perfect equivalent of the mosquito case.

    b) HGT of some enzyme which can inactivate the antibiotic (eg penicillinase). In this case, we have no evidence that penicillinase is generated by random evolution, no more than any other complex enzyme. Penicillinase is a protein with function and CSI, but we find it in the scenario “ready to be used”. The bacterium acquiring penicillinase by HGT is really acquiring “something”, CSI indeed, but that something is only transferred, not created. So, Dembski’s concepts are absolutely confirmed.

    But let’s go back to the a) scenario. In that case, the acquisition of resistance is not, in any way, a new “function”. I’ll give a metaphorical example whose only purpose is to clarify the logic here:
    If in a population of animals some of them have a mendelian genetic disease, due to a single mutation which inactivates a function (let’s say a coagulation disease), and some bizarre scientist decides that he is interested only in the diseased animals, and not in the normal ones, and goes on killing the normal animals and keeping the diseased ones, is that a demonstration that the disease is “a positive instance of some kind of increase of *something*”? Only if you define that “something” as disease, loss of function, loss of meaningful information. But positive?
    I state it again: in my opinion the only thing which increases is entropy. And CSI, however you want to define it, is certainly decreased.

    Now, to the “positive” part of my post. I think you introduce a concept which is of the greatest importance, and which is the cause of great confusion in darwinist thought. It is a confusion which is inherent in the concept of “selection”, and which often expresses itself in the use of so-called “pseudo-teleological language” by darwinists. It can be summed up this way: many darwinists are IDers without knowing it (not Dawkins, beware. He is one of the pure).
    One of the manifestations of that aspect is the frequent use of inverted commas. Your arguments are a very good example of an honest and creative attempt to express this bias explicitly, and so they are a very good field for discussion. You say:

    “It’s the context **in your mind** that matters. The interpretation. And here the weird thing is that the interpreter is nature itself. ”

    and:

    “What patterns are in the mind of nature that determine what it sees and how it interprets the splotches?”

    and:

    “That is the “mind,” if you will, of nature. It is the landscape of possibilities it can “imagine” as being viable.”

    Indeed I “will”! You are perfectly describing the point of view of a designer. You are describing Nature as the designer. All your words, all your concepts, all your arguments, don’t apply in any way to the specific case of mutation of the mosquito, for the reasons already discussed, but apply perfectly to an intelligent designer, Nature, who can:

    a) Determine non random mutations

    and/or

    b) Select random mutations which are potentially useful according to a predetermined plan or design.

    That’s what Nature does, if we want to call the designer Nature. I don’t know how he/she does it, but he/she does.
    The fact is: according to a strict naturalist view, and that’s a beautiful paradox, Nature does not even exist. Or at least, it exists not as an entity, but as the sum of what exists and of the laws at work. In other words, you could well substitute the word “reality” for nature, in a naturalistic sense. A naturalistic reality is not only blind, it is by definition not conscious; it is only the outcome of rigid deterministic laws. So, unless you define new laws which I am not aware of, nature has no purpose, it selects nothing, it implements nothing.
    You call it a “landscape”, implying again that in some subsets of reality we can find some appearance of meaning. Landscape is a human word. It is the way a conscious being sees a piece of reality and gives meaning to it. It is the way the conscious mind reconstructs apparently meaningless bits of information, sometimes inferring, if the poetic mood is working, a designer or an artist behind them. A landscape exists only in a mind. Nature exists only in a mind.
    Outside of a mind, only deterministic laws (not many laws, only three or four, depending on how you count them) exist, at least according to present science. So, if you infer special behaviours in nature which are characteristic of minds, or of a mind possessing Nature, you should show how such behaviours can arise from the strict application of the known laws. That’s exactly what darwinism can’t do, and what it takes for granted. Because to analyze that assumption means to delve deeply into the nature of information, of thought, of order; it means considering and applying serious sciences like mathematics and statistics, and, why not, physics. Better to stay away from all that, and leave it to Dembski and the likes of him, calling them fraud, or bad science, or anything else one can devise.

    A last note. You say:

    “There is that gene–the one that’s been altered–but it will also have a support system of transcription factors, etc, that regulate its expression. They are all oriented now within the context of a new process, which is to provide insecticide resistance.”

    That’s exactly the point. That’s what the concept of “irreducible complexity” is all about. The interesting fact is that your argument does not apply to the mosquito case, for the reasons already discussed: in that case, the new “diseased” gene is completely alone; there is no new “support system of transcription factors, etc, that regulate its expression”, because the regulation remains the same as before the mutation; there is nothing “oriented”, there is no “process”, there is nothing trying to “provide insecticide resistance”. The only “selecting” thing here is the insecticide, whose only fault is not being able to “predict” the gene’s “disease” (but a new, well designed model certainly could…).
    But your words apply perfectly to the case of a designer: it is perfectly true that any variation in a gene, any true variation which implements a new function, is perfectly meaningless unless it is managed by a coordinated system of regulation, at the transcription level and at many other levels. That’s why, for me, any new biological function is irreducibly complex: the designer has to implement the effector (usually the final protein), but also, and especially, the procedure, that is, the code necessary to correctly use the effector. And the procedure is usually more complex than the effector itself. Maybe much more complex. The procedure implies information processing, boolean checkpoints, measures, quantitative evaluations, decisions, coordination with all the other procedures, interfaces, and so on. Irreducible complexity, in its purest form.

    Enough for now.

  192. Hi Great_Ape:

    Seems the blog is having fun with a Japanese cartoon series and with the Templeton story. Anyway, back on issue.

    You will note I described information-carrying capacity, as that is the real point of the I = -log2p expression. In this case, a single-point mutation in a 4-state digital system, carries up to 2 bits. My 250-point issue has to do with the associated fact that that gives us a capacity of up to 500 bits, i.e. a configuration state space of ~10^150, linked to the Dembski type bound for a unique specified state. [This extends of course to islands and archipelagos of functionality, too.]
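
    The arithmetic behind these figures is easy to check with a few lines of Python (a sketch; the function name is mine, the thresholds are the ones cited above):

```python
import math

# Information-carrying capacity of an n-symbol string over a k-letter
# alphabet: I = n * log2(k). For DNA, k = 4 (G, C, A, T).
def capacity_bits(n_symbols, alphabet_size=4):
    return n_symbols * math.log2(alphabet_size)

print(capacity_bits(1))    # 2.0   -> a single point carries up to 2 bits
print(capacity_bits(250))  # 500.0 -> 250 points carry up to 500 bits

# The corresponding configuration space, as a power of ten:
# log10(4^250) = 250 * log10(4) ~ 150.5, i.e. roughly 10^150.
print(round(250 * math.log10(4), 1))  # 150.5
```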

    Let’s take up on your point in # 190 that:

    perhaps a wayward gamma ray passed in the vicinity . . . the break was repaired, but repaired incorrectly, as happens from time to time. The repair system is not perfect by any stretch of the imagination. So you have your new amino acid. As it happens, it confers resistance to this insecticide. This all happens in the context of a gene duplication (duplications, too, are often the result of repair mishaps) . . . . How much new information does it contain that has been generated by evolution? I don’t have the foggiest idea. But it’s more than you think it is, at least if you think that biological information is something above and beyond Shannon information . . . A process is born, one involving several components . . . . The material substrate change is modest, but the “interpretation” of the substrate change by nature may or may not be more dramatic.

    Remarking:

    1] Of course, I pick the physics-linked case, as that ties into the implications of radiation damage (part of one of the must-do physics major courses I did way back when; recall how the hairs on the back of my neck stood up at appropriate points, too . . . esp. the Hiroshima “expt”). The usual mechanism is ionisation of the most common molecule in the body, H2O, leading to free radicals, thence disruption of DNA and resulting damage. Cell death etc. if bad enough, cancer etc. as a serious possibility, if not that bad. Really minor damage is repairable.

    2] You advert to and imply existing and functional repair mechanisms, which are of course error-detecting and correcting systems based on redundancies in the stored information. [The simplest case of such a system, and the first example I studied, was the 3M voting code: repeat the code twice so there are three copies, [M, M, M]. Take a vote, bitwise, and majority wins. Obviously, it corrects a single error, but misses a double error on the same bit, which would reverse the vote. (For DNA, if there were such a code, it would require voting across four possible point states, G/C/A/T [and the match in the neighboring linked complementary strand . . .]; of course, the actual mechanisms are subtler than a brute-force voting mechanism.) But observe: error-correction mechanisms can be saturated, leading to possibilities for failure in environments that overly stress the system. Mosquitoes subjected to insecticide assault sounds like a stressed situation.]
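
    The “repeat three times and take a bitwise majority vote” scheme described above (triple modular redundancy) can be sketched in a few lines of Python; the function name is mine:

```python
def majority_decode(c1, c2, c3):
    """Bitwise majority vote over three stored copies of a bit string.
    A single corrupted copy is outvoted at each position; two errors
    at the same position reverse the vote, as noted above."""
    return [1 if a + b + c >= 2 else 0 for a, b, c in zip(c1, c2, c3)]

original = [1, 0, 1, 1, 0]
flipped  = [1, 1, 1, 1, 0]   # one copy suffers a single-bit error

print(majority_decode(original, flipped, original))  # [1, 0, 1, 1, 0] (corrected)
print(majority_decode(original, flipped, flipped))   # [1, 1, 1, 1, 0] (double error wins the vote)
```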

    3] Underlying issue and implication: we see a code-based information system, with a control language and by implication algorithmic error correction processes. So, let us not strain out gnats while swallowing camels: in all cases of error-correcting information-processing systems based on codes where we directly know the causal story, they are the product of intelligent agents. Indeed, this is true of all such cases of functionally specified complex information [FSCI, or in Dembski's term, CSI].

    4] That immediately means that relative to what we know – as opposed to speculate – we have a candidate best explanation for such systems that we need a very good reason and evidence to reject.

    Pausing . . . .

    GEM of TKI

  193. Continuing . . .

    5] Further to this, the mutation in view is at root an information-loss incident, not an information-creation event. That is, a previously functioning molecule has been damaged, and in this case the damage frustrates the mechanism by which the insecticide poisoned the cell processes. That is, a sufficiently strong random disruption of the chain of digital information has caused an arbitrary shift in the decoding process, due to the way the code is interpreted, in this case in protein coding. [And, BTW, there is evidently a fair amount of redundancy in the code itself, including steps that make for graceful degradation by shifting to similar monomers.]

    6] Let us note: the change here in view is microevolutionary; it does not rise to the body-plan change level, and the new population is still a population of disease-carrying mosquitoes, just resistant to a formerly successful insecticide — and, by the report, less functional as fliers. [This is quite similar to the case with antibiotic immunity through misshapen enzymes etc and reported HIV immunity through a genetic breakdown in the port used by the virus to gain entry, etc. Cf discussions here (esp fig 1 and table 1), here [observe on the failure to recognise the full implications of the high degree of sparseness indicated in this classic paper, starting with abiogenesis], and p. 11 ff. on CCR5 here (also cf the paper as a whole).]

    7] The information-creation side is, in the end, a challenge to empirically justify a claimed process by which, through a cluster of random point mutations, we materially gain novel and coherent biofunctionality in the used code space of the cell, especially at the body-plan level. Notice the McDonald threshold for information-creating macro-evolution: novelty in body plans which creates a novel, coherent somatic system. This has never been actually observed, and it credibly exceeds the threshold of exhausting the available probabilistic resources.

    8] Further to this, beyond this case of modifying an already existing cell, we see the same challenge at the point of the creation of life through proposed abiogenesis mechanisms. For observed minimally complex life forms have ~300–500 k or more monomers in their DNA strands, three orders of magnitude up in information-carrying capacity from the level specified by Dembski as exhausting the probabilistic resources of the observed cosmos.

    9] The information-carrying capacity of course increases exponentially with chain length: 4^N, where 4^250 ~ 10^150 and 4^300k ~ 9.94*10^180,617. I conclude for good reason that the islands of functionality are effectively, impossibly sparse in such a space, relative to random processes that just happen to chance upon the coherent codes and implementing machinery for the relevant systems we see in the cell. But we routinely see intelligent agents creating functional digital configurations that are comparably sparse in, say, text-string space.
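
    The exponential growth claimed here checks out; a small sketch, working in log10 so the astronomically large integers never have to be formed (the function name is mine):

```python
import math

def log10_config_space(chain_length, alphabet_size=4):
    # log10 of the number of possible chains: log10(k^n) = n * log10(k)
    return chain_length * math.log10(alphabet_size)

print(round(log10_config_space(250), 1))      # 150.5    -> 4^250 ~ 10^150
print(round(log10_config_space(300_000), 1))  # 180618.0 -> 4^300k ~ 9.9 * 10^180617
```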

    10] So, while indeed the issue of random changes and metrics of information-carrying capacity are just a first step, it is an important step, as it tells us the size and credible sparseness of the configuration spaces we are working with. In that context, the required increment in biofunctionality, coupled to the coding and processing systems, and the probabilistic issues as the number of points mounts towards 250+, by direct implication raise the issue of exhausting the probabilistic resources of the situation.

    BOTTOM-LINE: A single-point mutation that actually seems to have damaging consequences and works by information loss, seems to be nowhere near the relevant threshold for explaining macro-evolutionary change. [It aptly explains one mechanism for micro-evolution, but micro evolution is uncontroversial across live options and is not the material issue at stake.]

    In short, interesting, but not yet the level of explanation required.

    Thanks for a good try

    GEM of TKI

  194. GP:

    I see you also linked the same paper on bacterial resistance and its implications.

    I would like to pick up one of your points to G_A rather briefly:

    All your words, all your concepts, all your arguments, don’t apply in any way to the specific case of mutation of the mosquito, for the reasons already discussed, but apply perfectly to an intelligent designer, Nature . . .

    You are here bringing up the Case III in my always linked, cosmological design. [The onward linked philosophical issues faced by evolutionary materialist worldviews are important but of course not scientific questions.]

    The key link is that a fine-tuned cosmos that with minor shifts becomes non-life-sustaining, often radically so, exhibits a complex balance that is of a type often set by designers. In this context, should we discover that there is in fact a law of nature that forces, or makes highly probable, the emergence of life on suitable planets etc, then that is in fact suspicious, given the cosmological fine-tuning issues in the linked. (BTW, this is also why the project to find a grand theory of everything is actually a design research project, though of course by and large an inadvertent one. Imagine the discovery of a super-law of physics that implies that the sort of fine-tuned cosmos we observe is forced. What does that suggest? And, the often met with alternative, an infinite array of subcosmi with randomly distributed laws or parameters, is of course a resort to ad hoc, ex post facto philosophy, not science, and opens the door to the credibility of a Design-based worldview alternative.)

    Similarly, observe that bacterial resistance to, say, Cipro, the subject of Fig 1 in the linked paper, is by detuning the folding of the enzyme in question. Fine-tuning, in short, is linked to sparseness in the local configuration space. [Observe how Wright in the 1932 paper adverted to cosmological instances to help interpret the sparseness issue, then failed to follow through on the implications, i.e. exhaustion of probabilistic resources. I gather in Monod's Chance and Necessity, 1970 or so, the same gap emerges. Methinks this is a systematic gap in the evolutionary materialist research programme, from hydrogen to humans.]

    We know that intelligent agents routinely create functionally specific, complex [sometimes, irreducibly complex] information rich systems, that exhibit finely-tuned behaviours, and may have defences for the fine-tuning through error detection and correction, sometimes leading to intelligently programmed graceful degradation.

    So, we see here a very familiar pattern . . .

    GEM of TKI

  195. Lemme take a breather, then I’ll try once again. On a side note, though, I find it interesting, kairosfocus, that you can believe in the fine-tuning concept of the cosmos yet remain so skeptical about darwinian evolution in principle. Is it not possible that the cosmos was arranged in such a way that evolution would happen as it has? That is, it would happen in a way that was *indistinguishable* from neodarwinism. Yet an omniscient/omnipotent designer could still know what would happen and achieve its goals. In short, sometimes I wonder if you ID guys don’t simply suffer from a lack of imagination concerning the ultimate capacities of the designer. Just something to ponder, as it is close to my personal view.

  196. Hi Great_Ape

    I see your time-out call. Maybe in the meantime, the blog masters will deem it worth the while to post that new thread . . .

    Now, too, you comment on some points I will take up a bit:

    1] You can believe in the fine-tuning concept of the cosmos yet remain so skeptical about darwinian evolution in principle.

    First, while I am simply discussing here within the ambit of the generally accepted timescales and cosmology of the past [I will pick up later], I am pointing to the gap in the empirical evidence as respecting the macro-level claims of NDT. I am aware that at micro-evolutionary levels, NDT-style evolution can and does happen. But Galapagos island finch beaks that get a little longer in a drought, then change back, or “species” that are now hybridising and interbreeding, or the like do not cross the key threshold McDonald noted and Meyer picked up.

    Further to this, from the Cambrian life revolution forward, there is a characteristic pattern of sudden appearance and stasis then abrupt disappearance (on a temporal interpretation . . .) in the now “almost unmanageably rich” fossil record. All of this is linked to the information generation problem I and others have highlighted, including the mosquito case above.

    Until I see clear evidence of ability to cross that core body plan innovation threshold, I will remain skeptical on the common descent through RM + NS etc thesis.

    By sharpest contrast, the picture of the origin and development of the physical cosmos, is in my observation not presented dogmatically, but is both provisional and well-anchored on supportive empirical data. (The contrast in tone between, say, my favourite general survey Astronomy text and the tone I have seen in many a discussion on these matters I have found telling!) The Hertzsprung-Russell diagram, the stellar distance scale [the subject of my very first ever scientific presentation and public speech . . .], the observed red shift and Hubble Constant are all anchored pretty directly to observation.

    2] Is it not possible that the cosmos was arranged in such a way that evolution would happen as it has? That is, it would happen in a way that was *indistinguishable* from neodarwinism.

    I observe the underlying inference/assertion, that NDT is the actual observed mechanism of origin or is indistinguishable from it. Methinks there is a major evidentiary gap to be bridged before one can assert such, as noted.

    In short, the current state of evidence is at macro-level, quite plainly “distinguishable” from NDT’s predictions and explanations.

    3] sometimes I wonder if you ID guys don’t simply suffer from a lack of imagination

    This is of course a classic gambit. But in fact, the design inference is precisely based on an explanatory filter that reckons with chance, necessity and agency then proceeds to ask how may we credibly and empirically distinguish the three. [Reverse engineering the design is a further stage, and is indeed one being embarked upon. But in the key cases of OOL and macro-evolution, as well as cosmology, the prior issue – given the institutional prevalence of evolutionary materialism as both a guiding philosophy and a paradigm for science -- is whether design is a reasonable and indeed better explanation of the observed data.]

    This has been outlined above, and is in the always linked, but we may briefly summarise:

    a] It is generally accepted that chance, natural regularities and agency are all known as the three major root causal forces at work in the world.

    b] For instance, if a heavy object falls, that is a natural regularity (NR): gravity. If it is a die, the face that is uppermost is effectively a product of chance. If it is tossed as part of a game, that is agency.

    c] When an empirically observed event is: contingent, complex beyond the credible probabilistic reach of chance in its context, and functionally specified, the resulting FSCI/CSI is a well-known, frequently observed marker of intelligent agency. [Think of coming across 500 pennies, all heads uppermost. Which is the better/likelier explanation, a lucky toss or someone who arranged the pennies?]

    d] Indeed, in every case where we directly observe an event’s cause in action, if it meets the above filter’s criteria, it is produced by agents. That is, we see here an empirical basis for inferring on a best explanation basis, to agency, even when we do not see the cause in action directly. And – certain well-known cases notwithstanding – this inference is routinely made in science and general life.
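
    The odds behind the 500-pennies illustration in point c] are easy to make concrete (a sketch):

```python
import math

# Probability that 500 independent fair tosses all land heads:
p_all_heads = 0.5 ** 500
print(p_all_heads)               # ~3.05e-151

# Equivalently, the all-heads configuration specifies 500 bits:
print(-math.log2(p_all_heads))   # 500.0
```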

    So, the matter is not failure of imagination.

    Let’s see whether we continue here or elsewhere

    GEM of TKI

  197. “Let’s see whether we continue here or elsewhere”

    Agreed.

  198. F/N: Six years later, the more things change, the more they remain the same! KF
