ID is Not an Argument from Ignorance

ID opponents sometimes attempt to dismiss ID theory as an “argument from ignorance.”  Their assertion goes something like this:

1.  ID consists of nothing more than the claim that undirected material forces are insufficient to account for either the irreducible complexity (IC) or the functionally specific complex information (FSCI) found in living things. 

2.  This purely negative assertion is an invalid argument from ignorance.  As a matter of logic, they say, our present ignorance of how undirected material forces could account for either the IC or the FSCI found in living things (i.e., our “absence of evidence”) does not mean that no such evidence exists.  In other words, our present ignorance of a material cause of IC and FSCI is not evidence that no such cause exists.

This rejoinder to ID fails for at least two reasons.  First, ID is not, as its opponents suggest, a purely negative argument that material forces are insufficient to account for IC and FSCI.  At its root, ID is an abductive conclusion (i.e., an inference to the best explanation) concerning the data.  This conclusion may be stated in summary as follows:

1.  Living things display IC and FSCI.

2.  Material forces have never been shown to produce IC and FSCI.

3.  Intelligent agents routinely produce IC and FSCI.

4.  Therefore, based on the evidence that we have in front of us, the best explanation for the presence of IC and FSCI in living things is that they are the result of acts of an intelligent agent.

The second reason the “argument from ignorance” objection fails is that the naysayers’ assertion that ID depends on an “absence of evidence” is simply false.  In fact, ID rests on evidence of absence.  In his Introduction to Logic, Irving Marmer Copi writes of evidence of absence as follows:

In some circumstances it can be safely assumed that if a certain event had occurred, evidence of it could be discovered by qualified investigators. In such circumstances it is perfectly reasonable to take the absence of proof of its occurrence as positive proof of its non-occurrence.

How does this apply to the Neo-Darwinian claim that undirected material forces can produce IC and FSCI?  Charles Darwin published Origin of Species in 1859.  In the 152 years since that time, literally tens of thousands of highly qualified investigators have worked feverishly to demonstrate that undirected material forces can produce IC and FSCI.  They have failed utterly.

Has there been a reasonable investigation by qualified investigators?  By any fair measure there has been.  Has that 152-year-long investigation shown how undirected material forces can account for IC or FSCI?  It has not.

Therefore, simple logic dictates that “it is perfectly reasonable to take the absence of proof” that undirected material forces can account for IC and FSCI as “positive proof of its non-occurrence.”

As far as I can see, there are two and only two responses the Darwinists can make to this argument:

1.  The investigation has not been reasonable or reasonably lengthy.

2.  Give us more time; the answer is just around the corner.

Response 1 is obvious rubbish.  If thousands of researchers working for over 150 years is not a reasonable search, the term “reasonable search” loses all meaning.

Response 2 is just more of the same Darwinist promissory notes we get all the time.  How many such notes will go unpaid before we start demanding that the materialists pay COD?


51 Responses to ID is Not an Argument from Ignorance

  1. Seeing that the design inference is based on our knowledge of cause and effect relationships, it cannot be an argument from ignorance.

    And seeing that the current theory of evolution relies on our ignorance it can be called an argument from ignorance.


  2. Barry,

    I agree wholeheartedly. In fact, in one specific area the argument can be sharpened considerably. It goes like this:

    A. Humans are intelligent agents. We may not know the nature of intelligence, or how it works, but that it exists is not in question. The denial of this fact has certain unavoidable negative self-referential implications. In other words, speak for yourself, buddy.

    B. Humans have been known to produce long enough, and complex enough, and specified enough strings of DNA to serve as the operating system of a cell. Since humans are intelligent agents, it follows that at least some intelligent agents can produce such DNA strings.

    C. Nature without the intervention of intelligent agents has not been demonstrated to produce such DNA strings. Furthermore, this lack of demonstration has not been for lack of trying; misguided trying, perhaps, but not lack of trying. In addition, there is no well-accepted theory (that is, one that does not smuggle the conclusion in as a premise) that shows how such DNA strings could reasonably have arisen in nature without intelligent guidance.

    D. Therefore, it is more reasonable to believe that such DNA strings were originally the product of an intelligent agent or agents, than to believe that nature produced such DNA strings without the aid of intelligent agents.

    This conclusion is, of course, at least theoretically subject to change. Premise B may not turn out to be correct; Venter’s project could be a fraud. And Premise C could (theoretically) easily be falsified by someone showing simple essentially unguided processes producing long complex specified strings of DNA. But the present weight of evidence favors the conclusion; those who oppose it are going on faith in the teeth of the presently available evidence. Personally, I just don’t have that kind of faith.

  3. Hi Barry,
    You state

    1. Living things display IC and FSCI.

    2. Material forces have never been shown to produce IC and FSCI.

    3. Intelligent agents routinely produce IC and FSCI.

    4. Therefore, based on the evidence that we have in front of us, the best explanation for the presence of IC and FSCI in living things is that they are the result of acts of an intelligent agent.

    Have you read MathGrrl’s post asking for a definition of FSCI on this very blog, and getting no good answers?

    If so, how can you continue to claim that living things contain FSCI, when you know that no one can even rigorously define it or figure out how to accurately identify it?

  4. lastyearon,

    You are mistaken. In her post MathGrrl asserted that functionally specific complex information cannot be measured in a mathematically rigorous way. She is wrong about that, as the comments on her own post demonstrate. But whether she is wrong or right is beside the point with respect to my post. Even MathGrrl does not deny the EXISTENCE of FSCI. Why do you?

  5. lastyearon #3

    If so, how can you continue to claim that living things contain FSCI, when you know that no one can even rigorously define it or figure out how to accurately identify it?

    Living things are giant functional hierarchies of organization. No reasonable person can deny that. Do you deny that your body carries out millions of hierarchical functions? The fact that this is not easily quantifiable with a number (FSCI or whatever) strengthens the case for intelligent design rather than weakens it. In fact, only simple and uniform things are easily quantifiable with a single number. For example, how would you represent or measure the organization of your computer by means of a number? Not easy, really. Or do you deny also that your computer carries out many hierarchical functions?

    Your reply to Barry’s post is only a “red herring” to divert attention from his correct argument.

  6. lastyearon,

    If you don’t like Barry’s general example, try dealing with my specific example. I presume you don’t deny the existence of long complex specified strings of DNA, whether or not they contain IC or FSCI, or even whether such concepts make sense.

  7. LYO:

    Let us start with the original “meaningless” conceptual definitions of FSCI and CSI, by two notorious, dunce ID-iots called Orgel [1973] and Wicken [1979].

    NOT!

    Orgel:

    . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]

    Wicken:

    ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.]

    MG et al have been conspicuously silent on the question as to whether Orgel and Wicken were spouting meaningless verbiage.

    The answer is obvious: they were not.

    As to the issue of mathematical quantifications, there have been two major ones in the literature, and around UD we have also been using a simple, brute force metric, the X-metric.

    DNA complements in living organisms are functional in making the proteins; they are coded and specific. The DNA complement of unicellular life forms starts out north of 100,000 bases, or double that in bits. This is well past the threshold of 1,000 bits, where the number of possible configs is 1.07*10^301, or more than the SQUARE of the number of Planck-time states of the atoms of our observable universe across its thermodynamic lifespan.

    In short, blind-watchmaker-style random walks and trial and error are unable even to scratch the surface of the possibilities, so that the only analytically credible source of such FSCI (which is plainly quantifiable) is intelligence.

    Beyond that, the more rigorous quantification of FSCI proper, by Durston et al (on an extension of Shannon’s H-metric of average information per symbol in a string), was published in the peer-reviewed literature back in 2007, with measured/calculated values for 35 protein families, as you can read in the UD Weak Argument Corrective 27, which has been there for several years now.

    All of which was brought to MG’s attention, and all of which was willfully ignored and dismissed.

    Besides, all of us who post in this thread thereby routinely produce known examples of FSCI: strings of English text in ASCII characters, in excess of 143 characters, that respond to a set theme.

    In short, the announced dismissive blindness to what FSCI is, is willfully self-referentially absurd.

    That is, if anything, it is this particular ATBC talking point that is meaningless.

    Good day

    GEM of TKI
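
    PS: For onlookers who want to check the arithmetic above, here is a minimal Python sketch (my own illustration, not an official calculation; the cosmological inputs are the rough figures cited in this thread):

    # Back-of-envelope check of the numbers cited above. Assumed inputs
    # (rough figures from this thread): ~10^80 atoms in the observable
    # cosmos, ~10^45 Planck times per second, ~10^25 seconds of
    # thermodynamic lifespan, giving ~10^150 atomic Planck-time states.
    configs_1000_bits = 2 ** 1000                    # distinct 1,000-bit configurations
    planck_states = 10 ** 80 * 10 ** 45 * 10 ** 25   # ~10^150 upper bound on states

    print(f"2^1000 ~= {configs_1000_bits:.3e}")      # ~1.070e+301, as cited
    print(f"states ~= {planck_states:.3e}")          # ~1.000e+150
    print(f"2^1000 / states^2 ~= {configs_1000_bits / planck_states**2:.1f}")  # ~10.7: more than the SQUARE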

  8. 2. Material forces have never been shown to produce IC and FSCI.

    This is and has been under debate. The observations that are based upon this assumption are then also in debate.

    Please take me off moderation; the only posts that ever get through are the ones where I mention I go to church.

  9. Good stuff Barry.

    My personal take when confronted with the old argument from ignorance or its sister argument “from incredulity”, is that those objections are invalid because the data used to support ID is largely based on statistical mechanics.

    The principles of SM simply do not allow Darwinian macro-evolution to be even possible.

    Combinatorial dependencies abound in the genome.

    i.e. structures that depend on other structures that depend on more structures, information that depends on other information,… and so on.

    This is obvious in organic machines like the famous E. coli flagellum.

    The protein parts are interdependent, and you cannot just invent a just-so story to explain their existence or, more importantly, their correct assembly by precise instructions.

    Indeed, Darwinists have no clue where the assembly instructions for putting the protein parts together in the correct order come from, let alone how the parts “evolved”, all in the correct forms and materials, etc.

    Parts in any machine have to be precisely aligned, with correct strengths, material properties, distances, viscosity, output torque, size, power factors, rpm, etc.

    In short, parts must fit. Parts must endure stress factors applied to them. Parts must be aligned. Parts must have the correct physical properties …

    No different for such organic engines.

    Darwinists NEVER even think about these things. Engineers do.

    Darwinists have no clue what they’re asking of unguided, chance processes just for constructing a “simple” bacterial motor let alone something like DNA or an entire genome.

    Combinatorial dependencies in any complex mechanism always imply statistical mechanics (SM).

    SM has nothing to do with ignorance or incredulity. SM has to do with the laws of physics and chemistry…
    whether any motor can function with weak, mis-sized, misaligned, or high-friction parts.

    SM determines whether it is possible for such machines to even exist and what the probabilities are on whether or not they can just “evolve” and assemble without reason or guidance.

    Arguing from ignorance and unwarranted credulity, in fact, is the sole domain of the whole Darwinian scenario!

    We are given stories in place of empirical evidence. We are told how such and such COULD, MAYBE, MIGHT have occurred.

    So where do these blind Darwinists get off claiming IDists use ignorance based args when they are more guilty of it than any other domain in the history of science?!

    All we’ve been shown is quaint “possible” stories that we are supposed to believe on sheer ungrounded faith.

    The ignorance arguments in Darwinism are ubiquitous – otherwise we would have no reason for just-so stories at all.

    Newton needed no imaginative narrative for explaining calculus, gravity or the laws of motion.

    When IDists use specified complexity, combinatorial dependencies, and prescribed information arguments against Darwinian idiocy, they are in fact using SM.

    It has nothing to do with either ignorance or incredulity.

    Let the Darwieners defend themselves in their own use of mass waves of args from ignorance and sheer credulity.
    They never have and indeed they cannot.
    Their literature is full of it.

  10. Barry Arrington,

    In her post MathGrrl asserted that functionally specific complex information cannot be measured in a mathematically rigorous way.

    That’s not quite correct. I asked for a rigorous mathematical definition of CSI and some example calculations, rather than any of its derivatives, since that is the metric described by Dembski and asserted by many ID proponents to be a clear indicator of the involvement of an intelligent agent.

    She is wrong about that, as the comments on her own post demonstrate.

    Actually, the posts in that very long thread demonstrate that no one who participated was able to provide that rigorous mathematical definition, nor was anyone able to provide example calculations. vjtorley has since started at least two other threads that confirm that conclusion.

    But whether she is wrong or right is beside the point with respect to my post. Even MathGrrl does not deny the EXISTENCE of FSCI. Why do you?

    As noted, I was discussing CSI rather than any of its variants such as FSCI. My conclusion is that, without a rigorous mathematical definition and examples of how to calculate it, the metric is literally meaningless. Without such a definition and examples, it isn’t possible even in principle to associate the term with a real world referent.

  11. MathGrrl:
    “without a rigorous mathematical definition and examples of how to calculate it, the metric is literally meaningless. Without such a definition and examples, it isn’t possible even in principle to associate the term with a real world referent”

    My understanding is that CSI is the same thing as specified complexity.
    Are you saying that we cannot recognize specified complexity if we can’t calculate it accurately?
    Specified complexity is just that!
    Complexity that is specified.

    Something that is complex is something that is improbable. Something that is specified is something that conforms to an independent pattern.
    If something is both complex and specified, then it is specified complexity.
    An example would be a Shakespeare sonnet. It is both extremely improbable to come about by chance and at the same time it is meaningful. So there.
    Even if you can’t calculate a number for it, we still know it is a real thing. You just can’t argue that.

  12. MathGrrl,

    You are right that it can sometimes prove difficult to associate a specific number with a specific object as a measure of its complex specified information. vjtorley has struggled with this problem in two recent posts, trying to give an estimate for the CSI of the “2001” monolith. Everyone who watched the movie (except those who deliberately closed their minds to it) recognized that this was a designed object, because its very unusual shape precluded a non-intelligent origin for that shape; that was the whole premise of the movie. But whether its CSI was 1000 or 10^6 or 10^12 or some other number is a difficult calculation.

    However, just because it is difficult to give a specific number to every designed object does not mean that the same difficulty exists for all objects with CSI. Some objects, specifically DNA and protein sequences, can have their CSI much more easily measured. I have yet to see you deal with Durston KK et al.,
    ( http://www.tbiomed.com/content/4/1/47 ),
    even though their article was cited in the comments to your post (Comments 12, 250, 320, 333, 340, 365, 391, 392, 393, 394, 420, and 431; I left out comments 215 and 216 as not being specific enough) and two links were given; you sidestepped the issue, specifically in comment 396.

    Could you please explain to us why Durston et al.’s (peer-reviewed) calculations are not correct, and why FSCI either has no meaning or cannot be quantified in at least some cases; or else concede the point that FSCI, and therefore CSI, has meaning in at least some (biologically relevant) cases.

  13. Onlookers:

    Observe how MG has again ducked the issue of the definition offered in the literature by Orgel and Wicken, in the 1970s.

    Judging by how LYO has used the same notion (that without a mathematical definition the concept is meaningless) as a dismissive talking point, I read this as little more than a strawman, dismissive rhetorical tactic.

    Let us note that, if we apply the same standard to anything, including the assertion that, absent numerical quantification, statements are meaningless, we end up with the absurdity that the position statement insisting on such is itself meaningless, as we face an infinite regress of demands for quantification.

    Quantification is important — and it is addressed in the case of both CSI and the relevant subset FSCI [noting also the recent remarks here that the different metrics are fairly closely related] — but it is not the be-all and end-all of relevant understanding.

    Finally, we should not let the issue of quantification be abused as a dismissive talking point.

    If MG et al are not willing to claim that Orgel and Wicken were “meaningless” when they gave definitions as cited in 7 above, then that is as plain a refutation of the assertion as can be desired.

    We should not be intimidated by ill-founded, dismissive talking points.

    GEM of TKI

  14. MathGrrl:
    I asked for a rigorous mathematical definition of CSI…

    Nice strawman; the following is what you get:

    Claude Shannon provided the math for information. Specification is Shannon information with meaning/function (in biology, specified information is cashed out as biological function). And Complex means it is specified information of 500 bits or more, that math being taken care of in “No Free Lunch”.

    That is it: specified information of 500 bits or more is Complex Specified Information. It is that simple.

    IOW CSI is what we humans use every day to communicate and get things done.

    And if MathGrrl cannot grasp that concept then she isn’t worth the effort.
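
    To make the stated rule concrete, here is a minimal Python sketch (my own illustration of the rule as just stated, not Dembski’s math; the 7 bits per ASCII character convention is the one used elsewhere in this thread):

    # Sketch of the stated rule: specified information of 500+ bits is CSI.
    CSI_THRESHOLD_BITS = 500   # threshold cited above from "No Free Lunch"
    BITS_PER_ASCII_CHAR = 7    # 2^7 = 128 possible ASCII symbols

    def is_csi(text, is_specified):
        # Information-carrying capacity of the string, in bits.
        capacity_bits = len(text) * BITS_PER_ASCII_CHAR
        return is_specified and capacity_bits >= CSI_THRESHOLD_BITS

    sentence = "This functional English sentence is long enough to pass the five hundred bit mark."
    print(len(sentence) * BITS_PER_ASCII_CHAR, "bits;", is_csi(sentence, True))  # 574 bits; True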

  15. Another confusion for MathGrrl is her refusal to understand that CSI pertains to ORIGINS. I provided the quotes from Dembski and Meyer but she refuses to accept it. Willful ignorance is not a good way to try to learn about something.

    Why is this important? She brings up gene duplications: gene duplications in already existing organisms. That is cheating, as gene duplications can only be called blind watchmaker processes if living organisms arose from non-living matter via blind watchmaker processes; it is all about origins.

    The point being, a gene duplication in a design scenario would not increase the existing CSI, as it would be part of it. And if the blind watchmaker produced living organisms from non-living matter, then you don’t need gene duplications; ID is already falsified.

    Which brings us to her equivocal use of “evolutionary mechanisms”. The point of CSI is that blind watchmaker processes cannot generate it from scratch, and “evolutionary” mechanisms can, for all she knows, be design mechanisms.

    She thinks that just because we understand the process it means it is a blind watchmaker process. She also thinks that ID requires a designer to come in and physically change the DNA. I’m telling you, this person is fried. To wit: we understand the process of executing computer programs, the paths the signals take to produce a result. Yet no one would say computers run via blind watchmaker processes. And I don’t need a computer programmer here to make the decisions the program can make without intervention. IOW she doesn’t even understand the first thing about Intelligent Design.

    OK, moving on. In her point 3 she has a digital organism of 22 bytes. 22 bytes = 176 bits. That is 176 bits of information-carrying capacity; the specificity will then determine the amount of specified information.

    So that is how you do it: count the bits and check on the variability. If you have 500 bits but any arrangement can cause the same effect, then it ain’t specified.

  16. kuartus,

    My understanding is that CSI is the same thing as specified complexity.

    My understanding from Dembski’s writings is that CSI is specified complexity that exceeds a certain number of bits, so I think we’re in near agreement.

    Are you saying that we cannot recognize specified complexity if we can’t calculate it accurately?

    Specified complexity and CSI are presented as objective numerical metrics. I am saying that, without a rigorous mathematical definition, the terms are literally meaningless.

  17. Paul Giem,

    Some objects, specifically DNA and protein sequences, can have their CSI much more easily measured. I have yet to see you deal with Durston KK et al.,
    ( http://www.tbiomed.com/content/4/1/47 ),
    even though their article was cited in the comments to your post

    I am interested in CSI as defined by Dembski since that is what is claimed by many ID proponents as a clear indication of the involvement of intelligent agency. Durston’s metric is not the same and, as far as I know, has not been claimed or demonstrated to be such an indicator.

  18. kairosfocus,

    Observe how MG has again ducked the issue of the definition offered in the literature by Orgel and Wicken, in the 1970s.

    As I explained repeatedly in the thread following my guest post and just above to Paul Giem, I am interested in CSI as defined by Dembski since that is what is claimed by many ID proponents as a clear indication of the involvement of intelligent agency. Orgel’s work is completely dissimilar except for the name.

  19. Joseph,

    Claude Shannon provided the math for information. Specification is Shannon information with meaning/function (in biology, specified information is cashed out as biological function).

    Dembski does not use Shannon information. Further, Schneider has shown that a small subset of known evolutionary mechanisms can generate arbitrary amounts of Shannon information.

    And Complex means it is specified information of 500 bits or more, that math being taken care of in “No Free Lunch”.

    And yet, no one in my guest thread was able to provide detailed calculations of CSI for the four scenarios I described.

  20. Joseph,

    Another confusion for MathGrrl is her refusal to understand that CSI pertains to ORIGINS.

    That appears to be your own idiosyncratic view, not shared by many, if any, other ID proponents.

    MathGrrl propagandizes this statement:

    ‘Durston’s metric is not the same and, as far as I know, has not been claimed or demonstrated to be such an indicator.’

    Well, Durston, in this video, is clearly claiming that functional information (FITS) is a reliable indicator of intelligence;

    Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – video
    http://www.metacafe.com/watch/3995236/

    ,,, But alas MathGrrl this does not really matter to you does it??? for you are not really interested in pursuing the truth in the first place!

  22. Another confusion for MathGrrl is her refusal to understand that CSI pertains to ORIGINS.

    MathGrrl:

    That appears to be your own idiosyncratic view, not shared by many, if any, other ID proponents.

    Strange that I quoted Dembski in support of my claim:

    The central problem of biology is therefore not simply the origin of information but the origin of complex specified information. [page 149 of “No Free Lunch”, emphasis added]

    ID is based on three premises and the inference that follows (DeWolf et al., Darwinism, Design and Public Education, pg. 92):

    1) High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.

    2) Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.

    3) Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.

    4) Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems.
    (emphasis added)

    IOW MathGrrl proves she is either willfully ignorant or purposely obtuse.

  23. In her comments leading up to MathGrrl’s thread, she was fond of saying that evolutionary algorithms can create CSI based upon the definitions given by ID proponents.

    On her thread, a valid challenge (comment #31) was made to that conclusion, in principle.

    She ducked that challenge by repeatedly asking a question that had no bearing whatsoever on the challenge being made.

    There is little doubt that she did not want to acknowledge the validity of the challenge because it would add a certain perspective to her comments that was unwelcome – which is exactly why I made the challenge.

    Materialists who promote EAs often like to portray that a solution to the mystery of the information within the genome is being found, yet the very thing that creates that mystery has nothing whatsoever to do with an evolutionary algorithm.

    Claude Shannon provided the math for information. Specification is Shannon information with meaning/function (in biology, specified information is cashed out as biological function).

    MathGrrl:

    Dembski does not use Shannon information.

    I didn’t say he did. Please try to follow along.

    Further, Schneider has shown that a small subset of known evolutionary mechanisms can generate arbitrary amounts of Shannon information.

    Your continued equivocation is duly noted- as is your willful ignorance.

    However, neither of those refutes CSI nor addresses what I posted.

    And Complex means it is specified information of 500 bits or more, that math being taken care of in “No Free Lunch”.

    MathGrrl

    And yet, no one in my guest thread was able to provide detailed calculations of CSI for the four scenarios I described.

    And yet I told YOU how to do it for yourself.

    Why can’t you give it a go? I gave you one answer already.

  25. MathGrrl,

    The book “No Free Lunch” introduced CSI and states it pertains to origins.

    That you refuse to read the book is an indication that you aren’t interested in anything beyond getting the water all muddy.

  26. MG:

    Pardon, but CSI was NOT defined by Wm Dembski.

    As you were repeatedly corrected in the earlier thread, and as has sat in the UD WACs 25 ff for years, it was defined on key examples — i.e. an ostensive definition [and you were also given a tutorial on definition that you ignored] by Orgel and Wicken in the 1970s.

    What Dembski did was, for sufficiently low probabilities on chance-driven hyps, define a Hartley-style log-probability info metric [the BASIS for Shannon’s info metrics and analysis], then apply a beyond-a-threshold criterion, as Joseph just summarised:

    Claude Shannon provided the math for information. Specification is Shannon information with meaning/function (in biology, specified information is cashed out as biological function). And Complex means it is specified information of 500 bits or more, that math being taken care of in “No Free Lunch”.

    That is it: specified information of 500 bits or more is Complex Specified Information. It is that simple . . .

    1 –> The basic idea here is that once we — us semiotic, judging, observing agents who do science — are looking at identifiable complex specified information, we can assign a reasonable chance-driven hyp and assess a probability.

    2 –> That probability, inverted and logged [leaving off a posteriori issues] as Hartley did, gives us an info metric, in bits if the log is base 2. (Cf my basic tut in my always linked, here on this. Have you had to deal with designing, developing, testing or analysing real world digital comms systems working with bits?)

    3 –> This has been naturally extended to identifying the quantity of info-carrying capacity in bits by looking at the number of contingencies per symbol, element or parameter. This is the commonplace measure of memory, CDs etc. in bits.

    4 –> Now, we recognise that a space of contingencies based on possible configs is in principle searchable by chance and trial and error. Indeed, that is Darwin’s theory in a nutshell. But, once we come to sufficiently large config spaces, the odds of finding recognisably special, hot or target zones or islands of function by random walks plus trial and error fall as bit depth rises.

    5 –> Indeed, so far we see on reported Infinite Monkey real world tests [as has been drawn to your attention repeatedly but never acknowledged as noticed], that spaces of order 10^52 are demonstrably searchable for islands of function, i.e. 175 or so bits. Citing the just linked:

    A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

    RUMOUR. Open your ears; 9r”5j5&?OWTY Z0d…

    6 –> But spaces far beyond that grow exponentially, for the number of possibilities goes as 2^n. So, when we start to look at 400 – 500 bits or 1,000 bits, we are dealing with a very different kettle of fish: 10^120 – 10^150 or 10^301 possibilities or so, in a world where the maximum reasonable number of bit operations is 10^120, the maximum number of Planck-time quantum states of 10^80 atoms is 10^150, and 10^301 is ten times the square of that.

    7 –> As you will see in the Little Green Men thread (where you declined to comment specifically) from 14 – 16, based on the discussion over the weekend, the undersigned analyses that the various relevant metrics [including Dembski’s and variants thereof] are doing a Hartley-information-beyond-a-threshold metric, i.e. they are looking at searching a config space and are positing that beyond a threshold, it is unreasonable to expect that recognisable special zones will be hit by random walks plus trial-and-error dominated searches. (If you want to argue that the laws of necessity of the cosmos acting on initial conditions force the emergence of life, that is tantamount to a declaration that the cosmos is designed and programmed to produce life; which would immediately imply that the design inference on seeing the FSCI in DNA is correct. There are two observed sources of high contingency, chance and choice.)

    8 –> Now, in the quantitative metrics under description that you deny the effective existence of, the de facto thresholds applied are at 398+, 500 and 1,000 bits, in a context where we semiotic agents identify that we are dealing with complex specification by various means including K-compressibility of the description.

    9 –> In the case of the Dembski metric that you have dismissed, the threshold will only be passed if the neg log p(T|H) value is in excess of 398 bits, i.e. the probability is of order 1 in 10^120 as an upper bound, which is sufficiently low for the implicit approximation away from the analytical result to be acceptable.

    10 –> Given the way log of a product operates, phi_S EXTENDS the threshold value, with 10^150 being a natural upper bound. So, the Dembski metric can be seen in Hartley information terms, thusly, excerpting the LGM comment 16:

    Chi = – log2[10^120 · Phi_s(T) · P(T|H)] . . . Eqn 9

    . . . we see the same structure C = – log2[D*p], with only the value of K = – log2 (D) being differently arrived at. In this case, we have a compound factor one term being a metric of number of bit operations in the observed cosmos, and the other — expanding the threshold bit depth — being a metric of “the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T.” 10^120 is basically a multiplier taking into account “where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120.”

    So, whatever the technical details and critiques involved, the metrics all boil down to identifying a reasonable threshold for which, beyond it, once we have specified complexity by KC-compressibility or functionality etc, we can be confident that the hot zone or the like are maximally not likely to have been hit upon by chance.

    11 –> That is, we move like this:

    Chi = – log2[10^120 · Phi_s(T) · P(T|H)]

    or, Chi = – log2[D1 · D2 · p]

    i.e. Chi = – log2(p) – K1 – K2

    or, Chi = I – 398 – K2

    12 –> In relevant cases, Dembski’s metric Chi is a measure of specified information in bits beyond a flexible threshold driven by a lower bound of 398 bits, and with a natural upper bound at 500 bits.

    13 –> That threshold is set, again, based on criteria of complexity that are reasonable, identifying when a recognisable hot zone is credibly so deeply isolated that it is a superior inference to hold that if we see something of that much specified complexity, it is most reasonably understood as an artifact, not a product of blind watchmaker processes.

    14 –> And the simple brute force X-metric stands out as again using a reasonable judgement on contingent complexity at 1,000 bits, and specificity by recognisable function based on a limited cluster of configs, use of a meaningful code with restrictive rules and symbols, or the same sort of K-compressibility that is otherwise described. Then, we simply use the number of bits used.

    15 –> The 1,000 bit limit is set to get around probability density function debates. The whole observable cosmos acting as a search engine could not sample more than 1 in 10^150 of the states, so no credible search is possible.

    16 –> Bluntly put, if you see a flyable jumbo jet, the best explanation is design, just as it is the best explanation for ASCII text in English beyond 143 characters, and by extension, the DNA code for the cluster of proteins in the living cell.

    17 –> To overturn this, you do not need to go into all sorts of debates over whether everything has to be reducible to mathematical models to be meaningful (self-referentially absurd, BTW; reduce that to a math metric, please); all you need to do is produce a case where at least 143 characters of ASCII text in English have been created by Infinite Monkey processes, and you can use the Gutenberg library collection as a test base or the like.

    18 –> And in fact this has been repeatedly pointed out to you. So-called evolutionary algorithms that are intelligently designed to hill climb within islands of function are not counter examples for the obvious reasons. Duplicating functional strings is not an explanation of the origin of the info in the strings by chance and necessity, it is simply duplication.

    19 –> See if, being functional all the way in at least a core group of sentences, you can convert “See Spot run” into a Shakespearean sonnet, much less a play, or the like, by duplication, random walk variation, and trial and error, within the search resources of the observable cosmos.

    20 –> If you can, on observation, you have shown the capacity of chance plus necessity to produce FSCI from scratch. That is the criterion of empirical testability.

    21 –> On the infinite monkeys analysis and the induction on reliable tested sign, we hold that FSCI is a reliable sign of design. Indeed, given the link to the second law of thermodynamics, you are setting out on the task of proposing to create an informational equivalent to a perpetual motion machine of the second kind.

    22 –> You will therefore understand our comfortable conclusion that your task is almost certainly hopeless.

    _______________

    GEM of TKI
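
    PS: For concreteness, here is a minimal Python sketch of the metric in the threshold form just derived (my own reading of Eqn 9 as excerpted above, not Dembski’s code; the input values are hypothetical, for illustration only):

    import math

    # Chi = -log2[10^120 * Phi_s(T) * P(T|H)]: specified information in bits
    # beyond a threshold; a positive Chi is read above as a design indicator.
    def chi(p_T_given_H, phi_s):
        return -math.log2(10**120 * phi_s * p_T_given_H)

    # Equivalent form from the derivation above: Chi = I - K1 - K2, with
    # I = -log2(p), K1 = log2(10^120) ~ 398.6, and K2 = log2(Phi_s(T)).
    def chi_threshold_form(p_T_given_H, phi_s):
        return -math.log2(p_T_given_H) - math.log2(10**120) - math.log2(phi_s)

    p, phi = 2.0**-600, 10**5   # hypothetical values, for illustration only
    print(round(chi(p, phi), 1), round(chi_threshold_form(p, phi), 1))  # both ~184.8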

  27. PS: In case the temptation is to again brush aside the X-metric as non-quantitative, observe how it is used:

    C = 1/0, on a semiotic agent’s reasonable evaluation that the item is beyond 1,000 bits of contingent complexity.

    S = 1/0, on the SA’s judgement on warrant that the item is specific per function, code use, K-compressibility etc.

    B = Number of bits used.

    X = C*S*B

    That is, this is a direct application of the explanatory filter. If not contingent, C = 0 and X = 0. If not specific [almost any complex bit string will do], S = 0 and X = 0.

    Only if both complex while being contingent, and specific, can X rise above zero.

    Once past the thresholds, X is the number of functionally specific bits used. In the WACs there is a calculation for an RGB computer screen full of useful information.

    Any English, ASCII text string that passes 143 characters will be deemed FSCI, and the DNA complement of the living cell will be deemed FSCI.

    On the explanatory filter, such FSCI is deemed designed.

    I hold this is obviously quantitative, and is based on a cogent conceptual model that can be practically, operationally used. Indeed, it has been routinely implicitly used when we speak of typical working computer files of any size.
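
    As a minimal Python sketch of the filter just described (my own illustration of the brute force rule above, not an official implementation):

    # X = C*S*B: the explanatory filter in brute force form, as laid out above.
    def x_metric(bits_used, judged_complex, judged_specific):
        C = 1 if judged_complex and bits_used > 1000 else 0  # past the 1,000-bit threshold
        S = 1 if judged_specific else 0  # specific per function, code use, K-compressibility
        return C * S * bits_used         # number of functionally specific bits, or 0

    # A 143-character English ASCII string: 143 * 7 = 1,001 bits, judged specific.
    print(x_metric(143 * 7, judged_complex=True, judged_specific=True))   # 1001
    # A random bit string of the same length: complex but not specific.
    print(x_metric(143 * 7, judged_complex=True, judged_specific=False))  # 0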

  28. kf
    Congratulations, I think we can all agree that you have successfully demonstrated that a tornado-in-the-junkyard scenario is not a good explanation for the diversity of life we see today! Infinite monkeys: not so sure! Infinities are always a bit tricky and sometimes they achieve extraordinary things (which must especially be true for infinite monkeys, of all things). Could you elaborate, please?

  29. Indium:

    The threshold begins not at the level of a tornado in a junkyard building a jumbo jet, but at the level of producing 125 functionally specific bytes, or 1,000 bits or 143 ASCII characters.

    The attempt to dismiss Hoyle’s point as a fallacy is itself a strawman.

    FYI, 125 bytes of information is trivially small for anything that has to seriously function on a specific configuration.

    That obtains for origin of life and for origin of body plans, most notably the origin of the unique human physical equipment to use conceptual, verbal, articulate language.

    Observed cell based life is irreducibly complex on an integration of metabolising capacity and a coded information based von Neumann self-replicator.

    If you want to propose a hypothetical autocatalytic RNA world, you need to produce empirical evidence to substantiate origin of codes, algorithms, data structures to express required info, informational molecular nanomachines and their irreducibly complex functional integration on blind watchmaker processes in credible prelife environments.

    The infinite monkeys result already tells us this is not credible. But maybe you know of a set of results not previously known that renders such credible on the gamut of our observed cosmos.
    Going beyond that, you have to similarly cross the body plan origination threshold, including for the origin of the human language and cognitive ability that is so tightly bound up with it.

    I predict, on track record: once the distractive rhetorical gambits [such as the so-called fallacy of Hoyle] are set aside, there will be no empirical evidence that crosses the informational gaps, but plenty of presumption of a priori materialism that makes it an “it must have been so.”

    NOT.

    We know intelligence routinely creates FSCO/I. Designers create machines controlled by and expressing FSCI-rich software, and we are working on miniaturisation. We know in principle how to design a vNSR, though we are nowhere currently near a full kinematic implementation. There was a promising case of a machine that sort of replicated itself as a 3-D printer recently, though.

    In short, design is an infinitely better warranted explanation for FSCO/I than blind watchmaker chance plus necessity, including in living cells and complex multicellular organisms up to and including language using man.

    And that sticks crossways in the gullet of the materialist establishment.

    GEM of TKI

  30. PS: You were already given a link on the infinite monkeys theorem discussion. Here is the Wiki article, which brings up and addresses all the relevant issues at 101 level.

  31. kf
    Yes, I know the wiki link; I just don’t understand what you’re getting at, hence the question.
    Anyway, I will no longer distract from the main topic of this thread.

  32. MathGrrl (#16),

    You say,

    I am interested in CSI as defined by Dembski since that is what is claimed by many ID proponents as a clear indication of the involvement of intelligent agency. Durston’s metric is not the same and, as far as I know, has not been claimed or demonstrated to be such an indicator.

    Let me help you. Durston’s metric is a subset of Dembski’s metric. I’ll give you four examples that illustrate the difference, and the similarity.

    A. The tar at the bottom of a Miller-Urey apparatus has long polymeric chains, but no discernible order. This has neither Durston FCSI nor Dembski CSI.

    B. A long string of DNA with random bases has a long specified backbone but no discernible order to the bases themselves. Whether the backbone itself can be formed without intelligent intervention can be disputed (I tend to believe it can’t), but the arrangement of the bases does not have either Durston FCSI or Dembski CSI.

    C. A long string of DNA capable of coding for a 500 amino acid residue protein, at least half of which must be correct in order for the protein to function, has a probability of 20^(-250) of forming spontaneously, and thus has log2 (2^250 * 10^250), or 250 + 830, or 1080 bits of information, well over the Dembski limit. Since its information is defined by its ability, when translated, to perform a function, it has both Dembski’s CSI and Durston’s FSCI.

    D. Venter’s watermarks have 60 amino acid residues coded for in total, which means 259 bits (actually a little more because of the absence of stop codes; plus, if there are stop codes on either side of each string, the string length is increased to 70). These watermarks do not have Durston’s FSCI, as their specification is not functional, but they do have CSI as defined by Dembski, just not the ~500 bits needed to surpass his universal limit.

    I’d be willing to hazard a guess that most of the ID-friendly commentators at UD would agree with me. I invite comments from those who choose to agree or disagree.

    Since you apparently disagree with this analysis, could you please explain why, and specifically why Durston FSCI is not a subset of Dembski CSI.
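
    For anyone who wants to check the arithmetic in examples C and D, here is a quick Python sketch (my own check, illustrative only):

    import math

    # C: 250 amino acid positions that must be correct, 20 choices each.
    bits_C = 250 * math.log2(20)   # = log2(20^250) = log2(2^250 * 10^250)
    print(round(bits_C))           # ~1080 bits, as stated above

    # D: 60 amino acid residues coded for in Venter's watermarks.
    bits_D = 60 * math.log2(20)
    print(round(bits_D))           # ~259 bits, as stated above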

  33. ID opponents sometimes attempt to dismiss ID theory as an “argument from ignorance.”

    It is an argument from ignorance.

    It goes something like this:

    I’m ignorant of the arguments for ID.

    Therefore, ID theorists are ignorant.

    It follows by the impeccable force of logic that ID is an argument from ignorance.

    http://www.talkorigins.org/indexcc/CA/CA100.html

  34. Indium:

    The underlying issues for the Infinite Monkeys analysis are at the heart of the questions on the design theory issue. One must understand what it is getting at — the business of random walk plus trial and error searches of large configuration spaces — if one is to have any reasonable idea of the issues at stake in the discussion.

    So, pardon a bit of a tutorialish pause . . .

    1 –> The infinite monkey theorem is about real world testing of the likelihood of random walk search plus trial and error to find functionally specific, complex information; or,

    2 –> at least to find hot zone clusters of microstates (in the thermodynamics context).

    3 –> It was long — and often — said that evolutionary advocates from C19 on were arguing that a large enough group of monkeys, banging away at keyboards at random, would eventually type out the works of Shakespeare.

    4 –> Thus, from my childhood [I recall there was more than one Sci Fi short story on this], I have been familiar with the rhetorical claim that surprisingly unusual configurations can plausibly be accessed by chance given enough resources.

    5 –> Indeed, that is the general context of Dawkins’ notorious targeted-search Weasel software, and indeed it is partly why he chose a phrase in Shakespeare.

    6 –> From the state of debate on the roots in Wiki, it seems the claimed C19 provenance in evo debates is not documented [as opposed to is not real -- not everything gets written down or printed . . . ], but there is documentation of use as a metaphor for the challenges implied in trying to get around the second law of thermodynamics by chance (which may reflect oral tradition on use in debates on evolution!).

    7 –> I never met this theorem in that thermodynamics context, but I met the rough equivalent in the question of the odds of the O2 molecules in a lecture room all rushing to one end by chance. Logically and physically possible, but not observable on the gamut of the cosmos.

    8 –> This grounds the sort of premise used in the 2nd law of thermodynamics: not all that is possible is sufficiently likely to spontaneously happen. Hence Hoyle’s scaled-up metaphor of a tornado in a junkyard (and Robertson’s metaphor of an air traffic control system gone awry where they no longer know where the many, many planes are, as a model for the informational thermodynamical view of molecular chaos).

    9 –> Cf my thought experiment scaling the chance assembly challenge back down to Brownian motion level here.

    (My copy of Kittel’s Thermal Physics has somehow been misplaced in going back and forth across the region, so I cannot check the Wiki cite from him directly now.)

    10 –> In short, the point addressed by the Monkeys analysis is central to the issues being raised by the design inference approach.

    11 –> In particular, there are some things that are sufficiently remotely likely that they are empirically implausible and practically unobservable due to the balance of relative statistical weight of microstate clusters linked to the general macro-level circumstances, as Abel has elaborated in his recent paper here.

    12 –> This idea is also connected, at a much simpler level, to traditional hypothesis testing.

    13 –> For the idea is that if you pick samples at random from, especially, a bell-type distribution, a sample is much less likely to be in the far tails than in the central bulk.

    (So if the null hyp is that you belong to distribution A not B, but your sample is in the far tail of A but could possibly come from the bulk of B, it is more reasonable to infer that the better explanation is that you are in the bulk of B than the far tail of A. So, with a certain level of confidence, one rejects the null and accepts the alt hyp.)

    14 –> I have used the image of dropping darts from a stepladder onto a chart of a distribution, with say 30 points. If you then mark even-width stripes, you will see that the dart drops are far more likely to hit the tall stripes from the bulk than the far tails, but will give a sample that can be turned into a fair picture of the original chart. 30 hits is of course chosen because that is about the point where the law of large numbers has an impact. (This is actually an adaptation of one of my first university-level physics exercises, of tossing darts at a graph paper with a point target and plotting the resulting distribution in bands, which I adapted for my own teaching, to a model of statistical process control with six-sigma banding . . . )
    ____________

    Okay, that should help set the issues in context for a more focussed reflection and discussion.

    GEM of TKI
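
    PS: A minimal Python sketch of the scaling at work in the Monkey Shakespeare Simulator example above (my own illustration; the 40-key alphabet is an assumption for illustration, not the applet’s actual setting):

    import math

    KEYS = 40  # hypothetical keyboard size; the real applet's may differ

    def expected_attempts(k):
        # Expected number of random trials to match the first k target characters.
        return KEYS ** k

    for k in (10, 24, 143):
        print(f"{k:>3} chars: ~10^{math.log10(expected_attempts(k)):.0f} attempts")
    # The 24-character match cited above sits near 10^38 attempts; 143
    # characters is near 10^229, past any available search resources.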

  35. Dr Giem:

    Durston is using real world observed distributions of AA’s in protein families to assess the ways in which we get islands of function in the space of possible configs.

    His analysis of null, ground and functional states on Shannon’s H-metric of average information per symbol and increment in information per symbol to go from state to state, is strongly related to the Dembski type islands of function approach.

    I cannot understand why it is that some would try to drive a wedge between the two looks at the matter. They are obviously related. Of course Dembski has been trying to get at a broader view, so that he does not use function as the specific way to impose a specification, but he does speak of function as one way to cash out specification; which goes back to Wicken and to Orgel.

    The way I see it is that if we have real world results in the form of distributions of AA’s for proteins in families, for various organisms, why not use that distribution as a good sample of the real world possibilities?

    My own quick and dirty look, as noted in discussing a hypothetical protein’s sequence variability while retaining function, in my always linked note, contains this remark:

    If, instead, we model the individual AA’s as varying at random among 4 – 5 “similar” R-group AA’s on average without causing dys-functional change, the full 232-length string would vary across 10^150 states. As a cross-check, Cytochrome-C, a commonly studied protein of about 100 AA’s that is used for taxonomic research, typically varies across 1 – 5 AA’s in each position, with a few AA positions showing more variability than that. About a third of the AA positions are invariant across a range from humans to rice to yeast. That is, the observed variability, if scaled up to 232 AA’s, would be well within the 10^150 limit suggested; as, e.g. 5^155 ~ 2.19 * 10^108. [Cf also this summary of a study of the same protein among 388 fish species.]

    That looks like a picture of an island of function in a wider sea of possibilities, at least to me.

    GEM of TKI

  36. kf
    Yes I understand all this stuff. Thanks for the summary. Some things are very unlikely to happen if the only resource you have is complete randomness.

    Dawkins’ Weasel shows what happens when you start to have non-random components in the process. We could probably argue about some details again for hours (latching!), but that is beside the point: as soon as you have some kind of feedback in the process, your chances will increase dramatically.
    So, to the point, what you attack is a straw man version of evolution. Evolution has highly non-random feedback mechanisms that filter the mutation-induced noise in each generation.
    Please note that I don’t say that this answers all the questions with regard to the realistic capabilities of evolution. I just say that what you attack here has not much to do with evolution at all. It’s a straw man, pure and simple.

  37. Indium:

    Dawkins’ Weasel — cf my discussion here in my always linked — is in fact a demonstration of design, here where variants are rewarded on increments toward the target without regard to functionality, as he himself admitted. Weasel should never have been used; it only succeeds in misleading people.

    And I have never said anything about “Some things are very unlikely to happen if the only resource you have is complete randomness.”

    I think you need to read, say, the introductory remarks on the issues of origins science here to get a better balanced view on what is going on; you seem to have thought that the Darwinist critics at their sites will give a true and fair view. Not so, on long experience.

    You will easily see that phenomena trace their causes to chance and/or necessity and/or art, on an aspect by aspect basis. Each has characteristic signs and capabilities.

    Mechanical necessity (a dropped heavy object reliably falls) does not account for high contingency but for regularities of nature like the law of gravity just exemplified. Chance contingency leads to stochastic distributions of outcomes. For instance if our dropped object is a fair die, it comes up in positions 1 to 6 at random, with more or less equal frequency. Two dice, would sum from 2 to 12, with 7 the most likely outcome.

    That domination by statistical weight of possible ranges of outcomes means that if an island of function is sufficiently isolated in the relevant config space there will not be enough search resources for chance to hit on its shores, i.e to get that first level of success that can then lead to hill climbing. And, recall “enough resources” issues start as quickly as 1,000 bits of information.

    Intelligence is able to generate purposeful choice contingency and so gives things directed configurations that are functional and complex, i.e. on islands of isolated, complex function. That is why FSCO/I beyond 1,000 bits is a reliable sign of design. Have you seen any coherent posts in this blog that were credibly produced by mechanical necessity and/or chance contingency?

    The problem with the origin of life on evolutionary models is that, first, the only observed life embeds a metabolic capacity coupled to a von Neumann, stored coded information based self-replicator, which itself requires codes, algorithms, data structures, information that is highly specific, and a means of putting that set of instructions to work. Such is irreducibly complex, and the DNA tells us that the stored information starts at 100 k+ bits. To compare, yesterday I showed how a blank Word doc has 150+ k bits. DNA is extremely efficient coding to do what it does in the space it uses!

    And yet, 100 k bits is well past the 1 k-bit threshold. Until that quantity of information is in place, there is no credible capacity to do metabolism and self-replication, including making the set of required working molecules to carry on the activities of life.

    When it comes to more complex body plans, we are looking at 10+ million bits, dozens of times over.

    When it comes to the origin of the human body plan, with its language capacity, the same again.

    Darwinian-type evolutionary mechanisms can explain modest hill-climbing within an island of function, but they have no empirically demonstrated capacity to get to such an island of function in the first place. That is, they have no ability to explain body-plan-level macroevolution, which is required to explain the origin of the range of species.

    Some other forms of evolution could explain such, on intelligent intervention. Indeed, a nanotech molecular-technology lab a few generations beyond Venter could do it. Mechanisms to effect design are quite conceivable, and Venter has demonstrated the first steps to routinising that. Already GMOs are a force in agriculture; they are even talking of GMO fish being approved, though I am a little leery. GMO corn is a major crop, and GMO sugar cane is what drives Brazil’s energy-cane industry.

    In short, we see that design can do it, and we do not see how Darwinian mechanisms — despite 150 years of claims — can. Indeed, since the infinite-monkeys threshold starts at 125 bytes of info (i.e., 125 x 8 = 1,000 bits), we have a strong analytical barrier, not just observations.

    That is what you need to answer to, and answer on empirical data, not just-so stories that presume a priori materialism so that serious improbabilities may be brushed aside. Nor will strawman distortions of the issues being raised, like the one I cited at the top of this discussion, do.

    And, Dawkins’ Weasel trick is not a good place to begin.

    GEM of TKI

  38. PS: Onlookers, since one of the current talking points is that ID supporters are ducking challenges, I took time to respond to Indium with a mini-essay. In fact, at this point I have little confidence that he will pay any mind to what was just put down. If, after Dawkins’ admission of the failure of Weasel to address the real challenge of functional improvement on random changes, he still brings it up after years of exchanges here at UD, that tells me a lot, none of it good.

  39. bornagain77,

    MathGrrl propagandizes this statement:

    ‘Durston’s metric is not the same and, as far as I know, has not been claimed or demonstrated to be such an indicator.’

    Well, Durston, in this video, is clearly claiming that functional information (FITS) is a reliable indicator of intelligence:

    Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – video
    http://www.metacafe.com/watch/3995236/

    ,,, But alas, MathGrrl, this does not really matter to you, does it??? For you are not really interested in pursuing the truth in the first place!

    Your lack of civility is noted. I trust you won’t be enough of a hypocrite to accuse others of the same in the future.

    My statement is completely true. At the time I wrote it, I was not aware that Durston’s metric had been claimed to be an indicator of intelligent agency. Further, it remains true that Durston’s metric is not the same as Dembski’s CSI.

  40. Upright BiPed,

    In her comments leading up to MathGrrl’s thread, she was fond of saying that evolutionary algorithms can create CSI based upon the definitions given by ID proponents.

    On her thread, a valid challenge (comment #31) was made to that conclusion, in principle.

    That is not a “valid challenge”; it’s simply your attempt to define CSI in such a way that it requires intelligence. That’s pretty uninteresting mathematically and not related to Dembski’s description of CSI, which was the topic under discussion.

  41. Joseph,

    Dembski does not use Shannon information.

    I didn’t say he did. Please try to follow along.

    I’m following just fine. You introduced the concept of Shannon information. Dembski’s CSI is not based on Shannon information. I’m interested in understanding Dembski’s CSI. If you didn’t intend your discussion of Shannon information to suggest a relationship, it is simply a non sequitur.

  42. Paul Giem,

    I am interested in CSI as defined by Dembski since that is what is claimed by many ID proponents as a clear indication of the involvement of intelligent agency. Durston’s metric is not the same and, as far as I know, has not been claimed or demonstrated to be such an indicator.

    Let me help you. Durston’s metric is a subset of Dembski’s metric. I’ll give you four examples that illustrate the difference, and the similarity.

    It would be more helpful if you could show that Durston’s metric is mathematically equivalent to Dembski’s description of CSI. If it isn’t, then Durston’s metric cannot be used to support claims made about CSI.

  43. PPS: Onlookers, in fact — as is in the already-linked discussion of Weasel in my always-linked — “latching” was empirically demonstrated, on the record, as a behaviour of runs of reasonable Weasel-type programs.

    Indium is raising a red herring leading out to a strawman, which he was already setting up for soaking in distortion-laced ad hominems.

    This is bringing me to the verge of the conclusion that I am dealing with a troll.

    Unless he shows me some definite signs of reasonableness, I shall take the position that is recommended best practice for such trolls: “don’t feed da trollz”

  44. MathGrrl,

    Do you understand what “in principle” means?

    Are you suggesting that if you make a comment about a subject using mathematics, but that comment is then invalidated by other reasoning, your comment stands as valid regardless?

    How exactly is that possible, MathGrrl?

  45. MathGrrl,

    Below is the comment that you refused to engage. You can choose to do so now.

    - - - - - -

    Does the output of any evolutionary algorithm being modeled establish the semiosis required for information to exist, or does it take it for granted as an already-existing quality?

    In other words, if the evolutionary algorithm – by any means available to it – should add, say, a ‘UCU’ within an existing sequence, does that addition create new information outside of (independent of) the semiotic convention already existing?

    If we lift the convention, does UCU specify anything at all? (Under the standard genetic code it is read as the amino acid serine; absent that convention, it is merely three bases.)

    If UCU does not specify anything without reliance upon a condition which was not introduced as a matter of the genetic algorithm, then your statement that genetic algorithms can create information is either (a) false, (b) over-reaching, or (c) incomplete.

  46. MG:

    Pardon a few direct words.

    It was already shown, for years [cf. WACs 27 - 28], that FSCI is a subset of CSI. Indeed, it is the biologically relevant subset, as can be seen from Orgel’s description, which drips with allusions to biofunction.

    In the freshly prepared linked excerpts and analysis the links and relationships to CSI are clearly shown.

    Remember, the Dembski metric boils down to — you ducked out of the thread where that was presented on Sunday, on a fairly flimsy excuse — a measure of bits beyond a threshold that starts at 398 bits, and it is predicated on the issue that islands of function in such spaces are going to be too deeply isolated to be found without active-information-assisted search. You may quibble at how he got there, but that is where he got to, and it is a reasonable metric on those terms:

    CHI = – log2 [D1 * D2 * p]

    where D1 = 10^120 ~ 2^398 is the bound on available probabilistic resources, D2 counts the relevant specifications (Dembski’s phi_S(T)), and p is the probability of the observed pattern on the chance hypothesis.

    So, on Hartley’s negative-log metric approach, writing Ip = – log2 (p) and K2 = log2 (D2):

    CHI = Ip – [398 + K2]

    where K2 ranges up to 100 or so bits, as VJT discussed, so the combined effect of D1 and D2 rounds off to a threshold of 500 bits.

    So, the CHI metric is a measure of information beyond a threshold at which it is reasonable that blind chance and necessity cannot credibly get to islands or hot zones sitting in configuration spaces defined by at least that many bits. Remember, the number of quantum events of the atoms in our solar system is of order 10^102, which is the sort of space taken up by 339 bits.
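
    In toy form, the resulting bits-beyond-a-threshold calculation is trivial to run (illustrative numbers only; the 500-bit figure is VJT’s rounding discussed above):

        def chi_bits(ip_bits, threshold_bits=500):
            # CHI = Ip - [398 + K2]; with K2 of order 100 bits, this is ~ Ip - 500.
            return ip_bits - threshold_bits

        print(chi_bits(1000))  # a 1,000-bit pattern: 500 bits past the bound
        print(chi_bits(100))   # a 100-bit pattern falls short; chance not ruled out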

    The Durston metric brings out a measure of the size of such islands of function and compares it to the config spaces they sit in. The Durston approach extends Shannon’s metric of average information per symbol, H, from ground states to functional states, and judges the increment in information required to make that jump. The Dembski metric looks at the probability of being on an island of function (or target or hot zone otherwise), then converts to bits and deducts a threshold for complexity that specifies the degree of isolation.

    The two are plainly closely related; all that happens is that the Durston et al. metric does not explicitly identify a threshold, but the obvious range of such thresholds runs from 400 or so up to 500 or 1,000 bits.
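
    In outline, and strictly as a sketch (my own toy rendering of the published approach, not Durston et al.’s code; the tiny alignment is made up for illustration), the fits calculation runs like so:

        import math

        def column_entropy(column):
            # Shannon H, in bits, for one column of an alignment of
            # functional sequences.
            freqs = [column.count(a) / len(column) for a in set(column)]
            return -sum(p * math.log2(p) for p in freqs)

        def fits(alignment, alphabet_size=20):
            # Durston-style functional bits: ground-state entropy (uniform
            # over the alphabet) minus the observed functional-state entropy,
            # summed across sites. Note: no explicit threshold is applied.
            h_ground = math.log2(alphabet_size)
            return sum(h_ground - column_entropy(list(col))
                       for col in zip(*alignment))

        print(fits(["MKV", "MKV", "MRV", "MKI"]))  # ~11.3 fits for this toy case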

    Just as my own simple brute-force X-metric stipulates the threshold on search considerations, then assesses specificity and complex contingency, giving a bit value if the item is beyond the threshold.

    I am sorry, but this looks like the fallacy of endless objection.

    Especially where, after weeks of ignoring the longstanding metrics and calculations that you deny exist, after confusing many others in the process, and after dodging the issue of whether Orgel and Wicken were meaningful in laying out the key concepts, we have yet to see a single substantial contribution from you, mathematical or otherwise.

    So, are you serious or are you simply playing an endless objections and obfuscations rhetorical game?

    GEM of TKI

  47. MathGrrl (#42),

    You say,

    It would be more helpful if you could show that Durston’s metric is mathematically equivalent to Dembski’s description of CSI. If it isn’t, then Durston’s metric cannot be used to support claims made about CSI.

    Let me try again. Durston’s metric is a subset of Dembski’s metric. Thus, the paper showing that Durston’s metric can be measured in specific cases, shows that, because these are also examples of Dembski’s metric, in these cases Dembski’s metric can also be measured. To clarify things, are you claiming that Durston’s metric is not a subset of Dembski’s metric (and can you support this claim)? Or are you claiming that the reports of Durston’s metric being measured are wrong (and can you support this claim)? Or are you now conceding that at least sometimes Dembski’s metric can be measured?

  48. BREAKING: The collapse of MG’s claims on CSI and Dembski

  49. Paul Giem,

    It would be more helpful if you could show that Durston’s metric is mathematically equivalent to Dembski’s description of CSI. If it isn’t, then Durston’s metric cannot be used to support claims made about CSI.

    Let me try again. Durston’s metric is a subset of Dembski’s metric.

    You said that before, but you still haven’t demonstrated it to be the case. In fact, I’m not sure what you mean by a metric being a “subset” of another metric. Do you mean that Durston’s is an approximation to Dembski’s, similar to the way Newton’s equations can be viewed as an approximation to Einstein’s?

    In any case, the equivalence of Durston’s metric to Dembski’s CSI remains to be demonstrated mathematically.

  50. MathGrrl:

    I’m following just fine. You introduced the concept of Shannon information. Dembski’s CSI is not based on Shannon information. I’m interested in understanding Dembski’s CSI. If you didn’t intend your discussion of Shannon information to suggest a relationship, it is simply a non sequitur.

    No, you are not following fine. Not at all.

    You seem to take pride in taking what I say out of context. Strange…

    CSI is a specified subset of Shannon information: Shannon information is the superset, and specified information (and therefore CSI) is a subset of that superset.

    That is a fact.

    That said, what I said was that CSI is Shannon information with meaning/functionality and of a certain complexity, i.e., number of bits.
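
    To put the superset/subset point in code terms (a sketch of the relationship only, not anyone’s formal definition; the function names and the 500-bit threshold are placeholders):

        import math

        def shannon_bits(sequence, alphabet_size):
            # Raw carrying capacity: every string of this length scores the
            # same, meaningful or gibberish. This is the superset.
            return len(sequence) * math.log2(alphabet_size)

        def is_csi(sequence, alphabet_size, is_specified, threshold_bits=500):
            # The subset: Shannon information that is also specified (meaning/
            # function, judged externally) and beyond a complexity threshold.
            return (is_specified(sequence)
                    and shannon_bits(sequence, alphabet_size) > threshold_bits)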

  51. MathGrrl (#49),

    You have interesting standards. In #18 you say,

    Orgel’s work is completely dissimilar except for the name.

    No rationale, no reasoning, just the bald assertion.

    Yet somehow after I explained my reasoning in #32, you feel that

    You said that before, but you still haven’t demonstrated it to be the case

    You can make bald assertions, but if someone else explains the meaning of his statements, you can demand a demonstration.

    I have already explained my reasoning. Perhaps you could explain what part you don’t get, or disagree with, or find incomplete, so as to facilitate my clarifying the concept for you. If not, since I stated that Durston’s FCSI is a subset of Dembski’s CSI, perhaps you could give a counterexample where something has a threshold amount of FCSI but does not have CSI. You do agree that Durston’s FCSI exists, don’t you?
