
On the non-evolution of Irreducible Complexity – How Arthur Hunt Fails To Refute Behe

I do enjoy reading ID’s most vehement critics, both in formal publications (such as books and papers) and on the somewhat less formal Internet blogosphere. Part of the reason is the reassurance one gets from observing the vacuous nature of many of the critics’ attempted rebuttals to the challenge ID offers to neo-Darwinism, and from watching the theory’s sheer lack of explicative power being compensated for by the religious ferocity of the associated rhetoric (to paraphrase Lynn Margulis). The prevalent pretense that the causal sufficiency of neo-Darwinism is an open-and-shut case (when no such open-and-shut case for the affirmative exists) never ceases to amuse me.

One such forum where esteemed critics lurk is the Panda’s Thumb blog, a website devoted to holding the Darwinian fort and endorsed by the National Center for Selling Evolution Science Education (NCSE). Since many of the Darwinian heavy guns blog for this website, we can conclude that, if demonstrably faulty arguments are common play there, the front-line Darwinism defense lobby is in deep water.

Recently, someone referred me to two articles (one, two) on the Panda’s Thumb website (from back in 2007) by Arthur Hunt (professor in the Department of Plant and Soil Sciences at the University of Kentucky). The first is entitled “On the evolution of Irreducible Complexity”; the second, “Reality 1, Behe 0” (the latter posted shortly after the publication of Behe’s second book, The Edge of Evolution).

The articles purport to refute Michael Behe’s notion of irreducible complexity. But, as I intend to show here, they do nothing of the kind!

In his first article, Hunt begins,

There has been a spate of interest in the blogosphere recently in the matter of protein evolution, and in particular the proposition that new protein function can evolve. Nick Matzke summarized a review (reference 1) on the subject here. Briefly, the various mechanisms discussed in the review include exon shuffling, gene duplication, retroposition, recruitment of mobile element sequences, lateral gene transfer, gene fusion, and de novo origination. Of all of these, the mechanism that received the least attention was the last – the de novo appearance of new protein-coding genes basically “from scratch”. A few examples are mentioned (such as antifreeze proteins, or AFGPs), and long-time followers of ev/cre discussions will recognize the players. However, what I would argue is the most impressive of such examples is not mentioned by Long et al. (1).

There is no need to discuss the cited Long et al. (2003) paper in any great detail here, as this has already been done by Casey Luskin here (see also Luskin’s further discussion of Anti-Freeze evolution here), and I wish to concern myself with the central element of Hunt’s argument.

Hunt continues,

Below the fold, I will describe an example of de novo appearance of a new protein-coding gene that should open one’s eyes as to the reach of evolutionary processes. To get readers to actually read below the fold, I’ll summarize – what we will learn of is a protein that is not merely a “simple” binding protein, or one with some novel physicochemical properties (like the AFGPs), but rather a gated ion channel. Specifically, a multimeric complex that: 1. permits passage of ions through membranes; 2. and binds a “trigger” that causes the gate to open (from what is otherwise a “closed” state). Recalling that Behe, in Darwin’s Black Box, explicitly calls gated ion channels IC systems, what the following amounts to is an example of the de novo appearance of a multifunctional, IC system.

Hunt is making big promises. But does he deliver? Let me briefly summarise the gist of Hunt’s argument, and then weigh in on it.

The cornerstone of Hunt’s argument concerns the gene T-urf13, which, contra Behe’s delineated ‘edge’ of evolution, is supposedly a de novo mitochondrial gene that evolved very quickly from genes specifying rRNA, together with some non-coding DNA elements. The gene specifies a transmembrane protein that facilitates the passage of hydrophilic molecules across the mitochondrial membrane in maize, opening only when bound on the exterior by particular molecules.

The protein is specific to the mitochondria of maize with Texas male-sterile cytoplasm, and has also been implicated in causing male sterility and sensitivity to T-cytoplasm-specific fungal diseases. Two parts of the T-urf13 gene are homologous to other parts in the maize genome, with a further component being of unknown origin. Hunt maintains that this proves that this gene evolved by Darwinian-like means.

Hunt further maintains that T-urf13 involves at least three “CCCs” (recall Behe’s argument, advanced in The Edge of Evolution, that a double “CCC” is unlikely to be feasible by a Darwinian pathway). Two of these “CCCs”, Hunt argues, come from the binding of each subunit to at least two other subunits in order to form the oligomeric complex in the membrane. This entails that each subunit possess at least two protein-binding sites.

Hunt argues for the presence of yet another “CCC”:

[T]he ion channel is gated. It binds a polyketide toxin, and the consequence is an opening of the channel. This is a third binding site. This is not another protein binding site, and I rather suppose that Behe would argue that this isn’t relevant to the Edge of Evolution. But the notion of a “CCC” derives from consideration of changes in a transporter (PfCRT) that alter the interaction with chloroquine; toxin binding by T-urf13 is quite analogous to the interaction between PfCRT and chloroquine. Thus, this third function of T-urf13 is akin to yet another “CCC”.

He also notes that,

It turns out that T-urf13 is a membrane protein, and in membranes it forms oligomeric structures (I am not sure if the stoichiometries have been firmly established, but that it is oligomeric is not in question). This is the first biochemical trait I would ask readers to file away – this protein is capable of protein-protein interactions, between like subunits. This means that the T-urf13 polypeptide must possess interfaces that mediate protein-protein interactions. (Readers may recall Behe and Snokes, who argued that such interfaces are very unlikely to occur by chance.)

[Note: The Behe & Snoke (2004) paper is available here, and their response (2005) to Michael Lynch's critique is available here.]

Hunt tells us that “the protein dubbed T-urf13 had evolved, in one fell swoop by random shuffling of the maize mitochondrial genome.” If three CCCs really evolved in “one fell swoop” by specific but random mutations, then Behe’s argument is in trouble. But does any of the research described by Hunt make any progress toward demonstrating that this is even plausible? Short answer: no.

Hunt does have a go at guesstimating the probabilistic plausibility of such a neo-functionalisation event. He tells us, “The bottom line – T-urf13 consists of at least three ‘CCCs’. Running some numbers, we can guesstimate that T-urf13 would need about 10^60 events of some sort in order to occur.”

Look at what Hunt concludes:

Now, recall that we are talking about, not one, but a minimum of three CCC’s. Behe says 1 in 10^60, what actually happened occurred in a total event size of less than 10^30. Obviously, Behe has badly mis-estimated the “Edge of Evolution”. Briefly stated, his “Edge of Evolution” is wrong. [Emphasis in original]
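
For concreteness, the arithmetic behind Hunt’s guesstimate is easy to reproduce. The sketch below assumes, per Behe’s The Edge of Evolution, that a single CCC corresponds to odds of roughly 1 in 10^20 (Behe’s figure, derived from the observed rate of chloroquine resistance), and it treats the three CCCs as independent, simultaneous events, which is precisely the modelling assumption in dispute:

```python
# Sketch of the guesstimate: odds for one CCC are taken to be 1 in 10^20
# (Behe's chloroquine-complexity figure), and the three CCCs are treated
# as independent, so the required trial counts multiply.
CCC_ODDS = 10**20   # trial events per single CCC (Behe's estimate)
N_CCCS = 3          # Hunt counts at least three CCCs in T-urf13

events_needed = CCC_ODDS ** N_CCCS          # (10^20)^3 = 10^60
events_available = 10**30                   # Hunt's figure for events that occurred
shortfall = events_needed // events_available

print(f"Events needed:    10^{len(str(events_needed)) - 1}")   # 10^60
print(f"Shortfall factor: 10^{len(str(shortfall)) - 1}")       # 10^30
```

On these assumptions the required events exceed the available events by a factor of 10^30, which is the gap Hunt’s conclusion turns on.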

Readers trained in basic logic will take quick note of the circularity involved in this argumentation. Does Hunt offer any evidence that T-urf13 could have plausibly evolved by a Darwinian-type mechanism? No, he doesn’t. In fact, he casually dismisses the mathematics which refutes his whole argument. Here we have a system with a minimum of three CCCs, and since he presupposes as an a priori principle that it must have a Darwinian explanation, this apparently refutes Behe’s argument! This is truly astonishing argumentation. Yes, certain parts of the gene have known homologous counterparts. But, at most, that demonstrates common descent (and even that conclusion is dubious). But a demonstration of homology, or common ancestral derivation, or a progression of forms is not, in and of itself, a causal explanation. Behe himself noted in Darwin’s Black Box, “Although useful for determining lines of descent … comparing sequences cannot show how a complex biochemical system achieved its function—the question that most concerns us in this book.” Since Behe already maintains that all life is derivative of a common ancestor, a demonstration of biochemical or molecular homology is not likely to impress him greatly.

How, then, might Hunt and others successfully show Behe to be wrong about evolution? It’s very simple: show that adequate probabilistic resources existed to facilitate the plausible origin of these types of multi-component-dependent systems. If, indeed, each fitness peak is separated from the next by more than a few specific mutations, it is difficult to envision how the Darwinian mechanism could facilitate the transition from one peak to another within any reasonable time frame. Douglas Axe, of the Biologic Institute, showed in a recent paper in the journal BIO-Complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalisation. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is about six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not including the duplication itself). Another study, published in Nature in 2001 by Keefe & Szostak, documented that more than a million million (10^12) random sequences had to be searched in order to stumble upon a functioning ATP-binding protein, a protein substantially smaller than the transmembrane protein specified by T-urf13. Douglas Axe has also documented (2004), in the Journal of Molecular Biology, the prohibitive rarity of functional enzymatic domains relative to the vast sea of combinatorial sequence space, using a 150 amino-acid region of beta-lactamase.
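
The “probabilistic resources” point can be put in concrete terms. The functional fraction below comes from the Keefe & Szostak figure cited above (roughly one functional ATP-binder per 10^12 random sequences); the number of trials is purely illustrative, not a figure from either article:

```python
import math

# If a fraction p of random sequences is functional, and N sequences are
# sampled independently, the chance of at least one functional hit is
# Poisson-approximated as 1 - exp(-N*p) (≈ N*p when N*p is small).
p_functional = 1e-12   # ~1 functional ATP-binder per 10^12 sequences (Keefe & Szostak)
trials = 1e9           # hypothetical number of novel sequences sampled

p_at_least_one = 1 - math.exp(-trials * p_functional)
print(p_at_least_one)  # ≈ 0.001, i.e. a ~1000-fold shortfall in trials
```

The question the article presses is whether the trials actually available to a population are anywhere near the number this kind of calculation requires.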

What, then, can we conclude? Contrary to his claims, Hunt has failed to provide a detailed and rigorous account of the origin of T-urf13. Hunt supplies no mathematical demonstration that the de novo origin of such genes is sufficiently probable to be justifiably attributed to an unguided or random process, nor does he demonstrate that a step-wise pathway exists in which novel utility is conferred at every step (each step separated by no more than one or two mutations) along the way to the emergence of the T-urf13 gene.

The folks at The Panda’s Thumb are really going to have to do better than this if they hope to refute Behe!


398 Responses to On the non-evolution of Irreducible Complexity – How Arthur Hunt Fails To Refute Behe

  1. T-URF13 is just one component. The membrane was already in place.

    It also came about due to our selective breeding- strange things start appearing with artificial selection.

  2. Another issue is take a look at Dr Behe’s IC mousetrap- 5 components.

    Explaining a lever and fulcrum does not explain a block and tackle pulley system.

    Look at Dr Hunt’s IC examples- very simple and basic. So even if he were correct he would still fall far short of anything IDists have claimed.

    Dr Behe:

    How about Professor Coyne’s concern that, if one system were shown to be the result of natural selection, proponents of ID could just claim that some other system was designed? I think the objection has little force. If natural selection were shown to be capable of producing a system of a certain degree of complexity, then the assumption would be that it could produce any other system of an equal or lesser degree of complexity. If Coyne demonstrated that the flagellum (which requires approximately forty gene products) could be produced by selection, I would be rather foolish to then assert that the blood clotting system (which consists of about twenty proteins) required intelligent design.

    And in The Edge of Evolution Dr Behe puts the edge at two new protein-to-protein binding sites.

    (You can’t really get much complexity with that. However, three- well, anyone who has played with connecting things sees the advantages. So producing three would be a big deal.)

  3. Dr. Hunter wrote an article on T-URF13 here that shows that, by far, the most parsimonious explanation is that the T-URF13 gene/protein arose, via ‘front-loading’, from the non-coding region of the DNA;

    De Novo Genes: What are the Chances?
    Excerpt: The URF13 protein design dwarfs the experimentally screened function of ATP binding. And yet, even in that simple case, and with conservative assumptions, we find the probabilities of the T-urf13 de novo gene arising via blind evolution to be one in ten million (that is, 1 in 10,000,000). The real number undoubtedly has many more zeroes.
    http://darwins-god.blogspot.co.....ances.html

    Needless to say Dr. Hunter, for pointing out the obvious implications of the actual evidence, earned the vitriolic response of atheistic Darwinists;

    De Novo Genes: Criticism From Nick Matzke
    http://darwins-god.blogspot.co.....-nick.html

    Joe Felsenstein: De Novo Genes Trumped by Metaphysics
    Excerpt: I pointed out here that the blind evolution of the de novo gene T-urf13 is highly unlikely. In typical fashion, the evolutionist completely ignored the scientific issue at hand and skipped straight to the metaphysics.
    http://darwins-god.blogspot.co.....umped.html

    The Evolutionist’s T-urf13 Silence: Day 10
    http://darwins-god.blogspot.co.....ay-10.html

  4. Jonathan, are you truly suggesting that a Darwinist would stoop to faulty logic, if not outright logical fallacies, in attempting to “refute” ID? Shocking?!! Especially as it occurred on The Panda’s Thumb, which we all know is recognized for its high level of civil and respectful discourse on the ID/Evolution debate.

    Hunt, Matzke and all the rest of the anti-ID crowd at PT and elsewhere can pontificate and handwave to their little heart’s content. They can toss out all the references to research studies they want. The bottom line where irreducible complexity is concerned is quite clear. Behe published Darwin’s Black Box in 1996. The claim he made that really got the Darwinist’s panties in a twist was that there was not one single published research study in any relevant peer reviewed journal that provided the detailed, testable (and falsifiable) model of how Darwinian evolution actually built any of the IC systems Behe described in his book. That was 15 years ago…plenty of time for someone to publish something somewhere…even if only ONE study. However, it is still the case there is not one such study to be found in any of the peer reviewed scientific journals, and what Hunt mentions doesn’t cut it either. It is all bluff and bluster.

    Oddly, the one study that most often gets mentioned when the challenge to produce one is made is the (in)famous Avida computer model developed by Lenski et al. at Michigan State University. As if a study showing a computer model somehow substitutes for actual biological reality. If they had such knowledge available, all they had to do was point to it in the Avida study. Curiously, they did not.

    Then there was that much talked about Matzke and Pallen review paper a couple years back. The gist of that one was to describe how researchers might go about setting up a study to demonstrate how evolution could build an IC system (they had the bacterial flagellum clearly in mind). Again, there was not a single reference to any actual studies that provided any sort of detailed, testable model. If ever there was opportunity to bring out the list, that was it. Ooops…guess not.

    The upshot of all this is that unless and until such a study is actually forthcoming, all this other bantering on blogs like Panda’s Thumb is wishful thinking at best, and outright deception at worst.

    Show me the research!!

  5. As stated at the top, Dr. Hunt observes what appears to be a 3-“CCC” change and concludes, from the fact that it is present, that it must have been produced by a Darwinian process, despite the apparently insurmountable odds postulated by Dr. Behe. This must seem a reasonable line of argument to him, but what if he observed a 6-“CCC” change, or 20? At what point would he agree that this is too far to span by simple chance?

    In fact, there is no limit to the faith of some in the magic powers of Darwinian evolution. It is not encumbered by the limitations of math and chemistry.

  6. We observe the bacterial flagellum. Therefore the blind watchmaker did it.

    We observe a variety of eyes, therefore the blind watchmaker did it.

    We observe a variety of immune systems, therefore the blind watchmaker did it.

    SCheeseman is right- this is easy.

  7. #6

    We observe the bacterial flagellum. Therefore it was designed.

    We observe a variety of eyes. Therefore it was designed.

    We observe a variety of immune systems. Therefore it was designed.

    Gosh – it is easy.

  8. MF:

    Really?

    Can you explain, in empirical, observational evidence backed up steps, how the organised, functionally specific complexity of the flagellum, the eyes and immune systems originated per the blind watchmaker thesis?

    “It MUST have happened that way, as science must only explain on forces of chance and necessity” — i.e. begging the question and imposing Sagan- Lewontin [etc] a priori materialism — is not permitted.

    GEM of TKI

  9. kf:

    Can you explain, in empirical, observational evidence backed up steps, how the organised, functionally specific complexity of the flagellum, the eyes and immune systems originated per the design thesis?

  10. Pedant, Since you have, in your very own short post generated, by Intelligence, far more information than anyone has ever witnessed generated by material processes, then Intelligence is the only known presently acting cause sufficient to explain the effect (information) in question;

    Stephen C. Meyer – The Scientific Basis For Intelligent Design – video
    http://www.metacafe.com/watch/4104651/

    further note;

    That the information found in life vastly exceeds, in integrated complexity, what man has ever produced in his best computer programs, and that the fundamental processes of ‘life’ tend to be optimal in regards to the parameters allowed by the laws of physics, and that the universe is shown to be Theistic in its foundation, instead of materialistic, is what gives us a solid basis for arguing life is Intelligently designed, indeed for arguing the entire universe is designed with ‘life in mind’;

    further note;

    Alain Aspect and Anton Zeilinger by Richard Conn Henry – Physics Professor – Johns Hopkins University
    Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the “illusion” of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one’s own mind is sure to exist). (Dr. Henry’s referenced experiment and paper – “An experimental test of non-local realism” by S. Gröblacher et. al., Nature 446, 871, April 2007 – “To be or not to be local” by Alain Aspect, Nature 446, 866, April 2007 (personally I feel the word “illusion” was a bit too strong from Dr. Henry to describe material reality and would myself have opted for his saying something a little more subtle like; “material reality is a “secondary reality” that is dependent on the primary reality of God’s mind” to exist. Then again I’m not a professor of physics at a major university as Dr. Henry is.)
    http://henry.pha.jhu.edu/aspect.html

    The Mental Universe – Richard Conn Henry – Professor of Physics, Johns Hopkins University
    Excerpt: The only reality is mind and observations, but observations are not of things. To see the Universe as it really is, we must abandon our tendency to conceptualize observations as things.,,, Physicists shy away from the truth because the truth is so alien to everyday physics. A common way to evade the mental universe is to invoke “decoherence” – the notion that “the physical environment” is sufficient to create reality, independent of the human mind. Yet the idea that any irreversible act of amplification is necessary to collapse the wave function is known to be wrong: in “Renninger-type” experiments, the wave function is collapsed simply by your human mind seeing nothing. The universe is entirely mental,,,, The Universe is immaterial — mental and spiritual. Live, and enjoy.
    http://henry.pha.jhu.edu/The.mental.universe.pdf

    The Failure Of Local Realism – Materialism – Alain Aspect – video
    http://www.metacafe.com/w/4744145

    The falsification for local realism (materialism) was recently greatly strengthened:

    Physicists close two loopholes while violating local realism – November 2010
    Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview.
    http://www.physorg.com/news/20.....alism.html

    Ions have been teleported successfully for the first time by two independent research groups
    Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was destroyed),,,
    http://www.rsc.org/chemistrywo.....ammeup.asp

    Atom takes a quantum leap – 2009
    Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,,
    “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.
    http://www.freerepublic.com/fo.....1769/posts

    Systems biology: Untangling the protein web – July 2009
    Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. “Combine all this and you can start to think that maybe some of the information flow can be captured,” he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. “The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent,” he says. “The simple pathway models are a gross oversimplification of what is actually happening.”
    http://www.nature.com/nature/j.....0415a.html

    William Bialek – Professor Of Physics – Princeton University:
    Excerpt: “A central theme in my research is an appreciation for how well things “work” in biological systems. It is, after all, some notion of functional behavior that distinguishes life from inanimate matter, and it is a challenge to quantify this functionality in a language that parallels our characterization of other physical systems. Strikingly, when we do this (and there are not so many cases where it has been done!), the performance of biological systems often approaches some limits set by basic physical principles. While it is popular to view biological mechanisms as an historical record of evolutionary and developmental compromises, these observations on functional performance point toward a very different view of life as having selected a set of near optimal mechanisms for its most crucial tasks.,,,The idea of performance near the physical limits crosses many levels of biological organization, from single molecules to cells to perception and learning in the brain,,,,”
    http://www.princeton.edu/~wbialek/wbialek.html

  11. Pedant:

    That is actually simple enough: we have abundant direct observational experience of the design of functionally specific complex systems. And, EMBEDDED PROCESSOR, DIGITAL SYSTEMS ARE A CASE IN POINT.

    We have not as yet mastered the nanotech for genetic-level engineering, but the work of Craig Venter and co. plainly shows the way we will go in upcoming decades.

    Your attempted turnabout self-destructs.

    And, it reveals the blatant absence of evidence that has been swept aside in the rush to create the impression that the darwinian macroevolutionary theory is comparable in empirically tested support to the theory of Gravitation.

    GEM of TKI

  12. kairosfocus,

    I would also be very interested in seeing a direct, detailed answer to Pedant’s above question (“Can you explain, in empirical, observational evidence backed up steps, how the organised, functionally specific complexity of the flagellum, the eyes and immune systems originated per the design thesis?”).

    Neither of your responses provide any reference to empirical evidence nor to the specific steps that were taken, according to your design thesis, to create CSI.

    If you choose to reply, please don’t simply attempt to argue against modern evolutionary theory. I’m asking for empirical observations that positively support ID, without reference to other views.

  13. MG:

    Turnabout fallacy.

    I have provided a summary of evidence [actually naming a scientist who is beginning to miniaturise and apply to biosystems the already decades-old practice of embedded systems engineering using digital control systems technology -- abundant empirical evidence], and you want to shift a burden of proof, when in fact you have no evidence for your evolutionary materialistic contention whatsoever.

    Utterly revealing.

    GEM of TKI

  14. F/N: Do you really want to go through a course in embedded systems design?

  15. F/N 2: similarly, what is the known, routine source of digital, prescriptive coded instructions, programs and algorithms used in discrete state controlled processes, again?

  16. markf:

    We observe the bacterial flagellum. Therefore it was designed.

    We observe a variety of eyes. Therefore it was designed.

    We observe a variety of immune systems. Therefore it was designed.

    Nope, ID is not the polar negative of the ToE. The ToE makes bald claims, the design inference is based on our knowledge of cause and effect relationships.

  17. Pedant:
    Can you explain, in empirical, observational evidence backed up steps, how the organised, functionally specific complexity of the flagellum, the eyes and immune systems originated per the design thesis?

    It is all about our knowledge of cause and effect relationships.

    Therefore, to refute any given design inference, all one has to do is step up and demonstrate that chance and necessity can account for it.

    Something other than a bald declaration please.

    See The Design Inference- How it Works

  18. kairosfocus,

    I have provided a summary of evidence [actually naming a scientist who is beginning to miniaturise and apply to biosystems the already decades-old practice of embedded systems engineering using digital control systems technology -- abundant empirical evidence], and you want to shift a burden of proof, when in fact you have no evidence for your evolutionary materialistic contention whatsoever.

    The burden of proof is on the one making the claim. You are claiming that an intelligent designer exists and intervenes in the evolution of life on this planet. I am asking you to provide empirical evidence for this claim to answer the questions of who, what, when, where, and how. It is no more than I would ask of any scientist making claims in any other discipline.

    Where is the positive evidence that explicitly supports your position?

  19. MathGrrl:

    If you choose to reply, please don’t simply attempt to argue against modern evolutionary theory. I’m asking for empirical observations that positively support ID, without reference to other views.

    That’s a joke, right?

    I say that because part of the design inference is to eliminate other views- they cannot be ignored.

    Ya see, we don’t infer we are holding an artifact if nature, operating freely, can produce it. We don’t infer a homicide if it is a natural death.

    The design inference is always referenced against alternatives.

  20. MathGrrl:
    The burden of proof is on the one making the claim.

    Yup your position makes claims yet can’t support them.

    MathGrrl:
    You are claiming that an intelligent designer exists and intervenes in the evolution of life on this planet.

    ID doesn’t make such claims.

    MathGrrl:

    Where is the positive evidence that explicitly supports your position?

    All over the place. Just look around.

  21. Segue BACK to the OP-

    Art Hunt was trying to produce evidence that the blind watchmaker could produce irreducibly complex systems- IOW there isn’t any requirement for a designer to produce IC.

    Even if true, his examples only apply to that level of complexity (or less). If the examples don’t reach the 5-component complexity level of the mousetrap, then it would be obvious they do not apply to anything IDists have claimed.

    Then there is the problem of mechanism- as in how it was determined that the mechanism that produced it is a blind watchmaker mechanism.

  22. Math Girl:

    The burden of proof is on the one making the claim. You are claiming that an intelligent designer exists and intervenes in the evolution of life on this planet. I am asking you to provide empirical evidence for this claim to answer the questions of who, what, when, where, and how. It is no more than I would ask of any scientist making claims in any other discipline.

    Your basic question is this:

    Can an intelligent designer exist and intervene in the evolution of life on this planet? And, you want empirical evidence to support this.

    Here’s my empirical evidence in answer to the more general question: Can an intelligent designer exist and intervene on this planet?

    The tilma of Juan Diego that hangs in the Basilica of Our Lady of Guadalupe on the outskirts of Mexico City.

    The formation of its image (including the image of a kneeling Juan Diego himself in the eye of the Woman) cannot be explained scientifically. There are no known methods that could reproduce the image; i.e., no human agency could have brought it about 400 years ago.

    Math Girl, if you can’t explain the tilma’s image, then admit outside agency. If you refuse to admit outside agency, then admit that you’re completely closed to the possibility of any kind of outside agency. (A recourse to “aliens”, one must admit, is a quite interesting possibility to entertain. After all, it is recourse to something that is beyond ALL empirical observation, and the positing of agency to that which is not known to exist. Isn’t that precisely your complaint should someone invoke a Designer: viz., He is not empirically observable and since He’s outside of our world, He cannot act within it? Despite these objections to a Designer, you willingly concede them to “aliens”. How interesting, eh?)

  23. MG:

    Have you actually shown, i.e. with empirical evidence, how undirected chance plus necessity can give rise to symbolic codes, instructions, step-by-step algorithms with initiation, sequence and halting, and the machinery that makes the codes work?

    The whole microelectronics industry shows that intelligence is capable of such systems, and Mr Venter’s work shows that we are beginning to move it down to the scale of the cell.

    So, on inference to best explanation, we have excellent warrant — as opposed to “proof” [a very slippery term, and implying demands that are often selectively hyperskeptical] — for inferring that we are looking at a superior technology but of a familiar type.

    GEM of TKI

    PS: here’s a free online textbook on the general class, embedded systems.

  24. Jonathan M,
    there are known mechanisms by which parts of the DNA can get copied and pasted (transposons, for example); it does not have to be “magic”. Only Arthur Hunt did not go into great detail about it. But maybe I don’t quite understand your objection. Or maybe I misunderstand you completely and you are arguing that, because of Behe’s calculation, the protein simply can’t have evolved.

    But what I really honestly would like to know is why you think Behe’s probability calculation is so much more reliable than the evolutionary mechanism. Isn’t there any chance that Behe’s calculation might be mistaken?

    Btw., here is a link to an updated post:

    http://aghunt.wordpress.com/20.....n-revised/

    MathGrrl, Let’s get this straight: even though the further man peers down into the cell, the more levels of jaw-dropping integration he unveils that vastly exceed our own designs, you simply will not infer design no matter what is discovered in the cell, simply because you did not see the ‘Designer’ implement the design in the first place, and in spite of the fact you have no materialistic account for it?!? And suppose you found a monolith buried in your backyard inscribed with all sorts of strange markings you did not understand, would you also automatically assume it was not designed since you did see the ‘designer’ inscribe it or bury in your backyard?

  26. correction;
    since you did NOT see the ‘designer’ inscribe it or bury it in your backyard?

  27. MathGrrl

    You asked:

    “You are claiming that an intelligent designer exists and intervenes in the evolution of life on this planet. I am asking you to provide empirical evidence for this claim to answer the questions of who, what, when, where, and how.”

    Wow. Quite a demand, I think. Since there is no Moses around to talk to Creator why not re-read recent series by Kairos where he set up test experiment that’s actually doable.

    If not that, please give some ideas for specific experiment.

    myname, if t-urf13 arose by evolutionary processes it would be ‘magic’, whereas, as you somewhat admitted, several integrated (non-evolutionary) processes (the copying and pasting of DNA segments) were involved in T-urf13’s origination from non-coding DNA regions. (This is not an evolutionary process, myname!!!) Dr. Hunter comments here:

    De Novo Genes: What are the Chances?
    Excerpt: The URF13 protein design dwarfs the experimentally screened function of ATP binding. And yet, even in that simple case, and with conservative assumptions, we find the probabilities of the T-urf13 de novo gene arising via blind evolution to be one in ten million (that is, 1 in 10,000,000). The real number undoubtedly has many more zeroes.
    http://darwins-god.blogspot.co.....ances.html

    And yet myname, though you yourself admit that non-evolutionary processes were involved in T-urf13, you claim this as stunning proof of what unguided evolutionary processes can do? It simply does not follow!!! In fact, corn being a major food crop for man, it follows much more readily that this is a front-loaded adaptation. Moreover, from what evidence I have for corn/maize now, I hold that, overall, this adaptation came at a loss of preexistent information in the genome of maize.

    notes;

    Of note: The reduced genetic variability brought about by ‘selection’ in major food crops, such as corn, is a major concern facing scientists today, since the much larger genetic variability found in teosinte, the wild parent species of corn (maize), gives greater protection from a disease wiping out the entire crop.

    Genetic diversity and selection in the maize starch pathway:
    The tremendous diversity of maize and teosinte has been the raw genetic material for the radical transformation of maize into the world’s highest yielding grain crop.
    http://www.pubmedcentral.nih.g.....tid=130568

    “Supergerms (MRSA) are not supergerms any more than hybrid corn is supercorn—today’s hybrid corns are so delicate that they can’t even sprout unless they are planted underground. They can’t even grow effectively unless the ground is weeded. They can’t even reproduce unless technicians at seed-houses mate them artificially and with great effort.”
    http://www.answersingenesis.or.....rgerms.asp

    Biodiversity is essential for our existence:
    Excerpt: Unfortunately, industrial agriculture has caused a dramatic reduction of genetic diversity within the animal and plant species typically used for food.
    http://www.sustainabletable.or.....diversity/

    EXPELLED – Natural Selection And Genetic Mutations – video
    http://www.metacafe.com/watch/4036840

    “…but Natural Selection reduces genetic information and we know this from all the Genetic Population studies that we have…”
    Maciej Marian Giertych – Population Geneticist – member of the European Parliament – EXPELLED

    Could Chance Arrange the Code for (Just) One Gene?
    “our minds cannot grasp such an extremely small probability as that involved in the accidental arranging of even one gene (10^-236).”
    http://www.creationsafaris.com/epoi_c10.htm

    “Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds” – Doug Axe, 2004: “…this implies the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences.”
    http://www.mendeley.com/resear.....yme-folds/

    As well, neo-Darwinism presupposes that the ‘beneficial mutations’ which conferred the advantage for Tibetans to live at high altitudes were completely random, yet when looked at from the point of view of population genetics, the evidence gives every indication that the ‘beneficial mutations’ were not random at all but were in fact ‘programmed’ mutations:

    Another Darwinian “Prediction” Bites the Dust – PaV – August 2010
    Excerpt: this means the probability of all three sites changing “at once” (6.25 X 10^-9)^2 = approx. 4 X 10^-17 specific bp change/ yr. IOW (In Other Words), for that size population, and this is a very reasonable guess for size, it would take almost twice the life of the universe for them to take place “at once”. Thus, the invocation of “randomness” in this whole process is pure nonsense. We’re dealing with some kind of programmed response if, in fact, “polygenic selection” is taking place. And, that, of course, means design.
    http://www.uncommondescent.com.....more-14516
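    As a quick sanity check, the squaring step in the excerpt above can be reproduced directly (a sketch only: the per-site rate and the independence model are the linked post’s assumptions, not verified here):

```python
# Reproduce the arithmetic quoted in the excerpt above: a per-site change
# rate of 6.25e-9 per year, squared for two such changes co-occurring.
per_site_rate = 6.25e-9          # the linked post's assumed specific bp change/yr
joint_rate = per_site_rate ** 2  # both changes "at once"
print(f"{joint_rate:.2e}")       # 3.91e-17, i.e. approx. 4 x 10^-17
```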

    etc.. etc…

  29. Question of the day-

    Has Dr Behe responded to any of Dr Hunt’s challenges/ observations?

  30. Joseph, here is Dr. Behe’s personal blog on UD;

    http://behe.uncommondescent.com/

    It goes all the way back to the publication of ‘The Edge’ (of note: you must click on previous entries at the bottom to go deeper into the blog); Behe deals with all peer-reviewed challenges that were published, and even deals with a few blog challenges, but after a brief skim through it, I found no response to Dr. Hunt. Perhaps Dr. Hunt has not issued his ‘challenge’ in peer review?

    Correct, he doesn’t address t-urf13 in his blog. That’s why I asked.

    However, in “The Edge of Evolution” he does address it in a general way, saying that he is just considering cellular proteins binding to other cellular proteins, not to foreign proteins.

    Destructive protein-protein binding is much easier to achieve by chance (page 149).

  32. kf:

    Your attempted turnabout self-destructs.

    I don’t find any warrant for that claim in your remarks. The reason I turned your question to markf around is that I’m puzzled by what I perceive to be an inconsistency in the design argument, at least as generally presented here. You consider it fair to ask an evolutionist for a detailed step-by-step mechanistic scenario of complex organelle evolution, while providing nothing in the way of a step-by-step mechanistic scenario informed by design theory.

    As MathGrrl suggested, surely it is equitable for anyone to ask design protagonists to provide better explanations for the origins of biological entities than the mechanistic explanations they find wanting on the part of conventional science.

    Pedant, you state:

    ‘while providing nothing in the way of a step-by-step mechanistic scenario informed by design theory.’

    While the implementation of optimal design in life is not directly observable, being in the remote past (as are all inferences for the evolution of such unmatched complexity), we can directly witness the ‘mechanistic effect’ of optimal design in the past by observing that the adaptations now witnessed are detrimental in nature (exactly the consistent ‘mechanistic’ type of variation that evolution does not predict!):
    “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
    Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades. … The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent. … I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. (That is, a net ‘fitness gain’ within a ‘stressed’ environment; i.e., remove the stress from the environment and the parent strain is always more ‘fit’.)
    http://behe.uncommondescent.co.....evolution/

    Michael Behe talks about the preceding paper on this podcast:

    Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time – December 2010
    http://intelligentdesign.podom.....3_46-08_00

    Evolution Vs Genetic Entropy – Andy McIntosh – video
    http://www.metacafe.com/watch/4028086

    Poly-Functional Complexity equals Poly-Constrained Complexity
    http://docs.google.com/Doc?doc.....Zmd2emZncQ

    Human Genome Project Supports Adam, Not Darwin
    Excerpt: What is on the top tier (accounting for variation within humans)? Increasingly, the answer appears to be mutations that are ‘deleterious’ by biochemical or standard evolutionary criteria. These mutations, as has long been appreciated, overwhelmingly make up the most abundant form of nonneutral variation in all genomes. A model for human genetic individuality is emerging in which there actually is a ‘wild-type’ human genome—one in which most genes exist in an evolutionarily optimized form. There just are no ‘wild-type’ humans: We each fall short of this Platonic (optimized) ideal in our own distinctive ways.
    http://www.creationsafaris.com.....#20110221a

    etc.. etc.. etc..

  34. Pedant:

    We know — per massive observation [as repeatedly noted] — that intelligence is causally sufficient for functionally specific, complex organisation and associated information. So much so, that such FSCO/I is a reliable index of design.

    It is claimed by Darwinists that observations of FSCO/I in life forms, including in DNA, proteins and associated molecules that carry out the step-by-step molecular processes of life, are due to chance plus necessity, through chemical and/or biological evolution.

    That claim is what needs to be backed up by empirical demonstration, as there is serious reason (cf. the infinite monkeys analysis as well as the pattern of observed cause) to question such a claim.

    It is highly interesting that, instead of providing empirical support for the claim, you are trying a turnabout, as though it were not the case that intelligence is already known to be causally sufficient for FSCO/I.

    And, so, I find the attempt to demand observation of the unobservable past (i.e., MG’s demand for who, what, where, when) to be a telling implicit admission, hiding behind a fallacy.

    That implicit admission is that you do not have ANY serious evidence that forces of chance and necessity without intelligent direction, can and do give rise to phenomena exhibiting FSCO/I.

    In short, we are back to the a priori imposition of evolutionary materialism, and the abuse of institutional power to back it up.

    Let the following plain facts be known, then:

    1: FSCO/I is a routine, reliable sign of intelligence, where we can directly observe cause.

    2: The infinite monkeys analysis shows why it is maximally unlikely that forces of chance plus necessity will give rise to FSCO/I in absence of intelligent intervention.

    3: Indeed, there are no cases where, in our direct observation, FSCO/I has been produced by chance plus necessity without intelligent oversight; and the infinite monkeys analysis shows that such is maximally unlikely on the gamut of the observed cosmos.

    4: On matters of origins science, we cannot make direct observations of the deep past, so ever since Newton’s suggestion and the work of men like Lyell and Darwin, we have recognised that if a certain causal force is sufficient for an effect, and there are certain characteristic signs that we can identify, then the best explanation for those same signs in cases in the deep past is the one we observe in the present. This is the uniformity principle.

    5: In the days of Darwin et al., it was not known that “protoplasm” contained molecular nanomachines effecting digitally coded information to carry out the core activities of the cell.

    6: Subsequent to the 1950s, we do know that.

    7: The only known and reliable source of digital systems and coded functional information, is intelligence.

    8: So, per uniformity [and subject only to actual empirical demonstration that chance + necessity can regularly produce FSCO/I] we have every epistemic right to see this as a reliable sign of intelligent cause.

    9: So, we have a known causally sufficient explanation, competing with an institutionally entrenched claim that is NOT credibly causally sufficient to account for the literally thousands of cases of FSCO/I in life forms.
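    As an aside, the scale invoked by the infinite monkeys analysis in points 2 – 3 can be sketched numerically. Everything below is an illustrative assumption of mine (the 500-bit configuration size and the resource estimates are figures commonly used in such discussions, not drawn from this comment):

```python
# Chance of hitting one specific 500-bit configuration in a single blind
# draw from a uniform distribution over all 500-bit strings (assumption:
# 500 bits is used here purely as an illustrative threshold).
bits = 500
p_single = 2.0 ** -bits
print(f"p = {p_single:.2e}")  # ~3.05e-151

# Generously assumed resources: 10^80 particles, each trying 10^45
# configurations per second, for 10^17 seconds.
trials = 1e80 * 1e45 * 1e17   # = 10^142 total blind trials
expected_hits = trials * p_single
print(f"expected hits = {expected_hits:.1e}")  # still far below one
```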

    So, the problem is that the superior explanation, by any objective, reasonable standard, is politically incorrect.

    That is why we are now seeing a turnabout, selectively hyperskeptical attempt to demand a degree of warrant that is known not to be possible on origins science cases, for the simple reason that we cannot directly observe the deep past of origins.

    So, the point is, that if we apply that suggested standard across the board, poof, all claimed knowledge of the deep past evaporates.

    If, instead, we use the uniformity principle, it is telling us that the best explanation of the FSCO/I in life forms is intelligent design.

    So, there are three options:

    I: Accept the evidence and the implication that FSCO/I is best explained on design

    II: Reject the uniformity principle, and with it all claimed knowledge of the past inferred from signs in the present (including, BTW, history, that of scientific records included), for the past in general is unobservable.

    III: be inconsistent in pursuit of an ideological agenda, here, evolutionary materialism.

    It should be plain that neither II nor III is a rational view.

    I find it absolutely telling that we are seeing III plainly emerging as the view of choice for evolutionary materialists.

    GEM of TKI

  35. KF,

    FSCO/I

    We know that information can be instantiated into matter in the form of an arrangement of symbols which can be mapped to objects or control processes. This is all the more obvious when the symbols are presented in a digital, sequential manner, where decoding is rules-based. That is the FSC-I part of the two acronyms above.

    But the FSC-O part…

    A lever with a complementary hole and a shaft to form a pivot, which then acts within a function. This would seem to be information instantiated into matter as well (in perhaps an analog manner, or perhaps that descriptor misses the issue).

    Do you have anything on these thoughts or ideas – done any work around them? A link?

    Thanks!

  36. UB:

    Look at the discussions here and here, including the onward links.

    Something as simple as a nut and a bolt shows functionally specific complex organisation pointing to design. (Just look at your handy hardware store’s nuts and bolts section to see just how functionally specific such things are! And, yes, these are analogue, but the blueprint is reducible to digital form, via a mesh of nodes, arcs and interfaces.)

    A classic quotation by a leading OOL researcher is:

    __________________

    >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [J. S. Wicken, “The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.] >>
    __________________

    This suggests the nodes, arcs and interfaces reduction that I have used, which then implies, onwards, that the information can be extracted and tested for being an island of function: i.e., when we inject some noise, do we tend to run off the shore of an island of function?

    The digital reduction is, on that resolution, without loss of generality, as we are in effect deducing the blueprint for the system, including the tolerances that will still work.
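    The noise-injection test described above can be sketched as a toy model (entirely my illustrative construction: “function” here is simply closeness to a fixed target string, a deliberately crude stand-in for an island of function):

```python
import random

random.seed(1)

TARGET = "METHINKS IT IS LIKE A WEASEL"  # arbitrary illustrative target

def functional(s, tolerance=2):
    """Toy 'function' test: a configuration works only if it lies within a
    small Hamming distance of the target (an 'island' in sequence space)."""
    return sum(a != b for a, b in zip(s, TARGET)) <= tolerance

def inject_noise(s, flips):
    """Overwrite `flips` randomly chosen positions with random characters."""
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    chars = list(s)
    for i in random.sample(range(len(chars)), flips):
        chars[i] = random.choice(alphabet)
    return "".join(chars)

# As injected noise grows, the fraction of variants still on the island
# of function falls off sharply.
for flips in (1, 3, 10):
    ok = sum(functional(inject_noise(TARGET, flips)) for _ in range(1000))
    print(f"{flips:2d} flips -> {ok / 10:.1f}% still functional")
```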

    It is interesting to note how, with many precision machines until recently, what had to be done was to fit and try specific parts to get one that would fit. The old .303 Lee Enfield rifle is a classic case in point.

    GEM of TKI

    F/N: Pedant and MG, before any further arguments, it would be helpful if you could provide a cogent counter to the points presented here and here, on FSCO/I and the infinite monkeys analysis. Failing that, we have every epistemic right to infer from FSCO/I as a reliable sign to design as the empirically known, causally sufficient and reliable source for such.

  38. —Pedant [and MathGirl] “You consider it fair to ask an evolutionist for a detailed step-by-step mechanistic scenario of complex organelle evolution, while providing nothing in the way of a step-by-step mechanistic scenario informed by design theory.”

    —”As MathGirl suggested, surely it is equitable for anyone to ask design protagonists to provide better explanations for the origins of biological entities than the mechanistic explanations they find wanting on the part of conventional science.”

    It is not only not “equitable,” it is not even rational. According to your account, all biodiversity was the result of a mechanistic, step-by-step process. According to our account, it was not. So now you are asking us to do what? To provide evidence for something we say did not happen and that you say did happen? What kind of nonsense is that?

    Incrementalism and the naturalistic, step-by-step process is your gig, not ours. The fact is, you cannot provide any evidence at all to support your wide-sweeping claim that such a mechanism can do what you say it can do. The counter-fact is that we can provide plenty of evidence to support our minimalist claims for design in nature.

    Big claims like yours require big support, yet you have none. Small claims like ours require much less support, yet we provide that and more.

  39. Pedant:

    You consider it fair to ask an evolutionist for a detailed step-by-step mechanistic scenario of complex organelle evolution, while providing nothing in the way of a step-by-step mechanistic scenario informed by design theory.

    It is all about capabilities, meaning we know that intelligent agencies are capable of producing CSI and IC. However, we don’t have any observations of, or experience with, blind, undirected processes doing so.

    Therefore your position needs the detail.

    And BTW your position hardly rests on conventional science…

  40. Hmmm….

    It’s nice to see that my essay still can generate some interest after all these years. Two points/questions:

    1. It is not clear to me what is being claimed. Jonathan, are you suggesting that the mechanisms known to be involved in homologous and/or non-homologous recombination (the processes that pretty clearly gave rise to the T-urf13 gene) do not obey the rules of chemistry? This is the only way I can make sense of your argument.

    Stated another way, you seem to be claiming that T-urf13 must be beyond the reach of “Darwinian” processes, that in this case must involve recombination. Since T-urf13 obviously exists, and since we know when, where, and how (at least in general terms) it arose, then your argument seems to boil down to an assertion that recombinational mechanisms are either insufficient (obviously wrong) or guided in some (unstated) way. If you are trying to say something else, a bit of clarification would be appreciated.

    2. Axe’s views about the nature of the functional space of protein sequences are plainly wrong. I try to illustrate this here (I also explain why the ID party line with respect to the 2004 JMB paper is incorrect.) Given this, the objections that rely on the alleged fantastic isolation of functionality in sequence space are pretty irrelevant. (This is, of course, one of the points of the T-urf13 example. It shows quite clearly that functionality, even irreducible complexity, is not the stupendous impossibility that is claimed by Behe and others.)

  41. Mr Hunt et al:

    The basic problem with the typical objections to irreducible complexity as a key sign of design is the C1 – 5 issue, in the context of needing to account for significantly complex systems, not minor shifts. I cite Angus Menuge:

    ___________

    >> For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:

    C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.

    C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.

    C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.

    C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.

    C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.

    ( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.) >>
    ___________

    I get the consistent picture that those who dismiss IC as a significant issue have never had to seriously design complex systems, and do not have a feel for what is required to put together a networked composite entity that requires matched interfaces, complex organization and needs to fulfill an overall function.

    Until I see something that soundly answers the C1 – 5 factors, and backs it up with a demonstration, on empirical observation, of how something like a flagellum emerged by direct development or co-option, I cannot but conclude that I am looking at strawman objections.

    GEM of TKI

  42. @ kairosfocus

    The only way to advance the discussion is if you take the answers to your questions C1 – C5 that have been given and state, explicitly and in detail, why these are not sound.
    And the answers are out there, probably most of them freely available.

  43. MN:

    Why not provide an example, one that cogently responds to C1 – 5 on the merits, with empirical evidence that shows that this is not just a paper idea.

    (ONLOOKERS: a literature bluff, i.e. claiming that there is a cogent response while coyly declining to provide it, is an implicit admission that there is no real response. MN: In short, you need to show that you are not putting up a literature bluff. Luskin’s summary on this is: “Those who purport to explain flagellar evolution almost always only address C1 and ignore C2-C5.”)

    As someone who has had to design things, I am highly confident that C1 – 5 summarise real engineering and manufacturing issues that have to be resolved if something is to actually work.

    Cf my discussion here, which includes a summary on the actual demonstration of irreducible complexity for the flagellum, in the lab.

    So, MN, once you provide an actual case — one that cogently addresses C1 – 5 in a realistic case backed up by observational data, then we can see how this matter can be moved forwards.

    GEM of TKI

  44. F/N: Here is my in-brief on a typical objection, the T3SS proposed as ancestral to the flagellum:

    unless all five factors are properly addressed, the matter has plainly not been adequately explained. Worse, the classic attempted rebuttal, the Type Three Secretory System [T3SS], is not only based on a subset of the genes for the flagellum [as part of its self-assembly, the flagellum must push components out of the cell], but, functionally, it works to help certain bacteria prey on eukaryote organisms. Thus, if anything, the T3SS is not only a component part that has to be integrated under C1 – 5, but it is credibly derivative of the flagellum and an adaptation subsequent to the origin of eukaryotes. Also, it is just one of several components, and is arguably itself an IC system . . .

    Dembski has provided more details here, ever since 2003.

    And of course, this is a case in point of an answer that is inadequate on C1 and ignores C2 – 5.

  45. @ kairosfocus

    You have just admitted that there are answers to C1. So we can start with these.

    As a side note, in C1 to C5 you seem to be mixing the issue of how the flagellum is built with how it evolved/was designed.

    We can of course drop the assumption that if a bacterium today produces a flagellum, it utilizes only purely natural processes. But in that case a discussion seems meaningless to me.

  47. MN:

    Pardon.

    I have pointed out that there are strawman tactics that have been played with C1, and that these have been corrected on the record at least since 2003. I do not appreciate that being twisted into the pretence of a sound answer to C1.

    I have also shown that the strawman tactic has been extended to simply ignoring C2 – 5. Your gambit above, focusing on strawmanish responses on C1 while ignoring C2 – 5, is a strong indication that, in fact, you do not have a cogent response across the set of constraints C1 – 5.

    The problem is, unless there are answers to C1 – 5 that are solid across all, and backed by empirical evidence, you are nowhere. That is, you have not put forth any empirically well supported mechanism to account for functionally specific irreducibly complex organised systems on undirected chance plus mechanical necessity.

    For C1 and C2 – 5 are all NECESSARY conditions.

    GEM of TKI

    F/N: MN, the flagellum, the iconic case of IC, needs to be accounted for as a system that is built by a cell, and that building and operation have to be accounted for on the claimed evolutionary mechanisms. If it is not built, it does not exist. If its components are not available and matched as needed, it cannot be built. If its parts are not properly organised and assembled, it will not work.

  49. Onlookers: video

  50. Kairos

    “I get the consistent picture that those who dismiss IC as a significant issue have never had to seriously design complex systems, and do not have a feel for what is required to put together a networked composite entity that requires matched interfaces,”

    Thank you.

    I wonder what they would say about the autonomous light-seeking robot I built. I let him (it) roam in my backyard looking for the best light to charge his batteries via a small solar panel. That’s the only purpose in his little robotic life. (Does he dream about new Ni-MH batteries?)

    If he could be made 1/1000th of a mm in size, I wonder if they would say he (it) designed itself.
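    For what it’s worth, the control loop of such a light-seeker can be very small. Here is a minimal sketch; the sensor scale, gain and motor speeds are my inventions, not details of Eugen’s actual robot:

```python
# Minimal Braitenberg-style phototaxis controller: two light sensors steer
# two wheels so the robot veers toward the brighter side. All hardware
# details (sensor range, gain, base speed) are hypothetical.

def drive(left_sensor, right_sensor, base_speed=100):
    """Return (left_motor, right_motor) speeds for a differential drive:
    the wheel opposite the brighter sensor speeds up, turning the robot
    toward the light."""
    gain = 0.2
    error = right_sensor - left_sensor  # > 0 means light is to the right
    turn = int(gain * error)
    return base_speed + turn, base_speed - turn

print(drive(300, 500))  # light on the right -> (140, 60): veers right
print(drive(450, 450))  # balanced light -> (100, 100): drives straight
```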

  51. @ kairosfocus

    Just to be clear, you think that the bacterial flagellum is not built by purely natural processes?

  52. MN:

    Pardon.

    The issue is not me, nor what I think.

    It is whether, on empirical evidence, you can ground the claim that a demonstrated IC system — cf the knockout studies in the previously linked — can originate by forces of undirected chance and necessity. In the case of the flagellum, in the main by chance genetic variations and natural selection, involving whatever sub-components you wish and whatever de novo components you wish.

    These must come together to form a functioning entity, in light of C1 – 5.

    We already know that intelligent, skilled design is causally sufficient for complex, multi-part entities that arrange, interface, integrate and use components according to a wiring plan. (And with Venter et al, we are scaling down to bio-molecule level, with real world commercial applications probable across this decade. The next stage of mechatronics.)

    The question on the table, is can you show that forces of chance and necessity, without such design, are causally sufficient to do that, the literally thousands of times required to explain life and its forms and structures?

    GEM of TKI

  53. PaV,

    Your basic question is this:

    Can an intelligent designer exist and intervene in the evolution of life on this planet?

    No, my question is: Where is the empirical evidence to support the claim that an intelligent designer exists and did intervene in the evolution of life on this planet?

  54. Eugen,

    “You are claiming that an intelligent designer exists and intervenes in the evolution of life on this planet. I am asking you to provide empirical evidence for this claim to answer the questions of who, what, when, where, and how.”

    Wow. Quite a demand, I think.

    Not really. As I noted, it’s no more than would be asked of the proponents of any other hypothesis.

    Since there is no Moses around to talk to the Creator…

    I thought that ID was a scientific, not religious, hypothesis. Do you think otherwise?

    …why not re-read the recent series by Kairos where he set up a test experiment that’s actually doable.

    If not that, please give some ideas for specific experiment.

    If you don’t already have empirical evidence, there is no basis for your hypothesis. It is up to the proponents of ID to provide that evidence, or at least admit that it doesn’t (yet) exist.

  55. StephenB,

    ”As MathGirl suggested, surely it is equitable for anyone to ask design protagonists to provide better explanations for the origins of biological entities than the mechanistic explanations they find wanting on the part of conventional science.”

    It is not only not “equitable,” it is not even rational. According to your account, all biodiversity was the result of a mechanistic, step-by-step process. According to our account, it was not. So now you are asking us to do what? To provide evidence for something we say did not happen and that you say did happen?

    No, I’m merely asking for evidence that such a designer exists (or existed), that it intervened in biological evolution, what it did, and how it accomplished whatever it did.

    Again, this is no more than would be expected of the proponents of any other hypothesis.

  56. @ kairosfocus

    I’m sorry, I was asking if you think it is built or assembled by purely natural processes, not about how it originated.

  57. MN:

    The issue is not how a cell, a digitally controlled automaton, operates and replicates, but how it originated. Especially, here, how the particular feature of interest, a flagellum — an example of an IC entity — originated.

    As Eugen described, we know how to design and build automata by intelligence, and thanks to von Neumann, we know how they can be given a self-replicating facility, though we have yet to fully implement that. A reasonable matter of time.

    The unanswered question remains: can you account for such entities, on empirical evidence, from chance plus necessity without intelligent intervention?

    Given that this matter has been front and centre for years now, if there were such an answer, it would be trumpeted to the high heavens.

    So, the evasive side tracks, verbal sparring games, distractions and turnabout attempts are increasingly eloquent evidence that you have no sound answer on the merits.

    The same basic problem extends to the other rebuttal attempts above.

    Here is what this all boils down to: by fiat of the magisterium in the holy lab coat, we are expected to salute and say yessir to the claimed evolutionary materialist account of origins, simply on the grounds of logical possibility, the obvious and plainly superior alternative having been ruled out from serious consideration by question-begging rule-tampering games.

    Sorry, I am not buying that.

    Not when I can do science, say, the way Newton did.

    GEM of TKI

  58. MathGrrl

    “Since there is no Moses around to talk to the Creator…
    I thought that ID was a scientific, not religious, hypothesis. Do you think otherwise?”

    That was supposed to be a joke. I’m no expert on the science-religion issue.

    I think you are a ninja. You are evading and redirecting pretty well.

    Now, I don’t have any idea for the experiment you are asking about. Please give some hints, maybe it could be accommodated.

    F/N: it is quite obvious that the flagellum is a programmed entity that is assembled by the cell itself in accordance with a complex algorithm, or rather a cluster of algorithms that starts by triggering such formation and then producing and transporting the components in the right sequence. Why not ask Eugen — who works in the relevant field day to day — about what it takes to set up an automated assembly line, and what sort of smarts it would take to automate it as completely as in the living cell?

    –> Eugen, what do you think the odds are of generating the info to set up such a line by lucky noise backed up by trial and error testing?

  60. EUGEN?

    Did you catch that?

    G

  61. I notice that the link (in comment #40 above) to my essay on Axe’s JMB paper doesn’t seem to be working. Here is a copy and paste url (sorry about any frustration attendant with this):

    http://aghunt.wordpress.com/20.....-function/

    Enjoy!

    Yes, I think Dr Bot is a ninja, too. I’ve been looking for him re. digital communications and storage across posts, but there are so many.

    “–> Eugen, what do you think the odds are of generating the info to set up such a line by lucky noise backed up by trial and error testing?”

    I sometimes run thought experiments to examine what would happen if I take just two instructions or wires and switch them around.

    Depending on what is switched, problems could range from a simple stop to catastrophic failure.

    I fight randomness with all I have.

  63. –mathgirl: “NO, I’m merely asking for evidence that such a designer exists (or existed), that it intervened in biological evolution, what it did, and HOW IT ACCOMPLISHED WHAT IT DID” [my emphasis on the capital letters]

    [No, I am not asking for evidence to support how the designer did what it did.]

    —”Again, this is no more than would be expected of the proponents of any other hypothesis.”

    [Yes, I am asking for evidence of how the designer did what it did.]

    The last part of your first statement contradicts the second. Please choose one objection.

    Also, absorb this point:

    ID doesn’t presume to know how the designer did what it did. It is not incumbent on the scientist to validate a hypothesis that he doesn’t make, nor is it a rule of science that he should posit a “process” in order to make his hypothesis scientific. Science is a search for causes, not processes.

    Darwinism does presume to know that a process [random variation and natural selection] is responsible for all biodiversity.

    Therefore, it is incumbent on the Darwinist to justify his assertion and provide evidence for it.

    Further, ID does not claim that the designer “intervened.” It is important to know something about the proposition that you are trying to argue against.

    Summary:

    It is NOT incumbent on the ID scientist to justify an assertion he doesn’t make. If ID doesn’t hypothesize a process, ID can’t justify a process.

    It IS incumbent on the Darwinist to justify an assertion that he does make.

  64. SB:

    We have posited an inference to a cause, on a warrant rooted in empirically reliable signs backed up by analysis of the infinite monkeys challenge.

    And, we have plainly met the challenge.

    As we just saw from Eugen, he backs up what I would say too: the secret to a real-time automated system is that it must start from a known initial condition, then move under control on a path that has been worked out, or at least one it will work out in a predictable, effective and safe way based on programming [and a lot of testing].

    If it ever escapes control, you try to build in a safe recovery. Esp. in industrial environments, where with the power levels and masses involved BAD things can happen faster than you want to think.

    But, randomness is the enemy, not your friend.

    And all the more so as the systems become more and more integrated.

    The notion that something as specific, integrated and complex as that, assembles and programs itself out of lucky noise corrected by trial and error, is ludicrous.

    Pardon such directness.

    GEM of TKI

  65. —kairosfocus: “Pardon such directness.”

    By all means. Sometimes directness penetrates the fog.

    ID posits a designer as the cause of some features in nature and provides ample evidence to support that proposition.
    [Modest claim justified]

    Darwinism posits a mechanistic process as the cause of all biodiversity and provides no evidence at all to support that proposition.

    [Bold claim unjustified]

    Meanwhile, Darwinists ask ID scientists to justify their proposed process, as if one had been proposed, while failing to justify their own process, which was proposed.

    [Irrational assumption about equivalence of hypotheses]

  66. StephenB @63:

    ID doesn’t presume to know how the designer did what it did.

    Why not? Why would that be a presumption? What is preventing ID from investigating the designer’s modus operandi? Is the designer assumed or posited to be beyond the reach of empirical research?

    It is not incumbent on the scientist to validate a hypothesis that he doesn’t make…

    On the contrary, it is incumbent on anyone who makes an empirical knowledge claim to acknowledge the existence of entailments of that claim and to respond to challenges concerning those entailments. As you just said above about the Darwinist:

    Therefore, it is incumbent on the Darwinist to justify his assertion and provide evidence for it.

    It looks like you employ a double standard of criticism.

    …nor is it a rule of science that he should posit a “process” in order to make his hypothesis scientific. Science is a search for causes not processes.

    If by “process,” you mean how something works, that has become a key question for science since the 16th Century. See, for example, Newton’s laws of motion, especially the second law (F = ma), which allows quantitative calculations of how velocities of objects change when forces are applied.

  67. Pedant:

    How did you type your post?

    In short, there is more than one way to skin a cat, and depending, the different ways may not leave specific signs.

    But the “skinned cat[fish]” in this case, however done, has in it the distinctive sign dFSCI, which points to the root cause, whatever mechanism was employed: touch-typing on a QWERTY keyboard, typing on a Dvorak keyboard, two-finger typing, or even typing by pointing to letters on a touch-board with your mouth or toes, depending on possible handicap.

    The mechanical expressions may vary, the intent and action of mind behind these expressions does not, and that is what FSCO/I detects.

    Meanwhile, after hours and hours, you are still not coming forward with the evidence that would show on empirical data that undirected forces of chance plus necessity are causally adequate to reliably and repeatedly generate FSCI.

    But the very post you used to distract from this is yet another instance where the inference from FSCO/I to design as the best explanation is again accurate.

    1208 ASCII characters in contextually calculatedly evasive English text, 128^1208 = 3.23*10^2545 possibilities; well beyond the FSCI rule of thumb threshold.
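    As a quick sanity check, the quoted figure does hold up arithmetically. A minimal sketch in Python (the 128-symbol ASCII alphabet and the 1208-character count are taken from the figure above; the script itself is purely illustrative):

    ```python
    import math

    # Number of possible strings of the stated length over the
    # stated alphabet: 128 symbols per position, 1208 positions.
    symbols = 128
    length = 1208

    # 128**1208 is a 2546-digit integer, so work in log10 space
    # to recover the scientific-notation form quoted above.
    log10_count = length * math.log10(symbols)   # ~2545.51
    exponent = math.floor(log10_count)
    mantissa = 10 ** (log10_count - exponent)

    print(f"{mantissa:.2f} * 10^{exponent}")     # -> 3.23 * 10^2545
    ```

    So 128^1208 is indeed about 3.23*10^2545, as stated.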

    As to mechanisms to create the organisms, anything from a super-sophisticated nanotech lab on up to front-loaded genes and so forth is possible. But that is the operative word: possible. We know what is credible: design. And since we know design is possible, the next step is reverse and forward engineering.

    But that is onward engineering.

    The decisive, even revolutionary point is the empirically well warranted inference to design as cause — which your “evade at all costs” remarks inadvertently testify to.

    GEM of TKI

  68. KF at 36, thanks. I should have known to go to IOSE. I am interested in information being instantiated into matter not by symbol, but by form.

  69. F/N: Onlookers, in the already linked discussion in response to Miller on the T3SS, in 2003, Dembski responded to the sort of objection just above as follows:

    _____________

    >>Conflating ID with Interventionism:

    According to Miller, intelligent design “requires that the source of each and every novelty of life was the direct and active involvement of an outside designer whose work violated the very laws of nature he had fashioned…. The notion at the heart of today’s intelligent design movement is that the direct intervention of an outside designer can be demonstrated by the very existence of complex biochemical systems” Miller and I have discussed this criticism in public debate on several occasions. By now [2003!] he should know better.

    Intelligent design does not require organisms to emerge suddenly or be specially created from scratch by the intervention of a designing intelligence. To be sure, intelligent design is compatible with the creationist idea of organisms being suddenly created from scratch. But it is also perfectly compatible with the evolutionist idea of new organisms arising from old by a process of generation. What separates intelligent design from naturalistic evolution is not whether organisms evolved or the extent to which they evolved but what was responsible for their evolution.

    Naturalistic evolution holds that material mechanisms [= chance plus necessity, which as you can see eight years later we still cannot get a sound empirical support for] alone are responsible for evolution (the chief of these being the Darwinian mechanism of random variation and natural selection). Intelligent design, by contrast, holds that material mechanisms are capable of only limited evolutionary change and that any substantial evolutionary change would require input from a designing intelligence. Moreover, intelligent design maintains that the input of intelligence into biological systems is empirically detectable, that is, it is detectable by observation through the methods of science. For intelligent design the crucial question therefore is not whether organisms emerged through an evolutionary process or suddenly from scratch, but whether a designing intelligence made a discernible difference regardless how organisms emerged.

    For a designing intelligence to make a discernible difference in the emergence of some organism, however, seems to Miller to require that an intelligence intervened at specific times and places to bring about that organism and thus again seems to require some form of special creation. This in turn raises the question: How often and at what places did a designing intelligence intervene in the course of natural history to produce those biological structures that are beyond the power of material mechanisms? Thus, according to Miller, intelligent design draws an unreasonable distinction between material mechanisms and designing intelligences, claiming that material mechanisms are fine most of the time but then on rare (or perhaps not so rare) occasions a designing intelligence is required to get over some hump that material mechanisms can’t quite manage. Hence Miller’s reference to “an outside designer violat[ing] the very laws of nature he had fashioned.”

    As I’ve pointed out to Miller on more than one occasion, this criticism is misconceived. The proper question is not how often or at what places a designing intelligence intervenes but rather at what points do signs of intelligence first become evident. Intelligent design therefore makes an epistemological rather than ontological point. To understand the difference, imagine a computer program that outputs alphanumeric characters on a computer screen. The program runs for a long time and throughout that time outputs what look like random characters. Then abruptly the output changes and the program outputs the most sublime poetry. Now, at what point did a designing intelligence intervene in the output of the program? Clearly, this question misses the mark because the program is deterministic and simply outputs whatever the program dictates.

    There was no intervention at all that changed the output of the program from random gibberish to sublime poetry. And yet, the point at which the program starts to output sublime poetry is the point at which we realize that the output is designed and not random. [notice, dFSCI as a reliable and recognisable sign of design] Moreover, it is at that point that we realize that the program itself is designed. But when and where was design introduced into the program? Although this is an interesting question, it is ultimately irrelevant to the more fundamental question whether there was design in the program and its output in the first place. We can tell whether there was design (this is ID’s epistemological point) without introducing any doctrine of intervention (ID refuses to speculate about the ontology of design)

    Intelligent design is not a theory about the frequency or locality at which a designing intelligence intervenes in the material world. It is not an interventionist theory at all. Indeed, intelligent design is perfectly compatible with all the design in the world being front-loaded in the sense that all design was introduced at the beginning (say at the Big Bang) and then came to expression subsequently over the course of natural history much as a computer program’s output becomes evident only when the program is run. This actually is an old idea, and one that Charles Babbage, the inventor of the digital computer, explored in the 1830s in his Ninth Bridgewater Treatise (thus predating Darwin’s Origin of Species by twenty years).

    Let’s be clear, however, that such preprogrammed evolution would be very different from evolution as it is now conceived. Evolution, as currently presented in biology textbooks, is blind — nonpurposive material mechanisms run the show. Within this naturalistic conception of evolution, the origin of any species gives no evidence of actual design because mindless material mechanisms do all the work. Within a preprogrammed conception of evolution, by contrast, the origin of some species and biological structures would give evidence of actual design and demonstrate the inadequacy of material mechanisms to do such design work. Thus naturalistic evolution and preprogrammed evolution would have different empirical content and be distinct scientific theories.

    Of course, such preprogrammed evolution or front-loaded design is not the only option for the theory of intelligent design. Intelligent design is also compatible with discrete interventions at intermittent times and diverse places. Intelligent design is even compatible with what philosophers call an occasionalist view in which everything that occurs in the world is the intended outcome of a designing intelligence but only some of those outcomes show clear signs of being designed. In that case the distinction between natural causes and intelligent causes would concern the way we make sense of the world rather than how the world actually is (another case of epistemology and ontology diverging).

    We may never be able to tell how often or at what places a designing intelligence intervened in the world or even whether there was any intervention in Miller’s sense of violating natural laws. But that’s okay. What’s crucial for the theory of intelligent design is the ability to identify signs of intelligence in the world — and in the biological world in particular — and therewith conclude that a designing intelligence played an indispensable role in the formation of some object or the occurrence of some event. That is the start. Often in biology there will be clear times and locations where we can say that design first became evident. But whether that means a designing intelligence actually intervened at those points will require further investigation and may indeed not be answerable. As the computer analogy above indicates, the place and time at which design first becomes evident need have no connection with the place and time at which design was actually introduced.

    In the context of biological evolution, this means that design can be real and discernible in evolutionary change without requiring an explicit “design event,” like a special creation, miracle, or supernatural intervention. At the same time, however, for evolutionary change to exhibit actual design would mean that material mechanisms were inadequate by themselves to produce that change. The question, then, that requires investigation is not simply what are the limits of evolutionary change, but what are the limits of evolutionary change when that change is limited to material mechanisms. This in turn requires examining the material factors within organisms and in their environments capable of effecting evolutionary change. The best evidence to date indicates that these factors are inadequate to drive full-scale macroevolution. Something else is required — intelligence. >>
    ______________

    In short, we are seeing an old long since cogently rebutted tactic replayed, and to the same evasive end.

    GEM of TKI

  70. UB:

    Welcome.

    Information can be explicitly coded in dFSCI. It can be implicit in the organisation of discrete elements, and it can be implicit in the 3-D structure of a unified entity that takes a functionally specific form, i.e. whatever specifies the mould from which it came, or the NC machine, or the like.

    G

  71. F/N 2:

    I should add to the above that not all scientific questions are dynamical. Some are historical or factual; indeed, there used to be a term for this, natural history.

    Others, are about cause. And, here we have three main classes of cause: chance, mechanical necessity (what dynamics like Newtonian dynamics are classically about) and intelligence.

    When, for instance, one wishes to scientifically validate the Glasgow Coma Scale, one is not interested in the dynamics by which conscious mind gives rise to speech, just in the fact that speech that is contextually responsive and coherent is a sign of consciousness. And that scale is used in literally life-or-death situations.

    That should — pardon the directness, but it seems well warranted and needed — underscore the underlying intellectual irresponsibility of the evasive distractors that have been presented above.

    We know on abundant evidence that FSCO/I is a reliable index pointing to intelligence as its cause. The proposition of the evolutionary materialistic establishment, is that chance plus necessity are causally sufficient to account for it. On being challenged to empirically substantiate such a claim without begging questions, we routinely see – as again above — evasions, turnabout tactics, and the like.

    That pattern — a very familiar one for those of us who had to deal with Marxists and the like 30 years ago — tells us beyond reasonable doubt that the evo mat position is ideological, not scientific. And so, one of the most important services of ID theory to the cause of the advancement of science, is to help liberate science from ideological captivity.

    GEM of TKI

  72. kf,

    I beg your patience. I don’t want to be your adversary. I don’t see that as a pathway to increasing my understanding of Intelligent Design as a scientific program. I do think it would assist my understanding if you or StephenB would kindly answer the question that I posed to him above:

    What is preventing ID from investigating the designer’s modus operandi? Is the designer assumed or posited to be beyond the reach of empirical research?

    I hope you will consider that a fair question.

  73. –Pedant: “Why not? Why would that be a presumption? What is preventing ID from investigating the designer’s modus operandi?”

    The ID scientist, unlike the Darwinist, knows his limitations, meaning that he doesn’t make claims that he cannot defend. The current ID paradigms cannot detect the designer’s modus operandi and the ID scientist knows that. So, he makes only a modest claim that he can support with evidence: certain features in nature were likely designed. It is not necessary to know how a thing was designed in order to know that it was designed, as any archeologist or forensic scientist will attest.

    –Is the designer assumed or posited to be beyond the reach of empirical research?”

    If you mean, is the designer’s “identity” out of reach for current scientific paradigms, the answer would be yes. If you mean, is the fact of the designer’s “existence” out of reach, the answer would be no.

    [It is not incumbent on the scientist to validate a hypothesis that he doesn’t make…]

    —”On the contrary, it is incumbent on anyone who makes an empirical knowledge claim to acknowledge the existence of entailments of that claim and to respond to challenges.”

    The logic of entailments does not require a scientist to explain anything outside of the paradigmatic constructs he is using. Indeed, it would be impossible to do so since it would rule out the possibility of measurement. “Irreducible complexity,” for example, is a claim about function. It is not a claim about a process. Darwinism is a claim about a process.

    ID scientists provide evidence that the functions they observe have been designed.

    Darwinists, on the other hand, provide no evidence that the process they describe can do what they say it can do.

    —”It looks like you employ a double standard of criticism.”

    I simply ask the scientist to show me his methods and provide evidence for the claims he makes, not for the methods he doesn’t use or claims he doesn’t make. What could be more reasonable than that?

    —”If by “process,” you mean how something works, that has become a key question for science since the 16th Century.”

    Yes, a key question, but not the only question.

    StephenB, this is one of the few times I find that I disagree with you somewhat, for I think we live at such a privileged time of ‘advanced’ knowledge into the workings of nature, through quantum mechanics and special and general relativity, that it is possible to ‘know’ with a very high degree of certainty the transcendent characteristics of the Designer. But seeing as the atheists/neo-Darwinists who visit UD will not even concede the blatantly obvious fact that vastly superior design exists in life, I have to agree that your approach is by far the most pragmatic, since it lends itself most readily to defending the main point against the neo-Darwinists: that life clearly exhibits all the hallmarks of a superior craftsman.

  75. bornagain, I am not sure we disagree, but I am definitely intrigued by this subject matter. When I say that ID cannot bridge the gap between the designer’s existence and the designer’s identity, I speak mainly of the formal ID paradigms that stop at the functional threshold, as is the case with Dembski, Behe, Meyer et al. or those which stop at measuring finely-tuned constants. I am not prepared to say, in principle, that science is confined to that level of discovery, so we may be on the same wavelength here.

    Indeed, as Father Thomas Dubay, Benjamin Wiker, and others have pointed out, nature exhibits beauty as well as design, and science can, at some level, measure beauty, especially in terms of balance and proportion. That fact alone points to something more than the existence of a designer.

    All we may need is one more genius to stand on Behe’s and Dembski’s shoulders [I consider each to be a genius] and conceive a new paradigm that will make the hoped-for stretch. Or perhaps someone else is working in a mode I don’t know about and has already broken the ice. I do not think, though, that we can do it without that new paradigm and that new genius. What do you think?

  76. StephenB, the knowledge is there for the gap to be bridged, if someone comes along (a genius) who can synthesize it into proper logical form that will truly be a thing of beauty.

  77. MathGrrl[53]:

    No, my question is: Where is the empirical evidence to support the claim that an intelligent designer exists and did intervene in the evolution of life on this planet?

    So we’re right back to what I originally said:

    Math Girl, if you can’t explain the tilma’s image, then admit outside agency. If you refuse to admit outside agency, then admit that you’re completely closed to the possibility of any kind of outside agency.

    Does the tilma of Juan Diego represent sufficient evidence to you of outside agency?

    It would seem that if an image can appear out of nowhere and you deny that an outside agency CAN and DOES intervene in bringing it about, then what possible evidence would ever satisfy you when it comes to the world of biology?

    So, it gets back to my original point.

  78. StephenB:

    The ID scientist, unlike the Darwinist, knows his limitations, meaning that he doesn’t make claims that he cannot defend. The current ID paradigms cannot detect the designer’s modus operandi and the ID scientist knows that.

    Thank you for your courtesy in replying so informatively. I will not pursue this matter further, except to ask, How does the ID scientist know her limitations? How does she know that there is a boundary to her inquiry concerning design in nature?

    If you mean, is the designer’s “identity” out of reach for current scientific paradigms, the answer would be yes.

    Again, What is the empirical/deductive process of reasoning that accounts for the confidence of the ID scientist that the nature of the designer is out of reach?

    Dembski: “Hence Miller’s reference to “an outside designer violat[ing] the very laws of nature he had fashioned.”

    Dembski: “As I’ve pointed out to Miller on more than one occasion, this criticism is misconceived… Intelligent design therefore makes an epistemological rather than ontological point.”

    And unlike what Miller does, ID does not make a theological point. Miller does this when he rejects ID on the basis of “an outside designer violat[ing] the very laws of nature he had fashioned.”

    Firstly, ID does not necessarily claim that the designer of earth’s life is “outside of the laws of nature”.

    Secondly, even if the designer is outside the “laws of nature,” the “laws of nature” are (A) merely man’s descriptions of the regular patterns of nature as we know it, and (B) there is no reason why the designer should be limited by these regularities other than THEOLOGICAL reasons.

    Miller is free to reject whatever he wants for whatever reasons he wants, but in this case, the reason is patently theological, and not on any scientific basis.

  80. Arthur Hunt:

    (This is, of course, one of the points of the T-urf13 example. It shows quite clearly that functionality, even irreducible complexity, is not the stupendous impossibility that is claimed by Behe and others.)

    Arthur, you are confused. How many components does T-urf13 code for? How many components are in that irreducibly complex system? Compare that to Dr Behe’s mousetrap and you will see your example doesn’t measure up.

    As for recombinations: how was it determined that recombinations are blind watchmaker processes?

  81. Pedant:

    Again, What is the empirical/deductive process of reasoning that accounts for the confidence of the ID scientist that the nature of the designer is out of reach?

    In the absence of direct observation or designer input, the only possible way to make any scientific determination about the designer(s) or the specific process(es) used is by studying the design in question.

    With forensics, the evidence and data don’t always point to the criminal. It takes detective work to ferret him/her out.

    So we do what we can with what we have available- resource and evidence wise.

    Heck, how long have we been trying to unravel Stonehenge? Yet the best we can say is “humans did it”.

  82. —Pedant: “How does the ID scientist know her limitations? How does she know that there is a boundary to her inquiry concerning design in nature?”

    I don’t think that the ID scientist knows her limitations in a final sense, but only in the context of the paradigm that she has chosen. Perhaps a more expansive paradigm would reveal more information and allow the researcher to probe more deeply into the nature of the designer.

    What potential information, for example, is available to the researcher who conceives and applies the scientific paradigm of “irreducible complexity,” or, for that matter, “specified complexity”? It seems to me that the former yields information about the organism’s unity, while the latter tells us something about its function. But notice that, in the first case, at least, I am using a philosophical principle [unity] to shed light on and interpret information made available by science.

    With science we can examine facts, isolate variables, and measure things, but with philosophy we can, perhaps, get at the meaning and significance of science’s findings. Each discipline has its own methods, and each pursues one aspect of truth, but I think it is a mistake to put up a wall of separation between them. Ultimately, it is the responsibility of philosophy to illuminate science and not the other way around. Science can do nothing on its own. Without the first principles of right reason as a foundation, for example, science cannot even begin to function.

    —”Again, what is the empirical/deductive process of reasoning that accounts for the confidence of the ID scientist that the nature of the designer is out of reach?”

    I am not sure that the nature of the designer is out of reach for science, in principle, but I think that the paradigm that the researcher chooses is self-limiting in the sense that it asks only a small number of questions for the sake of focus and precise measurement. Science does require certain trade offs, I think. Normally, the scientist only gets answers to the questions he/she asks. It takes a lot of skill to formulate those questions and isolate other questions. However, there is such a thing as serendipity and I wouldn’t presume to rule it out. Sometimes, we do get answers to questions that we didn’t ask, but only because we had enough disciplined control over what we were doing to recognize the anomalies that appear.

    Can science tell us about the designer’s capacity for creativity, power, and intent? I suspect that it can with the help of philosophical reasoning, but I don’t know if it can do it on its own. Evidence does, after all, need to be interpreted, and only philosophy can provide the rules of reason that eventually do the interpreting.

  83. Intelligent Design is about the design not the designer(s)- that is its limits.

    But given any design inference the natural questions are “who, how, why, ie purpose, (where & when)”. IOW research questions borne from the design inference, meaning the design inference is not a dead-end.

  84. Mr Hunt “refutes” me concerning human chromosome 2 by:
    1) misrepresenting what I’d said;
    2) misrepresenting current scientific knowledge;
    3) misrepresenting Darwinism;
    4) engaging in easily identified logical fallacies
    … and I’m no one.

    So, it seems to me that it rather stands to reason that Mr Hunt will be compelled to do likewise as he “refutes” Mr Behe.

  85. KB:

    I am actually citing Dembski’s reply to Miller of 2003.

    G

  86. Does the tilma of Juan Diego represent sufficient evidence to you of outside agency?

    Was the tilma of Juan Diego the result of divine intervention, and how do we establish whether it was? All we have to go on are some accounts from the 16th century, which vary; some of them refer to it as being painted by a native Mexican whilst others claim it appeared out of nowhere.

    I’m afraid I don’t consider your personal belief that it is of divine provenance to count much as evidence, certainly not evidence of divine intervention in biological processes.

  87. Dr. Bot, please forgive me for jumping in here, but I noticed you stated:

    ‘certainly not evidence of divine intervention in biological processes.’

    It seems to me that there is now a ‘fingerprint’ of ‘divine intervention in biological processes’;

    First we find that quantum entanglement falsifies reductive materialism (local reality),,,

    The Failure Of Local Realism – Materialism – Alain Aspect – video
    http://www.metacafe.com/w/4744145

    Physicists close two loopholes while violating local realism – November 2010
    Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview.
    http://www.physorg.com/news/20.....alism.html

    ,,, and yet reductive materialism/local realism is the premise that neo-Darwinism is built on, and thus when quantum entanglement is found in biology,,,

    Quantum Information/Entanglement In DNA & Protein Folding – short video
    http://www.metacafe.com/watch/5936605/

    ,,, Dr. Bot, does that not also falsify the reductive materialistic conjecture in biology of neo-Darwinism??? If not, why not?

    further note;

    Of related note; there is a mysterious ‘higher dimensional’ component to life:

    The predominance of quarter-power (4-D) scaling in biology
    Excerpt: Many fundamental characteristics of organisms scale
    with body size as power laws of the form:

    Y = Yo M^b,

    where Y is some characteristic such as metabolic rate, stride length or life span, Yo is a normalization constant, M is body mass and b is the allometric scaling exponent.
    A longstanding puzzle in biology is why the exponent b is usually some simple multiple of 1/4 (4-Dimensional scaling) rather than a multiple of 1/3, as would be expected from Euclidean (3-Dimensional) scaling.
    http://www.nceas.ucsb.edu/~dre.....18_257.pdf

    “Although living things occupy a three-dimensional space, their internal physiology and anatomy operate as if they were four-dimensional. Quarter-power scaling laws are perhaps as universal and as uniquely biological as the biochemical pathways of metabolism, the structure and function of the genetic code and the process of natural selection.,,, The conclusion here is inescapable, that the driving force for these invariant scaling laws cannot have been natural selection.” Jerry Fodor and Massimo Piatelli-Palmarini, What Darwin Got Wrong (London: Profile Books, 2010), p. 78-79
    http://www.uncommondescent.com.....ent-369806

    4-Dimensional Quarter Power Scaling In Biology – video
    http://www.metacafe.com/w/5964041/

    Though Jerry Fodor and Massimo Piatelli-Palmarini rightly find it inexplicable for ‘random’ Natural Selection to be the rational explanation for the scaling of the physiology, and anatomy, of living things to four-dimensional parameters, they do not seem to fully realize the implications this ‘four dimensional scaling’ of living things presents. This 4-D scaling is something we should rightly expect from an Intelligent Design perspective. This is because Intelligent Design holds that ‘higher dimensional transcendent information’ is more foundational to life, and even to the universe itself, than either matter or energy are. This higher dimensional ‘expectation’ for life, from an Intelligent Design perspective, is directly opposed to the expectation of the Darwinian framework, which holds that information, and indeed even the essence of life itself, is merely an ‘emergent’ property of the 3-D material realm.
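    The power law quoted in the excerpt above can be sketched numerically. The exponent b = 3/4 (Kleiber’s law for metabolic rate) is the standard quarter-power example; the normalization constant Yo and the masses used are illustrative assumptions, not values from the cited paper:

    ```python
    # Illustrative sketch of the allometric power law Y = Yo * M^b quoted above.
    # b = 0.75 (Kleiber's law for metabolic rate) is the classic quarter-power
    # exponent; Yo = 1.0 is an arbitrary normalization chosen for illustration.

    def allometric(M, Yo=1.0, b=0.75):
        """Return Y = Yo * M**b for body mass M."""
        return Yo * M ** b

    # Doubling body mass under quarter-power scaling (b = 3/4) multiplies the
    # characteristic by 2**0.75 ~ 1.68, whereas Euclidean surface-area scaling
    # (b = 2/3) would predict a factor of 2**(2/3) ~ 1.59.
    ratio_quarter = allometric(2.0) / allometric(1.0)
    ratio_euclid = 2.0 ** (2.0 / 3.0)
    print(round(ratio_quarter, 2), round(ratio_euclid, 2))
    ```

    The puzzle the excerpt describes is precisely this gap: measured exponents cluster around multiples of 1/4 rather than the 1/3 multiples that 3-D geometry alone would suggest.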

    Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH
    Excerpt: It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
    http://journals.witpress.com/journals.asp?iid=47

  88. I’ve read through the responses since my last post on this thread and I see some analogies, some attempted arguments against evolutionary theory, and some assertions about the purpose of ID, but no references to empirical evidence.

    I don’t want to put words in peoples’ mouths, so I ask ID proponents to point out where I might have missed something. As near as I can tell, the following statements are true:

    - There is no empirical evidence that any intelligent agent existed at the time during which “design” presumably took place in the evolution of life on Earth.
    - There is no empirical evidence that any particular biological artifact is the result of intelligent agency.
    - There is no empirical evidence demonstrating how an intelligent agent might have manipulated a particular biological artifact.

    A scientific hypothesis is an explanation for empirical observations. Without any such observations, ID cannot logically derive an hypothesis. Without an hypothesis, no testable predictions can be made. This inability to answer the who, what, when, where, and how questions is why ID is not considered scientific.

    I am very interested in learning about the positive evidence for ID, the hypotheses that explain that evidence, and the testable predictions that those hypotheses entail. Can anyone here provide references?

  89. MathGrrl, since I just asked Dr. Bot, I will ask you the same,

    Since quantum entanglement falsified reductive materialism, and since quantum entanglement is found embedded in life,,,

    Quantum Information/Entanglement In DNA & Protein Folding – short video
    http://www.metacafe.com/watch/5936605/

    ,,, and since neo-Darwinism is premised upon reductive materialism, and quantum entanglement is found embedded in life, is not neo-Darwinism falsified since its reductive materialistic foundation is shown to be completely disconnected from the effect to be explained in the first place? If not, please explain exactly why not.

  90. MG:

    Do you imagine that by mislabelling instantiation as analogies, and by begging the question of empirically tested reliable signs, you can conclude:

    I see some analogies, some attempted arguments against evolutionary theory, and some assertions about the purpose of ID, but no references to empirical evidence[?]

    I suggest you take the time to read here, and in the onward linked, then respond on merits, not strawman mislabelling and begged questions.

    The digital code in D/RNA is there for all to see [it is not an analogy, it is an instantiation of a 4-state digital code that is also an algorithm, implemented through cellular, molecular-scale nanomachines . . . which is what ribosomes etc plainly are], and is so integral to how cells work that it was there from their origin.

    It is by definition, empirical.

    It is a fact in evidence, and the only empirically reliable and infinite monkeys theorem credible source of such digitally coded, functionally specific, complex information — on billions of test cases — is intelligence.

    Further to this, we are looking at the remote, unobservable past.

    If you are consistent instead of selectively hyperskeptical, you just wrote off all claimed evidence of the origins past.

    But of course, you are not; you want to project to the past the idea that chance chemistry and survival of what worked was enough to create the cell out of whatever version of Darwin’s warm little pond you favour. The only problem: you are trying to oppose, to a known sufficient cause of dFSCI, what is credibly not causally sufficient.

    So, the failure of providing evidence that chance plus necessity without intelligence can give rise to dFSCI, coupled to strained dismissals, shows only too plainly the real problem: ideological a priori materialism, and associated question begging and selective hyperskepticism.

    Time to think again, and without ideological blinkers, methinks.

    G’day

    GEM of TKI

  91. The digital code in D/RNA is there for all to see (…) and is so integral to how cells work that it was there from their origin.

    It is by definition, empirical.

    It is observed, its origins were not, as you point out:

    we are looking at the remote, unobservable past.

    Yet you also claim:

    it was there from their origin.

    Now you say

    So, the failure of providing evidence that chance plus necessity without intelligence can give rise to dFSCI, coupled to strained dismissals, shows only too plainly the real problem: ideological a priori materialism, and associated question begging and selective hyperskepticism.

    So when we don’t know if something is possible we should conclude that some unobserved entity did it – a priori theism perhaps? – does this mean we should stop investigating?

    Not me, I prefer “We don’t know” when I’m doing science. Time to think again, and without ideological blinkers, methinks.

    I have no problem entertaining the idea that the unknown is explainable by reference to God, but that doesn’t stop me from checking to see if it is actually the result of God’s universe operating according to His rules.

    Mathgrrl is asking for evidence in support of the hypothesis that an intelligent entity intervened to create life, or to alter life. You claim that life cannot arise through the natural laws that (I believe) God created, therefore you are citing your belief that another hypothesis (natural processes) won’t work as evidence in support of this hypothesis – Mathgrrl is asking for evidence in support of the ID hypothesis.

    Not good enough I’m afraid, and remember, chemistry is not random, what chemistry can do cannot be determined by looking at the output of a random number generator.

  92. Mathgrrl,

    As to your first and third questions, I think that there is no empirical evidence.

    But as to your second I would say that the bacterial flagellum is empirical evidence of design. A computer is also empirical evidence of design.

    The design hypothesis does require an inference, but inferences are made in scientific hypotheses all the time.

    A design inference is, IMO, just as legitimate as an inference to the Big Bang based on the redshift of light and even more legitimate than the “multiverse” inference which has no empirical evidence whatsoever other than the mere existence of our universe.

  93. Collin

    A design inference is, IMO, just as legitimate as an inference to the Big Bang based on the redshift of light

    Good point, but, as with physics, how do you proceed to test the hypothesis experimentally?

    Big bang theory offers us ways of testing the hypothesis, how do we test the ID hypothesis as it pertains to a specific question in biology? (e.g. origins of life)

  94. DrBot and MathGrrl, please address the question I asked you,,, if you did not understand the full implications that this ‘quantum’ dilemma presents to the neo-Darwinian framework, let me try to make it simpler. Quantum entanglement is shown to be an effect that is instantaneous and universal,

    The Failure Of Local Realism – Materialism – Alain Aspect – video
    http://www.metacafe.com/w/4744145

    i.e. entanglement is shown to be completely transcendent of any constraints of time or space, and entanglement is also shown to exercise dominion over ‘material’ particles. Particles which are themselves constrained by time and space. Yet, recently, quantum entanglement was shown to be integral in molecular biology,,

    Does DNA Have Telepathic Properties?-A Galaxy Insight
    Excerpt: DNA has been found to have a bizarre ability to put itself together, even at a distance, when according to known science it shouldn’t be able to. Explanation: None, at least not yet.,,, The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible.
    http://www.dailygalaxy.com/my_.....ave-t.html

    Quantum Information/Entanglement In DNA & Protein Folding – short video
    http://www.metacafe.com/watch/5936605/

    ,,,Yet how can entanglement in biology possibly be explained by the reductive materialistic/local realism framework of neo-Darwinism when entanglement falsified the validity of reductive materialism in the first place? It is simply ludicrous to appeal to a framework that has been falsified by the very effect you are seeking to explain!

  95. Dr. Bot and MathGrrl to ID: “ID supporters, explain yourselves.”

    ID: “OK. We know for a fact that the activities of intelligent agents always leave clues in the form of design patterns. Those same design patterns exist in nature, so we conclude that they, too, were caused by an intelligent agent.”

    ID to DrBot and MathGrrl: “Darwinists, explain yourselves.”

    DrBot and MathGrrl: “OK. We have no evidence whatsoever to justify our claim that naturalistic processes alone generated biodiversity. However, we have a strong faith in Darwinism that compensates for our lack of evidence.”

    DrBot and MathGrrl to ID:

    “Your inference to the best explanation based on evidence is not good enough. Our leap of faith based on our fondest hopes is more than good enough.”

    Welcome to the wacky world of Darwinism.

  96. Stephen:

    Sadly, yes.

    G

  97. F/N:

    Dr Bot, once you see the role of D/RNA and the stored code in the manufacture and despatching of proteins to target points in the cell and even beyond it, it is quite plain that no D/RNA, and no code, no living cell.

    And there is absolutely zero evidence of operational metabolising and self-replicating biological life that is not based on cellular technologies; autocatalytic reaction sets don’t even come close. Even viruses are parasites that hijack cellular machinery.

    So, since the D/RNA with their codes are a necessary part of the cell’s core operations, then it is in fact reasonable to draw the conclusion that they were there from the beginning. Without them, no cell.

    And, they are replete with dFSCI, which we have excellent empirical and analytical grounds to see as a reliable sign of intelligent cause.

    GEM of TKI

  98. DrBot:

    The image of the Virgin cannot be explained by scientists. Do you understand this and recognize this?

    If scientific methods fail to provide an explanation for an image—that is, a pattern, a design!—that is there for all to see, then, please, tell me what ‘caused’ the image to appear.

    Is it not an agency that falls outside of ‘natural laws’? If you answer, “no”, then why can’t the image be explained using what is known of natural laws?

    The purpose of this exercise—an exercise that Mathgrrl keeps running away from—is to get you, and others, to see the effect of presupposing that everything can be explained by—and only!—forces acting according to natural laws. The effect is to limit what it is you can explain. The ID argument simply says that natural explanations, i.e., Darwinian mechanisms, are too limited: they CAN’T explain what we see happening in living entities.

    From the fact that the tilma’s image cannot be reduced to natural forces, one is required to infer an outside agency. If you refuse to do that with what is right there before your eyes, and documented scientifically, then what chance is there you will make the proper inference to outside agency when the ‘image’ is ‘hidden’, so to speak? None, really.

    Thus, your inability, or unwillingness, to infer outside agency DRIVES you toward any kind of putative natural explanation for existent life—for, after all, that is all that Darwinism amounts to. Maybe you do it because you don’t want to believe in God. Maybe Darwin did it because he couldn’t believe that God would “create” the kinds of malignant things we find in nature. But these are theological/metaphysical concerns; not scientific. (N.B. I would recommend a very close reading of the first chapter of the Book of Job when we attack the issue of malignancy in our world.)

    From a purely scientific perspective, just as scientist say they cannot explain the image on Juan Diego’s tilma, so, too, should they say they can’t explain the diverse richness of life via Darwinian mechanisms. Humility is a good thing.

  99. F/N 2: in light of the above, actually, Dr Bot: Mathgrrl is asking for [sadly, evading] evidence in support of the ID hypothesis.

    I repeat: we know that intelligence is causally sufficient for dFSCI; indeed we ourselves routinely show that by posting in this blog. The ONLY empirically known sufficient cause for dFSCI is intelligence, and the implications of our whole cosmos being unable to scratch search spaces for 1,000 bits or more make the other possible causes, chance and/or necessity, maximally unlikely to create dFSCI. For, such are simply utterly unlikely to ever get to the shores of an island of function.

    This is backed up by the fact that after a lot of trying, in fact, chance and/or necessity are consistently unable to produce dFSCI; spaces of order 10^50 configs are searchable, but those of order 10^300 or more are not.
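    The arithmetic behind the figures quoted above is straightforward to check; this sketch simply converts the 1,000-bit threshold into an order of magnitude (the framing of 10^300 as "unsearchable" is the commenter's claim, not something the code establishes):

    ```python
    # A 1,000-bit configuration space holds 2**1000 distinct states.
    # log10(2**1000) = 1000 * log10(2) ~ 301, i.e. about 10^301 states,
    # which is past the "order 10^300" threshold cited in the comment.
    import math

    configs = 2 ** 1000
    order_of_magnitude = math.floor(math.log10(configs))  # 301

    print(order_of_magnitude)
    print(configs > 10 ** 300)
    ```

    By contrast, a space of order 10^50 corresponds to only about 166 bits, which is why the two cases are treated so differently in the comment.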

    Just as the analysis predicts.

    So, I am not surprised to see MG trying to dismiss the positive evidence of what can and does produce dFSCI, what does not, and the warrant provided for inferring that dFSCI is a reliable sign pointing to intelligence as its cause.

    All that tells us is that the inference from dFSCI as reliable sign to design as credible, well warranted cause, is unwelcome in some circles. Not, that it is unwarranted or lacks evidence, but that the evidence and the warrant are unwelcome.

    So much so, that we saw above a set of assertions that would instantly demolish the possibility of scientific study of origins, if it were applied consistently. But of course, that is exactly what is not likely to happen: we have logic with a swivel here — the hyperskeptical objection is being trotted out to dismiss what is ideologically unacceptable, instead of being examined to see if it is a sound criterion of scientific knowledge.

    GEM of TKI

  100. PaV,

    The image of the Virgin cannot be explained by scientists.

    There are scientists who disagree with your statement. There are also many historians who make a good case that Juan Diego never existed.

  101. About a year ago, I discussed T-urf 13 on Arthur Hunt’s blog. You can find it there.

    I spent more time than I cared to, then, and I have avoided any discussion now.

    That said, I have re-visited the issue, nevertheless.

    I will point these things out:

    (1) The method of inheritance of mitochondria is not the same as that of nuclear DNA—the benchmark of neo-Darwinism.

    (2) The idea of three CCC’s is hypothetical, and not more. The “third” CCC that Hunt proposes—a binding site between a toxin and the gated ion-channel—can just as easily, and more plausibly, be explained by the toxin ‘evolving’ a binding site for the ion-channel.

    (3) The kind of “recombination” that takes place in (plant) mitochondria is not your normal Mendelian recombination. Hunt alludes to this when he says at [40]: “Jonathan, are you suggesting that the mechanisms known to be involved in homologous and/or non-homologous recombination (the processes that pretty clearly gave rise to the T-urf13 gene) do not obey the rules of chemistry?”

    Notice his use of “rules of chemistry” and not, e.g., the “rules of biology”. This is because in plant mitochondria, ‘sub-circles’ of mitochondrial DNA can accumulate through ‘intra-molecular’ recombination. There apparently is some kind of machinery that allows portions of the original ‘circular’ genome of the mitochondria to take parts of the original mitochondrial genome and fashion other circles. This machinery (notice that this term presupposes some kind of inter-purposiveness) obeys not genetic, but chemical rules, with the result that a huge diversity of “recombinations” can be cobbled together. As someone pointed out in Hunt’s blog back when T-urf 13 was being discussed, there are great similarities between the diversity brought about in antibody production and that of plant mitochondrial recombination. This, quite obviously, falls outside of normal “Darwinian” mechanisms. It seems a little bit disingenuous that Art is now correctly referring to this quite different type of recombination as following chemical rules, yet insists that the CCC’s he finds here dispute Behe’s claims that “Darwinian” mechanisms can’t produce much more than 2 CCC’s of complexity.

    (4) Let’s just be aware that Behe uses White’s number of 1 in 10^20 in EoE, a number that represents not theoretical figures of probability, but actual in vivo probabilities; i.e., this is what is found in the lab. As to ‘theoretical figures’, the number should be 1 in 10^16, and, hence, 3 CCC’s would represent, theoretically, 1 in 10^48 improbability, under the UPB used by most scientists. I add this simply for the sake of clarity.
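    The arithmetic in point (4) is just the multiplication rule for joint probabilities; this sketch assumes, as the comment implicitly does, that the three CCC events are independent:

    ```python
    # Point (4)'s arithmetic: if each CCC has theoretical odds of 1 in 10^16
    # and the three events are treated as independent (an assumption of this
    # illustration), the joint odds multiply: (10^16)^3 = 10^48.
    per_ccc_odds = 10 ** 16        # 1 in 10^16 per CCC (theoretical figure)
    three_ccc_odds = per_ccc_odds ** 3

    print(three_ccc_odds == 10 ** 48)
    ```

    The same rule applied to White’s in vivo figure of 1 in 10^20 gives 1 in 10^40 for two CCCs, which is where Behe’s “edge” argument draws its line.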

    (5) In a recent paper, a “de novo” gene was being touted. Guess what? It turns out that a portion of a “non-coding” gene and its flanking element was involved in the manufacturing of this “de novo” gene. This is exactly what we find in T-urf 13. Hence, when I used the term “machinery” in (3) above, indeed, this seems to be a maneuver that living cells have at their disposal, thus warranting a search for this new mechanism and not the false claim of truly “de novo” genes. As in the case with T-urf-13, the “new” gene is nothing more than the demolishing of another gene: i.e., the critical portion of the “de novo” gene represented no more than a portion of another gene. (The transcribed portion of T-urf 13 that provides this ‘amazing’ gated ion-channel, is only a very small part of the “de novo” gene.) Again, this is consistent with Behe’s latest article.

    (6) There are two “nuclear restorers” that can restore the plant to ‘male fertility’ from the sterile condition found in the Texas maize from which T-urf 13 is derived. Interestingly, and provocatively, when the “nuclear restorers” work their magic, guess what? T-urf 13 is no longer found: evidence, again, that a “mechanism”, and “machinery” is at play.

    Getting back to the original post here, Jonathan quite correctly demonstrates that Darwinian mechanisms are being assumed to be at work with the manufacture of URF-13 protein, and, yet, from all indications, whatever is happening to the maize, has very little to do with true Darwinian mechanisms. I would hope Arthur Hunt might acknowledge this.

    Let’s just finish here by pointing out again that T-urf 13 involves a kind of degradation of maize. In the case of the Texas maize–hence the T—the T-urf 13 was located by researchers because it was there that the toxin that decimated the corn grown in Texas in the late 60′s attached itself. So the “manufacturing” of this “de novo” gene proved to make the maize less fit. This is in keeping with Behe’s latest findings.

  102. I believe one of the objections of MathGrrl was that there is no evidence for an intelligent agent existing before humans,, yet as Dr. Craig illustrates here,,

    The First Cause Must Be A Personal Being – William Lane Craig – video
    http://www.metacafe.com/w/4813914

    ,, the origin of the universe itself requires an Intelligent agent acting according to His own free will. Moreover, far from being a Deistic Entity which created this universe and then did nothing else, This Transcendent Entity is shown to be fully Theistic in Its characteristics in that the universe is shown not to be self sustaining, but requires a ‘first mover’ to account for each moment of the universe,,,

    ,,,In conjunction with the mathematical, and logical, necessity of an ‘Uncaused Cause’ to explain the beginning of the universe, in philosophy it has been shown that,,,

    “The ‘First Mover’ is necessary for change occurring at each moment.”
    Michael Egnor – Aquinas’ First Way
    http://www.evolutionnews.org/2.....first.html

    I find this centuries old philosophical argument, for the necessity of a ‘First Mover’ accounting for change occurring at each moment, to be validated by quantum mechanics. This is since the possibility for the universe to be considered a self-sustaining ‘closed loop’ of cause and effect is removed with the refutation of the ‘hidden variable’ argument, as first postulated by Einstein, in entanglement experiments. As well, there also must be a sufficient transcendent cause (God/First Mover) to explain quantum wave collapse for ‘each moment’ of the universe.

    Dr. Quantum – Double Slit Experiment & Entanglement – video
    http://www.metacafe.com/watch/4096579

    BRUCE GORDON: Hawking’s irrational arguments – October 2010
    Excerpt: The physical universe is causally incomplete and therefore neither self-originating nor self-sustaining. The world of space, time, matter and energy is dependent on a reality that transcends space, time, matter and energy. This transcendent reality cannot merely be a Platonic realm of mathematical descriptions, for such things are causally inert abstract entities that do not affect the material world.
    http://www.washingtontimes.com.....arguments/

    notes,,

    It is interesting to note that some materialists seem to have a very hard time grasping the simple point of the double slit experiments, but to try to put it more clearly: To explain an event which defies time and space, as the quantum erasure experiment clearly does, you cannot appeal to any material entity in the experiment like the detector, or any other 3D physical part of the experiment, which is itself constrained by the limits of time and space. To give an adequate explanation for defying time and space one is forced to appeal to a transcendent entity which is itself not confined by time or space. But then again I guess I can see why forcing someone who claims to be an atheistic materialist to appeal to a non-material transcendent entity, to give an adequate explanation, would invoke such utter confusion on their part. Yet to try to put it in even more ‘shocking’ terms, the ‘shocking’ conclusion of the experiment is that a transcendent Mind, with a capital M, must precede the collapse of quantum waves to 3-Dimensional particles. Moreover, it is impossible for a human mind to ever ‘emerge’ from any 3-D material particle which is itself semi-dependent on our ‘observation’ for its own collapse to a 3D reality in the first place. This is more than a slight problem for the atheistic-evolutionary materialist who insists that our minds ‘emerged’, or evolved, from 3D matter. In the following article Professor Henry puts it more clearly than I can:

    The Mental Universe – Richard Conn Henry – Professor of Physics, Johns Hopkins University
    Excerpt: The only reality is mind and observations, but observations are not of things. To see the Universe as it really is, we must abandon our tendency to conceptualize observations as things.,,, Physicists shy away from the truth because the truth is so alien to everyday physics. A common way to evade the mental universe is to invoke “decoherence” – the notion that “the physical environment” is sufficient to create reality, independent of the human mind. Yet the idea that any irreversible act of amplification is necessary to collapse the wave function is known to be wrong: in “Renninger-type” experiments, the wave function is collapsed simply by your human mind seeing nothing. The universe is entirely mental,,,, The Universe is immaterial — mental and spiritual. Live, and enjoy.
    http://henry.pha.jhu.edu/The.mental.universe.pdf

    Astrophysicist John Gribbin comments on the Renninger experiment here:

    Solving the quantum mysteries – John Gribbin
    Excerpt: From a 50:50 probability of the flash occurring either on the hemisphere or on the outer sphere, the quantum wave function has collapsed into a 100 per cent certainty that the flash will occur on the outer sphere. But this has happened without the observer actually “observing” anything at all! It is purely a result of a change in the observer’s knowledge about what is going on in the experiment.
    http://www.lifesci.sussex.ac.u.....tm#Solving

    i.e. The detector is completely removed as the primary cause of quantum wave collapse in the experiment. As Richard Conn Henry clearly implied previously, in the experiment it is found that ‘The physical environment’ IS NOT sufficient within itself to ‘create reality’, i.e. ‘The physical environment’ IS NOT sufficient to explain quantum wave collapse to an ‘uncertain’ 3D particle.

    Why, who makes much of a miracle? As to me, I know of nothing else but miracles, Whether I walk the streets of Manhattan, Or dart my sight over the roofs of houses toward the sky,,,
    Walt Whitman – Miracles

    That the mind of an individual observer would play such an integral, yet not completely ‘closed loop’, role in instantaneous quantum wave collapse to uncertain 3-D particles gives us clear evidence that our mind is a unique entity. A unique entity with a superior quality of existence when compared to the uncertain 3D particles of the material universe. This is clear evidence for the existence of the ‘higher dimensional soul’ of man that supersedes any material basis that the soul/mind has been purported, by materialists, to emerge from. I would also like to point out that the ‘effect’ of universal quantum wave collapse to each ‘central 3D observer’ gives us clear evidence of the extremely special importance that the ’cause’ of the ‘Infinite Mind of God’ places on each of our own individual souls/minds.

    Psalm 139:17-18
    How precious concerning me are your thoughts, O God! How vast is the sum of them! Were I to count them, they would outnumber the grains of sand. When I awake, I am still with you.

  103. Dr Bot:

    So when we don’t know if something is possible we should conclude that some unobserved entity did it – a priori theism perhaps?

    No theism required, just knowledge of cause and effect relationships.

    – does this mean we should stop investigating?

    Do archaeologists stop investigating once they determine they are holding an artifact? Do police stop investigating once forensics says a homicide took place?

    Not me, I prefer “We don’t know” when I’m doing science.

    Except people like you really say “We don’t know, but we know it wasn’t design”, and then you hand out promissory notes.

    Mathgrrl is asking for evidence in support of the hypothesis that an intelligent entity intervened to create life, or to alter life.

    And it has been provided and ignored, time and again.

    So here’s the deal- you guys present a testable hypothesis along with positive (supporting) evidence for your position- that way we can compare and you can’t keep moving the goalposts.

    Are you up to the task? Or are you just another evo propaganda mouth-piece?

  104. MathGrrl:

    As to your citation, here’s a portion:

    Actually, infrared photographs show that the hands have been modified, and close-up photography shows that pigment has been applied to the highlight areas of the face sufficiently heavily so as to obscure the texture of the cloth.

    What does a close reading of this indicate?

    If the author writes: “close-up photography shows that pigment has been applied to the highlight areas of the face sufficiently heavily so as to obscure the texture of the cloth. . . .”, then, obviously, there are large portions of the image where the “texture of the cloth” can be seen—and, yet, the image is there, but there is NO paint!

    It is preposterous to assume that because an Indian added flourishes to the original image to make it correspond with a more “Spanish” type of art, this therefore establishes the agency for the original image as human. It does not. Indeed, it clearly shows just the opposite. IOW, there are parts of the image that clearly can be shown to have human agency as their cause (as has been documented historically), which only goes to show that the other parts of the image DON’T have human agency as their cause.

    It is easy to delude ourselves. But facts are facts. And you have to deal with them. And if you can’t deal with them, then maybe you should question what is really underlying your opposition to outside agency.

  105. PaV,

    You are ignoring the rest of the assessment:

    Rosales examined the cloth with a stereomicroscope and observed that the canvas appeared to be a mixture of linen and hemp or cactus fiber. It had been prepared with a brush coat of white primer (calcium sulfate), and the image was then rendered in distemper (i.e., paint consisting of pigment, water, and a binding medium). The artist used a “very limited palette,” the expert stated, consisting of black (from pine soot), white, blue, green, various earth colors (“tierras”), reds (including carmine), and gold. Rosales concluded that the image did not originate supernaturally but was instead the work of an artist who used the materials and methods of the sixteenth century (El Vaticano 2002).

    It’s just a painting.

  106. MathGrrl, I do not know anything of that image, and have never even heard of it, but perhaps you would like to explain how this image formed;

    Turin Shroud Enters 3D Age – Pictures, Articles and Videos
    https://docs.google.com/document/pub?id=1gDY4CJkoFedewMG94gdUk1Z1jexestdy5fh87RwWAfg

    Turin Shroud 3-D Hologram – Face And Body – Dr. Petrus Soons – video
    http://www.metacafe.com/w/5889891/

    Shroud Of Turin Carbon Dating Overturned By Scientific Peer Review – Robert Villarreal – Press Release video
    http://www.metacafe.com/watch/4041193

    Now that the flawed carbon dating is finally brought into line, all major lines of evidence now converge and establish the Shroud as authentic. This rigidly tested, and scrutinized, artifact establishes the uniqueness of the Shroud among all ancient artifacts of man found on earth. I know of no other ancient artifact, from any other culture, which has withstood such intense scrutiny and still remained standing in its claim of divine origin. It is apparent God thought this event so important for us to remember that He took a “photograph” of the resurrection of Jesus Christ using the Shroud itself as a medium. After years of painstaking research, searching through every materialistic possibility, scientists still cannot tell us exactly how the image of the man on the Shroud was imprinted.

    How Did The Image Form On The Shroud? – video
    http://www.metacafe.com/watch/4045581

    “The shroud image is made from tiny fibres that are (each) 1/10th of a human hair. The picture elements are actually randomly distributed like the dots in your newspaper, photograph or magazine photograph. To do this you would need an incredibly accurate atomic laser. This technology does NOT exist (even to this day).”
    Kevin Moran – Optical Engineer

    “the closest science can come to explaining how the image of the Man in the Shroud got there is by comparing the situation to a controlled burst of high-intensity radiation similar to the Hiroshima bomb explosion which “printed” images of incinerated people on building walls.”
    Frank Tribbe – Leading Scholar And Author On Shroud Research

    ,,, Actually there is a ‘theory’ for how the image formed;

    I find it extremely interesting, and strange, that quantum mechanics tells us that instantaneous quantum wave collapse to its ‘uncertain’ 3-D state is centered on each individual observer in the universe, whereas 4-D space-time cosmology (General Relativity) tells us each 3-D point in the universe is central to the expansion of the universe. These findings of modern science are pretty much exactly what we would expect to see if this universe were indeed created from a higher dimension by an omniscient, omnipotent, omnipresent, eternal Being who knows everything that is happening everywhere in the universe at the same time. These findings certainly seem to go to the very heart of the age-old question asked of many parents by their children, “How can God hear everybody’s prayers at the same time?”,,, i.e. Why should the expansion of the universe, or the quantum wave collapse of the entire universe, even care that you or I, or anyone else, should exist? Only Theism offers a rational explanation as to why you or I, or anyone else, should have such undeserved significance in such a vast universe:

    Psalm 33:13-15
    The LORD looks from heaven; He sees all the sons of men. From the place of His dwelling He looks on all the inhabitants of the earth; He fashions their hearts individually; He considers all their works.

    The expansion of every 3D point in the universe, and the quantum wave collapse of the entire universe to each point of conscious observation in the universe, is obviously a very interesting congruence in science between the very large (relativity) and the very small (quantum mechanics). A congruence that physicists and mathematicians seem to be having an extremely difficult time ‘unifying’ into a ‘theory of everything’ (Einstein, Penrose).

    Quantum Mechanics Not In Jeopardy: Physicists Confirm Decades-Old Key Principle Experimentally – July 2010
    Excerpt: the research group led by Prof. Gregor Weihs from the University of Innsbruck and the University of Waterloo has confirmed the accuracy of Born’s law in a triple-slit experiment (as opposed to the double slit experiment). “The existence of third-order interference terms would have tremendous theoretical repercussions – it would shake quantum mechanics to the core,” says Weihs. The impetus for this experiment was the suggestion made by physicists to generalize either quantum mechanics or gravitation – the two pillars of modern physics – to achieve unification, thereby arriving at a one all-encompassing theory. “Our experiment thwarts these efforts once again,” explains Gregor Weihs. (of note: Born’s Law is an axiom that dictates that quantum interference can only occur between pairs of probabilities, not triplet or higher order probabilities. If they would have detected higher order interference patterns this would have potentially allowed a reformulation of quantum mechanics that is compatible with, or even incorporates, gravitation.)
    http://www.sciencedaily.com/re.....142640.htm

    The conflict of reconciling General Relativity and Quantum Mechanics appears to arise from the inability of either theory to successfully deal with the Zero/Infinity problem that crops up in different places of each theory:

    THE MYSTERIOUS ZERO/INFINITY
    Excerpt: The biggest challenge to today’s physicists is how to reconcile general relativity and quantum mechanics.,,, What the two theories have in common — and what they clash over — is zero.
    http://www.fmbr.org/editoral/e....._mar02.htm

    The following Physicist offers a very interesting insight into this issue of ‘reconciling’ the mental universe of Quantum Mechanics with the space-time of General Relativity:

    How the Power of Intention Alters Matter – Dr. William A. Tiller
    Excerpt: Quantum mechanics and relativity theory are the two prime theoretical constructs of modern physics, and for quantum mechanics and relativity theory to be internally self-consistent, their calculations require that the vacuum must contain an energy density 10^94 grams per cubic centimeter. How much energy is that? To find out you simply use Einstein’s equation: E=MC2. Here’s how this comes out in practical terms. You could take the volume of, say, a single hydrogen atom (which is incredibly small, an infinitesimally small fraction of a cubic centimeter), and multiply that by the average mass density of the cosmos, a number which is known to astronomers. And what you find out is that within the amount of vacuum contained in this hydrogen atom there is, according to this calculation, “almost a trillion times as much energy as in all of the stars and all of the planets out to a radius of 20 billion light years!” If human consciousness can interact with that even a little bit, it can change things in matter. Because the ground state energies of all particles have that energy level due to their interaction with this stuff of the vacuum. So if you can shift that stuff of the vacuum, change its degree of order or coherence even a little bit, you can change the ground state energies of particles, atoms, molecules, and chemical equations.,,,, In conclusion Tiller states, “despite our attachment to it and our feeling of its solidity and persistence, what we think of as the physical universe is an almost incomprehensibly minuscule part of the immensity of All That Is.” “Matter as we know it,” Tiller concludes poetically, “is hardly a fragrance of a whisper.”
    http://www.spiritofmaat.com/ar.....tiller.htm
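    For what it is worth, the arithmetic in Tiller’s excerpt can be checked to order of magnitude. The sketch below uses standard textbook values (the Bohr radius for the size of a hydrogen atom, and ~1e53 kg for the ordinary matter of the observable universe); these numbers are my own assumptions for the check, not figures taken from the article itself:

```python
from math import pi

C = 3.0e8               # speed of light, m/s
BOHR_RADIUS = 5.3e-11   # approximate radius of a hydrogen atom, m

def vacuum_energy_in_hydrogen_volume():
    """Energy (J) implied by a claimed vacuum density of 1e94 g/cm^3
    (= 1e97 kg/m^3) inside one hydrogen-atom volume, via E = m*c^2."""
    volume = (4.0 / 3.0) * pi * BOHR_RADIUS ** 3  # m^3
    mass = 1e97 * volume                          # kg
    return mass * C ** 2

# Mass-energy of the ordinary matter in the observable universe,
# using the common rough estimate of ~1e53 kg.
E_universe = 1e53 * C ** 2

ratio = vacuum_energy_in_hydrogen_volume() / E_universe
# ratio comes out around 1e13, i.e. on the order of a trillion or more,
# roughly consistent with the "almost a trillion times" in the excerpt.
```

So at least the order of magnitude of the quoted comparison holds up under this naive calculation, whatever one makes of the interpretation built on it.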

    Yet, the unification, into a ‘theory of everything’, between what is in essence the ‘infinite world of Quantum Mechanics’ and the ‘finite world of the space-time of General Relativity’ seems to be directly related to what Jesus apparently joined together with His resurrection, i.e. related to the unification of infinite God with finite man. Dr. William Dembski in this following comment, though not directly addressing the Zero/Infinity conflict in General Relativity and Quantum Mechanics, offers insight into this ‘unification’ of the infinite and the finite:

    The End Of Christianity – Finding a Good God in an Evil World – Pg.31
    William Dembski PhD. Mathematics
    Excerpt: “In mathematics there are two ways to go to infinity. One is to grow large without measure. The other is to form a fraction in which the denominator goes to zero. The Cross is a path of humility in which the infinite God becomes finite and then contracts to zero, only to resurrect and thereby unite a finite humanity within a newfound infinity.”
    http://www.designinference.com.....of_xty.pdf
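    Dembski’s “two ways to go to infinity” can be restated in standard limit notation (this is my own routine restatement, not a formula from his book):

```latex
% Growing large without measure:
\lim_{n \to \infty} n = \infty
% Forming a fraction whose denominator goes to zero (from the positive side):
\lim_{x \to 0^{+}} \frac{1}{x} = +\infty
```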

    Moreover there actually is physical evidence that lends strong support to the position that the ‘Zero/Infinity conflict’, we find between General Relativity and Quantum Mechanics, was successfully dealt with by Christ:

    The Center Of The Universe Is Life – General Relativity, Quantum Mechanics, Entropy and The Shroud Of Turin – video
    http://www.metacafe.com/w/5070355

  107. further note:

    All attempts to reproduce the Shroud fail:

    Experts Question Scientist’s Claim of Reproducing Shroud of Turin – Oct 6, 2009
    http://www.ewtn.com/vnews/gets.....ber=98037#

    The Shroud of Turin has NOT been reproduced ! – video
    http://www.youtube.com/watch?v=TjxZFfHVtsE

    PROOF SHROUD OF TURIN CANNOT BE A FAKE – video
    http://www.youtube.com/watch?v=dfDdbxMKZRw

    ,,,,,,,,,,,

    Even with the advantage of all our advanced space-age technology at their fingertips, all scientists can guess is that it was some type of electro-magnetic radiation (light) which is not natural to this world. Kevin Moran, a scientist working on the mysterious ’3D’ nature of the Shroud image, states the ‘supernatural’ explanation this way:

    “It is not a continuum or spherical-front radiation that made the image, as visible or UV light. It is not the X-ray radiation that obeys the one over R squared law that we are so accustomed to in medicine. It is more unique. It is suggested that the image was formed when a high-energy particle struck the fiber and released radiation within the fiber at a speed greater than the local speed of light. Since the fiber acts as a light pipe, this energy moved out through the fiber until it encountered an optical discontinuity, then it slowed to the local speed of light and dispersed. The fact that the pixels don’t fluoresce suggests that the conversion to their now brittle dehydrated state occurred instantly and completely so no partial products remain to be activated by the ultraviolet light. This suggests a quantum event where a finite amount of energy transferred abruptly. The fact that there are images front and back suggests the radiating particles were released along the gravity vector. The radiation pressure may also help explain why the blood was “lifted cleanly” from the body as it transformed to a resurrected state.”
    http://www.shroudstory.com/natural.htm

    If scientists want to find the source for the supernatural light which made the “3D – photographic negative” image I suggest they look to the thousands of documented Near-Death Experiences (NDE’s) in Judeo-Christian cultures. It is in their testimonies that you will find mention of an indescribably bright ‘Light’ or ‘Being of Light’ who is always described as being of a much brighter intensity of light than the people had ever seen before. All people who have been in the presence of ‘The Being of Light’ while having a deep NDE have no doubt whatsoever that the ‘The Being of Light’ they were in the presence of is none other than ‘The Lord God Almighty’ of heaven and earth.

    In The Presence Of Almighty God – The NDE of Mickey Robinson – video
    http://www.metacafe.com/watch/4045544

    The Day I Died – Part 4 of 6 – The NDE of Pam Reynolds – video
    http://www.metacafe.com/watch/4045560

  108. MathGrrl:

    Rosales used a “stereo microscope”. Here’s what Wikipedia has to say:

    The stereo or dissecting microscope is an optical microscope variant designed for low-magnification observation of a sample using incident light illumination rather than transillumination. It uses two separate optical paths with two objectives and two eyepieces to provide slightly different viewing angles to the left and right eyes. In this way it produces a three-dimensional visualization of the sample being examined.

    This instrument can only tell us about surface phenomena.

    Callahan, OTOH, used infrared photography, which can penetrate the layering. Here’s what he found:

    In 1979, Dr. Philip Serna Callahan, a biophysicist affiliated with the University of Florida, conducted an infrared photographic investigation into the composition of the image in the Basilica. His preliminary findings have interesting implications for the indigenista exegeses. Callahan (1981) has described numerous overlays of pigments upon a primitive (original) image, without an underdrawing, and has suggested the composition of the colorants. Among the “retouches” he finds are the moon, sun rays, sash, all gold ornamentation (including the stars and the Nahui Ollin figure), to mention a few. Dr. Callahan’s research has already been translated into Spanish in Mexico and published with the support of the Archdiocese of Mexico (Callahan and Smith 1981). Callahan contends, like art critics before him, that the additions he identifies are simply (and rather obviously, to him) International Gothic ornaments typical of fifteenth- and sixteenth-century Spanish paintings of the Virgin Mary (Callahan 1981:8).

    Did I mention that a bomb was placed upon the altar in front of the image in the early 1900s? When the bomb went off, the thick brass crucifix on top of the altar was bent over, with the top of the crucifix pointing straight down; yet the image was unscathed.

    That said, I think the Shroud of Turin, because it has been more thoroughly studied, makes the same point, and in a stronger, less ambiguous way. Same argument. Different miracle.

  109. PaV:

    That said, I think the Shroud of Turin, because it has been more thoroughly studied, makes the same point, and in a stronger, less ambiguous way. Same argument. Different miracle.

    Argument it is. Convincing case it is not.

  110. So, it seems that the ID squad here is promoting the proposition that miracles (events that run counter to the usual course of observation) occur.

    Do all (of the ID persons) agree with that proposition?

  111. Mathgrrl,

    Do you think something existed before the beginning of the universe? If you do, then on what grounds do you think so?

  112. Pedant,

    So, it seems that the ID squad here is promoting the proposition that miracles (events that run counter to the usual course of observation) occur.

    Do all (of the ID persons) agree with that proposition?

    I can’t speak for all, but I certainly do.

  113. Pedant, as to:

    ‘So, it seems that the ID squad here is promoting the proposition that miracles (events that run counter to the usual course of observation) occur.’

    Though it could be forcefully argued that the quantum events which form the foundation of our reality are ‘miraculous’ in nature, since they blatantly defy our concepts of time and space in the first place, to back up the claim that ‘extraordinary/miraculous’ events extend into our ‘macroscopic’ world (ignoring quantum wave collapse to each unique point of observation in the universe), I found this article yesterday;

    Medical Miracles Really Do Happen
    Excerpt: No one knows exactly how often such cases occur. Approximately 3,500 medically documented cases of seeming miracles — based on reports from doctors in America and around the world dating to 1967 — have appeared in 800 peer-reviewed medical journals and cover all major illnesses, including cancer, heart disease, diabetes and arthritis.*,,, Have any of your patients ever experienced this type of healing? Early in my career, I had an elderly patient with cancer in both lungs that had spread throughout his body. We had no medical therapies for this type of cancer, but during visiting hours, people from his church stood near his bed, praying and singing gospel music. I expected him to die within days, and when he asked to go home, I respected his wishes and discharged him.

    About a year later, this patient was back in the hospital — this time with a bad case of the flu. I went to the radiology department to look at his current chest X-ray, and it was completely normal. I thought that in the past year he must have had a dramatic response to additional therapy of some kind, but he hadn’t undergone any therapy. To me, this was a true miracle cure.
    http://www.bottomlinesecrets.c.....e_id=42254

  114. DrBot,

    I do not know how one would test ID. But I also don’t know how astronomers test the Big Bang. Could you explain?

  115. DrBot,

    I would like to clarify. I don’t think you can test the Big Bang, but you can make predictions and find confirming (or disconfirming) evidence. I think we can do that for ID and for the Big Bang. For example, a prediction of ID might be that more and more of what is called junk DNA will be found to have function. We might also predict that the earth and its ecology are tightly fitted for life. I think that bornagain can expound on that.

  116. Collin

    I’m not a physicist, but from my understanding the Big Bang theory was developed from redshift observations that suggested galaxies were all travelling away from a point of origin. When you trace matter back to a point like this you can study, based on what we know about matter, what early conditions in the universe might have been like, and then from this predict phenomena that we should be able to observe today – for example the cosmic microwave background. The original conjecture leads to a theoretical analysis which then allows experiments to be designed, and observations made, which in turn can either confirm, refine or destroy the theory.

    The Big Bang theory survives because many experiments and observations derived from the theory have either confirmed an aspect of the theory or provided new information which allows the theory to be refined (and new experiments to be derived) – although the picture is still incomplete. One of the reasons for experiments with particle accelerators is to understand these hypothesised early conditions better, so that this and other theories can be better tested.
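    The kind of inference described above – tracing the expansion backwards to get testable numbers – can be illustrated with a back-of-the-envelope calculation. Assuming a round value of H0 = 70 km/s/Mpc (my own assumed figure, not one from the comment), Hubble’s law v = H0 * d gives a naive expansion age of roughly 1/H0:

```python
H0 = 70.0                 # assumed Hubble constant, km/s per megaparsec
KM_PER_MPC = 3.086e19     # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_years(h0=H0):
    """Naive expansion age of the universe: the time a galaxy receding
    at v = h0*d needs to cover distance d, ignoring any acceleration
    or deceleration of the expansion. Equals 1/h0 in consistent units."""
    seconds = KM_PER_MPC / h0   # (km/Mpc) / (km/s/Mpc) = seconds
    return seconds / SECONDS_PER_YEAR

age = hubble_time_years()
# comes out near 1.4e10 years, in the same ballpark as the
# accepted age of roughly 13.8 billion years.
```

That a one-line extrapolation lands within a few percent of the measured age is a small example of the confirm-or-refine loop DrBot describes.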

    The problem with the ID prediction that junk DNA should have function is that it assumes a particular design methodology – that the designer wouldn’t include junk – and this goes against observations of design by humans. For example, many consumer products that use embedded computers contain junk code that fills up spare memory – this is done to make attempts at reverse engineering by competitors harder.

    Human designers also copy from earlier designs and sometimes copy features because they are there, not always because they are still of value, or because they ever served a function. Sometimes they are copied because the designer just assumes they are functional.

    The observation from what we know about observed designers is that they can and do include junk, but not always, so we can’t predict what a non human designer would do without having to say something about the methods or motivations of this designer.

    When it comes to the earth and ecology being tightly fitted for life, well we would expect evolution to produce the same, as far as life and ecologies go, and we wouldn’t expect to see life arise and evolve in an environment unsuitable for life.

    When it comes to predictions about design, if we don’t know the constraints or motivations of a designer (who could have unlimited power) then how can we determine that they would have designed life and our planet this way – they don’t have to tightly fit earth and its ecology if they have enough power.

    There is always the argument from improbability – that it is improbable that a world suitable for life could exist – I would say we don’t know nearly enough about the universe or complex chemistry, e.g. how common are planets that could support life? As a theist, though, I believe in God; I’m just not committed to any ideology that suggests God couldn’t have designed a universe which generated life, and which could then evolve.

  117. Dr Bot:

    The key facets of the Big Bang are that it points to the radical contingency of our observed cosmos, and that its circumstances point to the multiple ways in which the physics, parameters and boundary conditions of our cosmos more broadly are fine-tuned in ways that make C-chemistry, cell-based intelligent life possible.

    The first — even through multiverse speculations — points to a necessary and powerful being as the underlying cause of such a contingent cosmos. Such a being has no external necessary [switch on to enable] causal factors, and as such is eternal, and necessarily non-material; as matter is contingent, cf. E = m*c^2 for one way that is so. In the old days it had often been held that the necessary being was the cosmos as a whole, but the discovery of the Hubble red shift and its implications for General Relativity put reluctant cosmologists into the corner of acknowledging a beginning to our cosmos.

    The second defines that necessary being in interesting ways: capability to configure and initiate the existence of a cosmos with physics fine-tuned like we observe. Even on a multiverse, that requires a cosmos bakery that is as much fine tuned as our cosmos, i.e. we need to scan the local point in the phase space of possible cosmi, very closely indeed.

    Ability to set up such a cosmos implies not only power, but intelligence, knowledge and skill, as well as strongly pointing to intent.

    We are looking at a necessary, eternal, awesomely powerful, non-material, self-sustaining, intelligent, knowledgeable, skilled, credibly purposeful creator. That, on evidence from physics and in light of basic principles of being and cause: contingency vs necessity.

    In short, we have here a serious candidate for a cause beyond the physical cosmos. One who sounds fairly familiar: “Fiat lux!”

    When we turn to our own minds, and look seriously at the properties of mindedness, we see again that we need a cause that transcends the physical cause-effect chain that our bodies and brains are locked into, something that can move beyond mere negative feedback to governing, by providing a steering path input. Something self-moved, that serves as higher order supervisory controller in the loop.

    Something, that sounds a lot like well, err, mind, or even soul; in — err, ahh, ehm — the image of the greater Mind that created the cosmos.

    Otherwise, our experienced reality of ourselves as reasonable, responsible conscious, deciding creatures, collapses. Especially, we see that evolutionary materialist accounts of mind end in self-referentially incoherent absurdity.

    So, from the beginnings of the cosmos to the beginnings of our own conscious mindedness, there are signs everywhere that point strongly beyond the mechanical and/or chaotic world of chance and necessity.

    GEM of TKI

  118. F/N: In fact it was commonly held by leading Darwinist advocates for decades that the vast majority of the gene complement of the cells in our and other bodies was “junk”, as an expectation of the chance-plus-fit-a-niche-or-perish view of a cobbled-together plan for living organisms. It was design thinkers who said otherwise, and it is they who are being vindicated.

  119. F/N 2: Dr Bot, if this time around you will look here, you will see that the fine tuning implied by the well-known, deeply studied, life-enabling properties of water, and by the circumstances that make H, He, O and C the most abundant elements, is not based on what we do not know, but on what we know very well. The “we don’t know enough to know the cosmos is fine-tuned enough to be improbable on chance” dodge is not good enough.

  120. Pedant:

    So, it seems that the ID squad here is promoting the proposition that miracles (events that run counter to the usual course of observation) occur.

    Your whole position is based on “events that run counter to the usual course of observation”.

    So what is your point?

  121. DrBot, among the many things I take exception to in your post, this particular one struck a more disharmonious chord than the rest:

    ‘When it comes to the earth and ecology being tightly fitted for life, well we would expect evolution to produce the same,’

    No we would not,,,

    Because of this basic chemical requirement of complex photosynthetic bacterial life establishing and helping maintain the proper oxygen levels necessary for higher life forms on any earth-like planet (oxygen is essential for increased metabolism), this gives us further reason to strongly believe the earth is extremely unique in its ability to support intelligent life in this universe. What is more remarkable is that the balance for the atmosphere,,,

    Composition Of Atmosphere – Pie Chart and Percentages:
    http://www.ux1.eiu.edu/~cfjps/1400/FIG01_010.JPG
    http://www.ux1.eiu.edu/~cfjps/1400/TBL01_0T2.JPG

    which just so happens to be nearly optimal for humans to exist (Denton; Nature’s Destiny), is maintained through complex symbiotic relationships with other bacteria, all of which are intertwined in very complex geochemical processes. All of the studies of early life, and processes, on early earth fall directly in line with the anthropic hypothesis and have no rational explanation, from any materialistic theory based on blind chance, as to why all the first types of bacterial life found in the fossil record would suddenly, from the very start of their appearance on earth, start working in precise harmony with each other, and with geology, to prepare the earth for future life to appear. Nor can materialism explain why, once these complex bacterial-geological processes had helped prepare the earth for higher life forms, they continue to work in precise harmony with each other to help maintain the proper balanced conditions that are of primary benefit for the complex life that is above them:

    Microbial life can easily live without us; we, however, cannot survive without the global catalysis and environmental transformations it provides. – Paul G. Falkowski – Professor Geological Sciences – Rutgers
    http://www.bioinf.uni-leipzig......g_2008.pdf

    In fact even if evolution were able to generate any ‘non-trivial’ functional information whatsoever, which I firmly believe it can’t, the world would ‘naturally’ be expected to be far less hospitable to higher life forms than the ‘luxury’ currently enjoyed,,,

    notes;

    Michael Behe defends the one ‘overlooked’ protein/protein binding site generated by the HIV virus, that Abbie Smith and Ian Musgrave had found, by pointing out it is well within the 2 binding site limit he set in “The Edge Of Evolution” on this following site:

    Response to Ian Musgrave’s “Open Letter to Dr. Michael Behe,” Part 4
    “Yes, one overlooked protein-protein interaction developed, leading to a leaky cell membrane — not something to crow about after 10^20 replications and a greatly enhanced mutation rate.”
    http://behe.uncommondescent.com/page/4/

    An information-gaining mutation in HIV? NO!
    http://creation.com/an-informa.....ion-in-hiv

    In fact, I followed this debate very closely, and it turns out that the trivial gain of just one protein-protein binding site for the non-living HIV virus, which the evolutionists were ‘crowing’ about, came at a staggering loss of complexity for the living host it invaded (people): that one trivial gain in binding-site complexity produced only a ‘leaky cell membrane’. Thus the ‘evolution’ of the virus clearly stayed within the principle of Genetic Entropy, since far more functional complexity was lost by the living human cells it invaded than was ever gained by the non-living HIV virus, a virus which depends on those human cells to replicate in the first place. Moreover, while learning that HIV is a ‘mutational powerhouse’ which greatly outclasses the ‘mutational firepower’ of the entire spectrum of higher life forms combined over millions of years, and learning about the devastating effect HIV has on humans with just that one trivial binding site, I realized that if evolution were actually the truth about how life came to be on earth, then the only ‘life’ around would be extremely small organisms with the highest replication rates and the most mutational firepower, since only they would be the fittest to survive in the dog-eat-dog world where blind, pitiless evolution rules and only the ‘fittest’ are allowed to survive.

    further notes;

    Engineering and Science Magazine – Caltech – March 2010
    Excerpt: “Without these microbes, the planet would run out of biologically available nitrogen in less than a month,” Realizations like this are stimulating a flourishing field of “geobiology” – the study of relationships between life and the earth. One member of the Caltech team commented, “If all bacteria and archaea just stopped functioning, life on Earth would come to an abrupt halt.” Microbes are key players in earth’s nutrient cycles. Dr. Orphan added, “…every fifth breath you take, thank a microbe.”
    http://www.creationsafaris.com.....#20100316a

    Planet’s Nitrogen Cycle Overturned – Oct. 2009
    Excerpt: “Ammonia is a waste product that can be toxic to animals.,,, archaea can scavenge nitrogen-containing ammonia in the most barren environments of the deep sea, solving a long-running mystery of how the microorganisms can survive in that environment. Archaea therefore not only play a role, but are central to the planetary nitrogen cycles on which all life depends.,,,the organism can survive on a mere whiff of ammonia – 10 nanomolar concentration, equivalent to a teaspoon of ammonia salt in 10 million gallons of water.”
    http://www.sciencedaily.com/re.....132656.htm

    The Paradox of the “Ancient” Bacterium Which Contains “Modern” Protein-Coding Genes:
    “Almost without exception, bacteria isolated from ancient material have proven to closely resemble modern bacteria at both morphological and molecular levels.” Heather Maughan*, C. William Birky Jr., Wayne L. Nicholson, William D. Rosenzweig§ and Russell H. Vreeland ;
    http://mbe.oxfordjournals.org/...../19/9/1637

    etc.. etc..

  122. bornagain77,

    The shroud of Turin has been clearly and repeatedly demonstrated to be of 14th century, human origin. See here for an overview and links to additional research.

    I’m going to bow out of the religious artifacts sub-thread now, since it’s a distraction from the core topic of positive evidence for ID. There are sufficient online resources for anyone interested in learning how these have been debunked.

  123. Despite repeated requests by both myself and DrBot, there has still been no empirical evidence that addresses the who, what, when, where, and how questions that immediately arise from the claim that intelligent agency was involved in biological evolution. A few commenters have claimed to have provided such evidence in the past, but no references have been forthcoming.

    The closest any ID proponent has come to providing empirical evidence, that I have seen, is gpuccio in this series of threads hosted by Mark Frank. The pertinent parts of that discussion revolved around how to calculate CSI and its various derivatives. Since CSI has been raised in this thread as well, we might be able to make some more progress.

    I’m very interested in testing the claim that CSI can only be the result of intelligent agency. In order to do so, I need the help of ID proponents to rigorously define CSI and demonstrate how to objectively measure it. That will allow me, hopefully, to use an evolutionary simulation to determine whether or not known evolutionary mechanisms are capable of generating CSI.

    In Mark’s threads we discussed both Tom Schneider’s ev and Tom Ray’s Tierra. Apropos of a recent thread here on UD, I’ll also toss into the pot the various GA solutions to the Steiner problem. What I would like to understand from the ID proponents here is exactly how to calculate CSI, or one of its variants, for the digital organisms that evolve in these simulations.

    In the interests of time, it would help if anyone willing to respond would read the thread on Mark Frank’s blog so that we can avoid going over issues that were already discussed.

    Thank you in advance for your assistance.

  124.

    Despite repeated requests by both myself and DrBot, there has still been no empirical evidence that addresses the who, what, when, where, and how questions that immediately arise from the claim that intelligent agency was involved in biological evolution.

    You’ve been answered, you just refuse the answer. Refusing an answer is not the same as not getting one – it simply demonstrates gamesmanship.

  125. Upright BiPed,

    Despite repeated requests by both myself and DrBot, there has still been no empirical evidence that addresses the who, what, when, where, and how questions that immediately arise from the claim that intelligent agency was involved in biological evolution.

    You’ve been answered, you just refuse the answer. Refusing an answer is not the same as not getting one – it simply demonstrates gamesmanship.

    That is simply not true. In fact, some respondents have explicitly said that such answers cannot be provided.
    To be fair, though, I may have missed the answers you claim have been provided. Please provide references to the empirical evidence for the existence of an intelligent agent that has intervened in biological evolution, the empirical evidence that shows exactly what this agent did, the empirical evidence that shows when this intervention took place, the empirical evidence that shows where this intervention happened, and the empirical evidence that shows how the intervention was accomplished.

  126. MG:

    You have been more than answered, more times than I can repeat again.

    As to the calculations on Ev and the like, it is plain that these “evolved” organisms are intelligently directed products of an intelligently designed process, so from the outset your attempt to estimate the scale of dFSCI involved is pointless.

    What is the file size of Ev or Avida? Is it beyond 125 bytes?

    If so, the programs are beyond the credible reach of chance and blind mechanical necessity, on the gamut of our observed cosmos. Thus the presence of dFSCI in the programs already warns us what we know by more direct means: they are intelligently designed.

    dFSCI, AGAIN, SHOWS ITS RELIABILITY AS AN INDEX OF DESIGN.

    As to the outputs that we are invited to pretend were not placed on a stage and put through their paces for us, for a purpose, by designers (in a context where they have neatly set up “fitness landscapes” that are already in or on islands of function, when the real issue is to get to the shores of such islands), these outputs simply show that, within an island of function, hill climbing is possible.

    Something that not even young earth creationists dispute.

    MG, please, wake up and face up to the big truths that are so overbearing in significance that you find them hard to accept. And please eschew the big lies just exposed on the likes of ev, so blatant that it is hard to think they could be put forward unless they were believed to be true.

    You’ve been rooked . . .

    Please, think again.

    GEM of TKI

  127. PS: MG, the questions that you have no direct answers to are the ones on which you have selectively hyperskeptically begged the question.

    Do you accept that where we can do direct empirical testing, dFSCI in particular turns out to be a very reliable sign of design, as with say posts on this thread?

    Can you show one exception where, through a truly chance-and-necessity-only, unintelligently directed process (ev etc. do not count, as has been explained ad nauseam), dFSCI of at least 1,000 bits has been credibly observed? About 25 bytes [200 bits] of information HAS been observed through the infinite-monkeys tests done (spaces of 10^50 or so), but a space of 10^300 or bigger is simply beyond the capacity of our observed cosmos.
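
The bits-to-configurations conversions used here follow from n bits spanning 2^n = 10^(n·log10 2) configurations; note that 200 bits actually spans about 10^60, a little more than the “10^50 or so” quoted. A quick check:

```python
import math

def log10_configs(bits):
    """Base-10 exponent of the configuration space spanned by
    `bits` bits: log10(2^bits) = bits * log10(2)."""
    return bits * math.log10(2)

print(log10_configs(200))    # ~60.2  -> 2^200  ~ 10^60
print(log10_configs(1000))   # ~301.0 -> 2^1000 ~ 10^301
```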

    We must use the uniformity principle to probe the remote, unobservable past. That is, based on processes, causal factors, and forces active in the present, and their reliable signs, we may explain traces of the past and seek to reconstruct key aspects of it scientifically. But such knowledge is even more provisional and tentative in its warrant than operational science, where we may make direct observations today. The common pretence of, and projection of, utter confidence in results is completely without warrant.

    When we look at dFSCI in DNA, and see that DNA is at the heart of core life processes for the cell, we see that it was credibly present from the very beginnings of cell-based life.

    DNA, of course, is chock full of dFSCI, a reliable sign of design.

    As to methods, we have pointed out — you just willfully ignored it — that anything from a sufficiently advanced version of Dr Craig Venter’s lab on up could reasonably have been involved, as a candidate designing agent. But, on the sign of DNA as dFSCI, we do not have a basis for inferring to a specific designer on empirical data.

    When, as was again pointed out earlier this morning, we lift our eyes to the heavens and see the credible origin of a fine-tuned cosmos, set up at a knife’s-edge point that facilitates C-chemistry cell-based life, we see a wider picture, one that points to a powerful intelligence beyond the cosmos. Given the number of things that had to be set pretty exactly to lead to such life, that in turn points to a purpose to create life, as we know from direct experience that intelligences often have purposes.

    Join such power, capacity, and purpose together and you see that the candidate to beat is a necessary, thus beyond-matter, being, with the knowledge, skill, and capacity to found a cosmos friendly to life, who could then have proceeded to create life within that cosmos. Once life is formed, it will reproduce itself by its inbuilt mechanisms, and ultimately will probably colonise the cosmos.

    Other candidates as creators of life are of course possible, but this is the best explanation by far.

    Such an intelligence sounds rather familiar, but that is just how the scientific and common sense reasoning cookies crumble, when we reflect on evidence with an open mind.

    Do you have the courage to boldly face a Big Truth, one that may be threatening because of what it may entail?

    If you don’t then you should not ask questions that lead in such directions.

    And if you see such answers but have no cogent replies, and simply do not like their import, then it seems you may need to ask yourself whether you are being evasive for motives irrelevant to the weight of the evidence.

    Good day

    GEM of TKI

  128. MathGrrl you state;

    ‘The shroud of Turin has been clearly and repeatedly demonstrated to be of 14th century, human origin.’

    Repeatedly??? Really MathGrrl?? The ONLY solid piece of evidence that ever suggested that the Shroud could have been of medieval origin was the Carbon dating!!! But the carbon dating has now been overturned, by Los Alamos National Laboratory no less!!!

    “Analytical Results on Thread Samples Taken from the Raes Sampling Area (Corner) of the Shroud Cloth” (Aug 2008)
    Excerpt: The age-dating process failed to recognize one of the first rules of analytical chemistry that any sample taken for characterization of an area or population must necessarily be representative of the whole. The part must be representative of the whole. Our analyses of the three thread samples taken from the Raes and C-14 sampling corner showed that this was not the case……. LANL’s work confirms the research published in Thermochimica Acta (Jan. 2005) by the late Raymond Rogers, a chemist who had studied actual C-14 samples and concluded the sample was not part of the original cloth possibly due to the area having been repaired.
    Robert Villarreal
    http://www.ohioshroudconference.com/

    Shroud Of Turin Carbon Dating Overturned By Scientific Peer Review – Robert Villarreal – Press Release video
    http://www.metacafe.com/watch/4041193

    further note;

    New Evidence Overturns Shroud Of Turin Carbon Dating – Joseph G. Marino and M. Sue Benford – video
    http://www.metacafe.com/watch/4222339

    The following is the main peer reviewed paper which has refuted the 1989 Carbon Dating:

    Why The Carbon 14 Samples Are Invalid, Raymond Rogers
    per: Thermochimica Acta (Volume 425 pages 189-194, Los Alamos National Laboratory, University of California)
    Excerpt: Preliminary estimates of the kinetics constants for the loss of vanillin from lignin indicate a much older age for the cloth than the radiocarbon analyses. The radiocarbon sampling area is uniquely coated with a yellow–brown plant gum containing dye lakes. Pyrolysis-mass-spectrometry results from the sample area coupled with microscopic and microchemical observations prove that the radiocarbon sample was not part of the original cloth of the Shroud of Turin. The radiocarbon date was thus not valid for determining the true age of the shroud. The fact that vanillin can not be detected in the lignin on shroud fibers, Dead Sea scrolls linen, and other very old linens indicates that the shroud is quite old. A determination of the kinetics of vanillin loss suggests that the shroud is between 1300- and 3000-years old. Even allowing for errors in the measurements and assumptions about storage conditions, the cloth is unlikely to be as young as 840 years.
    http://www.ntskeptics.org/issu.....oudold.htm
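
The excerpt’s dating argument rests on first-order loss kinetics for vanillin in lignin. Rogers’ measured rate constants are not reproduced above, so the numbers in this sketch are illustrative placeholders only, showing just the form of the calculation:

```python
import math

def age_from_fraction(fraction_remaining, k_per_year):
    """First-order decay: f = exp(-k * t), so t = -ln(f) / k."""
    return -math.log(fraction_remaining) / k_per_year

# Illustrative only: a hypothetical rate constant of 1e-3 / year
# and 20% of the original vanillin remaining would imply:
print(age_from_fraction(0.20, 1e-3))   # ~1609 years
```

Rogers’ actual analysis in Thermochimica Acta supplies the measured kinetics constants and the storage-condition assumptions behind the 1300- to 3000-year range.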

    Rogers passed away shortly after publishing this paper, but his work was ultimately verified by the Los Alamos National Laboratory:

    Carbon Dating Of The Turin Shroud Completely Overturned by Scientific Peer Review
    Rogers also asked John Brown, a materials forensic expert from Georgia Tech to confirm his finding using different methods. Brown did so. He also concluded that the shroud had been mended with newer material. Since then, a team of nine scientists at Los Alamos has also confirmed Rogers work, also with different methods and procedures. Much of this new information has been recently published in Chemistry Today.
    http://shroudofturin.wordpress.....s-of-time/

    MathGrrl you state ‘REPEATEDLY shown to be of medieval origin’ and yet I can think of no solid evidence to support your position now that the carbon dating fiasco has been overturned, whereas I can provide several pieces of evidence for ancient origination in first century Jerusalem;

    THE SHROUD AS AN ANCIENT TEXTILE – Evidence of Authenticity
    http://www.newgeology.us/presentation24.html

    Shroud Of Turin – Sewn From Two Pieces – 2000 Years Old – video
    http://www.metacafe.com/watch/4109101

    The Sudarium of Oviedo
    http://www.shroudstory.com/sudarium.htm

    It should be noted that one of the primary difficulties facing MathGrrl’s argument for a medieval origin is the inexplicable photographic-negative / 3-dimensional properties of the image; shroud skeptics think that perhaps a ‘mad genius’ could have forged the image :)

    The Turin Shroud – Comparing Image And Photographic Negative – interactive webpage (Of note: The finding of a photographic negative image on the Shroud is still as much a mystery today as when it was first discovered by Secondo Pia in 1898.)
    http://www.shroud.com/shrdface.htm

    Shroud Of Turin’s Unique 3 Dimensionality – video
    http://www.metacafe.com/watch/4041182

    Shroud Of Turin – Photographic Negative – 3D Hologram – The Lamb – video
    http://www.metacafe.com/watch/5664213/

    etc.. etc…

    How Did The Image Form On The Shroud? – video
    http://www.metacafe.com/watch/4045581

  130. Myself, I am very impressed that this was recently found by holographic imaging:

    Turin Shroud Hologram Reveals The Words ‘The Lamb’ – short video
    http://www.metacafe.com/watch/4041205

    So MathGrrl, according to your unsubstantiated hypothesis, a ‘mad genius’ knew about photography 500 years before it was invented, knew about holography almost 600 years before it was invented, and hid the words ‘The Lamb’, which could only be decoded through holography??? Moreover, your ‘mad genius’ accomplished all this using techniques that still cannot be replicated to this day.

    “The shroud image is made from tiny fibres that are (each) 1/10th of a human hair. The picture elements are actually randomly distributed like the dots in your newspaper, photograph or magazine photograph. To do this you would need an incredibly accurate atomic laser. This technology does NOT exist (even to this day).”
    Kevin Moran – Optical Engineer

    Moreover, this ‘mad genius’ accomplished this unrivaled feat of technological prowess without revealing any of his secrets to his peers??? :) Oh, what a tangled web we weave, MathGrrl!

  131. MathGrrl

    I think there are two issues that need clear separation (from your comments I suspect you are aware of them, but I thought I would spell it out for the sake of clarity):

    1 Generating CSI from a pre-biotic perspective.
    2 Generating CSI from a post-biotic perspective.

    Using a GA is only relevant to the second case because it is a model of reproducing systems (Reproduction with mutation and selection) – as KF would put it, it is exploring an island of functionality.

    In case 1 a GA is not an appropriate model because you do not have reproduction, just complex chemical processes – Reactions and persistence if you will. Again, as KF would put it, it is about finding an island of functionality.
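
As a concrete illustration of what I mean by a GA in case 2 (reproduction with mutation and selection, hill-climbing within an island of function), here is a minimal toy sketch; the fitness function and all parameters are illustrative only:

```python
import random

random.seed(0)

def fitness(genome):
    """Toy 'island of function': count of 1-bits in the genome."""
    return sum(genome)

def evolve(pop_size=20, genome_len=32, generations=60, mu=0.02):
    # Random initial population of bit-string genomes.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Reproduction with per-bit mutation at rate mu.
        children = [[1 - g if random.random() < mu else g for g in p]
                    for p in parents]
        pop = parents + children
    return max(fitness(g) for g in pop)

print(evolve())   # climbs toward the optimum of 32
```

Whether a run like this counts as generating CSI is exactly the question at issue; what is not in question is that it climbs the hill it is given.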

    Some ID proponents here seem to feel that even in case 2 a GA cannot generate CSI (I disagree; I’ve worked with GAs!)

    Case 1 is far more interesting and is basically the OOL problem: can natural forces generate self-replicating systems?

    Of course, we also have case 3 if we want to push the goalposts back that far (is the universe designed or not?), but I’m not sure how to devise an experiment to test that.

    It would be nice if everyone could focus on one of these questions first; I suggest the one I believe MathGrrl is asking about (case 2):

    I’m very interested in testing the claim that CSI can only be the result of intelligent agency. In order to do so, I need the help of ID proponents to rigorously define CSI and demonstrate how to objectively measure it. That will allow me, hopefully, to use an evolutionary simulation to determine whether or not known evolutionary mechanisms are capable of generating CSI.

    Of course we can always move the agency back (to case 3) and say that a GA can produce CSI but an intelligence was required to create a universe in which evolution can take place ;)

    Let’s stick to investigating case 2 for the moment, though: can evolution generate (or perhaps increase) CSI?

  132. Dr Bot:

    It’s very simple: while hill-climbing algors can move around within islands of function, they cannot jump the sea of non-function to get to new body plans.

    Without a root (OOL), the Darwinian tree of life has no basis on which to stand. The smallest genomes of unicellular life are north of 100 k bases, and 1 million is more reasonable for first life, as the smaller ones are parasitic on more complex life for key nutrients.

    Anyway, use 100 k bases and generously treat that as having enough redundancy to be equivalent to 100 k bits of dFSCI. You are then trying to find islands of function in a space of 2^100,000 ≈ 9.99 × 10^30,102 configs.

    With only 10^150 Planck-time states to work with on the gamut of our observed cosmos, that is hopeless, a practical zero.

    To get to novel body plans you have to add tens, hundreds, or more of millions of bases, dozens of times over: 2^10,000,000 ≈ 9.05 × 10^3,010,299.
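
The exponents in the two calculations above follow from log10(2^n) = n · log10(2); a quick check of the figures, including the comparison with the 10^150 Planck-time states:

```python
import math

LOG10_2 = math.log10(2)

# log10 of the 2^100,000 config space:
space = 100_000 * LOG10_2
print(space)                 # ~30102.9996, i.e. ~9.99 * 10^30102 configs

# log10 of the fraction that 10^150 Planck-time states could sample:
print(150 - space)           # ~ -29953: the "practical zero"

# The 10-million-base body-plan figure:
print(10_000_000 * LOG10_2)  # ~3010299.96, i.e. ~9.05 * 10^3010299
```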

    Dozens of times over, on just this earth you have to bridge that sort of sea, to jump from one island to another, just to get TO a shoreline of function in the archipelago of life.

    But the tree-of-life model means there is just one continent with peninsulas, so once you start, you just move from mountain range to mountain range, peninsula to peninsula!

    Big problem: the tree-of-life model exists only on paper. The evidence strongly shows that, from protein folds up to body plans, the architectures of functional life forms are discontinuous. No wonder: there is an overwhelming pattern in the fossils of discrete body plans emerging and then either persisting or dying out.

    A neatly branching, smoothly graded fitness landscape we plainly have not got.

    And the gaps between islands of function in config spaces that large are well beyond bridgeable. You cannot transform “hello world” into ev one tiny functional change at a time. Nor can you transform “See Spot run. Spot catches the ball” one random step at a time, functional all the way, into the contents of a library.

    Intelligence is the only empirically known, causally sufficient source of dFSCI such as we find in the library, on the web or in DNA.

    This (as demonstrable a claim as any empirical fact is) is beginning to look like one of Ilion’s Big Truths, truths so big and heavy with significance, that — regardless of their degree of warrant — they are very hard for those locked into the present order to accept or acknowledge, or even seriously entertain.

    I am beginning to think the real issue is now this backed up by this, and with this lurking in the background as the biggest unwelcome Big truth of all, not the balance of merits on evidence or logic.

    GEM of TKI

  133. I’m just about to rush off so no time to respond except to this:

    It’s very simple: while hill-climbing algors can move around within islands of function, they cannot jump the sea of non-function to get to new body plans.

    On what evidence have you determined that different body plans are not on the same island (or perhaps continent) of functionality?

  134.

    MathGrrl,

    That is simply not true. In fact, some respondents have explicitly said that such answers cannot be provided.

    To be fair, though, I may have missed the answers you claim have been provided. Please provide references to the empirical evidence for the existence of an intelligent agent that has intervened in biological evolution, the empirical evidence that shows exactly what this agent did, the empirical evidence that shows when this intervention took place, the empirical evidence that shows where this intervention happened, and the empirical evidence that shows how the intervention was accomplished.

    Like I said, this is just gamesmanship. It certainly isn’t a search for clarity, truth, or understanding. You know very well that ID cannot tell you when and where. ID cannot tell you these things because the observable evidence offers no way to know them. This fact gives ideologues like yourself the desperately-needed respite to say, “Aha! There’s nothing there.” Of course, the questions you ask are BS from the start. I may just as easily ask these exact same questions of you regarding your preferred explanation, and you will have the exact same answers as I do. So it’s nothing but a game.

    Although neither camp can answer these particular questions, that doesn’t mean however that the camps are equal. They are certainly not equal; they are as different as their explanations. The ID camp says that an act of volitional agency is necessary for the sequencing of chemical symbols within DNA, whereas the materialist camp says that the sequencing happened as a byproduct of chance and physical necessity. One explanation is purely mechanistic while the other is specifically not. Surely a Sous Chef can tell you the mechanistic steps for baking a cake, but the existence of that cake is not reducible to those steps. Conversely, the materialist’s explanation says the cake baked itself. One explanation has nothing but material causes to explain the origin of the evidence, while the other does not propose a material origin to begin with. It proposes an act of volition instead.

    This is not the only way in which the two camps are unequal. For instance, the ID camp has man’s universal experience of the natural world at the core of its explanation. It is our universal experience (throughout all history) that digitally-encoded information is the product of an agent. The ID camp is completely congruent with that empirical fact, while the materialist’s camp simply ignores it.

    So you see, it’s nothing but (illogical) gamesmanship for you to repeatedly ask for a material origins checklist from an explanation that doesn’t posit a material origin. Do you not understand this?

    A tornado is a purely material object; no reference to an act of volition is necessary in order to explain it. A red plastic ball is material as well, however, an act of volition is necessary to explain it because there is nothing in the material (the plastic) that would cause it to form a sphere and dye itself red. So the origin of a tornado is completely explicable by referencing its material composition, but it is impossible to do that with a red plastic ball.

    To ask “how” and “when” and “where” from the plastic, is simply ridiculous.

    Even so, it is your camp that claims that purely mechanistic forces can create the sequencing in DNA – and that it can be no other way. Not only does your camp make this claim, but they also marshal national organizations and legal teams to enforce their unsupported beliefs upon others.

    So, where’s your checklist? Of course, we all know you don’t have one – which is precisely why you play these games.

  135. Dr Bot

    Did you read or just skim?

    For, a significant part of the post above pointed to just the reasons why we are looking at such islands of function, and major discontinuities at body plan level. Remember, a new body plan at phylum or sub-phylum level is a new way of doing business for an organism, requiring new tissues, new organs and new organisation, thus new cell types and regulatory programs. The 10 million base estimate is a low estimate for the amount of re-organisation required to achieve that.

    Let me cite an excerpt in my always linked, which addresses this issue in Section C:

    _______________

    >> The Cambrian explosion represents a remarkable jump in the specified complexity or “complex specified information” (CSI) of the biological world. For over three billion years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . .

    In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.6 [Meyer, The Origin of Biological Information and the Higher Taxonomic Categories, PBSW, ed Sternberg. And despite many talking points to the contrary this did pass proper peer review by "renowned scientists" ] >>
    ______________

    This is just one aspect of the problem, which starts with the need to code for foldable, active proteins and the regulatory systems to control their expression and use.

    GEM of TKI

  136. F/N: it is also to be noted that there is no one standard or smoothly varying embryological development pattern. Even in quite similar animals there can be radically divergent development paths, and homologous structures notoriously come about by drastically different embryological processes and sources.

    GEM of TKI.

  137. Hi again Mathgrrl

    ” Please provide references to the empirical evidence for the existence of an intelligent agent…”

    This could prove an intelligent agency. Once I played 3 chess games against the bear. I won 2:1.

    Now seriously, do you have any ideas for a real experiment?

    —MathGrrl: “Please provide references to the empirical evidence for the existence of an intelligent agent that has intervened in biological evolution, the empirical evidence that shows exactly what this agent did, the empirical evidence that shows when this intervention took place, the empirical evidence that shows where this intervention happened, and the empirical evidence that shows how the intervention was accomplished.”

    On the one hand, you say ID paradigms are not precise enough, or narrow enough, or well-defined enough to be measured. On the other hand, you claim that ID should abandon the narrow focus that makes precision possible, broaden the investigation, and start asking new questions about who did the designing, when it happened, and how it happened, none of which could ever hope to be measured with the paradigms being used. Frankly, I don’t know how anyone could be as confused as you appear to be.

  139. Pedant [110]:

    So, it seems that the ID squad here is promoting the proposition that miracles (events that run counter to the usual course of observation) occur.

    Miracles like the Shroud of Turin—contra MathGrrl’s assertions, the Shroud has NOT been proven to be a fake; rather, a theory has been presented and tested by one of the original scientists who made the C14 dating, and accepted as a probable explanation for the later dating of the Shroud via C14 methods—are simply a way of pointing out the presuppositions that many bring to the whole question of evolutionary mechanisms. When MathGrrl insists on who, what, how, when and where, these miracles provide all of the answers (more or less) in what is right before our eyes, unlike any mechanism that the Designer may have used.

    Let’s note that it is quite simple to turn this question back at MathGrrl: I want to know how, when, where, and what caused evolution to occur. Can she provide those answers? No. She can try to provide neo-Darwinian explanations, which have only faint relevance in these matters, and which fail terribly at answering the more necessary and fundamental questions we may pose.

    I have spent much time and much energy searching everywhere for some plausible material explanation for evolution. Simple answer: it cannot be found. It’s nowhere to be found. Meanwhile, you can find many sources demonstrating the diminished capacity of Darwinian mechanisms to explain the progressive complexity life presents to us. I came to UD via Panda’s Thumb, arguing there about the inadequacies of Darwinian theory long before I became familiar with ID tenets. No one at PT could satisfactorily point to a plausible explanation.

    But, I digress . . .

    MathGrrl:

    I’m very interested in testing the claim that CSI can only be the result of intelligent agency. In order to do so, I need the help of ID proponents to rigorously define CSI and demonstrate how to objectively measure it. That will allow me, hopefully, to use an evolutionary simulation to determine whether or not known evolutionary mechanisms are capable of generating CSI.

    From my own experience, you need to tackle No Free Lunch, and really make an effort at understanding Dembski’s notion of rejection regions. There are all kinds of tricky aspects to it. I’ve had discussions with PhDs in probability theory and even they don’t understand Dembski correctly—mainly because they’re not interested in understanding him; they simply dismiss him.

    Another fine source is Sir Fred Hoyle’s book, The Mathematics of Evolution. He develops there his own brand of population genetics, which is completely adequate and persuasive in pointing out the impossibility of Darwinian mechanisms doing what they are claimed to do. Hoyle was an atheist. His answer to life’s complexity was, like Paul Davies, to believe in panspermia. So, from a completely irreligious point of view, the inadequacies of Darwinian theory are made rather clear. Look, e.g., at his treatment of Cytochrome C.

  140. kairosfocus,

    As to the calculations on Ev and the like, it is plain that these “evolved” organisms are intelligently directed products of an intelligently designed process, so from the outset your attempt to estimate the scale of dFSCI involved is pointless.

    I’m confused by this statement. Are you saying that dFSCI is by definition only produced by intelligence or are you saying that it is impossible to model any observed mechanisms to determine if they generate dFSCI (or both)?

    What is the file size of Ev or Avida? Is it beyond 125 bytes?

    If so, the programs are beyond the credible reach of chance and blind mechanical necessity, on the gamut of our observed cosmos. Thus the presence of dFSCI in the programs already warns us what we know by more direct means: they are intelligently designed.

    Clearly the ev, Tierra, and Steiner Problem simulators are designed. I’m talking about the digital organisms that evolve within those simulators. Many of those exceed 125 bytes in length — would you agree that they may possess dFSCI?

    From one of your followup posts:

    Do you accept that where we can do direct empirical testing, dFSCI in particular turns out to be a very reliable sign of design, as with say posts on this thread?

    No. Thus far neither dFSCI nor any other CSI variant has been rigorously defined, and no demonstrations of how to calculate these metrics for real world biological systems have been provided. Given that level of definition, they are essentially meaningless terms.

    I am very interested in running some simulations to measure the generation of CSI via evolutionary mechanisms. I hope that at least one or two ID proponents here share that interest. Any assistance you can provide would be greatly appreciated.

  141. DrBot,

    I think there are two issues that need clear separation (from your comments I suspect you are aware of them but I thought I would spell it out for the sake of clarity):

    1 Generating CSI from a pre-biotic perspective.
    2 Generating CSI from a post-biotic perspective.

    Using a GA is only relevant to the second case because it is a model of reproducing systems (Reproduction with mutation and selection) – as KF would put it, it is exploring an island of functionality.

    Thank you for the clarification. I am, indeed, discussing only case 2.

    Some ID proponents here seem to feel that even in case 2 a GA cannot generate CSI (I disagree, I’ve worked with GAs!)

    This is exactly what I hope to model, with the assistance of any interested ID proponents to rigorously define CSI such that we can measure it objectively.

  142. Upright BiPed,

    Of course, the questions you ask are BS from the start. I may just as easily ask these exact same questions of you regarding your preferred explanation, and you will have the exact same answers as I do.

    Hundreds of thousands of peer-reviewed articles detailing research done over the past century and a half show your assertion to be blatantly incorrect. Look in any issue of Science, Nature, or hundreds of other scientific journals and you will see the results of scientists answering the what, when, where, and how questions that you refuse to answer for ID.

  143. Eugen,

    Now seriously , do you have any ideas for real experiment?

    Normally it is the responsibility of the proponents of an hypothesis to provide testable entailments that could serve to falsify it. If that can’t be done, the hypothesis is unfalsifiable and hence non-scientific.

    That being said, I am interested in running some simulations to determine whether or not known evolutionary mechanisms can create CSI. In order to do that, I need ID proponents to rigorously define CSI and show how to measure it. Are you willing to help?

  144. MG:

    Re 140, do let me know: is it not commonly known that Ev et al are programs written by individuals? Is it not known that these programs have in them digitally coded, symbolic functional info beyond 125 bytes? Does not that show that, again, the FSCI criterion reliably identifies such an entity as designed, per the design inference, confirmed by direct knowledge?

    In short, your remark just above comes across as willfully obtuse.

    I think you need to account to me for how you tried that rhetorical gambit before any reasonable discussion can be possible.

    G’day

    GEM of TKI

  145. In short, MG, the time for rhetorical gamesmanship through stunts like you carried out at 140 is over.

  146. MathGrrl:

    Look in any issue of Science, Nature, or hundreds of other scientific journals and you will see the results of scientists answering the what, when, where, and how questions that you refuse to answer for ID.

    Science and Nature give us results that are found in the lab or found in nature today. But they give us no real understanding or knowledge of what happened, when it happened, where it happened or how it happened in the past.

    We have only guesses and speculations, and mere assertions. The reality is that what they allege happened, using known genetic processes cannot come close to explaining what they assert it does. This is the problem. Their answers are not coherent, and they don’t want to admit to it. Then they run under a cloud of obfuscation. The entire field of biology is so vast and so complicated, that no one will take the time to connect all the dots. In surveys, most scientists simply “assume” that Darwinian concepts have been proven. Yet close inspection has these same mechanisms falling on their faces.

    This is the current state of things. Darwinism is fading away. But many want to hold onto it—and, for metaphysical reasons.

  147. KF:

    All computational models (simulations) have to be written by people, this ought to be obvious.

    If I write a simulation of a chemical system I can use this to establish if the system under study can produce certain effects. This does not mean that the system being studied can only produce those effects because of design. It means I’ve designed a simulation that models real systems.

    Similarly, you can construct a simulation of the various mechanisms of evolution – drawn from measurement and observation of reality – you can simulate an environment with resources, agents with bodies, genotypes that map to phenotypes. What MathGrrl is doing is perfectly reasonable as a scientific investigation:

    Create a realistic model of self replicating agents that produce variable copies, which operate in an environment where their phenotypic traits (their behavior) affect their reproductive success. Then calculate CSI for agents in subsequent generations as the simulation plays out.

    By saying ‘the simulation was designed’ you are just shifting goalposts about; the purpose here is to test the basic evolutionary process (descent, modification and selection) to see if CSI can increase over successive generations. It is a test of what the physical universe can do within its known laws.

    If you want to argue that the universe (or a simulation of part of the universe) has to be the result of design then I have no problem with that but MathGrrl is addressing a different question – ‘what can the universe do?’, not ‘was it designed or not?’.

    MathGrrl, perhaps a better approach would be to assume, for the sake of the experiment, that the universe, and simple self replicators, were the result of design. Now let’s see if designed simple replicators, in a designed universe, can show an increase in CSI when subject to mutation and selection over many generations!
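    The experimental design sketched in this comment — replicators undergoing descent, modification and selection, tracked over generations — can be illustrated with a toy genetic algorithm. This is only a hypothetical sketch, not MathGrrl’s actual proposal: since no agreed CSI metric is on the table, it scores an arbitrary trait (the longest run of 1-bits in a genome) purely to show the mechanics of selection acting on heritable variation.

```python
import random

random.seed(1)

GENOME_LEN = 64

def fitness(genome):
    # Toy phenotype: score is the longest run of consecutive 1-bits.
    # An arbitrary stand-in for a "functional" trait, not a CSI measure.
    best = run = 0
    for bit in genome:
        run = run + 1 if bit else 0
        best = max(best, run)
    return best

def reproduce(parent, mu=0.02):
    # Descent with modification: copy the genome, flipping each bit
    # with probability mu.
    return [b ^ (random.random() < mu) for b in parent]

# Random starting population, then selection each generation.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]
init_best = max(fitness(g) for g in pop)

for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:25]  # truncation selection on the trait score
    pop = survivors + [reproduce(random.choice(survivors)) for _ in range(25)]

final_best = max(fitness(g) for g in pop)
print(init_best, "->", final_best)  # the best trait score never declines
```

    Because the best genome is always retained, the top score is non-decreasing across generations; whether a rising score of this kind counts as “generating CSI” is exactly the definitional question under dispute in this thread.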

  148.
    Of course, the questions you ask are BS from the start. I may just as easily ask these exact same questions of you regarding your preferred explanation, and you will have the exact same answers as I do.

    Hundreds of thousands of peer-reviewed articles detailing research done over the past century and a half show your assertion to be blatantly incorrect. Look in any issue of Science, Nature, or hundreds of other scientific journals and you will see the results of scientists answering the what, when, where, and how questions that you refuse to answer for ID.

    My point that you can’t answer the same questions you pose to ID is “blatantly incorrect?”

    Really? Are you sure about that?

    It has now been openly demonstrated that you ask for answers from ID which you yourself cannot answer. It’s gamesmanship being passed off as a stand-in for real inquiry. Yet, in response to having your hypocrisy put on display, you’ve done what any good and faithful ideologue would do; you’ve chosen to double-down.

    You now want to pretend that there’s a stack of research papers “detailing” the answers. Not only do you want to draw this picture for yourself, but apparently you want others to follow you into denial.

    I am more than happy to take that bet, although I am left to wonder how low your standards for “answers” will suddenly become once it is you that must provide them.

    Please point to a single one of the “hundreds of thousands of peer-reviewed articles” that answers the “how” and “when” and “where” of Life’s origins.

  149. MG:

    Further glancing down the list of posts since 140, it becomes evident that you are being willfully obtuse. CSI has long since been defined, and quantified, as you can see from even the UD weak argument correctives, top right this and every UD page.

    FSCI is a very simple metric, the same sort of functional bits you use every time you say this file is 169 kbytes.

    The only special consideration is that once we see a functionally specific piece of information beyond 1,000 bits storage capacity, that means that we are looking at a potential configuration space with 1.07*10^301 or more possible arrangements.

    As has been repeatedly pointed out to you above — but willfully ignored — and as has been pointed out to you many times over months, at least to my certain knowledge, that is vastly more states than the whole observed cosmos, changing state every Planck time for its credible lifespan, could undergo. That is, the space cannot be searched, on the gamut of our cosmos.

    So, it is not credible that such functionality can be reached by unintelligent processes of chance and/or mechanical necessity. Unsurprisingly, as infinite monkeys tests affirm, there are no cases of FSCI beyond the 1,000 bit threshold being created by such forces. Those who claim or imply that that is empirically possible and credible as the source of the dFSCI in life need to SHOW that.
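    The configuration-space arithmetic above can be checked directly. This is only a sketch of the numbers as quoted in the thread; the 10^150 figure for total Planck-time states of the observed cosmos is the commenter’s own estimate, reproduced here as an assumption, not a measured value.

```python
# 1,000 bits of storage capacity -> 2^1000 possible configurations.
configs = 2 ** 1000

# Commenter's estimate of total Planck-time states available to the
# observed cosmos over its lifespan (an assumption, taken as given):
cosmos_states = 10 ** 150

# Even granting one search trial per state, the share of the
# configuration space that could be sampled is negligible.
fraction = cosmos_states / configs
print(f"2^1000 ≈ {configs:.3e}")                # ≈ 1.072e+301
print(f"searchable fraction ≈ {fraction:.1e}")  # ≈ 9.3e-152
```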

    They have not, nor is such in prospect.

    That seems to be the underlying motive for the rhetorical turnabout games you have resorted to in this thread.

    For, we see here a strong empirical and analysis support for the common sense observation that FSCI — SUCH AS POSTS IN THIS THREAD — IS ON UNIFORM OBSERVATION, THE PRODUCT OF AND A RELIABLE SIGN OF INTELLIGENCE.

    That seems to be a big, unwelcome truth. But, that does not change the fact that it is an abundantly supported empirical fact.

    But, it strongly points to design as the most credible explanation for the dFSCI in the heart of the living cell, and thus in the origin of the cell, given the central role of DNA in the cell’s life.

    Enough, is enough.

    We have reached the reductio ad absurdum of blatant denial of facts plain to all, and of pretence that the plain and empirically well-warranted is obscure and dubious.

    Good day, madam.

    GEM of TKI

  150. Dr Bot:

    Thank you for affirming how computer programs with prescriptive information, are produced by intelligences.

    In DNA, we see . . . discrete state, prescriptive information, in a program.

    So, we should conclude, therefore . . . ?

    GEM of TKI

    PS: The precise problem with ev et al as has already been pointed out, is that they put the contained small scale random variation on an island of function and set on the task of hill climbing. Through a bait and switch, this — not disputed by anyone — is then offered as evidence that we can get to the shores of such an island of function by the same means. Not so. In short, question-begging and bait and switch.

  151. PaV,

    You wrote,

    In a recent paper, a “de novo” gene was being touted. Guess what? It turns out that a portion of a “non-coding” gene and its flanking element was involved in the manufacturing of this “de novo” gene.

    Could you supply a reference for this? Thanks.

    J

  152. Thank you for affirming how computer programs with prescriptive information, are produced by intelligences.

    It is remarkable that you would believe that anyone would think otherwise. We know the origin of computers, and computer programs – we observe them, but unlike life we have never observed computers reproducing on their own with variety.

    The precise problem with ev et al as has already been pointed out, is that they put the contained small scale random variation on an island of function and set on the task of hill climbing. Through a bait and switch, this — not disputed by anyone — is then offered as evidence that we can get to the shores of such an island of function by the same means. Not so. In short, question-begging and bait and switch.

    No bait and switch. MathGrrl was quite specific with her question – Can a genetic algorithm (and by implication Evolution) generate CSI. Evolutionary processes only apply to replicating systems (descent, modification and selection) so the question – quite explicitly stated – is about whether evolution as it operates on living systems can generate CSI, not about whether life can arise without intelligent causes.

    You said this:

    The precise problem with ev et al as has already been pointed out, is that they put the contained small scale random variation on an island of function and set on the task of hill climbing.

    This looks like an acknowledgement that evolution can generate CSI – if you are on the shore of an island of functionality and you climb to a hill on that island you are increasing complexity and/or functionality. If this is the case then, with regard to MathGrrl’s question, you are answering Yes – evolutionary processes can generate CSI.

    Don’t keep trying to move the goalposts – this is a specific question to be addressed: Can core evolutionary processes (descent, modification and selection) increase CSI – YES or NO.

    ‘We don’t know’ is a fine and admirable answer – MathGrrl is asking for help devising an experiment that will start to turn a ‘we don’t know’ into a tentative yes or no – This should be welcomed, not subject to derision and dismissal.

  153. Jonathan M:

    I’ve searched for the paper, but can’t find it. I don’t know if it’s Coyne’s or not.

    I remember not having access to the paper directly—which likely means either Science or Nature—but having access to the supplemental material. It was there that the overlay of sequences were shown. But I can’t find any of that now. Sorry.

  154. Jonathan M:

    I think this is the citation:

    “Plasticity of Animal Genome Architecture Unmasked by Rapid Evolution of a Pelagic Tunicate” Science vol. 330, 3 Dec 2010 Denoeud, et al.

    The last sentence of the abstract speaks of “mechanisms of intron gain.”

    While this isn’t exactly a “de novo” gene, it is tantamount to it.

    Hope this helps.

  155. Drbot and MG,

    If such an experiment were done, how would we know when we succeeded? I mean, with the monkey/typewriter experiment we would know when we got Shakespeare. But that refers to information that we have already assembled and assigned meaning to. How would we find the “meaning” in the CSI of the experiment?

  156. Dr Bot:

    Can you transform a hello world, one step at a time — while preserving function — into an operating system?

    That is the true problem.

    GEM of TKI

  157. PS: Actually, that is a bit generous: consider how much background functionality has to be in place in a PC system for a hello world to work. Moving around in an island of function has to do more with specialisation, niches and small losses that do not destroy overall function, than with getting to an overall context of function to begin with.

    I’ve looked at the supplementary material, and the Denoeud et al. article was not the one I came across.

    When I have time, I’ll look some more.

    F/N: Dr Bot, please recall: logically speaking, chance can produce any contingent outcome. The problem is that logical possibility is not to be confused with empirical credibility. For, when the config spaces get large enough — and 1,000 bits [or 1.07 * 10^301 possibilities] is a good threshold — special configs or clusters [i.e. the FSCI ones] run into the issue that a search, even on the gamut of the whole observed cosmos [10^150 possible Planck-time states, the Planck time being some 10^20 times faster than a strong nuclear force interaction], is so maximally unlikely to scan enough of the space to make a difference that the only reasonable, empirically well supported explanation of FSCI is intelligence. That is why, sight unseen, you explain this post by intelligence, not lucky noise on the Internet.

    F/N 2: Also, please note that hill climbing algorithms are just that: intelligently designed algorithms. That would also hold for life forms: they are obviously designed to adapt within certain limits, and the controlled random search used by the immune system is a paradigm of how such a design can use chance as a part of the algorithm.
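    A hill climbing algorithm of the kind referred to above can be written in a few lines. This is a toy sketch on an assumed integer landscape (nothing from the thread): chance proposes neighbouring states, and a fixed acceptance rule keeps only those that do not lower the score.

```python
import random

random.seed(0)

def hill_climb(score, start, neighbours, steps=1000):
    # Generic hill climber: a random neighbour is proposed each step
    # and kept only if its score is at least as good as the current one.
    current = start
    for _ in range(steps):
        candidate = random.choice(neighbours(current))
        if score(candidate) >= score(current):
            current = candidate
    return current

# Assumed toy landscape: integers scored by closeness to 10.
best = hill_climb(lambda x: -(x - 10) ** 2,
                  start=0,
                  neighbours=lambda x: [x - 1, x + 1])
print(best)  # climbs to the peak at x = 10 and stays there
```

    The acceptance rule here is fixed in advance of the run; the dispute in this thread is over what that fact implies, not over the mechanics themselves.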

  161. kairosfocus,

    Re 140, do let me know: is it not commonly known that Ev et al are programs written by individuals? Is it not known that these programs have in them digitally coded, symbolic functional info beyond 125 bytes? Does not that show that, again, the FSCI criterion reliably identifies such an entity as designed, per the design inference, confirmed by direct knowledge?

    In short, your remark just above comes across as willfully obtuse.

    I would suggest that your remarks come across as projection. I was very clear in my post to distinguish between the simulator and the digital organisms that arise in the simulation. The ev, Tierra, and Steiner Problem simulators are obviously designed by programmers. The digital organisms that evolve within them are the product of simplified versions of the evolutionary mechanisms we observe in real world biological systems. Understanding the difference is essential to rationally discuss the results of these GAs.

    I would further note that you have completely ignored my very clear response about CSI and its variants. I’ll copy it again here for your convenience: Thus far neither dFSCI nor any other CSI variant has been rigorously defined, and no demonstrations of how to calculate these metrics for real world biological systems have been provided. Given that level of definition, they are essentially meaningless terms.

    Would you care to rigorously define CSI (or your FSCI variant) such that I can measure it in an evolutionary simulation?

  162. kairosfocus,

    In short, MG, the time for rhetorical gamesmanship through stunts like you carried out at 140 is over.

    Your tone and implication go beyond uncivil to positively rude. There is no gamesmanship on my part — I am genuinely interested in learning how to objectively measure CSI. You can read the thread on Mark Frank’s blog to see how much time I’ve invested in attempting to do so.

    If you don’t wish to assist me, simply say so, but please keep your baseless insults to yourself.

  163. Upright BiPed,

    Please point to a single one of the “hundreds of thousands of peer-reviewed articles” that answers the “how” and “when” and “where” of Life’s origins.

    As I thought I made clear, I’m discussing biological evolution, not abiogenesis. There is extensive literature documenting the research into what, when, where, and how of biological evolution, as anyone who participates in discussions such as this should know.

    Abiogenesis is also a fertile research field, albeit less mature than that of biological evolution. At least two papers on this topic have been referenced here on UD in the past few days. To find more, go to Pubmed and enter “biogenesis” in the search field. I see over fifteen thousand papers in the results.

    Can you provide references to any empirical evidence detailing the who, what, when, where, and how of ID?

  164. kairosfocus,

    Further glancing down the list of posts since 140, it becomes evident that you are being willfully obtuse.

    What is evident is that you are being willfully uncivil to someone who is not just interested in understanding the positive evidence for ID, but who is also willing to go to the effort of testing the concepts. That’s hardly the response one expects from proponents who have confidence in their theory.

    From this point forward I will be ignoring your rudeness and focusing on the empirical evidence. That doesn’t mean I don’t notice your behavior.

    CSI has long since been defined, and quantified, as you can see from even the UD weak argument correctives, top right this and every UD page.

    FSCI, is a very simple metric, the same sort of functional bits you use every time you say this file is 169 kbytes.

    If this is so, please demonstrate how to objectively calculate CSI for a real biological system and talk me through how I could make this measurement for digital organisms in evolutionary simulations. I have read all the references provided and had a long discussion with gpuccio on Mark Frank’s blog without getting this simple question answered.

    I’m willing to test your hypothesis, but I need rigorous definitions of your terms to do so. This is not an unreasonable request. I’m frankly surprised that examples aren’t readily available, given how much CSI and its variants are touted as indicators of intelligent agency.

  165. kairosfocus,

    The precise problem with ev et al as has already been pointed out, is that they put the contained small scale random variation on an island of function and set on the task of hill climbing. Through a bait and switch, this — not disputed by anyone — is then offered as evidence that we can get to the shores of such an island of function by the same means. Not so. In short, question-begging and bait and switch.

    I have made it very clear in my posts to this thread that I am discussing biological evolution, not abiogenesis. The only “bait and switch” allegation that could be made is against someone who, either deliberately or through careless reading, tries to change the topic to abiogenesis.

    DrBot has clarified this as well, so I’m sure that my position was stated clearly. To be sure I understand your position, please let us know if you believe that CSI can be generated by known evolutionary mechanisms or not.

  166. Collin,

    If such an experiment were done, how would we know when we succeeded? I mean, with the monkey/typewriter experiment we would know when we got Shakespeare. But that refers to information that we have already assembled and assigned meaning to. How would we find the “meaning” in the CSI of the experiment?

    This is exactly why I’m asking for a mathematically rigorous definition of CSI. The ID claim appears to be that CSI of more than a certain amount cannot be generated by known evolutionary mechanisms. I would like to test that claim, but thus far no one has defined CSI with sufficient rigor that I can implement a measurement of it in software. Are you able to help?

  167. MG:

    You are simply insistently repeating long since cogently answered talking points.

    For instance, the commonly encountered metric of functionally specific bits can be very simply assessed for protein-coding DNA, at 2 bits per base; a 300-AA protein-coding region in DNA then uses 3 * 300 * 2 = 1,800 bits of functionally specific information; there are hundreds or thousands of such zones in a typical genome. At the next level up, Durston et al — as already discussed and linked above — have given measured values in fits (functional bits) for 35 protein families. This was published in the peer-reviewed literature in 2007.
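As a sanity check, the arithmetic in the comment above can be reproduced in a few lines. This sketch uses the 2-bits-per-base convention and the 300-AA example stated there; note that it counts the raw storage capacity of the coding region, which is the simple convention the comment itself uses:

```python
# Naive "functionally specific bits" tally for a protein-coding region:
# 4 possible bases -> log2(4) = 2 bits per base, 3 bases per codon.
import math

BITS_PER_BASE = math.log2(4)          # = 2.0

def coding_region_bits(num_amino_acids):
    """Raw information capacity of the DNA coding for a protein."""
    num_bases = 3 * num_amino_acids   # one codon per amino acid
    return num_bases * BITS_PER_BASE

bits = coding_region_bits(300)        # the 300-AA example from the comment
print(bits)                           # 1800.0
```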

    But, since many do not know that, this talking point can still be used persuasively.

    Since you were already warned again above, it is plain that you are not inclined to follow the truth.

    A sobering conclusion to have to draw, but one that is well-warranted.

    When you show signs of responsiveness to the truth, then there can be progress.

    G’day

    GEM of TKI

  168. F/N: Biological evolution, onlookers, is a neatly question-begging and ambiguous term.

    First, the question of the root of Darwin’s tree of life is begged, but we can neatly set aside the problem of getting to a metabolising, self-replicating organism by making the datum line that we will not go there. But, without that root, the tree of chance plus necessity driven evolution has no basis to stand on.

    By now it is blatant that the only empirically credible explanation for the FSCO/I required for such metabolising, self-replicating function in the simplest life forms is intelligent design.

    Once that is seen to be designed, there is no reason to try to insist that at later stages, the major body plans were products of chance variation and natural selection. Common design would at once remove all the unnecessary problems with the naturalistic theory of origins, but the problem is that there is an acting prejudice: a designer of life, inferred on the FSCI in life, is seen as opening the door to an intelligent designer who may be beyond the cosmos. (Never mind that the evidence of the fine tuning of the cosmos — also pointed out above and pointedly ignored by the willfully obtuse — has blasted that door off its hinges long since.)

    Evolution in the sense of small empirically observed changes of already functioning organisms in populations, is a non-controversial fact. Indeed, Young Earth creationists accept it and see it as part of the design impressed by the Creator.

    Where the problem comes up is the unwarranted extrapolation to the claimed origin of novel body plans, dozens of times over across the history of life. As I excerpted and discussed yesterday, we are talking tens to hundreds of millions of bases of new functional, integrated information that has to work starting from the embryo. We are invited — with absolutely no direct empirical evidence, and every type and degree of counter-evidence dismissed — to take this as having happened by simple accumulation of micro changes.

    Why?

    Because of Lewontinian a priori evolutionary materialism.

    That is not science, it is closed minded ideology.

    When we see ev and the like, we see these are parallel to micro evo, and we are being invited to simply swallow the extrapolation, never mind the issue of getting to novel islands of function, rather than simple hill-climbing per a designed algorithm within an island of function.

    If the simulators could show us — notice, this is a stated empirical test — an ev [Sun 4 binary 409 k bytes, a bit bigger than but comparable in storage capacity scale to a unicellular genome] that originated by chance variation and trial and error selection, then was able to move on to optimise a functioning system by hill climbing that would be different.

    But they cannot and on solid thermodynamics analysis, they will credibly never be able to do that.

    GEM of TKI

  169. F/n 2: genome sizes for bacteria and kin:

    The size of Bacterial chromosomes ranges from 0.6 Mbp to over 10 Mbp, and the size of Archaeal chromosomes ranges from 0.5 Mbp to 5.8 Mbp . . . .

    The smallest Archaeal genome identified thus far is from Nanoarchaeum equitans, a tiny obligate symbiont with a genome size of 0.491 Mbp (491 Kbp). This organism lacks genes required for synthesis of lipids, amino acids, nucleotides, and vitamins, and hence must grow in close association with another organism which provides these nutrients.

    The smallest Bacterial genome identified thus far is from Mycoplasma genitalium, an obligate intracellular pathogen with a genome size of 0.58 Mbp (580 Kbp). M. genitalium is restricted to the intracellular niche because it lacks genes encoding enzymes required for amino acid biosynthesis and the peptidoglycan cell wall, genes encoding TCA cycle enzymes, and many other biosynthetic genes.

    In contrast to such obligate intracellular bacteria, free-living bacteria must dedicate many genes toward the biosynthesis and transport of nutrients and building blocks. The smallest free-living organisms have a genome size over 1 Mbp . . . . prokaryotes tend to have very little junk DNA (typically less than 15% of the genome) and eukaryotes have substantial amounts of junk DNA.

  170. Hi Kairos

    I was looking at the codon table and noticed something interesting.

    a– The redundancy of the codon-to-amino-acid mapping is typical of forward error correcting (FEC) methods. These methods are absolutely critical in modern digital data transfer and streaming. They are used in one-way data transfers, and that is exactly what we have in this case.

    b– There is variable strength of error-correction capability in the codon-to-amino-acid map. Some amino acids are assigned 6 codons, as if there were a need to make sure these are properly translated. It’s possible some amino acids are more important or critical than others. There is some optimization at work here.

    c– The group of amino acids that uses the most codons is polar and small-sized. There has to be a good reason why they get the largest share of codon assignments, but I don’t know it yet.

    d– If the error rate is high, there could be another layer of redundancy in this system, as several amino acids share the same property. Possibly any amino acid with the same property will do the same job in the chain that will make the protein.

    e– If the error rate is low, then this layer of redundancy by group property could be used to fine-tune the protein by modulating folding tension. This could be done by selecting a different-sized amino acid from the same property group.
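The degeneracy pattern described in points a–c can be tabulated directly from the standard genetic code. A small sketch (the 64-entry table below is the standard code, laid out in the usual T, C, A, G order; '*' marks stop codons):

```python
# Tally how many codons map to each amino acid in the standard genetic code.
# Codons are enumerated TTT, TTC, TTA, TTG, TCT, ... with base order T, C, A, G.
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W"   # first base T
         "LLLLPPPPHHQQRRRR"   # first base C
         "IIIMTTTTNNKKSSRR"   # first base A
         "VVVVAAAADDEEGGGG")  # first base G

codon_to_aa = {}
for i, b1 in enumerate(BASES):
    for j, b2 in enumerate(BASES):
        for k, b3 in enumerate(BASES):
            codon_to_aa[b1 + b2 + b3] = AMINO[16 * i + 4 * j + k]

degeneracy = {}
for aa in codon_to_aa.values():
    degeneracy[aa] = degeneracy.get(aa, 0) + 1

# Leu, Ser and Arg each get six codons; Met and Trp get only one.
for aa in sorted(degeneracy, key=degeneracy.get, reverse=True):
    print(aa, degeneracy[aa])
```

Whether this degeneracy is best described as forward error correction, as suggested above, is an interpretive question; the counts themselves are not in dispute.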

    One more thing. I wonder why I was moderated out a couple of days ago. I know I did not say anything inappropriate.

  171. PS: Of course, over the past few years, a lot of junk DNA is turning out not to be so junky after all, but to be involved in regulatory functions. Bricks by themselves do not a house make.

  172. PS 2: Wiki on genome size:

    Genome size correlates with a range of features at the cell and organism levels, including cell size, cell division rate, and, depending on the taxon, body size, metabolic rate, developmental rate, organ complexity, geographical distribution, and/or extinction risk (for recent reviews, see Bennett and Leitch 2005;[8] Gregory 2005[9]). Based on completely sequenced genome data currently (as of April 2009) available, log-transformed gene number forms a linear correlation with log-transformed genome size in bacteria, archaea, viruses, and organelles combined, whereas a nonlinear (semi-natural log) correlation holds in eukaryotes (Hou and Lin 2009 [10]). The nonlinear correlation for eukaryotes, although the claim of its existence contrasts with the previous view that no correlation exists for this group of organisms, reflects disproportionately fast-increasing noncoding DNA in increasingly large eukaryotic genomes. Although sequenced genome data are practically biased toward small genomes, which may compromise the accuracy of the empirically derived correlation, and the ultimate proof of the correlation remains to be obtained by sequencing some of the largest eukaryotic genomes, current data do not seem to rule out a correlation.

    Sounds like a body-plan complexity linked scaling of regulatory software to me.

  173. Mathgrrl at 163,

    As I thought I made clear, I’m discussing biological evolution, not abiogenesis.

    Your attempt to extricate yourself from the entanglement of your gameplaying is duly noted. Unfortunately for you, the actual text of this conversation does not allow it.

    Here is the exact wording of the question you sought to have answered when you joined this thread:

    “Can you explain, in empirical, observational evidence backed up steps, how the organised, functionally specific complexity of the flagellum, the eyes and immune systems originated per the design thesis?”

    Please note the emphasized word.

    Along with this, you did also state (repeatedly) that ID claims a designer “intervenes” in the course of evolution of life on this planet. You were flatly told that ID does not make that claim.

    However, you were entirely unimpeded by that correction, and went on making the statement over and over again.

    So on the one hand, you ask for evidence of a claim that ID doesn’t make, then on the other hand you ignore the evidence for the claim that ID does make, and between the two you can always feign being misunderstood.

    One has to wonder, why should you be taken seriously at all?

  174. Sorry, UB, not UP.

  175. I have no idea why the comment before 174 was not accepted, but it’s pointless letting 174 through without it, don’t you think?

  176. Eugen:

    This, from the always linked:

    ____________

    >> even more interesting is the observation by Hurst, Haig and Freeland, that the actual protein-forming code used by DNA is [near-] optimal. As Vogel reports (HT: Mike Gene) in the 1998 Science article “Tracking the History of the Genetic Code,” Science [281: 329]:

    . . . in 1991, evolutionary biologists Laurence Hurst of the University of Bath in England and David Haig of Harvard University showed that of all the possible codes made from the four bases and the 20 amino acids, the natural code is among the best at minimizing the effect of mutations. They found that single-base changes in a codon are likely to substitute a chemically similar amino acid and therefore make only minimal changes to the final protein.

    Now [circa 1998] Hurst’s graduate student Stephen Freeland at Cambridge University in England has taken the analysis a step farther by taking into account the kinds of mistakes that are most likely to occur. First, the bases fall into two size classes, and mutations that swap bases of similar size are more common than mutations that switch base sizes. Second, during protein synthesis the first and third members of a codon are much more likely to be misread than the second one. When those mistake frequencies are factored in, the natural code looks even better: Only one of a million randomly generated codes was more error-proof. [3] [Emphases added]

    As the pseudonymous Mike Gene then summarises, when various biosynthetic pathway restrictions [the codes seem to come from families sharing an initial letter] and better metrics of amino acid similarity are factored in, it is arguable that the code becomes essentially optimal. So, he poses the obvious logical question:

    . . . the take home message from these studies, and several others, is that nature’s code is very good at buffering against deleterious mutations. This theme nicely fits with many other findings that continue to underscore how cells have layers and layers of safeguards and proof-reading mechanisms to ensure minimal error rates. Thus, contrary to Miller’s assertion, the “universal code” is easily explained from an ID perspective – if you have designed a code that is very good at buffering against deleterious mutations, why not reuse it again and again?

    In short, not only is the DNA code a code that functions in an algorithmic context, but of the range of possible code assignments, the actual one we see seems very close to optimal against the impacts of random changes. Further, the codons themselves fall into a highly structured pattern, as “amino acids from the same biosynthetic pathway are generally assigned to codons sharing the same first base.” [Taylor and Coates 1989, cited, Freeland SJ, Knight RD, Landweber LF, Hurst LD. 2000 in "Early fixation of an optimal genetic code." Mol Biol Evol 17(4):511-8. (HT: MG.)] That is, the DNA code itself is significantly non-random in how it assigns base pairs to amino acids.

    (But also, I must note that such suggests an inference: (a) the coding assignments are not driven by the mechanical necessity of the underlying chemistry of chaining either nucleic acids or proteins, and (b) they are not a matter of random chance. An observation of (c) a structured coding pattern tied to the one-stage-removed chemistry of synthesis of the amino acids that are components to be subsequently chained to form proteins therefore strongly supports that (d) the code is an intelligent act of an orderly-minded, purposeful designer. For, of the three key causal factors, if neither chance nor necessity is credibly decisive, that lends itself to the conclusion that intentional choice (here, tied to a prior component assembly stage!) is at work. In short, intelligent design.)

    Gene tellingly concludes:

    . . . there are two very good (and obvious) reasons for a designer to have employed the same code in bacteria and eukaryotes: 1) The code is extremely good at preventing deleterious amino acid substitutions and; 2) the shared code allows for the lateral transfer of genetic material and facilitates symbiotic unions. That Miller thought ID incapable of explaining the code, and Pace thought the shared code proved the common descent of bacteria and eukaryotes, only shows how an a priori commitment to non-teleological explanations creates a large intellectual blind spot. >>
    _____________
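The error-buffering claim in the excerpt can be spot-checked for its simplest component, exact synonymy: the fraction of all single-base codon changes that leave the encoded amino acid unchanged. This sketch uses the standard code table; scoring chemically similar (rather than identical) substitutions, as Hurst and Haig actually did, would additionally require an amino-acid distance matrix and is not attempted here:

```python
# Standard genetic code, codons enumerated in T, C, A, G order.
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W"   # first base T
         "LLLLPPPPHHQQRRRR"   # first base C
         "IIIMTTTTNNKKSSRR"   # first base A
         "VVVVAAAADDEEGGGG")  # first base G

code = {b1 + b2 + b3: AMINO[16 * i + 4 * j + k]
        for i, b1 in enumerate(BASES)
        for j, b2 in enumerate(BASES)
        for k, b3 in enumerate(BASES)}

total = synonymous = 0
for codon, aa in code.items():
    for pos in range(3):                # each codon position
        for b in BASES:
            if b == codon[pos]:
                continue                # skip the identity "change"
            mutant = codon[:pos] + b + codon[pos + 1:]
            total += 1
            if code[mutant] == aa:      # same amino acid (stop->stop counts)
                synonymous += 1

print(total, synonymous, synonymous / total)   # 576 single-base changes
```

Roughly a quarter of single-base changes turn out to be exactly synonymous, mostly at third positions; the Hurst/Haig and Freeland results quoted above concern the stronger property that many of the remainder substitute chemically similar amino acids.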

    Food for thought.

    GEM of TKI

  177. kairosfocus,

    For instance, the commonly encountered metric of functionally specific bits can be very simply assessed for protein coding DNA, at 2 bits per base;

    If this is your definition of CSI, known evolutionary mechanisms are demonstrably capable of generating it in both real and simulated environments. Consider the specification of “Produces X amount of protein Y.” A simple gene duplication, even without subsequent modification of the duplicate, can increase production from less than X to greater than X. By your definition, CSI has been generated by a known, observed evolutionary mechanism with no intelligent agency involved.
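Taking the 2-bits-per-base convention quoted above at face value, the duplication scenario can be made concrete. The 12-base mini-gene here is purely hypothetical, for illustration only:

```python
# Toy illustration of the duplication scenario: if "functional bits" are
# counted at 2 bits per base of functional coding sequence, an exact gene
# duplication doubles the count without any new sequence being designed.

def functional_bits(seq):
    return 2 * len(seq)               # 2 bits per base, per the quoted metric

gene = "ATGGCTACCGGT"                 # hypothetical 12-base mini-gene
genome_before = gene
genome_after = gene + gene            # exact duplication event

print(functional_bits(genome_before))  # 24
print(functional_bits(genome_after))   # 48
```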

    Schneider’s ev uses the specification of “A nucleotide sequence that binds to exactly N sites within the genome.” Using only simplified forms of known, observed evolutionary mechanisms, ev routinely evolves genomes that meet the specification. The length of the genome required to meet this specification can be quite long, depending on the value of N. By your definition, CSI has been generated by those mechanisms. (ev is particularly interesting because it is based directly on Schneider’s PhD work with real biological organisms.)

    Ray’s Tierra routinely evolves digital organisms with a number of specifications. One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The length of the shortest parasite is at least 22 bytes. By your definition, CSI has been generated by known, observed evolutionary mechanisms with no intelligent agency required.

    The Steiner Problem solutions described at the site linked above use the specification “Computes a close approximation to the shortest connected path between a set of points.” The length of the genomes required to meet this specification depends on the number of points, but can certainly be hundreds of bits. By your definition, these GAs generate CSI via known, observed evolutionary mechanisms with no intelligent agency required.
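For readers who want to poke at this dispute themselves, here is a minimal mutation-plus-selection loop in the spirit of the programs under discussion. To be clear, this is a toy sketch, not Schneider's ev, Ray's Tierra, or a Steiner-problem GA; the hard-coded target pattern plays the role of the specification, which is itself one of the points in contention:

```python
import random

random.seed(42)
N = 64                               # genome length in bits
target = [random.randint(0, 1) for _ in range(N)]   # the "specification"

def fitness(g):
    """Number of positions matching the target pattern."""
    return sum(a == b for a, b in zip(g, target))

genome = [random.randint(0, 1) for _ in range(N)]
start = fitness(genome)

for generation in range(20000):
    # Copy with a 1% per-bit mutation rate; keep the copy if it is no worse.
    child = [b ^ (random.random() < 0.01) for b in genome]
    if fitness(child) >= fitness(genome):
        genome = child
    if fitness(genome) == N:
        break

print(start, fitness(genome))        # fitness never decreases under this rule
```

Whether the matching bits "generated" here were really front-loaded into the target and the selection rule is precisely the question the two sides of this thread are arguing.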

    By the standard you set here, CSI is by no means an indicator that an intelligent agent is involved in the creation of a particular artifact.

  178. MG:

    Why do you insist on twisting the words of those you deal with? [I now have to seriously ask: are you simply playing the troll? If your level of behaviour does not improve shortly, I will -- regretfully -- have to resort to "don't feed dah trollz."]

    Above, I pointed out that the type of bits measure we commonly encounter in dealing with ICTs is a measure of functionally specific bits.

    For instance, I just uploaded a gif of 5 kbits to my blog. That picture of the Alice programming icon is quite specific and functional, thank you — took some searching.

    The simple FSCI metric extends this — as can be seen above, and in the always linked and in the UD weak arguments correctives as you have been pointed to but have just as often ignored — by inserting a threshold, 500 – 1,000 bits. Beyond that point, we are dealing with config spaces of 2^1,000 or more possibilities. That is, 1.07*10^301, where the whole cosmos we live in of 10^80 atoms would only go through 10^150 Planck time states across its thermodynamic lifespan, where a Planck time is 10^20 times faster than the fastest nuclear interactions.

    In short, the cosmos search capacity relative to such a space is an effective zero, as is discussed here and here, the latter of which will give you a specific definition of CSI, not just the easier to work with FSCI. If the cosmos cannot search the space worth more than an effective zero, then we can be confident that the only credible source of a functionally specific config in that space will be intelligence.
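The figures in the two paragraphs above (2^1,000 being about 1.07*10^301 configurations, set against roughly 10^150 states) can be checked directly:

```python
import math

# The 1,000-bit threshold: number of distinct configurations.
configs = 2 ** 1000
print(f"{float(configs):.3e}")       # about 1.072e+301

# Fraction of that space searchable given the quoted budget of
# ~10^150 Planck-time states for the whole observable cosmos:
log10_fraction = 150 - 1000 * math.log10(2)
print(log10_fraction)                # about -151: a vanishingly small fraction
```

This only reproduces the arithmetic of the claim; whether a blind search over the whole configuration space is the right model is exactly what the two sides dispute.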

    It is blatantly obvious that neither you nor any of your fellow evo mat advocates can show us a case where FSCI beyond that threshold is produced by undirected chance plus necessity.

    GAs playing as evolution sims, like ev etc., are all designed and do hill-climbing exercises in very carefully designed sandboxes: moving around INSIDE islands of function, not getting to them.

    Why are such then triumphantly announced to the world — through a bait and switch sales tactic — as though they show how chance plus necessity can create functional info and support the claims of macroevolution on chance variation and natural selection?

    BECAUSE YOU HAVE NO REAL EVIDENCE.

    That is the take-home message, MG.

    Don’t you feel ashamed of associating yourself with such shoddy salesman tactics to promote what plainly cannot stand on its merits?

    GEM of TKI

  179. KF,

    MathGrrl is asking if an evolutionary process can move you around on an island of functionality, and if moving up to a hill on that island by means of an evolutionary process means that the organism generates CSI.

    You seem to accept that, but instead of simply saying – yes, moving up a hill of functionality involves an increase in (the generation of) CSI and an evolutionary mechanism can achieve this – you try to switch the question to one about finding an island of functionality, and lace your replies with ad hominems.

    MathGrrl is not addressing the question of finding islands of functionality, she is addressing the question of travel ON islands of functionality. Please can you address MathGrrl’s points on their merits!

    I will spell out the question being debated again. Can an evolutionary mechanism generate ANY NEW CSI in a population of self replicating organisms or does the creation of ANY NEW CSI always require the intervention of a designer – This directly addresses the question of whether CSI is always and reliably an indicator of design.

    UB:

    Along with this, you did also state (repeatedly) that ID claims a designer “intervenes” in the course of evolution of life on this planet. You were flatly told that ID does not make that claim.

    Plenty of ID proponents believe that gaps in the fossil record are best explained by intervention – e.g. the Cambrian Explosion. Those that promote and pursue ID as a theory, and who believe this, are therefore making that very claim.

  180. kairosfocus,

    Why do you insist on twisting the words of those you deal with?

    Why do you insist on replying rudely rather than addressing the points I am making? Is the civility requirement here not applicable to ID proponents?

    I am doing you the courtesy of taking your claims seriously enough to spend time testing them. In order to do that, I need to know the mathematically rigorous definition of CSI. You replied that CSI can be measured simply as the number of bits in the string exhibiting the specified function.

    Based on your definition, I demonstrated how both biological evolution and simulations of biological evolution can generate CSI.

    If you disagree with my conclusions, please demonstrate how you would calculate the CSI from the four scenarios I described (gene duplication leading to increased protein production, ev evolving binding sites, Tierra evolving parasites, and GAs evolving solutions to the Steiner Problem). That will provide more details on how to calculate the metric that you claim is indicative of intelligent agency.

    If you don’t wish to assist me with this effort, simply say so. Politely, if you can.

  181. DrBot,

    I will spell out the question being debated again. Can an evolutionary mechanism generate ANY NEW CSI in a population of self replicating organisms or does the creation of ANY NEW CSI always require the intervention of a designer – This directly addresses the question of whether CSI is always and reliably an indicator of design.

    Thank you, that is exactly the question I would like some assistance in answering.

  182. MG:

    Please look at your behaviour above, when you have repeatedly been informed on the relevant definitions.

    You are now indulging in projection.

    As to whether an explicit or implicit map of an island of function, joined to a hill-climbing algorithm that runs you up the hill, is a creation of new information that did not previously exist, I think this summary answers the point. And this has also been said over and over by various persons.

    Once you define an objective function and an optimising algorithm, you have loaded in a lot of information. You are simply using a random number generator to do your hill climbing.

    Again, where you really need to go is to show us an ev or a Tierra or an Avida that writes itself, algorithms, data structures and codes, out of lucky noise; i.e. gets to the island of function by chance plus necessity without intelligent guidance.

    Moving around in a pre-set-up environment on a designed algorithm simply shows the power of design, and how designs can incorporate a random search element.

    We will give you the PC instead of asking you to get it out of a tornado through the Dell plant at Round Rock.

    GEM of TKI

    PS: And Dr Bot, that is your answer too, an answer you have been refusing to face.

  183. Kairos

    Thanks for the link
    http://www.angelfire.com/pro/k.....#dna_optim

    I’m using the wrong terminology because I am only trying to explain it mechanistically.

    I’m wondering if DrBot or Mathgrrl see forward error correction in the process described in #170?

  184. PS: In short, we are asking about body-plan-level evolution, not OOL. When you answer at this level, we can go on to getting the PC out of a tornado at Round Rock, comparable to Darwin’s warm little pond.

  185. MathGrrl,

    I wish I could come up with a mathematically rigorous definition of CSI. I thought that Dembski attempted that at one time. I’ll admit that I haven’t read any of his books.

    But I would also ask if anyone has ever come up with a mathematically rigorous definition of natural selection. It just seems like a heuristic to me.

  186. Bot,

    UB:
    Along with this, you did also state (repeatedly) that ID claims a designer “intervenes” in the course of evolution of life on this planet. You were flatly told that ID does not make that claim.

    Plenty of ID proponents believe that gaps in the fossil record are best explained by intervention – e.g. the Cambrian Explosion.

    There are those in the ID camp who think the Cambrian could have been the result of intervention. There are others that think the Cambrian is the result of front-loading. Others very likely think something else entirely. So what? None of these arguments are central to the thesis that design can be detected in nature.

    Those that promote and pursue ID as a theory, and who believe this, are therefore making that very claim.

    What shall we say of a materialist who writes in his or her next profound best-seller that biological observation x provides evidence that no designer God is necessary to explain life? Shall we re-write the definition of the Theory of Evolution? Will the Theory itself need to be reformulated to say “change in allele frequencies over time, plus observation x” ?

    Or, is the theory one thing, and the speculations which extend from that theory another thing?

    - – - – - – -

    In any case, my comments to Mathgrrl stand unchanged. She wants to dismiss and repeat the mistakes in her assumptions, she wants to ignore being corrected, she wants to ask something from ID that she cannot answer herself, and more than anything, she wants to make conclusions about the output of a system while ignoring that it is the system itself that must be explained.

    What do we say about someone who (for the sake of rhetoric) pursues an almost trivial question, while demonstrating incorrect assumptions, and blatantly ignoring the critical issues at hand?

  187. Dr Bot:

    I will spell out the question being debated again. Can an evolutionary mechanism generate ANY NEW CSI in a population of self replicating organisms or does the creation of ANY NEW CSI always require the intervention of a designer – This directly addresses the question of whether CSI is always and reliably an indicator of design

    1- You are equivocating by using “evolutionary mechanism” – ID is not anti-evolution. Replace evolutionary with blind watchmaker.

    2- No intervention required. Did Dawkins intervene with his “weasel” program once it started? No.

    So there is the problem, right there. You obviously don’t understand ID and think the way to learn about it is by visiting blogs and asking random questions.

    And the biggest concern is they don’t even need to consider ID. If ID didn’t exist they still wouldn’t have any positive evidence for their position.

    But I guess they like trolling…

  188. Mathgrrl:

    I’m not a biologist, but if you’re looking for a mathematically rigorous definition of CSI, may I suggest that you check out the scientific articles listed at this Website of mine:

    http://www.angelfire.com/linux/vjtorley/ID.html

    If you’d prefer a paper by a scientist who isn’t in any way sympathetic to ID, try this one:

    http://www.pnas.org/content/104/suppl.1/8574.full
    (Hazen, R.M.; Griffin, P.L.; Carothers, J.M.; Szostak, J.W. 2007, Functional information and the emergence of biocomplexity, Proc Natl Acad Sci U S A, 104 Suppl 1, 8574-81.)

    The mathematical articles by Professor Dembski should give you a clear understanding of the logic behind the design inference.

    You asked for details of “who, what, when, where, and how.” Some of these requests are reasonable; others are not. If scientists found an advanced technical artifact buried under the Antarctic ice, their first conclusion would be THAT it was designed. The “who, what, when, where, and how” questions would take a lot longer to answer. It’s hardly surprising that we don’t have good answers to these questions yet. Nevertheless I shall have a go at answering them.

    Who: Some Intelligent Being outside this cosmos. (I’d be happy to call this Being God, myself.)

    Evidence: the fact that the cosmos itself is fine-tuned for life. For an up-to-date presentation of the fine-tuning argument, see:

    The Teleological Argument: An Exploration of the Fine-Tuning of the Universe by Robin Collins. In The Blackwell Companion to Natural Theology. Edited by William Lane Craig and J. P. Moreland. 2009. Blackwell Publishing Ltd. ISBN: 978-1-405-17657-6.

    For background reading on the fine-tuning argument, see:

    Universe or Multiverse? A Theistic Perspective by Robin Collins. Well worth reading. See especially Part VI on the beauty of the laws of Nature.

    God and the Laws of Nature by Robin Collins. (Scroll down and click on the link.)

    What: See the inside flap of The Edge of Evolution, by Professor Michael Behe, where he lists the various features of the universe which appear to have been fine-tuned, in decreasing order of generality. I’ve highlighted the biological ones. Here goes:

    Laws of nature
    Physical constants
    Ratios of fundamental constants
    Amount of matter in the universe
    Speed of expansion in the universe
    Properties of elements such as carbon
    Properties of chemicals such as water
    Location of solar system in the galaxy
    Location of planet in the solar system
    Origin and properties of Earth/Moon
    Properties of biochemicals such as DNA
    Origin of life
    Cells
    Genetic code
    Multiprotein complexes (see here)
    Molecular machines
    Biological kingdoms
    Developmental genetic programs
    Integrated protein networks
    Phyla
    Cell types
    Classes

    Here are three more that Behe thinks the Intelligent Being might or might not have designed:

    Orders
    Families
    Genera

    Regarding cell types: in his book, The Edge of Evolution, Professor Michael Behe points out that classes of vertebrates differ in the number of distinct cell types they have: “Although amphibians have about 150 cell types and birds about 200, mammals have about 250″ (2008, Free Press, paperback edition, p. 199). Each cell type is quite distinct from the other types in its group. For instance, the cells of the mammary, lacrimal and ceruminous glands share the property of being specialized for secretion through ducts (exocrine secretion), but the substances they secrete are very different: milk, tears and ear wax respectively.

    Professor Behe argues that the gene regulatory network that is required to specify each cell type is irreducibly complex. There is an old Chinese proverb that a picture is worth a thousand words, and it’s certainly true in this case. Readers who want to see what a gene regulatory network looks like for a tissue type called endomesoderm, in simple sea urchins, can click here. It’s well worth having a look at. The resemblance to a logic circuit is striking, and the impression of design overwhelming.

    Behe estimates that the number of protein factors involved in the gene regulatory network for each cell type is about ten, and argues that it appears to be irreducibly complex. On the basis of scientific observations of a very large number of mutations in Nature (especially in the parasite Plasmodium falciparum, the HIV virus and the bacterium Escherichia coli), Behe calculates that during the history of life on earth, Darwinian evolution would be unable to generate a system with more than three inter-dependent components. He concludes that the cell types that characterize a class of organisms are very likely to be designed (ibid., pp. 198-199).

    When: Front-loading supporters argue that it could have been as far back as the Big Bang. While this is true for the laws of Nature, I do not think this is possible for the emergence of life on Earth. I used to be a front-loader myself, but after reading physicist Robert Sheldon’s thought-provoking article, The Front-Loading Fiction (July 1, 2009), I now believe that the Intelligent Designer has manipulated DNA and proteins on millions of occasions in the Earth’s history. (I say millions, because there are millions of kinds of proteins in Nature, and the other biological tasks listed above seem to have required far fewer acts of manipulation by the Intelligent Designer.) And if each and every family of organisms was designed by the Intelligent Being, as Behe seems to believe, then the last act of manipulation could have been no more than 10 million years ago – say, back in the Miocene. (I don’t know of any families that have appeared since then.)

    “Millions of manipulations” might sound like a lot of work for a Designer, but I would argue that:

    (i) front-loading would have been even more work, as it would have involved designing everything in Nature to a ridiculously high level of precision to ensure that some billiard ball-style collisions between atoms shortly after the Big Bang would lead to the emergence of life on Earth ten billion years later. Nature isn’t that precise: the Planck length (1.616 x 10^-35 meters) sets a floor on physically meaningful precision.

    (ii) the Earth is billions of years old, so millions of manipulations still only works out at one every 1,000 or so years. Hardly difficult for a Deity.

    Where: Depends on which act of manipulation we’re talking about. Can any evolutionary biologist tell me where the first bird or mammal originated?

    How: By manipulating genes. Sorry I can’t be much more specific than that, but I’m not a biologist, and I would hardly expect to know the M.O. of aliens whose minds were millions of years ahead of mine – much less the modus operandi of a cosmic Designer.

    Hope that answers some of your questions.

  189. Dr Torley,

    Neat nodes, arcs and interfaces diagram . . .

    How long do you think it would take for the network shown to run past 1,000 bits of information capacity [= basic yes/no questions] when restated as a net list?

    (In short, FSCO/I, so designed. Some aspects are probably going to be IC too, which would be a related inference to design. And of course this is about organism feasibility, so again, an issue of getting to islands of function. )

    The cell’s metabolic reactions network here [warning, fat chart] — Fig I.2 here (as repeatedly linked) — is also a similarly impressive illustration of a complex, functionally specific network that is well past 1,000 bits of specifying information.

    I see your who what where when how stuff, and say, that is one way of reading it.

    There are other ways, doubtless, and as I pointed out to MG long since, any sufficiently sophisticated nanotech lab several generations beyond Venter could have done it.

    But of course, once we lift our eyes to the heavens and understand the complex fine tuned functional organisation to make a C-chemistry cell based life supporting cosmos, we see that we already have a serious reason to infer to an extra cosmic necessary being [thus immaterial as matter is radically contingent a la E = m*c^2 . . . as already pointed out], with power, knowledge skill and intent to design and implement such a cosmos.

    With such in hand, it is very reasonable indeed to infer to a design of original life, and of subsequent body plans up to and including our own.

    It is ideology, not science that stands in the way of general acceptance of this sort of frame of thought.

    However, in no wise does a who-what-where-when-why-how model detract from the basic key design inference, from signs of design to the credible conclusion that something bearing those signs was designed. Different who/what/where/when models could fit one and the same inference.

    That tweredun is logically prior to whodunit, when or how. (Someone — above? — said we don’t go looking for murderers if we have reason to accept that a death was natural.)

    GEM of TKI

  190. PS: in case it is needed again, here is a summary of the cosmological inference on fine tuning, with a particular focus on the implications of H2O and the rule of C in light of the C/O resonance as underscored by Hoyle.

  191. Hi kairosfocus,

    Thank you for your posts. I was bowled over by your metabolic diagram. It certainly is a “fat chart.” I’m afraid I can’t supply you with an estimate of how long my network would take to reach the 1000-bit threshold.

    You hit the nail on the head when you wrote: “That tweredun is logically prior to whodunit, when or how.” Answers to what/when/where/how questions are bound to be provisional, and a wide range of answers is certainly possible, in the light of what we know.

    Having said that, it would be a feather in the cap of the ID movement if it could make retrospective predictions about the order (and perhaps timing) of appearance of certain organisms in the fossil record, based purely on considerations relating to FCSI. Just a thought.

  192. Dr Torley:

    1,000 bits is of course 125 bytes or 143 or so ASCII characters.

    It is manifest that a net list of the embryological development regulatory network charts, or the overall metabolism of life chart — I understand a wall size version is given pride of place in a front office in many Biochem Depts at unis — will very rapidly run past the 1,000 bit threshold.

    The sort of complex, integrated functional systems we are looking at reek of design, save to the utterly indoctrinated. That is why my fig I.2 draws a comparison to a petroleum refinery. (cf this chart here)

    G

  193. KF.

    Excellent, we have made some progress. I hope now we can agree the following general principles:

    1: A system with CSI can acquire new CSI under certain conditions. Those conditions include:

    A: Intelligent intervention
    B: Descent with modification and selection

    2: The extent to which CSI can increase under B is not clear and may be severely restricted.

    Now we can make an observation about another way that CSI can increase (and also decrease) and add it as cause C above: Function is defined by context, therefore a change in context can change function.

    Take the text string:
    jsiengoga0
    It is functional if it is the password for some on-line service; the functionality of the string is defined by that context – if the password is changed then that text string becomes non-functional.
    One single bit change to the following:
    jsiengoga1
    takes us from 80 bits of functional information to zero. BUT, if the passcode (the thing that checks for a correct password) flips a bit in the same character, then that one-bit change takes us from zero function to full function – but the text string has not changed, only the context within which we measure functionality.
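The password example can be put in runnable form. The sketch below is a toy model (the `functional_bits` helper is invented for illustration): functional information is measured relative to a context, here a password checker.

```python
# Toy model (names invented for illustration): functional information
# is defined relative to a context, here a password checker.
def functional_bits(candidate: str, stored_password: str) -> int:
    """All-or-nothing: 8 bits per character if it works in context, else 0."""
    return 8 * len(candidate) if candidate == stored_password else 0

assert functional_bits("jsiengoga0", "jsiengoga0") == 80  # full function
assert functional_bits("jsiengoga1", "jsiengoga0") == 0   # one character off

# Change the context, not the string: "jsiengoga1" jumps from zero
# function to full function although the string itself never changed.
assert functional_bits("jsiengoga1", "jsiengoga1") == 80
```

The last assertion is the point of the comment: the measured functional information of an unchanged string moves with the context that defines function.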

    Now we can conceive of an organism in an environment; that organism has never been exposed to a particular chemical with toxic properties, but it has, as a byproduct of some other metabolic feature, an ability to resist the chemical’s toxic effects. This (I assume) is not functional as a feature – it doesn’t do anything, and removing it does not affect the creature.

    But then its environment (the context) changes: erosion caused by a river unlocks a deposit of this toxic chemical, which begins seeping into the river. Now this previously non-functional trait gives the organism an advantage – function has been acquired but the creature did not change.

    The question here is whether the organism has increased its CSI (or perhaps better to say FCSI) without changing itself, simply because its environment – the thing that helps define context – changed instead.

    If a scenario like this can increase FCSI then we ought to add category C to the principles above:

    A: Intelligent intervention.
    B: Descent with modification and selection.
    C: A change in the context that defines function.

    And from this recognize that under certain conditions FCSI can increase due to pure chance events, and further (looking back at the password example) single bit changes of a configuration do not always equate to single bit changes in the measurable FCSI of that configuration.

    And to pre-empt your response – this is not a problem for the ID hypothesis; it is all about changes in FCSI in systems that already have FCSI. It is important to be rigorous when critically analyzing these things. If we can agree on what I have outlined above for already-living systems then we have a good basis to move the discussion on to the origin of FCSI.

  194. Dr Bot:

    It seems we are going in circles, right from your outset.

    How many ways do we need to say:

    THERE IS NO FREE INFORMATIONAL LUNCH?

    (Maybe, I could invite you to explain how a network of algorithmic processes that is reflexive and exhibits complex integrated specific function, as we may see here in Figs G.8(a) – (c) and G.9, assembles itself out of lucky noise filtered by nothing but trial and error, without intelligent direction? Perhaps that will help you see what I am getting at when I speak of the centrality of the problem of getting to the shores of an island of function.)

    I repeat: the GAs all set up the comparable algorithmic flowcharts [program execution flow and signalling networks . . . ], putting us on deeply isolated islands of function.

    Through intelligent design.

    Within that island of function, there is an implicit or explicit contour map, or an equivalent function that incrementally generates such a map.

    Then, they start from some point or other low on the map, and feed in a carefully measured quantum of randomness. (Walk a few steps in any direction and see if that puts you up or down slope. How do you know you are on a slope? How do you deal with real life where minor mutations accumulate to embrittle the genome and the few that may move you uphill are battling an overwhelming slow deterioration of the genome pointing to embrittlement and extinction?)

    A subroutine picks the best performing results — the ones that are higher [already, we are assuming the island rises from a sea level to a central hill or at least a cluster of such hills].

    Blend in various techniques to move uphill, and repeat.

    But, all along we have the preloaded contour map, or the function that gives us the contour map incrementally on demand.

    All we have added is a technique for moving around on the map and picking up where the map climbs.

    (And this does not materially change if you insert a mechanism for adjusting the map midstream.)

    All of this is designed and purposeful.

    The controlled injection of random variation could be replaced by running a specified ring on a search pattern grid and picking up the same warmer/colder oracular signals, and the result would not be materially different as the driving force in the result is the map, not the means to move about on it. And, underneath, the fact that you have already placed the entity on the island of function.

    So, no, you are not creating new information, you are only making it explicit in a way that uses some randomness and creates the misleading impression that chance variation is generating functional information.

    Again, when you have an ev or avida or the like that assembles itself out of lucky noise and sets up its mapping function the same way, then we can talk.

    Otherwise, you would simply be showing what is not in dispute, functional entities created by intelligent designers can be designed to adapt within limits to changes in their environment, or to even seek desirable optima.

    Modern Young Earth Creationists would accept this.

    That is how far removed the issue is from what is really at stake.

    So, first, let us un-beg a few questions . . .

    [And after you show us an ev or avida that assembles itself out of digital noise filtered for performance, then we can talk about the real problem: assembling the PC for it to run on by passing a tornado through the Dell plant at Round Rock, TX.]

    GEM of TKI

  195. PS: Recall, too, the FSCI threshold limit, beyond about 500 – 1,000 bits. As you will recall from discussions on the infinite monkeys theorem, a space of about 10^50 elements is empirically shown to be searchable. One of 10^300 or more is not. (In short, you are strawmannising FSCI above, too.)

  196. KF:

    It seems we are going in circles, right from your outset.

    Not at all. I am going in a straight line, slowly and methodically. Everyone else apart from MG keeps shooting off at tangents. This is most frustrating, I wish you would stay on topic and address the issues on their merits.

    What I have been trying to do here is take a systematic and critical look at the claims being made about FCSI so that we come to a detailed understanding of what can and can’t generate FCSI, and under what conditions. I am being scientific. I make no apologies for insisting on rigour, I would demand the same from ANYONE discussing a scientific theory.

    In order to get to a discussion about the origins of FCSI we need to take a thorough look at whether and how things that already have FCSI can increase or decrease it, what generates new FCSI, and under what constraints. WHEN we have mapped out this space – when we have tested the ideas by applying them to things we know – we have a solid foundation to build on and we can move on to the question of the origins of FCSI.

    You keep talking about tornados in factories and demand that I explain how a computer could be made under those circumstances – let me spell out for you AGAIN things I have said plenty of times before:

    I do not believe natural OOL (origin of life) theories can account for the origin of life. I believe an intelligence is required.

    Now, can you go back and read my comment at 193 and give me your opinion on the conditions and limitations under which FCSI can increase in a system that already has FCSI?

    So, no, you are not creating new information, you are only making it explicit in a way that uses some randomness and creates the misleading impression that chance variation is generating functional information.

    I’m not sure this makes sense – think about it – You are saying that moving from the shore of an island of functionality to a hill (by definition an increase in functional specified complexity) does not involve any additional information because there exists an island of functionality.

    The island of functionality is just a space of possible configurations that can be reached by small changes to any specific configuration on the shore of that island. If we explicitly design a system that is on that shore, then adjust the design so it sits on the top of a hill, then by your definition we have not added any new information. This actually extends to the whole search space – we are talking about a space of all possible configurations of matter. It consists of seas of non-function populated by islands of functionality. If, as you imply, finding and moving between locations in this search space does not entail the creation of information (because the search space exists), then it is impossible for us as intelligent agents to create any new information when we design something, because by definition new information would lie outside the space of possible configurations – and that places it outside the space of configurations possible in the entire universe.

    What we are actually talking about is a space of possible configurations, most of which don’t exist. When a designer, or a natural process, changes a system in a way that moves it up a hill of functionality to a part of the space that has so far not been explored, then I would say that information has been added – something that didn’t exist (but could in theory) now exists.

    Within that island of function, there is an implicit or explicit contour map, or an equivalent function that incrementally generates such a map.

    We are ultimately talking about biology. This island of function is a space of possible self-replicators traversable in small steps. What defines the shape of the island are the physical properties of the creature achievable in small steps and the environment it exists in. Changes to the creature (by accident or design) alter its ability to function. Changes to the environment (also by accident or design) also alter its ability to function.

    But, all along we have the preloaded contour map, or the function that gives us the contour map incrementally on demand.

    All we have added is a technique for moving around on the map and picking up where the map climbs.

    (And this does not materially change if you insert a mechanism for adjusting the map midstream.)

    The world exists, yes. We can observe that it changes due to the actions of intelligent agents (us) and natural forces. Some natural processes that exist in our world are also mechanisms that allow for movement across functional islands.

    If you are arguing that some natural processes – “a technique for moving around on the map and picking up where the map climbs” – are the product of design because the universe was designed, then I would agree, because I believe the universe was designed. But when it comes to talking about whether natural processes or intelligence can generate FCSI, you are basically saying that any natural process is the result of design – so by implication anything produced by natural processes is designed. This renders the debate meaningless, because ultimately there are no natural processes – everything is ultimately design, even erosion, because God created water, and any OOL scenario where life emerges without explicit design is actually design, because the universe was designed.

    This is why I am insisting on rigour – let’s understand what can be done within the laws of the designed universe (natural forces), and from that what can’t be done and would therefore require intelligence.

    Back to the question again: To what degree, and in what circumstances can natural forces increase the FCSI of something that already has FCSI?

    Hill climbing (evolution) – YES
    Functional context shifting – YES

    FCSI CAN increase due to natural forces in something with pre-existing FCSI over a threshold. If we can agree on that then we can move on to the second question – can natural forces generate any FCSI if no FCSI already exists?

  197. KF I would also like your thoughts on my other observation above:

    single bit changes of a configuration do not always equate to single bit changes in the measurable FCSI of that configuration.

    Flipping a bit in a computer memory can render a piece of software non-functional – the software may have 1Mb of functional information but flip one bit and it has zero bits of functional information – is this correct or have I misunderstood?

  198. Dr Bot:

    Pardon, but the questions ARE being begged.

    Once we have a known, causally sufficient source for functionally specific, complex information, and analytical reasons — backed up by the infinite monkeys tests — to doubt that the other major source of high contingency (i.e. chance) is causally sufficient, we have a good reason to see that such FSCI is a good sign of intelligent cause.

    When it comes to GAs and the like, I repeat, I am sorry, but the question is being begged.

    Again, you have a complex, functionally specific system, and you incorporate in it a sandbox level (comparatively speaking) random element that you use to walk around and hill-climb. Problem is, you have an intelligently designed sub function that is mapping that hill.

    [Here is a related question: does the Mandelbrot set plot contain more functional information than is in the program that specifies it? If you were to use a random walk to pick your points to test, would that change the amount of information in the set plot? After all, the interesting part of the set is really a function map.]
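As a concrete handle on the bracketed Mandelbrot question – the whole plot is generated by a test far smaller than the plot itself – here is a minimal escape-time membership sketch (a standard illustration, not taken from the thread):

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Escape-time test: c is treated as in the set if z = z*z + c
    stays bounded (|z| <= 2) for max_iter iterations."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

assert in_mandelbrot(0j)          # the origin never escapes
assert not in_mandelbrot(1 + 0j)  # 0, 1, 2, 5, ... escapes quickly
```

Every point of the plot is made explicit by this short function plus the coordinates queried, which is one way of reading the question about whether the plot contains more information than the program that specifies it.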

    You are not creating new function, you are playing with parameters within existing function, seeking a peak. And obviously, since Newton and so on, we have been using algorithms that seek peaks, and can incorporate random walks as part of the algorithms.

    Next, you are sharply constraining the degree of variability – you are operating within a sub-space that is in a hot or target zone. You are not addressing a relevant config space gamut: you are not dealing with tens of MB of variable information, or hundreds of kilobits.

    So, we are looking at a strawman, comparable at best to specialisation and adaptation [say, how a Tomcod becomes adapted to Dioxin in its river but in so adapting loses a certain degree of vigour so that except in that river, the ordinary variety prevails], not to origin of novel body plans. And certainly not the algorithms and control mechanisms, as well as system organisation to get to the relevant body plans.

    You will notice that somewhere above, I pointed out that ev was at 409 kB. This is comparable to a body plan origin challenge. So, again, let us see a case of an ev writing itself out of noise filtered by functionality testing then we can talk seriously.

    GEM of TKI

  199. F/N: un-begging the question is simple to do, as with the second law of thermodynamics and perpetuum mobiles of the second kind: empirically set up an infinite monkeys test, in silico, in vivo or in vitro, and let’s see how it goes. Remember, the issue is to get to at least 1,000 bits of functional information by chance and mechanical necessity without intelligent direction.

    Here is the wiki article on the results to date:

    The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation.

    One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[19]

    A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

    RUMOUR. Open your ears; 9r”5j5&?OWTY Z0d…

    Due to processing power limitations, the program uses a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator “detects a match” (that is, the RNG generates a certain value or a value within a certain range), the simulator simulates the match by generating matched text . . .

    In short, a space of order 10^50 is searchable, but we are talking about spaces of order 10^300, in a cosmos that could have perhaps 10^150 states.
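The arithmetic behind the 10^50 versus 10^300 comparison is easy to check directly. The sketch below assumes a 27-symbol alphabet (letters plus space), a simplification of the actual monkey experiments quoted above:

```python
import math

ALPHABET = 27  # simplifying assumption: 26 letters plus a space

def space_size_log10(chars: int) -> float:
    """log10 of the number of equally likely strings of this length."""
    return chars * math.log10(ALPHABET)

# 19 matched characters (the Dan Oliver result above): roughly a 10^27
# space, inside the ~10^50 range described as empirically searchable.
assert round(space_size_log10(19)) == 27

# 1,000 bits of binary configuration space: 2^1000 ~ 10^301, far beyond
# the ~10^150 states mentioned for the cosmos.
assert round(1000 * math.log10(2)) == 301
```

The exponents, not the constants, carry the argument: 19 characters sit twenty-plus orders of magnitude below the quoted searchability ceiling, while 1,000 bits sit a hundred-plus orders above it.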

  200. F/N 2: The other problem is the migration from one general function to the next by small steps, maintaining function all the way.

    For three letter words, this is easy enough:

    man –> can –> tan –> tam –> tat –> cat –> mat –> bat –> bit –> bot . . .

    (Such a simple space is indeed spannable by a branching network of stepping stones.)
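The stepping-stone chain above is exactly a word-ladder search. A minimal breadth-first sketch over just these ten words (the `neighbours` and `ladder` helpers are invented for illustration):

```python
from collections import deque

# Toy dictionary: just the words from the chain above (an assumption).
WORDS = {"man", "can", "tan", "tam", "tat", "cat", "mat", "bat", "bit", "bot"}

def neighbours(word):
    """Words reachable by changing exactly one letter."""
    return {w for w in WORDS
            if len(w) == len(word) and sum(a != b for a, b in zip(w, word)) == 1}

def ladder(start, goal):
    """Breadth-first search along single-letter stepping stones."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no chain of stepping stones exists

path = ladder("man", "bot")
assert path is not None and path[0] == "man" and path[-1] == "bot"
```

For three-letter words the search succeeds easily; the point of the surrounding paragraphs is that the same construction is claimed to fail as string length, and hence the configuration space, grows.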

    But, when we move up to the level of, say, this post and ask for an incremental change that keeps function all the way and then yields a new text, we begin to look at impossibilities. Texts of sufficient length naturally come in islands of function. (The same holds for computer programs.)

    Maybe we can make a duplicate, then let the duplicate tag along beside the functional copy and vary at will. But then we are looking at needing to cross the functionality threshold in an increasingly large config space, and at the need for integrated function.

    Another empirical search space barrier emerges.

    And so on.

    Islands of function are credibly the natural situation, and pose a serious challenge.

  201. kairosfocus,

    As to whether an explicit or implicit map of an island of function, joined to a hill-climbing algorithm that runs you up the hill, is a creation of new information that did not previously exist, I think this summary answers the point. And this has also been said over and over by various persons.

    Nothing you have said answers the very simple question: Can CSI, by your yet to be detailed definition, be generated by known evolutionary mechanisms, even in principle?

    I tried to use your definition from 167, but you refused to explain why my conclusions based on that definition were incorrect. I ask again, if you disagree with my conclusions, please demonstrate how you would calculate the CSI from the four scenarios I described (gene duplication leading to increased protein production, ev evolving binding sites, Tierra evolving parasites, and GAs evolving solutions to the Steiner Problem). That will provide more details on how to calculate the metric that you claim is indicative of intelligent agency.

  202. Collin,

    I wish I could come up with a mathematically rigorous definition of CSI. I thought that Dembski attempted that at one time. I’ll admit that I haven’t read any of his books.

    I have read Dembski’s writings on CSI and, to my knowledge, he has never shown how to calculate it for real biological artifacts taking into consideration observed evolutionary mechanisms.

    But I would also ask if anyone has ever come up with a mathematically rigorous definition of natural selection. It just seems like a heuristic to me.

    Mathematical models are used extensively in various subdisciplines of biology. I recommend looking up “population genetics” for some very interesting examples.

  203. vjtorley,

    Thank you for your courteous response.

    I’m not a biologist, but if you’re looking for a mathematically rigorous definition of CSI, may I suggest that you check out the scientific articles listed at this Website of mine:

    http://www.angelfire.com/linux/vjtorley/ID.html

    I’ve bookmarked your page since it contains a good selection of ID papers. Unfortunately, I have read those that deal with CSI and none of them provide a mathematically rigorous definition that I could use to model and test some of the claims being made here.

    I also very much appreciate your attempt to answer the who, what, when, where, and how questions. Again, unfortunately, I don’t see any empirical observations that objectively indicate that intelligent agency is required for biological evolution. Behe’s examples are the closest, but as has been pointed out by several reviewers, his concept of irreducible complexity fails to take into consideration known evolutionary mechanisms. Evolution can proceed by adding components, removing components, or modifying components. Behe ignores the latter two processes, which is why exaptation manages to construct “irreducibly complex” structures.

    Typically a scientific hypothesis is based on some observations that are not adequately explained by the prevailing theories. In the case of ID, my understanding prior to actively participating here at UD was that CSI was observed in real biological systems. It turns out that not only can no ID proponent show that to be the case, there isn’t even a rigorous mathematical definition of CSI or any of its variants that would allow such a measurement to be made.

    I am genuinely interested in testing the claim that CSI beyond a certain level is an indicator of intelligent agency. Without that rigorous definition, I can’t do so. Further, without that definition no one can make any claims about what CSI does or does not indicate.

  204. kairosfocus,

    How many ways do we need to say:

    THERE IS NO FREE INFORMATIONAL LUNCH?

    You keep saying that, but until you define your terms it is impossible to test your claims.

    I repeat: the GAs all set up the comparable algorithmic flowcharts [program execution flow and signalling networks . . . ], putting us on deeply isolated islands of function.

    You are still missing the distinction between the simulator itself and the model being simulated. Are you claiming that it is impossible, even in principle, to test evolutionary mechanisms in digital simulations? By the same logic, are you arguing that the weather is a product of intelligent agency since meteorologists make extensive use of intelligently designed simulations?

    That’s not a facetious question. I would genuinely like to know if you think that any testing of models is inherently impossible.

  205. MathGrrl:

    I have read Dembski’s writings on CSI and, to my knowledge, he has never shown how to calculate it for real biological artifacts taking into consideration observed evolutionary mechanisms.

    In NFL (No Free Lunch), Dembski performs a calculation for a biological system. Are you familiar with that calculation?

  206. Here’s the correct post:

    MathGrrl:

    I have read Dembski’s writings on CSI and, to my knowledge, he has never shown how to calculate it for real biological artifacts taking into consideration observed evolutionary mechanisms.

    In NFL (No Free Lunch), Dembski performs a calculation for a biological system. Are you familiar with that calculation?

  207. PaV,

    Yes, I am familiar with No Free Lunch. The explanation of CSI there is not sufficiently mathematically rigorous to test the claims being made here. Interestingly, Tom Schneider of ev fame commented on the same problem, but he attempted to calculate CSI based on Dembski’s explanation. His result is here:
    http://www-lmmb.ncifcrf.gov/~t.....exity.html
    I suspect that most ID proponents will not agree with his conclusions, hence my request here for a more rigorous formulation.

  208. What a joke:

    MathGrrl:
    Are you claiming that it is impossible, even in principle, to test evolutionary mechanisms in digital simulations?

    It is only possible to correctly simulate things you understand.

    By the same logic, are you arguing that the weather is a product of intelligent agency since meteorologists make extensive use of intelligently designed simulations?

    Only because we have a pretty good understanding of weather patterns.

    Yes, I am familiar with No Free Lunch. The explanation of CSI there is not sufficiently mathematically rigorous to test the claims being made here.

    Strange – CSI has more mathematical rigor than anything your position has to offer. People in glass houses, and all.

    Interestingly, Tom Schneider’s ev has been shown to be a targeted search, and he doesn’t like that at all.

  209. MathGrrl:

    Here’s a quote from the site you linked:

    Although there is no information about how I got these, since both have a pattern and both have nice significant sequence logos, we have to conclude that both have “Specified Complexity”. According to Dembski, we must conclude that they both were generated by an “intelligent designer”.

    This equivocal use of “pattern” seems to be the typical mistake your side makes. A simple specification—a simple string of letters—of amino acids is, in terms of Dembski’s formulation, meaningless. For a “pattern” to truly exist it must possess some kind of meaning or value.

    E.g., roll a die a thousand times: per Shannon information, and per Dembski, the result would be highly “complex”; but it would have NO “specificity”. In the example your friend is using, if you compare the two “patterns”, they don’t match at all. But there is also this other huge difference: the one on the right is a REAL amino acid sequence corresponding to a functioning protein, whereas the one on the left is simply some kind of sequence (a monkey could have typed it). This is not specificity.
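To put numbers on the die-roll example: a minimal sketch using plain Shannon surprisal (not Dembski’s full formalism; the function name is illustrative):

```python
import math

# Shannon self-information of n independent rolls of a fair die:
# each roll contributes log2(6) bits, and independent rolls add.
def surprisal_bits(n_rolls: int, n_faces: int = 6) -> float:
    return n_rolls * math.log2(n_faces)

bits = surprisal_bits(1000)
# ~2585 bits of Shannon "complexity", yet the sequence specifies nothing.
```

This is the sense in which a random sequence can be maximally “complex” while carrying no specification at all.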

    It seems most opponents of Dembski fail to understand what Dembski means by “specificity”, AND, they “specifically” (pun intended) fail to understand why Dembski presumes that biological patterns (sequences) are “specified”. Isn’t it obvious that a coding sequence found in nature has meaning or value, and that it contains true information? The analogue, of course, is the input one gives a computer: you input a list of letters and symbols and punctuation marks, and lo and behold, the computer does something. But change that sequence, make an error, and nothing happens: the famous “garbage in, garbage out.”

    I’ll leave off here and await your response.

  210. MG:

    Again, DNA functions in living forms, and that function is based on a highly specific organisation of bases. Similarly, proteins specified by DNA fall into fold domains, and carry out specific functions based on the order of their AAs.

    At a simple level this can be readily measured in functionally specific bits, as with any type of file.

    At the next level, Durston et al have published in the peer reviewed literature measurements of FSC in FITS, for 35 protein families.
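A minimal sketch of the “functionally specific bits, as with any type of file” idea: a naive capacity count (length times log2 of alphabet size). This is not the Durston et al. fits metric; the names here are illustrative:

```python
import math

# Naive information capacity of a sequence: length * log2(alphabet size).
# DNA (4 bases) gives 2 bits per position; proteins (20 AAs) about 4.32.
def capacity_bits(length: int, alphabet_size: int) -> float:
    return length * math.log2(alphabet_size)

dna_bits = capacity_bits(900, 4)        # a 900-base coding region: 1800 bits
protein_bits = capacity_bits(300, 20)   # the 300-AA protein it encodes: ~1297 bits
```

The Durston et al. measure refines this raw capacity by discounting positions in a protein family that tolerate substitution, so variable sites contribute fewer functional bits.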

    All of this was pointed out already.

    All of this you pass over in silence, in your haste to try to impugn Dembski’s CSI metric as described here, esp pp 17 – 24. (Cf UD WAC 27 here. And BTW, the Durston et al FSC metric is related to this.)

    You go on to say it has never been applied to a biosystem, then when you are corrected, you say it is not acceptable by your standards.

    Do you not see that the above comes across as an exercise in selective hyperskepticism on your part?

    GEM of TKI

  211. F/N: On ev et al, I have taken abundant time to explain why there is no informational free lunch, and why the whole procedure used begs key questions. I need not repeat myself at this time.

  212. F/N 2: MG, you should not put words in my mouth that do not belong there, twisting what I have repeatedly said into a strawman.

    Observe, I have pointed out that the challenge is not to move around and climb hills inside islands of function with nice little fitness functions, but to get the high degree of functionally specific complex information required to set up a system on such an island.

    That is why I have spoken of the 409 kbytes in the ev binary file, and compared it to the quantum of information to get a body plan. I then suggested an exercise: write an ev by chance and mechanical necessity, then allow it to move around in an island of function once it has been so formed.

    That would indeed be possible as a simulation exercise, and it would indeed be comparable as an informational exercise to the origin of a body plan by chance plus necessity.
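The scale of that suggested exercise can be quantified; a back-of-envelope sketch assuming 8 bits per byte and a uniform random source:

```python
import math

# Odds of producing one specific 409-kbyte binary by fair coin flips.
bits = 409 * 1024 * 8               # 3,350,528 bits
log10_odds = bits * math.log10(2)   # decimal digits of improbability
# i.e. more than a million decimal digits of improbability per blind trial.
```

Whatever one makes of the island-of-function argument, the raw configuration space of a file that size is unimaginably large.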

    You have raised the issue of weather simulations and whether such sims imply designed weather. Weather is simulated, and such sims have a fair degree of success. But weather is not the result of claimed spontaneous origin of information systems that work on reading and executing coded information. Instead we are dealing with dynamical systems modelled by sets of differential equations and boundary conditions.

    Coded, symbolically represented information, by contrast, is exactly what life is based on.

    That you would even try to compare the two inadvertently speaks volumes on the failure to understand the issue of digitally coded information in living systems and its significance.

    That is a sufficiently important distinction to break the analogy you were trying to draw.

    GEM of TKI

  213.

    “Nothing you have said answers the very simple question: Can CSI, by your yet to be detailed definition, be generated by known evolutionary mechanisms, even in principle?”

    No. It requires the system, and the system cannot be accounted for by evolutionary processes – by definition.

    FSCI is a subset of information. Information requires both semiotic representations and rules in order to exist at all.

    THAT is the very simple answer to the “very simple question” you ask. You may ignore it, but that does not make it go away.

  214. Joseph,

    It is only possible to correctly simulate things you understand.

    Evolutionary mechanisms are observed and documented. We understand them sufficiently to model them in simulations. Thus far no one has defined CSI rigorously enough to say the same about it.

    Interestingly, Tom Schneider’s ev has been shown to be a targeted search

    I’m familiar with ev, to the extent that I’m working on a variant of it in my spare time. I assure you that there is no explicit target in the simulation. Where do you get the idea that there is?

  215. PaV,

    I’ll leave off here and await your response.

    I appreciate that you took the time to reply, but there isn’t enough to respond to in your post. You seem to disagree with Dr. Schneider’s assessment, but you haven’t provided a rigorous definition of CSI to counter his claims. I would be interested to see how you would calculate CSI for the four scenarios I described above.

  216. Upright BiPed,

    Thank you for the straightforward answer. Unfortunately, I find it more confusing than enlightening. Many ID proponents, including Dembski and kairosfocus, indicate that CSI can be calculated for real world genomes. Do you disagree?

    If not, where does the “system” come into the calculation? Why, exactly, is it impossible in principle for evolutionary mechanisms, which are known to modify genomes in populations, to generate some level of CSI?

  217. MathGrrl:

    The counter to Dr. Schneider’s assessment is twofold: (1) his understanding of what a pattern consists of is flawed; and (2) given his example, this flawed understanding is on display.

    How do I “prove” it is flawed? Well, I simply point to No Free Lunch and how Dembski defines and describes it there. The onus is on those scientists/mathematicians who want to disagree with Dembski to understand him correctly. Just read the book.

    Secondly, you could have disputed my claim that a pattern, to be “specified”, has to have meaning or value, and that in the case of biological function this meaning or value is implicit. You chose not to dispute that, so, very clearly, unless Schneider can point to some kind of “function” (outside of a computer) that his sequence performs, then my criticism of him stands.

  218. MathGrrl [177]:

    Ray’s Tierra routinely evolves digital organisms with a number of specifications. One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The length of the shortest parasite is at least 22 bytes. By your definition, CSI has been generated by known, observed evolutionary mechanisms with no intelligent agency required.

    This statement really surprises me.

    You say you’re familiar with Dembski’s work, and, in particular, No Free Lunch. Yet anyone with a fleeting acquaintance with Dembski’s work would know that, no matter how well something is ‘specified’, it does not constitute CSI unless its “complexity” is of the order of 1 in 10^150. In terms of bits, that’s around 500. You say 22 bytes, and I’ll assume it’s 8 bits to the byte, or 176 bits. This is well below the required 500 bits. You should just know this. Why don’t you?
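For onlookers, the unit conversion is easy to check; a quick sketch:

```python
import math

# Dembski's universal probability bound of 1 in 10^150, in bits:
threshold_bits = 150 * math.log2(10)   # about 498 bits, usually rounded to 500

# The 22-byte parasite, at 8 bits per byte:
parasite_bits = 22 * 8                 # 176 bits, well under the threshold
```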

    As to Tierra, I looked at a PowerPoint slideshow Tom Ray provides. Excuse me, but it’s not impressive in the least. And how is it that he was looking for “parasites”? Because “parasites” feed off other organisms, and that was the only way he could end up with anything at all (i.e., he couldn’t really produce novel function, only one thing piggy-backing on another thing’s function, a function that was itself inputted). And what was the process by which this wonderful creature, this parasite, “evolved”? By wholesale elimination of some subroutines, or some such thing: that is, by the ELIMINATION of information! This is, of course, exactly what Behe proposes that “evolution” does when faced with a challenge (see his QRB paper).

    These programs all have hidden information in them. Dembski and Marks are doing wonderful work pointing all this out. In the meantime, you keep insisting on the who, what, where, when and how the Designer designed, and insisting you be given a mathematically rigorous definition of CSI.

    As to the “what, where, when and how”, let me ask you a very simple question: we know that the Cambrian Explosion happened (the fossil record bears this out); please, then, tell me exactly how, where, using what and when did the great proliferation of body-plans (basically phyla) come about? To tell me that “Darwinian evolution” did it is to tell me nothing, for you first assume these mechanisms and then simply turn around and posit them as an explanation. Please give me extensive evidence that documents just how Darwinian processes brought this about. If you can’t, then I think all your protestations here have been highly disingenuous.

    As to the definition, Dembski’s definition, if properly understood, is sufficiently useful and sufficiently rigorous.

    As to Schneider, he has no conception at all of what a “specification” is.

    As to why mathematicians of supposed worth have such difficulty understanding it . . . well, one is only left to surmise that it is because they don’t care to understand it. I suspect you’re really and truly in this group.

    I had a go around online with none other than Jeffrey Shallit about such things, and he, too, had no conception whatever of what a “specification” is. In fact, when I asked him for what he considered to be a specification, the first example he tossed in as ‘coming off the top of his head’ turned out to lead to an almost impossibility, forcing him to redefine his original specification. But his revised specification also had its problems—which I very happily pointed out to him. This was after I pointed out his error in an example he gave as an instance of a very simple algorithm actually producing CSI.

    The problem your side has is that it doesn’t understand what a “specification” is. So far, I’ve only seen one person who seems to really understand it (I think it was Sober), and he didn’t question Dembski about the trivialities that most on your side take to be a “specification”, but rather questioned Dembski’s assumption that biologically functioning entities are “specified”. To some degree this was a fair criticism; but a fair-minded reader would not, I don’t think, see this as much of a problem.

    One final thing, as to Shallit: I gave him two 150-bit strings, one randomly generated, the other humanly generated (CSI). He couldn’t tell the difference. Here’s what a “specification” truly is: once I tell you which of the two strings possesses CSI, and ‘how’ it was generated, then you can “see” the “pattern”. IOW, the bit-string can be “translated”, or “converted”; whereas a random bit-string is meaningless, and “converts” into nothing.
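One rough way to illustrate the random-versus-patterned contrast (an editor’s sketch: compressibility is only a crude proxy for “convertible into a shorter description”, and is not Dembski’s specification measure):

```python
import random
import zlib

random.seed(0)
random_bits = ''.join(random.choice('01') for _ in range(150))
patterned_bits = '10' * 75   # 150 bits generated by a two-character rule

def packed_size(bits: str) -> int:
    # Compressed length in bytes; a shorter result means a simpler
    # description of the string exists.
    return len(zlib.compress(bits.encode(), 9))

# The patterned string compresses far below the random one.
```

A string produced by a short rule admits a short description; a random string, in general, does not.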

  219. PaV,

    The counter to Dr. Schneider’s assessment is twofold: (1) his understanding of what a pattern consists of is flawed; and (2) given his example, this flawed understanding is on display.

    How do I “prove” it is flawed? Well, I simply point to No Free Lunch and how Dembski defines and describes it there. The onus is on those scientists/mathematicians who want to disagree with Dembski to understand him correctly. Just read the book.

    I would argue that the onus is on Dembski and other ID proponents to clearly articulate their arguments and provide detailed examples of how to calculate their metrics.

    In that vein, could you please explain, in your own words, exactly where in Schneider’s CSI calculation he errs?

    It would also be very helpful if you could address the four scenarios I described above and show how to calculate CSI in each case. These kinds of worked examples are sorely lacking in the ID literature.

  220. PaV,

    You say you’re familiar with Dembski’s work, and, in particular, No Free Lunch. Yet anyone with a fleeting acquaintance with Dembski’s work would know that, no matter how well something is ‘specified’, it does not constitute CSI unless its “complexity” is of the order of 1 in 10^150. In terms of bits, that’s around 500. You say 22 bytes, and I’ll assume it’s 8 bits to the byte, or 176 bits. This is well below the required 500 bits. You should just know this. Why don’t you?

    This is the problem with the lack of rigor in the definition of CSI. You seem to be using a similar definition to that used by kairosfocus, namely the number of bits required to describe the genome under consideration. This ignores the provenance of the genome, essentially assuming that it came into existence de novo. That completely ignores the known evolutionary mechanisms that lead to it.

    If we ignore that issue and apply your definition to the other three scenarios I described, we find far more than 400 bits of CSI in each of those genomes. Do you therefore admit that known evolutionary mechanisms can generate CSI? If not, please clarify your definition so that I can understand your objection.

  221. MathGrrl:

    Evolutionary mechanisms are observed and documented. We understand them sufficiently to model them in simulations.

    That is incorrect. Ya see, we still don’t know what mutations cause what changes to body plans or systems. Heck, we don’t even know what makes an organism what it is.

    So how can we simulate anything given that poor understanding?

    I’m familiar with ev, to the extent that I’m working on a variant of it in my spare time. I assure you that there is no explicit target in the simulation. Where do you get the idea that there is?

    I provided you a link to the peer-reviewed paper that says it is.

    Do you think that ignoring the paper makes it go away?

    I would argue that the onus is on Dembski and other ID proponents to clearly articulate their arguments and provide detailed examples of how to calculate their metrics.

    And yet all YOU have to do to refute ID is to actually step up and start producing positive evidence for your position.

    IOW it seems the onus is on you. And that is because, according to the EF, YOUR position gets the first crack at solving the causal relationship(s) for the thing being investigated.

    This ignores the provenance of the genome, essentially assuming that it came into existence de novo. That completely ignores the known evolutionary mechanisms that lead to it.

    And that ignores the fact that there isn’t any data to support DNA arising via blind, undirected chemical processes.

    IOW, to refute CSI just demonstrate that nature, operating freely, can account for it.

    Ya see even if ID didn’t exist your position still wouldn’t have anything.

  222. Joseph,

    Evolutionary mechanisms are observed and documented. We understand them sufficiently to model them in simulations.

    That is incorrect. Ya see, we still don’t know what mutations cause what changes to body plans or systems. Heck, we don’t even know what makes an organism what it is.

    You are ignoring a vast amount of peer-reviewed research. We don’t know everything, obviously, but we do know enough to be able to model some known evolutionary mechanisms. Dr. Schneider’s ev, for example, reflects his research on real world organisms.

    I’m familiar with ev, to the extent that I’m working on a variant of it in my spare time. I assure you that there is no explicit target in the simulation. Where do you get the idea that there is?

    I provided you a link to the peer-reviewed paper that says it is.

    I just looked through this thread and didn’t see a reference. Could you please supply it again?

    If it says that ev has an explicit target, it is wrong.

    IOW, to refute CSI just demonstrate that nature, operating freely, can account for it.

    I’ll get to work on that just as soon as some ID proponent provides a mathematically rigorous definition of CSI so that I can objectively measure it.

  223. MathGrrl:

    You are ignoring a vast amount of peer-reviewed research.

    Nope.

    I just looked through this thread and didn’t see a reference. Could you please supply it again?

    I provided it and so did bornagain 77. You ignored it- bad form on your part:

    A Vivisection of the ev Computer Organism:
    Identifying Sources of Active Information

    ev is an evolutionary search algorithm proposed to simulate biological evolution. As such, researchers have claimed that it demonstrates that a blind, unguided search is able to generate new information. However, analysis shows that any non-trivial computer search needs to exploit one or more sources of knowledge to make the search successful. Search algorithms mine active information from these resources, with some search algorithms performing better than others. We illustrate these principles in the analysis of ev. The sources of knowledge in ev include a Hamming oracle and a perceptron structure that predisposes the search towards its target. The original ev uses these resources in an evolutionary algorithm. Although the evolutionary algorithm finds the target, we demonstrate a simple stochastic hill climbing algorithm uses the resources more efficiently.
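The “Hamming oracle” and “stochastic hill climbing” named in that abstract can be sketched as a toy (an illustration of the general idea, not the paper’s or Schneider’s actual code; all names here are illustrative):

```python
import random

random.seed(1)
TARGET = [random.randint(0, 1) for _ in range(64)]

def hamming_oracle(candidate):
    # Reports distance to the target: the alleged source of active information.
    return sum(c != t for c, t in zip(candidate, TARGET))

def hill_climb(max_steps=100_000):
    current = [random.randint(0, 1) for _ in range(64)]
    dist = hamming_oracle(current)
    for step in range(max_steps):
        if dist == 0:
            return step                    # found the target
        trial = current[:]
        trial[random.randrange(64)] ^= 1   # flip one random bit
        trial_dist = hamming_oracle(trial)
        if trial_dist < dist:              # keep only improving moves
            current, dist = trial, trial_dist
    return None

steps = hill_climb()   # typically converges in a few hundred flips
```

The point of contention in the thread is precisely whether such an oracle counts as information smuggled into the search.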

    Peer-review.

    If it says that ev has an explicit target, it is wrong.

    Doesn’t need an explicit target.

    I’ll get to work on that just as soon as some ID proponent provides a mathematically rigorous definition of CSI so that I can objectively measure it.

    You don’t need to know anything about CSI. Just start supporting the claims of your position as opposed to attacking ID. Attacking ID will not provide positive evidence for your position.

  224. You link to information and binding sites? Are you serious? Do you think binding sites make an organism what it is?

    This is a joke- it has to be.

    Also it has yet to be demonstrated that all evolutionary mechanisms are blind watchmaker mechanisms.

    IOW MathGrrl it appears you don’t understand the debate- so either you are a troll or you are just here to flail about.

  225. Joseph,

    You are ignoring a vast amount of peer-reviewed research.

    Nope.

    If you refuse to even do a simple Pubmed search to investigate the claims you are making before you make them, there’s really no point in discussing the issue with you. Whether you want to recognize it or not, observed evolutionary mechanisms are well enough understood to be modeled. CSI is not sufficiently well defined to do the same.

    A Vivisection of the ev Computer Organism: Identifying Sources of Active Information

    I do remember reading that paper. Within the first two pages I noted two significant problems (invalid claims about the applicability of the No Free Lunch theorems in the introduction and an incorrect statement about binding sites being fixed in the second section of the second page). To properly respond to you, I was going to do a full review of the paper, but a quick search turned up a response from Dr. Schneider himself, linked from the ev blog. He covers the two errors I noticed and found the far more fundamental problem that the authors fail to even mention one of the information measures that is essential to ev. It seems clear that the authors did not have a good understanding of what ev does or how it does it.

  226. Joseph,

    I’ll get to work on that just as soon as some ID proponent provides a mathematically rigorous definition of CSI so that I can objectively measure it.

    You don’t need to know anything about CSI.

    CSI as an indicator of intelligent agency is a significant claim by ID proponents. I am frankly astounded at how difficult it has been to get a rigorous mathematical definition of this essential metric. If you can’t define it, it can’t be tested and therefore ID proponents are not justified in using it to support their arguments.

    Can you or can you not provide a rigorous mathematical definition of CSI?

  227. Joseph,

    You link to information and binding sites? Are you serious? Do you think binding sites make an organism what it is?

    I provided four scenarios for which I would like to measure CSI. Are you or are you not capable of demonstrating how to calculate CSI for those scenarios?

  228.

    Mathgrrl at 216

    I would hope you are not seriously questioning the importance of symbols to a system which requires symbols in order to function. As I said earlier, FSCI is a subset of information, and all information requires symbols and rules in order to exist at all. I would like to draw your attention to the wording “exist at all”.

    Take the symbol UCU. Within the conventions of the Genetic Code, UCU is the code for the amino acid Serine. We can make a single substitution and arrive at UAU, the symbol for Tyrosine. The information that exists here exists only as a result of the mapping between the symbol (the triplet) and the object (the amino acid). You want to say that the mechanism which flips the nucleotide from “C” to “A” has created information. This is a fundamental error. The mechanism did not create the mapping, and therefore cannot create the information.
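The point can be made concrete with a two-entry slice of the standard genetic code table (an illustrative sketch; the full table has 64 entries):

```python
# The mapping below is the convention supplied by the translation machinery;
# a mutation changes the symbol, not the mapping.
CODON_TABLE = {'UCU': 'Ser', 'UAU': 'Tyr'}

def translate(codon: str) -> str:
    return CODON_TABLE[codon]

mutated_codon = 'UCU'.replace('C', 'A')   # the single substitution: 'UAU'
original_aa = translate('UCU')            # 'Ser'
mutated_aa = translate(mutated_codon)     # 'Tyr'
```

The substitution operates on the codon string; the Ser/Tyr assignments live entirely in the table.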

    Until you can identify the source of the symbols and rules, you cannot make claims about mechanisms within the system in terms of being the originator of that information.

  229. MathGrrl:

    If you refuse to even do a simple Pubmed search to investigate the claims you are making before you make them, there’s really no point in discussing the issue with you.

    I have done the research. Your position doesn’t have anything.

    If it did I would still be an evolutionist. If it did, long-time atheist Antony Flew would not have switched sides.

    CSI as an indicator of intelligent agency is a significant claim by ID proponents.

    Yup and to refute that claim just demonstrate that blind, undirected processes can account for what we claim to be CSI.

    If you can’t define it, it can’t be tested and therefore ID proponents are not justified in using it to support their arguments.

    CSI has more rigor behind it than anything your position has to offer.

    So according to you your position is not justified in anything it claims.

    As for errors- my bet is that computer people know more about computing than a biologist does.

    I provided four scenarios for which I would like to measure CSI.

    Must have missed them.

    But anyway I can provide plenty of examples your position can’t explain. You want to go there?

  230. MathGrrl:

    That completely ignores the known evolutionary mechanisms that lead to it.

    The scream you hear is me pulling out my hair.

    What KNOWN evolutionary mechanisms?!!!!!

    Do you have any idea how incapable population genetics is when it comes to explaining known changes in genomes? You simply presume that it explains everything, when, in fact, the only things it can explain are those found in Behe’s Quarterly Review of Biology article and in his Edge of Evolution. Read Kimura’s The Neutral Theory of Molecular Evolution and find out how little the neo-Darwinian “Synthesis” can explain.

    You’ve asked for what, when, where and how. I’ve asked you twice now to provide exactly that for Darwinian mechanisms; to which, you’ve twice demurred.

    You simply have religious belief in Darwinism. You haven’t examined its claims; you’ve simply accepted them whole-hog. You simply accept that these putative evolutionary algorithms don’t import information, without attempting to critique Dembski and Marks’ work at all. (The information comes via the constraints that are applied to the program. It’s as simple as that. And just as inescapable. It’s basically specifying a domain and a set of equations, and looking for some output.)

    I just can’t take your remarks seriously any longer. You might not realize it, but you’re far from being sincere in your effort at arriving at the truth.

  231. Upright BiPed,

    As I said earlier, FSCI is a subset of information, and all information requires symbols and rules in order to exist at all.

    You keep using the term “FSCI” as if it has meaning, but you have yet to provide a mathematically rigorous definition for it and you have not shown how to calculate it for the four example scenarios I provided. Based on that, at the moment the term is literally meaningless.

    Could you please just provide a definition and examples of how to calculate it?

  232. Joseph,

    CSI has more rigor behind it than anything your position has to offer.

    In that case it should not be a problem for you to provide a mathematically rigorous definition of CSI and demonstrate how to calculate it for the four scenarios I describe in post 177 above.

  233. PaV,

    I just can’t take your remarks seriously any longer. You might not realize it, but you’re far from being sincere in your effort at arriving at the truth.

    I am absolutely sincere in my desire to understand CSI well enough to be able to test the claim that it is a reliable indicator of the involvement of an intelligent agent. However, I cannot get any ID proponent to provide a rigorous mathematical definition of the term nor to demonstrate how to calculate it for the four scenarios I provided.

    This is a major component of ID theory. Why won’t you simply answer my questions about it?

  234. MathGrrl:

    First you answer my question about the Darwinian explanation for the Cambrian Explosion. Give me the when, where, what, and how.

    When you do that, then I’ll take you seriously.

    And, for the third time: read No Free Lunch. I’ve read it. I’ve digested it. I understand it. It is quite clear what CSI is.

    And Schneider and Shallit don’t understand it. Neither do you, apparently. That’s your problem. Not mine.

    So stop playing this little game of yours.

    For any onlookers, our wonderful “MathGrrl” is probably some graduate student of someone who thinks he knows something about CSI, NFL, and evolutionary algorithms, and he’s sending his little protege off to the UD board to ask “discomfiting” questions. We try to be both accommodating and responsive. And this lets them play their little games, which is to point out what they consider to be a deficiency in the ID position, when all the while they’re simply coming from, and commenting from, a place of ignorance.

    Go away little girl.

  235. PaV,

    First you answer my question about the Darwinian explanation for the Cambrian Explosion. Give me the when, where, what, and how.

    I’m not going to play that game. I haven’t made any claims about the Cambrian Explosion in this thread. In fact, aside from correcting some misconceptions about the ev GA, I have been very focused on trying to get an answer to what should be a very simple question for ID proponents: What is the mathematically rigorous definition of CSI?

    I’ve provided four very straightforward scenarios to allow an ID proponent to demonstrate how to calculate CSI so that I can understand it in sufficient detail to test the claim that CSI is a reliable indicator of intelligent agency.

    We’re now over 200 posts into this thread and no one has provided this information about a core ID metric. What would be a reasonable conclusion to draw at this point?

  236.

    Haha! I actually read this entire thread…in one sitting. I’m convinced that this thread is now the best evidence we have of the lack of scientific merit of Intelligent Design.

    “That was designed.”
    “How do you know?”
    “Because it contains a lot of CSI.”
    “What is CSI and how do you measure it?”
    “Go away.”

    Awesome.

    From now on, rather than linking to the Kitzmiller decision, I’m going to link straight here.

  237.

    Mathgrrl,

    You claim to be in a desperate battle to understand what complex specified information is, yet by your non-response to my last post, I find that hard to believe.

    Are you indeed refusing to acknowledge that the term you want to understand is a subset of a larger phenomenon (information) and as such, it shares certain qualities with all other examples of information?

    Does CSI share certain qualities with all other examples of information, or not?

  238. Upright BiPed,

    It’s not a “desperate battle”, it’s just a bit frustrating. If I were talking to one of my colleagues and she said “I have an algorithm Foo that I can use to characterize my datasets as X and non-X.” and I asked “Could you please define Foo in detail and show me how to calculate it for a few examples?”, she’d be so eager to do so that I’d be lucky to get home before it was time to leave for work again.

    When I ask the same question here, I get non-responses like yours that ask questions using terms that I have already requested that you define. The difference is telling.

  239. PaV said,

    “Go away little girl.”

    I thought better of you.

  240.

    So yet once again, you seemingly just refuse to acknowledge that the term you wished to have defined is a subset of a larger phenomenon, and that it shares specific qualities with all other examples within that larger set.

    Unfortunately, I am left once again to set aside your dismissal and repeat the question:

    Does CSI share certain qualities with all other examples of information, or not?

  241. MathGrrl

    You wrote (#231):

    You keep using the term “FSCI” as if it has meaning, but you have yet to provide a mathematically rigorous definition for it…

    I have already directed you to a paper by David L. Abel, entitled The Capabilities of Chaos and Complexity, which was on the ID page I recommended for your perusal. In section 3, you will find a mathematically rigorous definition of Functional Sequence Complexity. And there’s also this paper: “Measuring the functional sequence complexity of proteins” (at http://www.tbiomed.com/content/4/1/47) by Kirk K. Durston, David K.Y. Chiu, David L. Abel and Jack T. Trevors, in Theoretical Biology and Medical Modelling 2007, 4:47, doi:10.1186/1742-4682-4-47.

    I have to say that if these definitions are not rigorous enough for you, I don’t know why you would swallow the hand-waving explanations provided by neo-Darwinists as to how functional information arose in living things.

    I am also unimpressed by Schneider’s attempted rebuttal to Professor Dembski’s “No Free Lunch”, which was written a decade ago. A lot has happened since then. Not once on his Web page does Schneider mention Specification: The Pattern That Signifies Intelligence by Professor Dembski, which is generally considered to be his finest paper on the subject.

    The simple fact is that in the past few years, the ID movement has made a genuine and serious effort to provide a mathematically rigorous definition of CSI and FCSI. Sadly, you have failed to keep up with the current literature. Time to do some reading, I think.

  242. Upright BiPed,

    So yet once again, you seemingly just refuse to acknowledge that the term you wished to have defined is a subset of a larger phenomenon, and that it shares specific qualities with all other examples within that larger set.

    Just start by rigorously defining CSI and we can progress from there. Claiming that an undefined term is “a subset of a larger phenomenon” is not particularly compelling. Show me how to compute CSI for the four scenarios I described and we can go from there.

  243. vjtorley,

    You keep using the term “FSCI” as if it has meaning, but you have yet to provide a mathematically rigorous definition for it

    I have already directed you to a paper by David L. Abel, entitled The Capabilities of Chaos and Complexity, which was on the ID page I recommended for your perusal.

    I have read that paper and it does not discuss CSI. The explanation of “functional sequence complexity” is not equivalent to Dembski’s CSI.

    I am also unimpressed by Schneider’s attempted rebuttal to Professor Dembski’s “No Free Lunch”, which was written a decade ago. A lot has happened since then. Not once on his Web page does Schneider mention Specification: The Pattern That Signifies Intelligence by Professor Dembski, which is generally considered to be his finest paper on the subject.

    I have also read that paper and it does not provide a rigorous mathematical definition of CSI. If you believe it does, please state that definition and apply it to the four scenarios I described above.

  244. MathGrrl (#243):

    (1) Which four scenarios are you talking about? You haven’t provided a reference to any post in this (already very long) thread. If you think I’m going to read through it all to locate your needle in a haystack, think again.

    (2) Your assertion that Abel’s functional sequence complexity is not equivalent to CSI is completely beside the point. I take it from your silence that Abel’s definition is sufficiently rigorous to suit your requirements. Fine. If you don’t like the term CSI, then let’s dispense with it and simply talk about functional sequence complexity. This is a recurring feature of living organisms. Please provide an evolutionary mechanism that is able to account for the very high degree of functional sequence complexity we observe in even the simplest living thing. The ball’s in your court.

    (3) In any case, functional sequence complexity is a subset of CSI. It is perfectly legitimate to use a general term whose definition is intuitively clear but not able to be captured in a single formula, if subsets of that term can be defined rigorously, and their over-arching similarity is obvious.

    I suggest that you engage the issues at hand.

  245. Mathgrrl,

    The “I” in CSI stands for information. If you want to understand CSI, then your personal dismissal of this fact is not a search for truth.

    It’s a maneuver.

  246. vjtorley,

    Which four scenarios are you talking about? You haven’t provided a reference to any post in this (already very long) thread.

    The scenarios are described in post 177 which I referenced again in 232.

    Your assertion that Abel’s functional sequence complexity is not equivalent to CSI is completely beside the point.

    No, it is exactly the point. ID proponents in this thread and elsewhere make the claim that CSI is a reliable indicator of the involvement of intelligent agency. However, no ID proponent, to my knowledge, has provided a rigorous mathematical definition of CSI nor shown how to calculate it for any scenarios such as those I’ve described.

    Are you claiming that Abel’s functional sequence complexity metric is a reliable indicator of the involvement of intelligent agency?

    Are you admitting that there is no rigorous mathematical definition of CSI?

  247. Upright BiPed,

    If you can’t define your metric with mathematical rigor and can’t demonstrate how to calculate it for a few simple scenarios, it is useless and any claims based on it are unfounded.

    I look forward to continuing the discussion with you when you have provided the necessary mathematical rigor and examples.

  248. Surely the easiest thing to do would be for someone to exercise some charity, humor MathGrrl and provide the calculations for her examples. Refusing to do so makes ID look bad.

    I would have a go myself but I’m afraid my advanced math skills are rather too weak to do a good enough job.

  249. DrBot and MathGrrl-

    Until either of you provides a defined metric with mathematical rigor for your position there isn’t any need for you to whine about CSI.

    Ya see, until we know what you accept, your whining is meaningless and makes you both look like little children who can’t get their way.

    So have at it- no one is impressed with your whining.

    We need something that you accept from your position that we can compare CSI to.

    Are you up to it?

    My prediction is you are not and will continue to whine- that prediction is based on the fact that I have asked many times and have not received an answer.

    And that tells me your complaints are without merit.

  250. MathGrrl 17:
    A simple gene duplication, even without subsequent modification of the duplicate, can increase production from less than X to greater than X.

    How was it determined that gene duplications are blind watchmaker processes?

    You do realize that a duplicated gene is nothing without the proper binding sites- right? And even then all you have is another copy of a protein that you already have- if it doesn’t have a place to go, it is useless and can just get in the way of existing proteins.

  251. Mathgrrl and DrBot,

    Functional information and the emergence of bio-complexity:
    Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak:
    Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define ‘functional information,’ I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex) = -log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of function.
    http://genetics.mgh.harvard.ed.....S_2007.pdf

    Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video
    http://www.metacafe.com/watch/3995236
    Entire video:
    http://vimeo.com/1775160

    Measuring the functional sequence complexity of proteins – Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors
    Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families. Considerations were made in determining how the measure can be used to correlate functionality when relating to the whole molecule and sub-molecule. In the experiment, we show that when the proposed measure is applied to the aligned protein sequences of ubiquitin, 6 of the 7 highest value sites correlate with the binding domain.
    http://www.tbiomed.com/content/4/1/47

    Intelligent Design: Required by Biological Life? K.D. Kalinsky – Pg. 10 – 11
    Case Three: an average 300 amino acid protein:
    Excerpt: It is reasonable, therefore, to estimate the functional information required for the average 300 amino acid protein to be around 700 bits of information. I(Ex) > Inat and ID (Intelligent Design) is 10^155 times more probable than mindless natural processes to produce the average protein.
    http://www.newscholars.com/pap.....rticle.pdf

    Three subsets of sequence complexity and their relevance to biopolymeric information – Abel, Trevors
    Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).
    http://www.ncbi.nlm.nih.gov/pm.....MC1208958/

    Estimating the prevalence of protein sequences adopting functional enzyme folds: Doug Axe:
    Excerpt: Starting with a weakly functional sequence carrying this signature, clusters of ten side-chains within the fold are replaced randomly, within the boundaries of the signature, and tested for function. The prevalence of low-level function in four such experiments indicates that roughly one in 10^64 signature-consistent sequences forms a working domain. Combined with the estimated prevalence of plausible hydropathic patterns (for any fold) and of relevant folds for particular functions, this implies the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences. http://www.ncbi.nlm.nih.gov/pubmed/15321723
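    Hazen and Szostak’s formula quoted above, I(Ex) = -log2[F(Ex)], is simple enough to check numerically. A minimal Python sketch follows; the 1-in-10^14 rarity figure used in the example is the thread’s own ballpark for meaningful 10-letter English words, not a measured value:

```python
import math

def functional_information(fraction_functional):
    """Hazen/Szostak functional information I(Ex) = -log2[F(Ex)],
    where F(Ex) is the fraction of all possible configurations of the
    system whose degree of function is at least Ex."""
    return -math.log2(fraction_functional)

# Thread's ballpark figure: roughly 1 in 10^14 ten-letter
# sequences spells a meaningful English word.
bits = functional_information(1e-14)
print(round(bits, 1))  # about 46.5 bits
```

    Note the measure depends only on the fraction of configurations that meet the functional threshold, not on which configurations they are.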

  252. Moreover, though current mathematical measures of the information in life are all ultimately based on extreme improbability against functionality arising from purely material processes, there is actually a more satisfactory proof against neo-Darwinism, as far as ‘measuring’ information is concerned,,, It is now shown that ‘higher dimensional’ information is its own unique entity! An entity which is completely separate from matter and/or energy,,,

    The Failure Of Local Realism – Materialism – Alain Aspect – (Quantum Entanglement) video
    http://www.metacafe.com/w/4744145

    Moreover, this higher dimensional ‘transcendent’ information, which is not reducible to a material basis and which falsifies the very materialistic presuppositions that undergird the neo-Darwinian framework, is found to extend into molecular biology;

    Quantum Information/Entanglement In DNA & Protein Folding – short video
    http://www.metacafe.com/watch/5936605/

    Further evidence that quantum entanglement/information is found throughout entire protein structures:
    http://www.uncommondescent.com.....ent-373214

    It is simply ludicrous to appeal to the materialistic framework, which undergirds the entire neo-Darwinian framework but which has been falsified by the very quantum entanglement effect one is seeking to explain! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! Probability arguments, which have been a staple of the arguments against neo-Darwinism, simply do not apply!

    further notes;

    Quantum entanglement holds together life’s blueprint – 2010
    Excerpt: “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford.
    http://neshealthblog.wordpress.....blueprint/

    Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH
    Excerpt: It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
    http://journals.witpress.com/journals.asp?iid=47

    Another ‘mathematical’ measure for information, which I find to be a more accurate measure for ‘total’ information content in a cell, is,

    Information theory. Relation between information and entropy.
    Excerpt: the total information content (of a bacterial cell) is then 1.3 x 10^12 or, in round numbers, 10^12 bits.
    http://www.astroscu.unam.mx/~a.....ecular.htm

    ‘The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”
    Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894

    Also of interest; ‘genetic entropy’, the true principle for all biological adaptations, has never been violated;

    Is Antibiotic Resistance evidence for evolution? – ‘The Fitness Test’ – video
    http://www.metacafe.com/watch/3995248

    Evolution Vs Genetic Entropy – Andy McIntosh – video
    http://www.metacafe.com/watch/4028086

    “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
    Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.(that is a net ‘fitness gain’ within a ‘stressed’ environment i.e. remove the stress from the environment and the parent strain is always more ‘fit’)
    http://behe.uncommondescent.co.....evolution/

  253. If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.- CJYMan

  254. I think it is pretty clear that Mathgrrl doesn’t have anything. That point has been made abundantly clear. So, let’s get on with showing her that our side has the rigorous calculations she says we don’t.

  255. bornagain77,

    None of your references address CSI and the calculations related to them show that they are not the same metric that Dembski describes.

    Are you asserting that one of these other metrics is an indicator of intelligent agency?

    Are you admitting that there is no mathematically rigorous definition of CSI that you can reference?

  256. Joseph,

    If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.- CJYMan

    Tom Schneider’s ev demonstrates how simple evolutionary mechanisms can generate Shannon Information. Do you therefore agree that those evolutionary mechanisms can generate CSI?

  257. MathGrrl,

    You obviously have serious issues. My quote of CJYMan did not say anything about generating Shannon Information. Shannon didn’t really care about information:

    The word information in this theory is used in a special mathematical sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning.- Warren Weaver, one of Shannon’s collaborators
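    Weaver’s point, that Shannon’s measure tracks symbol statistics and never meaning, can be made concrete with a small sketch (my own illustration; the sentence chosen is arbitrary):

```python
import math
from collections import Counter

def shannon_bits(s):
    """Total Shannon self-information of a string under its own
    empirical symbol frequencies; meaning plays no role."""
    counts = Counter(s)
    n = len(s)
    return -sum(c * math.log2(c / n) for c in counts.values())

# A meaningful sentence and the same letters scrambled carry
# the same Shannon measure:
a = "the cat sat on the mat"
b = "".join(sorted(a))  # same symbols, no meaning
print(abs(shannon_bits(a) - shannon_bits(b)) < 1e-9)  # True
```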

    Also you are still confused- CSI argues against BLIND WATCHMAKER mechanisms- and your use of evolutionary mechanisms is nothing more than an equivocation.

    That said:

    Until either of you provides a defined metric with mathematical rigor for your position there isn’t any need for you to whine about CSI.

    Ya see, until we know what you accept, your whining is meaningless and makes you both look like little children who can’t get their way.

    So have at it- no one is impressed with your whining.

    We need something that you accept from your position that we can compare CSI to.

    Are you up to it?

    My prediction is you are not and will continue to whine- that prediction is based on the fact that I have asked many times and have not received an answer.

    And that tells me your complaints are without merit.

  258. MathGrrl you state;

    ‘None of your references address CSI and the calculations related to them show that they are not the same metric that Dembski describes.’

    And sequences that demonstrate functionality are fundamentally (metrically) different from Dembski’s CSI exactly how???

    you then state;

    ‘Are you asserting that one of these other metrics is an indicator of intelligent agency?’

    Actually it is much easier than the math I cited. You see, MathGrrl, in your short post you have generated more functional information than anyone has ever seen generated by purely material processes:

    In your post,,,,

    bornagain77,

    None of your references address CSI and the calculations related to them show that they are not the same metric that Dembski describes.

    Are you asserting that one of these other metrics is an indicator of intelligent agency?

    Are you admitting that there is no mathematically rigorous definition of CSI that you can reference?

    ,,, not counting punctuation, spacing, capital letters, and all, you have approx. 282 alphabetic letters (the length of a fairly average protein). How many possible arrangements of those letters? 26^282, or approx. 10^399 possible combinations. How much ‘functional information’ is that? Well if I recall right from this video,,,

    Stephen Meyer – Functional Proteins And Information For Body Plans – video
    http://www.metacafe.com/watch/4050681/

    ,,,,

  259. it is every 10^14 sequences that will produce a meaningful word in the English language,,, You can use that for a ballpark figure to ascertain information (functional information bits) from Szostak’s equation, I(Ex) = -log2[F(Ex)],,, but since we are dealing with proteins, you must find the rarity of proteins in sequence space, for which I referred you to Axe’s work. But nonetheless, MathGrrl, the crushing point about ‘information’ for you, which you will deny the validity of anyway because of your atheistic bias, is that quantum entanglement is found in molecular biology!

  260. correction,, it is every 10^14 sequences that will produce a meaningful 10 LETTER word in the English language
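    The sequence-space arithmetic in the last few comments can be checked directly with exact integer arithmetic (a sketch; the 282-letter count is the thread’s own tally of the quoted post):

```python
import math

letters = 282   # letters in the quoted post, per the thread's count
alphabet = 26   # lowercase English alphabet

# Exact size of the sequence space as a Python integer:
sequence_space = alphabet ** letters

# The same quantity in decimal orders of magnitude and in bits:
orders = letters * math.log10(alphabet)
bits = letters * math.log2(alphabet)

print(round(orders))    # 399, i.e. 26^282 is about 10^399
print(round(bits, 1))   # about 1325.5 bits
print(len(str(sequence_space)))  # 400 decimal digits
```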

  261. MathGrrl, put it this way, you want to establish the legitimacy of neo-Darwinism right? And neo-Darwinism is built on materialistic presuppositions right?? And materialism (local realism) is falsified by quantum entanglement right??? And quantum entanglement is found in molecular biology right??? Thus neo-Darwinism cannot be the explanation for quantum entanglement in molecular biology!!! Probability calculations, on which neo-Darwinism would depend if it were true, do not even apply for this ‘highest dimensional’ information displayed by entanglement!!

  262. Joseph,

    I’ll simply repeat what I said to Upright BiPed: If you can’t define your metric with mathematical rigor and can’t demonstrate how to calculate it for a few simple scenarios, it is useless and any claims based on it are unfounded.

    I look forward to continuing the discussion with you when you have provided the necessary mathematical rigor and examples.

  263. bornagain77,

    None of your references address CSI and the calculations related to them show that they are not the same metric that Dembski describes.

    And sequences that demonstrate functionality are fundamentally (metrically) different from Dembski’s CSI exactly how???

    It is CSI that is claimed by ID proponents to be an indicator of intelligent agency. It is therefore CSI that I am interested in understanding to a sufficient level of detail in order to test those claims.

    If you and other ID proponents agree that one of these other metrics is also an indicator of intelligent agency, I will be happy to look at it. I would much prefer, however, to use Dembski’s metric since that is the one referenced most often in these discussions.

    I don’t understand the difficulty in getting a response to what I believe are very reasonable questions. I just want to know the definition of CSI and see some examples of how to calculate it for the scenarios I detailed. Please assist me if you can.

  264. MathGrrl-

    We say we have met your challenge. That you refuse to accept that is your problem, not ours.

    And to prove my claim that we have met your challenge: you have failed to produce something your position offers that also fulfills your requirements so that we can compare.

    That way we can see whether you are just a troll or really have a valid criticism.

    Also even if CSI didn’t exist as a concept you still wouldn’t have any positive evidence for your position- just look at this thread- the best you have is T-URF 13, which is next to nothing.

    So until you provide something from your position as a standard you will always be in the position to say “that just ain’t good enough”, which is childish.

  265. MathGrrl:

    vjtorley @ 241 has provided the link to “Specification: The Pattern That Signifies Intelligence.” This is all one needs to read to understand CSI. I’m surprised it took so long for this link to be provided. My own understanding of CSI is based on that paper, and the calculations and explanations I provide in the following links are based on the concept of CSI as found in that paper.


    Calculating CSI of a protein


    Continued defense and explanation of CSI

    Showing no CSI in a chaotic pattern


    Further Discussion of CSI

    I apologize that the discussions are so long, but that is the depth that sometimes needs to be provided. That is also why I am linking to those discussions: it could take just as long to explain everything all over again. If you are seriously interested in understanding CSI, read through, understand, and seriously engage the math and examples provided by Dembski in “Specification: The Pattern That Signifies Intelligence” and then compare that to the calculations and discussions linked above.

    A non-complicated definition and breakdown of CSI is provided at the beginning of my first link.

    I am quite busy studying at the moment, so I apologize if I take a while getting back to you to answer any questions. However, now you should have enough material to peruse through for a while and hopefully most of your questions will be answered.

  266. MathGrrl, I gave you a rigorous probability calculation for determining functional information bits in a sequence! That you want to falsely say it is ‘metrically’ different from Dembski’s CSI matters not one iota to me, for ‘functional information’ as defined in Szostak’s equation does in fact involve complexity and specification in its arrival at functional information bits (Fits). You have been given papers that derive Fits for various protein families in various scenarios! But your blatant unreasonableness, just to support your atheism, is all beside the point anyway, for it is IMPOSSIBLE, even in principle, for neo-Darwinism to explain the ‘higher dimensional’ information of quantum entanglement in biology! That you ignore this central point clearly demonstrates that you are not concerned with finding the truth of the matter in the least, but are instead primarily concerned with trying to establish the legitimacy of your atheistic/Darwinistic beliefs no matter how many deceptive tactics you have to repeatedly employ!!! And to what end, MathGrrl??? Do you think that your shallowness is not clear for all to see? Do you somehow think that hiding in lies will make life better for you??? I just don’t get it: why in the world would you put your eternal soul in so much jeopardy with such childishness???

  267. MathGrrl,

    I also re-join the discussion within my first link at comment #94. Please try to skim/read through the full provided comments from where I start each link. There is a lot of good back-and-forth discussion.

  268. Thanks CJYman, for a more detailed definition of CSI, and my apologies to you Mathgrrl for ‘venting’ on your unreasonableness. As CJYman shows, there is a far more nuanced way to determine CSI than Szostak’s ‘rough’ measure and so I was wrong to think that it was ‘enough’ for you MathGrrl.

  269. MathGrrl,

    Also, within the last link I provided above is my continued response (especially comment #116 and #223) to the comment at the end of the first link.

  270. Hello bornagain77,

    I have briefly looked over Szostak’s method for calculating functional information and upon first inspection it actually appears to be very similar to Dembski’s calculation for CSI.

    However, I think Dembski’s calculation is a little more detailed, since it measures functional information against both sequence space (as Szostak does) and a universal probability bound.

  271. Thanks again CJYman, I will bookmark your work for future reference.

  272. MathGrrl (#246)

    Thank you for providing a reference (#177) to the four scenarios whereby known evolutionary mechanisms allegedly generate information in real and simulated environments. The four mechanisms you propose are: gene duplication leading to increased protein production, ev evolving binding sites, Tierra evolving parasites, and Genetic Algorithms evolving solutions to the Steiner Problem. Let’s look at each of them.

    (1) Gene duplication.

    Please see the following papers:

    New Peer-Reviewed Paper Challenges Darwinian Evolution by Jonathan M.

    Does Gene Duplication Perform As Advertised? by Jonathan M.

    Jonathan M. discusses a paper entitled “Is gene duplication a viable explanation for the origination of biological information and complexity?,” by Joseph Esfandiar Hannon Bozorgmehr, in the journal Complexity. The author defines a gain in exonic information as “[t]he quantitative increase in operational capability and functional specificity with no resultant uncertainty of outcome.”

    The paper concludes that:

    Gene duplication and subsequent evolutionary divergence certainly adds to the size of the genome and in large measure to its diversity and versatility. However, in all of the examples given above, known evolutionary mechanisms were markedly constrained in their ability to innovate and to create any novel information. This natural limit to biological change can be attributed mostly to the power of purifying selection, which, despite being relaxed in duplicates, is nonetheless ever-present.

    Moreover,

    …the various postduplication mechanisms entailing random mutations and recombinations considered were observed to tweak, tinker, copy, cut, divide, and shuffle existing genetic information around, but fell short of generating genuinely distinct and entirely novel functionality.

    (2) ev evolving binding sites,
    (3) Tierra evolving parasites, and (4) Genetic Algorithms evolving solutions to the Steiner Problem.

    On ev, please see:

    Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism by Winston Ewert, William A. Dembski and Robert J. Marks II.

    Key definitions:

    A. Information Measures

    To assess the performance of a search, we use the following information measures [4], [5], [7].

    1) The endogenous information is a measure of the difficulty of a search and is given by
    I_omega = -log2(p)    (1)
    where p is a reference probability of a successful unassisted random search.

    2) Let the probability of success of an assisted search under the same set of constraints be q. Denote the exogenous information of a search program as
    I_s := -log2(q).

    3) The difference between the endogenous and exogenous information is the active information:
    I_+ := I_omega - I_s = -log2(p/q).
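    The three measures just defined (I_omega = -log2(p), I_s = -log2(q), and active information I_+ = -log2(p/q)) can be sketched in a few lines of Python; the probabilities below are made-up illustrative values, not figures from the Ewert, Dembski and Marks paper:

```python
import math

def endogenous_information(p):
    """I_omega = -log2(p): difficulty of an unassisted random
    search with success probability p."""
    return -math.log2(p)

def exogenous_information(q):
    """I_s = -log2(q): difficulty of an assisted search with
    success probability q under the same constraints."""
    return -math.log2(q)

def active_information(p, q):
    """I_+ = I_omega - I_s = -log2(p/q): information the
    assistance contributes to the search."""
    return -math.log2(p / q)

# Illustrative (made-up) numbers: unassisted success 1 in 2^40,
# assisted success 1 in 2^10.
p, q = 2.0**-40, 2.0**-10
print(endogenous_information(p))   # 40.0
print(exogenous_information(q))    # 10.0
print(active_information(p, q))    # 30.0
```

    As the example shows, the active information is simply the gap between the two difficulty measures.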

    From the abstract:

    Avida uses stair step active information by rewarding logic functions using a smaller number of nands to construct functions requiring more. Removing stair steps deteriorates Avida’s performance while removing deleterious instructions improves it. Some search algorithms use prior knowledge better than others. For the Avida digital organism, a simple evolutionary strategy generates the Avida target in far fewer instructions using only the prior knowledge available to Avida.

    See also

    EV Ware: Dissection of a Digital Organism by Baylor Bear.

    Regarding Tierra, I suggest you re-read PaV’s original comment at http://www.uncommondescent.com.....ent-366926 in the earlier thread, http://www.uncommondescent.com.....say-weasel (a long thread in which you mysteriously bowed out of the discussion about mid-way – exams?):

    MathGrrl:

    I’ve looked at a powerpoint presentation of Tierra.

    First, we have intelligent agents intelligently trying to duplicate what they see in life on a logical framework. It is interesting that when it comes to trying to imitate life, so much logical thought is involved.

    Second, the entire program is set up in a very simplified way (very little complexity), and then it is arranged so that nothing can really die, or, worse yet, cause the program itself to come to a halt. So, this program will live come hell or high water.

    Third, from their results it looks like the only thing that has happened are: (1) the size of the program diminishes with time [akin to a loss of complexity], and (2) a “parasite” evolves. But the parasite is simply an organism that has lost its ability to copy its own program and so must rely on some other ‘cell’ to duplicate its ‘genome’. If one compares the “parasite” to the original “ancestor”, half of its instructions have been lost. So it seems that the upshot of this experiment in life via Darwinian processes results in a loss of size and a loss of complexity. This, of course, fits in perfectly with ID’s claims that claimed examples of Darwinian evolution generally amount to a loss of information and never a gain.

    I don’t see anything there in Tierra land that is of much interest. And apparently T.Ray doesn’t either since he worked on it from 1990-2001 and then quit.

    Finally, I suggest you have a look at the mathematically rigorous paper, Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information by William A. Dembski and Robert J. Marks II. The authors use the same definitions of information as used in the paper above by Ewert, Dembski and Marks (Dissecting a Digital Organism). Some excerpts:

    Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them… Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case.

    Let’s be clear where our argument is headed. We are not here challenging common descent, the claim that all organisms trace their lineage to a universal common ancestor. Nor are we challenging evolutionary gradualism, that organisms have evolved gradually over time. Nor are we even challenging that natural selection may be the principal mechanism by which organisms have evolved. Rather, we are challenging the claim that evolution can create information from scratch where previously it did not exist. The conclusion we are after is that natural selection, even if it is the mechanism by which organisms evolved, achieves its successes by incorporating and using existing information.

    Mechanisms are never self-explanatory. For instance, your Chevy Impala may be the principal mechanism by which you travel to and from work. Yet explaining how that mechanism gets you from home to work and back again does not explain the information required to build it. Likewise, if natural selection, as operating in conjunction with replication, mutation, and other sources of variation, constitutes the primary mechanism responsible for the evolution of life, the information required to originate this mechanism must still be explained. Moreover, by the Law of Conservation of Information, that information cannot be less than the mechanism gives out in searching for and successfully finding biological form and function.

    It follows that Dawkins’s characterization of evolution as a mechanism for building up complexity from simplicity fails…

    Conservation of information therefore points to an information source behind evolution that imparts at least as much information to the evolutionary process as this process in turn is capable of expressing by producing biological form and function. As a consequence, such an information source has three remarkable properties: (1) it cannot be reduced to purely material or natural causes; (2) it shows that we live in an informationally porous universe; and (3) it may rightly be regarded as intelligent. The Law of Conservation of Information therefore counts as a positive reason to accept intelligent design. In particular, it establishes ID’s scientific bona fides.

    Just as information needs to be imparted to a golf ball to land it in a hole, so information needs to be imparted to chemicals to render them useful in origin-of-life research. This information can be tracked and measured. Insofar as it obeys the Law of Conservation of Information, it confirms intelligent design, showing that the information problem either intensifies as we track material causes back in time or terminates in an intelligent information source. Insofar as this information seems to be created for free, LCI calls for closer scrutiny of just where the information that was given out was in fact put in. (Emphases mine – VJT.)
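    For readers unfamiliar with the simulations named in the excerpt, the WEASEL program is easy to sketch. The following is a minimal reimplementation, not Dawkins’s original code; the mutation rate and offspring count are illustrative parameters chosen here:

```python
import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"   # Dawkins's example phrase
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus space
MUTATION_RATE = 0.05                      # illustrative, per character
OFFSPRING = 100                           # illustrative, per generation

def score(s):
    # Number of positions matching the target phrase
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Each character independently has a small chance of being replaced
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in s)

# Start from a random phrase; each generation keeps the best of the offspring
phrase = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while phrase != TARGET:
    phrase = max((mutate(phrase) for _ in range(OFFSPRING)), key=score)
    generations += 1

print(generations)  # number of generations needed to reach the target
```

    The cumulative selection toward a fixed target is exactly the feature Dembski and Marks describe as information “smuggled in”: the fitness function already knows the answer.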

    You also ask:

    Are you claiming that Abel’s functional sequence complexity metric is a reliable indicator of the involvement of intelligent agency?

    Yes.

    Are you admitting that there is no rigorous mathematical definition of CSI?

    I didn’t say that, although I’d say it’s less rigorous than Abel’s definition of functional sequence complexity, because CSI assumes two kinds of complexity – probabilistic complexity and descriptive complexity – the latter of which is difficult to quantify. But in any case, Dembski didn’t rely on his definition of CSI to demonstrate his Law of the Conservation of Information in the paper I cited. He used mathematically unobjectionable formulations. So your objection that CSI isn’t rigorously defined seems to be irrelevant.

    I strongly suggest you read the paper, Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information by William A. Dembski and Robert J. Marks II. It’s ideal for someone with a mathematical background, and it should clear up your difficulties.

    As far as I can tell, none of the four scenarios you have provided generate new information.

    Well, I’m a philosopher, not a mathematician (although I completed a maths degree about 30 years ago) and certainly not a biologist. I hope what I’ve uncovered is of assistance to you. Anyway, I think I’ve done quite enough sleuthing for this evening, and it’s now 2:05 a.m. Time to catch 40 winks.

  273. P.S. The question marks in front of log2 in my previous post are minus signs. I don’t know why they turned out like that. Sorry.

  274.

    Virtually all of the recent descriptions of CSI added to this conversation were already covered (in one way or another) by KF earlier in the thread.

    The indulgence on display here is not about definitions. This should be obvious to anyone.

  275. CJYman,

    Thank you for your very detailed response. I read through the threads on the links you provided (hence my delay in replying). I found the “What is intelligence?” UD thread a bit frustrating, since it petered out just as the questions were getting interesting.

    Based on your calculations for titin, it seems to me that your calculation of CSI suffers from a similar problem to that suggested by kairosfocus (two raised to the power of the length of the artifact). As I said in my response to him, if this is your definition of CSI, known evolutionary mechanisms are demonstrably capable of generating it in both real and simulated environments.

    I’ll repeat the rest of that response for convenience here:

    [begin repetition]
    Consider the specification of “Produces X amount of protein Y.” A simple gene duplication, even without subsequent modification of the duplicate, can increase production from less than X to greater than X. By your definition, CSI has been generated by a known, observed evolutionary mechanism with no intelligent agency involved.

    Schneider’s ev uses the specification of “A nucleotide that binds to exactly N sites within the genome.” Using only simplified forms of known, observed evolutionary mechanisms, ev routinely evolves genomes that meet the specification. The length of the genome required to meet this specification can be quite long, depending on the value of N. By your definition, CSI has been generated by those mechanisms. (ev is particularly interesting because it is based directly on Schneider’s PhD work with real biological organisms.)

    Ray’s Tierra routinely results in digital organisms with a number of specifications. One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The length of the shortest parasite is at least 22 bytes. By your definition, CSI has been generated by known, observed evolutionary mechanisms with no intelligent agency required.

    The Steiner Problem solutions described at the site linked above use the specification “Computes a close approximation to the shortest connected path between a set of points.” The length of the genomes required to meet this specification depends on the number of points, but can certainly be hundreds of bits. By your definition, these GAs generate CSI via known, observed evolutionary mechanisms with no intelligent agency required.
    [end repetition]

    Could you help me to understand how to calculate CSI by taking me through how you would do so for each of these four scenarios? I appreciate your time.

  276. vjtorley,

    Please see the following papers:

    New Peer-Reviewed Paper Challenges Darwinian Evolution by Jonathan M.

    Does Gene Duplication Perform As Advertised? by Jonathan M.

    I’m afraid your response doesn’t address the issue. What, specifically, is wrong with the specification I provide for the gene duplication scenario? If there is nothing wrong with the specification, what is the exact definition of CSI that precludes it from being generated by such an event?

    On ev, please see:

    Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism by Winston Ewert, William A. Dembski and Robert J. Marks II.

    That paper is actually about Avida, a different GA platform. In any case, none of the excerpts you provide address the core questions, namely:

    Can GAs in principle generate CSI?

    If so, how can we measure it?

    If not, what is the definition of CSI that precludes it?

    Regarding Tierra, I suggest you re-read PaV’s original comment at http://www.uncommondescent.com…..ent-366926

    That comment does not address the core questions either.

    Finally, I suggest you have a look at the mathematically rigorous paper, Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information by William A. Dembski and Robert J. Marks II.

    I’ve read that paper, but it isn’t pertinent to this conversation. The claim from ID proponents is that CSI is an indicator of intelligent agency. I would like to test this claim. To do so, I need a rigorous mathematical definition of CSI and some examples of how to calculate it. That information has been surprisingly difficult to obtain. References like these are interesting, but do not address my core questions.

    Are you claiming that Abel’s functional sequence complexity metric is a reliable indicator of the involvement of intelligent agency?

    Yes.

    Thank you for the direct response. If a few other ID proponents agree with you, I’ll look into testing that claim. In the meantime I’ll continue to focus on CSI since that is the more widely recognized metric. I hope you can understand that I’m hesitant to spend much time on analyzing Abel’s metric if it is possible that after testing it my results will be dismissed with “That’s not what we mean by CSI.”

    I strongly suggest you read the paper, Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information by William A. Dembski and Robert J. Marks II

    I’ve read it. I would be happy to discuss the issues I have with it once we’ve reached some resolution on the CSI issue. Unfortunately, I don’t have an unlimited amount of time for discussions such as these.

  277. This is a strange conversation, almost bizarre. I think it’s very easy to show Mathgrrl a few examples of CSI, so why don’t you just do it? Before it starts to look as if there are no easy calculations, I’d better go ahead.
    So, Mathgrrl, if you persist and continue to ask for calculations, at some point you will probably be told something like this:

    1AFOK1HI917ZHG0LQBMNHJI4FGHE67HZ82HJT5RT8U54FV
    How would you want to evolve this? Impossible. And still it is the WPA2 password of my wireless network. It therefore has digital specified functional complex information (DSFCI), which is easily above the probability threshold. No evolutionary algorithm would be able to compute this code in a reasonable time (say, 6000 years).

    At this point you will probably answer that replicating organisms don’t have to match a specific key. KF will tell you that life sits on sparse islands of functionality, which you will dispute based on papers which show binding action in random libraries of proteins.
    Slowly the discussion will move away from the original question. Be prepared to be asked where the proteins come from and what’s up with the fine tuning of our universe.
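    A quick back-of-the-envelope check of the size of that key space, assuming the key draws only on the 26 uppercase letters and 10 digits that appear in it:

```python
import math

# Search space for a 46-character key over an assumed 36-symbol alphabet
alphabet = 36
length = 46

bits = length * math.log2(alphabet)        # entropy of the key in bits
magnitude = length * math.log10(alphabet)  # the same size as a power of ten

print(round(bits))       # about 238 bits
print(round(magnitude))  # about 10^72 possible keys
```

    A space of roughly 10^72 keys is indeed far beyond any blind search, which is the point the comment anticipates.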

  278.

    Indium,

    “This is a strange conversation, almost bizarre.”

    Okay, I admit to it … I was the prompter of the bizarre.

    :)

    I am just bizarre enough to realize that the symbols under discussion, changing in sequence as they must, mean absolutely nothing (mathematically or otherwise) without them being mapped to the objects they represent. Without those associations, which cannot be taken for granted under any rigorous examination, the entire issue falls apart and becomes moot. That includes the calculations she is attempting to make.

    After seeing that Mathgrrl had already been given the calculations she sought, I was prepared to bring this fact (as in “empirical reality”) to her fleeting attention, but she referenced her Darwinian Flowchart and has since responded that such empirical realities are not important.

    I thought they were. My mistake.

  279. After seeing that Mathgrrl had already been given the calculations she sought

    Correct me if I am wrong, but the calculations that were given to her were two raised to the power of the length of the genome in question. If so, that is fine to meet the challenge. However, she did raise a question that I would like to see the answer to. If CSI is calculated as two to the power of the genome length AND a gene duplication event can increase the length of the genome, then it would seem a known evolutionary mechanism can create CSI?

    Thanks, guys!
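    The question above can be illustrated with a toy calculation. This assumes the “two to the power of the sequence length” reading of CSI discussed in this thread; the sequences themselves are invented:

```python
# Toy illustration of the "2^(sequence length)" reading of CSI
def csi(sequence):
    return 2 ** len(sequence)

gene = "AGTCGAGTTC"               # ten-base stand-in gene
genome = "CCGTAGGTAC" * 3 + gene  # toy genome containing one copy
duplicated = genome + gene        # the same genome after a duplication

# The duplicated genome scores 2^len(gene) times higher, so under this
# measure a duplication event does increase CSI.
ratio = csi(duplicated) // csi(genome)
print(ratio)  # 2**10 = 1024
```

    Under that reading, then, the answer is yes by construction: any event that lengthens the genome raises the number.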

  280. jon specter- If you have a copy of a dictionary and I give you another copy of the same dictionary, do you have twice the information or just two dictionaries with the same information?

    That said, there isn’t any evidence that gene duplications are blind watchmaker processes.

    MathGrrl doesn’t understand that.

    And if gene duplications are part of the programming, then they don’t increase the information- they are part of it. If I can find the page in “Signature in the Cell” that goes over that I will post it.

    If you have a copy of a dictionary and I give you another copy of the same dictionary, do you have twice the information or just two dictionaries with the same information?

    If the measure of information is two raised to the power of the number of characters in the dictionary, then yes, there is more information in two dictionaries. Are you sure you are not confusing information and meaning?

    That said, there isn’t any evidence that gene duplications are blind watchmaker processes.

    Well, gene duplications have been observed. Has anyone seen a big finger come out of the sky at the same time? LOL.

    I’m on your side here, but this analogy doesn’t help.

  282.

    jon specter

    “…then it would seem a known evolutionary mechanism can create CSI?”

    I am certainly prepared to have the point disregarded, yet again.

    To say that something can create information is to say that something can create the symbolic representations (mapping of symbol to object) necessary for information to exist in the first place. You may attack this argument by showing any example from anywhere in the cosmos where information exists by any means other than through symbolic representation.

    If I build a box, fill it with the letters of the alphabet, and give it the potential to spit them out in abundant combinations, then at the point it spits out a combination that is recognizable as a word, you cannot say that it “created information”.

    It doesn’t have the capacity to do so, and neither does the genome.

  283. MathGrrl:

    A word of advice about good manners. When someone spends several hours of their valuable time trying to answer another person’s query, it is extremely discourteous of the person making the query to say in reply: “I’m afraid your response doesn’t address the issue,” or “I’ve read that paper, but it isn’t pertinent to this conversation.” (Another word of advice: when making a sweeping claim, one should be prepared to substantiate it. It would add to your credibility if you could explain why you thought it was not pertinent, by citing a passage from the paper itself. Casual put-downs can backfire; they make you sound like you’re bluffing.) Finally, to add insult to injury, you write: “I don’t have an unlimited amount of time for discussions such as these.” Gee, thanks. I’m not a mathematician, I’m doing this research as an act of courtesy, and I really don’t appreciate being kicked in the teeth. Speaking of time, how long did it take you to dash off that response? Ten minutes?

    The fact is that you and I have different perceptions of what “the issue” is. For me, the issue is a simple one: has Professor Dembski provided a mathematically rigorous definition of “information” which shows that Darwinian evolution is incapable of creating information? That was your original question. The answer I gave you was: Yes. I referred you to Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information by William A. Dembski and Robert J. Marks II in my previous post. I even copied out the definition of information that Dembski was using, so you wouldn’t have to look it up. By any normal standard of discourse, my response was a helpful one.

    You have chosen to turn your nose up at this response, however, simply because the definition that the paper in question employs isn’t the same as Dembski’s original definition of CSI. Frankly, I am not amused. Your fixation with this particular definition is puzzling. There are dozens of mathematically rigorous definitions of “information” in the literature. If Dembski can demonstrate his contention that Darwinian evolution is incapable of creating information using any one of those definitions, that should be enough.

    One of the scenarios you listed was gene duplication. I provided you with links to two articles by Jonathan M., showing that it doesn’t result in an increase of information. Your response? “What, specifically, is wrong with the specification I provide for the gene duplication scenario?” Que?

    Since you are fixated with CSI, I shall refer you to Dembski’s paper, Specification: The Pattern that Signifies Intelligence. Let’s see how much you really know.

    On page 24 of his paper, Dembski defines the specified complexity Chi (minus the context sensitivity) as -log2[(10^120).Phi_s(T).P(T|H)],
    where T is the pattern in question, H is the chance hypothesis and Phi_s(T) is the number of patterns for which agent S’s semiotic description of them is at least as simple as S’s semiotic description of T.

    Now suppose a gene in an organism gets duplicated. Humans have about 30,000 genes. Duplication of just one gene in the human genome will very slightly lengthen the semiotic description of the genome. If we let (AGTCGAGTTC) denote the random sequence of bases along the gene in question, and …….. signify the rest (which are also random, let’s say), then the description of the duplicated genome will be ……..(AGTCGAGTTC)x2 instead of ……..(AGTCGAGTTC). In other words, we’re just adding two characters to the description, which is negligible.

    OK. What about P(T|H)? Given the chance hypothesis, and the length of the human genome (about 3×10^9 base pairs), the probability of a particular genome sequence is 1 in 4^(3,000,000,000). A gene has about 100,000 base pairs, so for a genome with a duplicated gene, P(T|H) is 1 in 4^(3,000,100,000).

    If the original genome is random, then Phi_s(T) will be 4^(3,000,000,000). In section 4, Dembski says that for a random, maximally complex sequence, Phi_s(T) is identical to the total number of possibilities, because the sequence is incompressible. For the duplicated genome, Phi_s(T) will be about the same, since our description is virtually identical (all we added to the verbal description was an “x2″, despite the fact that we now have an extra 100,000 bases in the duplicated genome).

    For the original genome, Phi_s(T).P(T|H) equals 4^(3,000,000,000)/4^(3,000,000,000), which equals 1. For the duplicated genome, Phi_s(T).P(T|H) equals 4^(3,000,000,000)/4^(3,000,100,000), which equals 1/4^(100,000). 4 is about 10^0.60206, so 1/4^(100,000) is approximately 10^-60206.

    For the original genome, Chi = -log2[(10^120).Phi_s(T).P(T|H)] = -log2[(10^120)]. log2(10)=3.321928094887362, so 10^120=(2^3.321928094887362)^120, or 2^398.631371. Hence Chi = -log2[(10^120)] or -398.631371.

    For the duplicated genome, Chi = -log2[(10^120).10^-60206], or -log2[10^-60086], which is -log2[(2^3.32192809488736)^-60086], or -log2[2^-199601], or 199,601.
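    This arithmetic can be double-checked in log space (the powers of 4 involved are far too large to evaluate directly). A minimal sketch, using the same inputs as above (a 3×10^9-base genome and a 100,000-base duplicated gene):

```python
import math

LOG2_10 = math.log2(10)

def chi(log2_phi, log2_p):
    # Chi = -log2(10^120 * Phi_s(T) * P(T|H)), evaluated in log space
    return -(120 * LOG2_10 + log2_phi + log2_p)

L = 3_000_000_000  # genome length in bases
dup = 100_000      # bases added by the duplicated gene

# Original genome: Phi_s(T) = 4^L and P(T|H) = 4^-L, so their logs cancel
chi_orig = chi(2 * L, -2 * L)
# Duplicated genome: Phi_s(T) essentially unchanged, P(T|H) = 4^-(L + dup)
chi_dup = chi(2 * L, -2 * (L + dup))

print(round(chi_orig, 2))  # about -398.63
print(round(chi_dup))      # about 199601
```

    The value of Chi for the duplicated genome is driven entirely by the 200,000 bits contributed by the extra 100,000 bases.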

    Now on page 24, Dembski defines a specification as a pattern whose specified complexity is greater than 1. For such a specification, we can eliminate chance.

    I note that for the duplicated genome, the specified complexity Chi is much greater than 1, so Dembski’s logic seems to imply that any instance of gene duplication is the result of intelligent agency and not chance. And it would be, if we imagine that each extra base in the duplicated genome was added to the original genome, one at a time. For the odds of adding 100,000 bases independently, which just happened to perfectly match the 100,000 bases they were sitting next to, would be staggering. But that’s not how gene duplication works. Rather, a whole gene is copied, “holus bolus”, and the copy usually sits next to the original. The extra bases are not added independently, in separate events, but together, in a single copying event. And although the occurrence of the copying event at this particular point along the human genome may well be random, the actual copying process itself is law-governed, and hence not random.

    I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs.

    I am therefore not surprised that Jonathan M. used another measure of information in his papers discussing gene duplication: a gain in exonic information, which is defined as “[t]he quantitative increase in operational capability and functional specificity with no resultant uncertainty of outcome.”

    Well, I’ve done the hard work. I hope you will be convinced now that fixating on a single measure of information is unhelpful.

  284. Numbers have to do with “complexity”. There isn’t a problem here with “complexity”, except for a lack of it, generally; that is, most sequences discussed don’t exceed the UPB of 10^150. Even the sequence that Indium gives is around 10^70 in complexity.

    The problem always is with the notion of “specificity”. Not just any pattern will do. Patterns have to be convertible, translatable, functional.

    Take Dembski’s example of the first 100 binary digits appearing as a random sample. What distinguishes the sequence, and makes it “patterned”, is that you can “convert”/“translate” it into digits 0-99.

    Indium’s sequence is convertible—into binary form—but this is trivial. It is essentially functional and translatable. Its function is obvious, and it’s translatable, because his WAN can recognize it, and translate it (under its binary form) into the binary sequence allowable for access. Random patterns are complex, but not specific.

    And MathGrrl’s heroes don’t seem to understand this. Yet, they have no excuse for not fully understanding Dembski’s exposition of it in NFL. It’s hard work; but it’s doable. However, if you don’t care to understand, then you will give up before you do.

  285.

    UB:

    To say that something can create information is to say that something can create the symbolic representations (mapping of symbol to object) necessary for information to exist in the first place.

    I do appreciate you going to the effort to explain it, although I did understand what you were saying earlier. But, it seems you are doing the same thing Joe is doing by mixing information and meaning.

    I am kinda new to this, so let me lay out my understanding of what mathgirl is saying and you can correct me where I am wrong. I apologize for the simplistic approach I am taking, but moving forward methodically is my best chance for understanding. In short, the Y/N format is about me, not you.

    1. The CSI of a genome sequence is two raised to the power of the length of the sequence. (Yes/No)

    2. Any two gene sequences of the same length have the same amount of CSI. (Yes/No)

    3. Any two gene sequences of different lengths have different amounts of CSI. (Yes/No)

    4. The longer gene sequence will have more CSI. (Yes/No)

    You may attack this argument by showing any example from anywhere in the cosmos where information exists by any means other than through symbolic representation.

    Believe it or not, I am on your side here. I am not attacking anything, I am only trying to understand. I’d like to see mathgirl slink away with her (prehensile?) tail between her legs as much as the next guy. But, victory is secondary to my understanding. So, if you could indulge my 4 Y/N questions, I would much appreciate it.

  286. jon specter:

    If the measure of information is two raised to the power of the number of characters in the dictionary, then yes there is more information in two dictionaries. Are you sure you are not confusing information and meaning?

    Wow- with ID, information and meaning are inseparable.

    You are confusing Shannon Information with Complex Specified Information.

    Well, gene duplications have been observed. Has anyone seen a big finger come out of the sky at the same time?

    Do computers come with programmers or programs? Do you need a programmer to do your spellchecking or does the program suffice?

  287.

    Joseph:

    Wow- with ID information and meaning are inseparable.

    We may be getting somewhere here. Two questions:

    1. Do you agree that the amount of CSI in a sequence is measured by 2^(length of the sequence)? Y/N

    2. Does non-coding DNA have meaning? Y/N

  288. Joseph:
    Duplication events are often followed by a later divergence of the two siblings. So even if the duplication alone will not create information in your sense, the divergence will do the trick.
    Also, the duplicated part of the genome already starts on one of the famous islands of functionality and can begin the mutational trajectory from a “working” starting point.
    So, just in principle, the process “gene duplication+divergence” can generate new information. Please note that (at this point) I am not saying that this is a reasonable model of evolution. I am just saying here that there are evolutionary processes which could *in principle* generate new information.

  289. Indium,

    1- You need to get binding sites with the gene duplication

    2- You need to then get that protein integrated into the existing system, otherwise you just have a protein diffusing through the cell able to cause problems.

    3- There isn’t any evidence that blind watchmaker processes can do such a thing

    4- In a design scenario the information is in the program, and if the program says to duplicate and diverge then it ain’t new nor an increase in information.

  290. jon specter:

    1. Do you agree that the amount of CSI in a sequence is measured by 2^(length of the sequence)? Y/N

    Unfortunately it ain’t that black and white.

    In biology, specified complexity refers to function.

    2. Does non-coding DNA have meaning? Y/N

    It is possible.

  291. Joseph,

    I don’t understand your points 1+2. The duplicated gene at first codes for the same protein as the original one. Changes to the gene will sometimes (most of the time, really) not completely destroy the binding efficiency and ultimately sometimes also lead to new binding effects. At the same time, it’s no longer a duplicate, so you have to take it into account when calculating your DFCSID.
    Anyway: My main point is undisputed by your arguments: Evolutionary processes can *in principle* generate new information. It might be difficult, it might be rare, it might take a long time. But it is *in principle* possible.

    If you want to exclude parts of your sequence which originated from a duplication, you would have to check whether parts of your original sequence are not ultimately the result of gene duplication, too, and also remove these parts from your calculation. Ultimately, you would have to analyse the evolutionary history of each part of the genome in question to see if there isn’t a relatively simple evolutionary explanation, before applying your 2^N formula.

    Points 3+4 don’t address my argument at all. I specifically stated that I argue only for one specific thing here: is it in principle possible that evolutionary processes can generate new information as per your definition? Duplication and divergence easily do the job (in theory).
    In practice, I agree, there might be difficulties! ;-)

  292.

    1. Do you agree that the amount of CSI in a sequence is measured by 2^(length of the sequence)? Y/N

    Unfortunately it ain’t that black and white.

    In biology, specified complexity refers to function.

    Then why are people upthread calculating CSI by that formula? Are you saying they are wrong? What formula do you use?

    2. Does non-coding DNA have meaning? Y/N

    It is possible.

    An unhelpful answer. To the extent that it is even an answer. Nothing personal, but I think I may just wait for one of the ID scientists to answer.

  293. Indium,

    So, Mathgrrl, if you persist and continue to ask for calculations at some point you will probably be told something like this:

    1AFOK1HI917ZHG0LQBMNHJI4FGHE67HZ82HJT5RT8U54FV
    How would you want to evolve this? Impossible. And still it is the WPA2 password of my wireless network.

    This is, as I’m sure you know, an excellent example of a fitness landscape that evolutionary mechanisms are not well suited to traverse. I’ve heard cryptographers refer to it as “a needle standing on end in a desert.”

    At this point you will probably answer that replicating organisms don’t have to match a specific key. KF will tell you that life sits on sparse islands of functionality, which you will dispute based on papers which show binding action in random libraries of proteins.
    Slowly the discussion will move away from the original question. Be prepared to be asked where the proteins come from and what’s up with the fine tuning of our universe.

    I find your prognostications interesting and would like to subscribe to your newsletter. ;-)

  294. Joseph,

    If you have a copy of a dictionary and I give you aother copy of the same dictionary, do you have twice the information or just two dictionaries with the same information?

    As Jon has already pointed out, if your definition of “information” is two raised to the power of the length of the sequence under consideration, then the answer is yes. That’s why I asked for an example of how to calculate CSI in that particular scenario.

    Your analogy has another problem, though. Having an additional copy of a gene can result in greater production of a particular protein. That, in turn, can change the chemical reactions that take place in a cell. That is why I included the specification of “Produces X amount of protein Y.” A simple gene duplication, even without subsequent modification of the duplicate, can increase production from less than X to greater than X.

    By the definitions provided thus far, this shows that known evolutionary mechanisms can generate CSI. If you disagree, please show your work.

  295. vjtorley,

    When someone spends several hours of their valuable time trying to answer another person’s query, it is extremely discourteous of the person making the query to say in reply: “I’m afraid your response doesn’t address the issue,” or “I’ve read that paper, but it isn’t pertinent to this conversation.”

    I genuinely appreciate your effort to discuss this with me and I never intended to give any impression otherwise. Please allow me to expand on my reasons for posting those responses to your previous message.

    Throughout this thread I have been attempting to get straightforward answers to what I believe are straightforward, even simple questions. The most basic is “What is the mathematically rigorous definition of CSI?” In order to guarantee that I understand any definition offered, I provided four scenarios (see messages 177 and 275 above) and asked to be shown exactly how to calculate CSI for them.

    As you can see, the only response remotely like a mathematical definition seems to boil down to “two to the power of the length of the genome” and no one has shown how to apply it to my scenarios (until your most recent post, of course).

    In your previous message you clearly spent a lot of time talking about the issues related to CSI. I have read most of the material you referenced and would find it to be an interesting topic of conversation in another context. My concern here, based on lurking on UD for some time prior to participating, is that all of that interesting content could easily derail the core topic I’m trying to reach resolution on, namely the mathematical definition of CSI and how to apply it in my four scenarios.

    I hope you understand that my response was based on my focus on this goal, not on any intention to be rude to you.

    Now, I hope you’re still reading because I’m delighted to get on to the real meat of your response.

    Since you are fixated with CSI, then I shall refer you to Dembski’s paper, Specification: The Pattern that Signifies Intelligence .

    Excellent, I’m familiar with that paper.

    Now suppose a gene in an organism gets duplicated. Humans have about 30,000 genes. Duplication of just one gene in the human genome will very slightly lengthen the semiotic description of the genome. If we let (AGTCGAGTTC) denote the random sequence of bases along the gene in question, and …….. signify the rest (which are also random, let’s say), then the description of the duplicated genome will be ……..(AGTCGAGTTC)x2 instead of ……..(AGTCGAGTTC). In other words, we’re just adding two characters to the description, which is negligible.

    Why are you using “x2” instead of the actual sequence? Using the “two to the power of the length of the sequence” definition of CSI, we should be calculating based on the actual length. I can see how Kolmogorov-Chaitin complexity might make more sense, but that’s not what I see ID proponents using.
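    The difference between the two measures can be sketched quickly. Below is a rough illustration (not anything from Dembski's paper) that uses zlib compression as a crude stand-in for Kolmogorov-Chaitin description length; the sequences are made up:

    ```python
    import random
    import zlib

    random.seed(0)

    def naive_bits(seq: str) -> int:
        # The "two to the power of the length" measure: four equiprobable
        # bases give 2 bits per base, so the bit length is just 2 * len(seq).
        return 2 * len(seq)

    def compressed_bits(seq: str) -> int:
        # Crude stand-in for Kolmogorov-Chaitin description length: the size
        # of the zlib-compressed sequence, in bits.
        return 8 * len(zlib.compress(seq.encode(), 9))

    gene = "".join(random.choice("ACGT") for _ in range(10_000))  # random "gene"
    duplicated = gene + gene  # genome after a single duplication event

    print(naive_bits(gene), naive_bits(duplicated))            # doubles exactly
    print(compressed_bits(gene), compressed_bits(duplicated))  # grows only slightly
    ```

    Under the naive measure the duplicated sequence scores exactly twice as many bits; under the compression-based measure it scores only marginally more, because the second copy can be described as "repeat the first", which is exactly the point of the "x2" shorthand.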

    Now on page 24, Dembski defines a specification as a pattern whose specified complexity is greater than 1. For such a specification, we can eliminate chance.

    I note that for the duplicated genome, the specified complexity Chi is much greater than 1, so Dembski’s logic seems to imply that any instance of gene duplication is the result of intelligent agency and not chance.

    I didn’t double check your math, but the orders of magnitude seem about right. I agree with you that by Dembski’s logic a gene duplication event generates CSI.

    And it would be, if we imagine that each extra base in the duplicated genome was added to the original genome, one at a time. For the odds of adding 100,000 bases independently, which just happened to perfectly match the 100,000 bases they were sitting next to, would be staggering. But that’s not how gene duplication works.

    Absolutely! I actually got to this point with gpuccio over on Mark Frank’s blog. Understanding the actual probabilities of a particular sequence arising requires knowledge of the history of the evolution of that sequence. That’s not a surprise, really.
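    The "staggering odds" point can be made concrete with a quick log-probability comparison. This is only an illustration: the 1e-7 duplication rate below is an assumed, hypothetical figure, not a measured value.

    ```python
    import math

    n_bases = 100_000

    # If each of the 100,000 extra bases were added independently and each had
    # to match the base it sits next to (probability 1/4 per base), the
    # combined probability is (1/4)^100000:
    log10_p_independent = n_bases * math.log10(1 / 4)

    # If instead the whole block is copied in a single duplication event, with
    # an assumed (hypothetical) per-generation duplication rate of 1e-7:
    log10_p_duplication = math.log10(1e-7)

    print(f"independent additions: ~10^{log10_p_independent:.0f}")
    print(f"single duplication:    ~10^{log10_p_duplication:.0f}")
    ```

    The first number is on the order of 10^-60,206; the second is 10^-7. A probability calculation that ignores the copying mechanism is off by roughly sixty thousand orders of magnitude.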

    The problem, which you have identified as well, is that not even the informally defined versions of CSI used by ID proponents take this into consideration. Dembski doesn’t in No Free Lunch or in this paper, and none of the other participants in this thread have yet done so.

    I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs.

    I agree completely.

    Well, I’ve done the hard work. I hope you will be convinced now that fixating on a single measure of information is unhelpful.

    Thank you, very sincerely, for your effort. The problem is, I’m not the one you need to convince.

    There are a number of ID proponents who continue to claim that CSI is a clear and unambiguous indicator of intelligent agency. They are the ones who need to be convinced to stop making those claims, which you yourself have shown to be unsupported, unless and until someone comes up with an alternative metric.

  296. MathGrrl:
    “Based on your calculations for titin, it seems to me that your calculation of CSI suffers from a similar problem as does that suggested by kairosfocus (two raised to the power of the length of the artifact). As I said in my response to him, if this is your definition of CSI, known evolutionary mechanisms are demonstrably capable of generating it in both real and simulated environments.”

    … and herein lies the problem. It appears that you have misunderstood almost everything or skipped over a huge portion of what Dembski has written in the aforementioned paper, that KF has very patiently explained above, and that I have both explained and provided calculations for in my provided links.

    I’m pretty sure it’s impossible to miss, based on everything that has been stated and calculated by three sources (Dembski, KF, and myself), that CSI is the measure of bit length you refer to (given a uniform probability distribution), compared against both the *ratio of function to non-function in the search space* and the *universal probability bound.*

    So, yes, discovering the Shannon information of the pattern/event in question, as you point out, is definitely part of the equation, but you are missing the other vital two thirds: search space and probability bound. And I can’t see any excuse for this if you actually did read through Dembski’s paper, KF’s discussions, and my explanations and calculations. It’s all right there. I don’t mean to sound demeaning in any way; I’m just a little confused as to how you missed two thirds of the equation for, and concept of, CSI when it has been explained quite thoroughly by three different sources. If you have any more questions, please ask.

    In the end, programming a CSI calculator would not be difficult, as the only calculations required are multiplication and logarithms. Asking the right questions so that a user inputs the proper values in the proper places might be harder, as would the empirical research necessary to find and defend the numbers plugged into the variables. But as a few people, including myself, have shown, it is possible.
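    For what it’s worth, a minimal sketch of such a calculator, using the formula from Dembski’s Specification paper cited earlier in the thread; the input numbers below are made up purely for illustration:

    ```python
    import math

    def chi(phi_s: float, p: float, replication_resources: float = 1e120) -> float:
        # Specified complexity per Dembski's "Specification" paper:
        #   Chi = -log2(10^120 * phi_S(T) * P(T|H))
        # phi_s: number of patterns at least as simple to describe as the observed one
        # p:     probability of the observed pattern under the chance hypothesis
        # A pattern counts as a specification when Chi > 1.
        return -(math.log2(replication_resources) + math.log2(phi_s) + math.log2(p))

    # Illustrative (made-up) inputs: a 1000-bit pattern under a uniform chance
    # hypothesis (p = 2^-1000), with phi_S = 10^20:
    print(chi(1e20, 2.0 ** -1000))  # comes out well above 1
    ```

    As the comments note, the arithmetic itself is trivial; the contested part is justifying the values fed into phi_s and p.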

    Furthermore, if you will read through my provided links above again, you will notice that I have no qualm with evolutionary mechanisms generating CSI. Intelligence can use whatever means are at its disposal, including evolutionary algorithms, to generate CSI. The problem is whether or not chance and law, absent intelligence, can generate evolutionary algorithms that produce CSI. According to the recent work done by Dembski and Marks, the answer is that the EA itself is at least as improbable as the output of that EA. Therefore, if chance and law can’t generate CSI from scratch, then they won’t be able to generate the evolutionary algorithm that can produce that CSI. Again, I have discussed this in the links I provided earlier.

  297. Surprising and very promising. Everybody (except for Joseph, who seems to have run out of arguments) agrees that evolution can generate CSI. And this on an ID blog! This thread will be a great reference for the future. Maybe you should put this conclusion into the “Glossary” on the front page?

  298. Indium, no one disagrees that, in principle, Darwinian processes can generate ‘Information’. Even Dr. Dembski agrees that, in principle, they can. The only caveat is that if Darwinian processes ever generated ‘Information’ it would have to be by a teleological (of a mind) process;

    “LIFE’S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information”:
    Excerpt: Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms. http://evoinfo.org/publication.....ation-law/

    Yet, the whole issue of whether or not to argue if teleological or non-teleological processes are inherent in nature is not even on the table, for no one has ever witnessed the generation of functional information over and above what was already present in life:

    For a broad outline of the ‘Fitness test’, required to be passed to show a violation of the principle of Genetic Entropy, please see the following video and articles:

    Is Antibiotic Resistance evidence for evolution? – ‘The Fitness Test’ – video
    http://www.metacafe.com/watch/3995248

    Michael Behe on Falsifying Intelligent Design – video
    http://www.youtube.com/watch?v=N8jXXJN4o_A

    Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology
    Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution.
    http://www.evolutionnews.org/2.....s_wro.html

    Testing Evolution in the Lab With Biologic Institute’s Ann Gauger – podcast with link to peer-reviewed paper
    Excerpt: Dr. Gauger experimentally tested two-step adaptive paths that should have been within easy reach for bacterial populations. Listen in and learn what Dr. Gauger was surprised to find as she discusses the implications of these experiments for Darwinian evolution. Dr. Gauger’s paper, “Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness,”.
    http://intelligentdesign.podom.....4_13-07_00

    The main problem, for the secular model of neo-Darwinian evolution to overcome, is that no one has ever seen purely material processes generate functional ‘prescriptive’ information.

    The Capabilities of Chaos and Complexity: David L. Abel – Null Hypothesis For Information Generation – 2009
    To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: “Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.” A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.
    http://www.mdpi.com/1422-0067/10/1/247/pdf

    The Sheer Lack Of Evidence For Macro Evolution – William Lane Craig – video
    http://www.metacafe.com/watch/4023134/

    Thus Indium, you may want to take false comfort in the fact that Darwinian evolution is possible in principle, but it would be a very shallow, and dishonest, comfort since you have no violations of genetic entropy to even show an increase of ‘non-trivial’ information above that which was already present!

    —–

    ‘If I find in myself desires which nothing in this world can satisfy, the only logical explanation is that I was made for another world.’ – C.S. Lewis

    Brooke Fraser- “C S Lewis Song”
    http://www.youtube.com/watch?v=GHpuTGGRCbY

    Hope
    http://vimeo.com/10193827

  299.

    Indium, no one disagrees that, in principle, Darwinian processes can generate ‘Information’. Even Dr. Dembski agrees that, in principle, they can. The only caveat is that if Darwinian processes ever generated ‘Information’ it would have to be by a teleological (of a mind) process

    That is very interesting. How does one differentiate a teleological Darwinian process from one that isn’t?

  300. jon specter, you ask,

    ‘How does one differentiate a teleological Darwinian process from one that isn’t?’

    That is an interesting question that never gets asked of evolutionary processes, since neo-Darwinists do not have any examples of evolution that have ever generated information over and above what was already present. In fact, all beneficial adaptations I have ever been made aware of end up occurring because of a loss, or ‘adjustment’, of pre-existent functional information.

    “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
    http://behe.uncommondescent.co.....evolution/

    Yet, jon specter, if an example of evolution ever did occur that generated non-trivial information over and above what was already present, and one were to ask whether it was teleological evolution or not, one would first have to ask a more basic question: whether the foundation of reality one is measuring from is teleological in its nature or not. Which in fact turns out to be the case;

    Alain Aspect and Anton Zeilinger by Richard Conn Henry – Physics Professor – Johns Hopkins University
    Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the “illusion” of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one’s own mind is sure to exist). (Dr. Henry’s referenced experiment and paper – “An experimental test of non-local realism” by S. Gröblacher et. al., Nature 446, 871, April 2007 – “To be or not to be local” by Alain Aspect, Nature 446, 866, April 2007 (personally I feel the word “illusion” was a bit too strong from Dr. Henry to describe material reality and would myself have opted for his saying something a little more subtle like; “material reality is a “secondary reality” that is dependent on the primary reality of God’s mind” to exist. Then again I’m not a professor of physics at a major university as Dr. Henry is.)
    http://henry.pha.jhu.edu/aspect.html

    Of note, jon, this is from the quantum mechanical perspective, in which we find that God ‘sustains’ reality. But from a General Relativity perspective we find that God ‘created’ the universe. Moreover, God created from a baseline of thermodynamics that prevents any ‘future’ space-time materialistic processes of general relativity from ever generating non-trivial functional information on their own;

    The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose
    Excerpt: “The time-asymmetry is fundamentally connected with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the “source” of the Second Law (Entropy).”

    How special was the big bang? – Roger Penrose
    Excerpt: This now tells us how precise the Creator’s aim must have been: namely to an accuracy of one part in 10^10^123.
    (from the Emperor’s New Mind, Penrose, pp 339-345 – 1989)
    http://www.ws5.com/Penrose/

    Evolution is a Fact, Just Like Gravity is a Fact! UhOh!
    Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged.
    http://www.uncommondescent.com.....fact-uhoh/

    And as far back in the history of the universe as we look, entropy has always been doing its work of increasing disorder, and thus,

    “Gain in entropy always means loss of information, and nothing more.”
    Gilbert Newton Lewis – Eminent Chemist

    jon specter, It is also interesting to note that a lot of theistic evolutionists actually argue from a deistic presupposition that God ‘front-loaded’ information into the initial conditions of the universe. But closer scrutiny reveals this is not possible:

    The Front-loading Fiction – Dr. Robert Sheldon – 2009
    Excerpt: Historically, the argument for front-loading came from Laplacian determinism based on a Newtonian or mechanical universe–if one could control all the initial conditions, then the outcome was predetermined. First quantum mechanics, and then chaos-theory has basically destroyed it, since no amount of precision can control the outcome far in the future. (The exponential nature of the precision required to predetermine the outcome exceeds the information storage of the medium.),,, Even should God have infinite knowledge of the outcome of such a biological algorithm, the information regarding its outcome cannot be contained within the (initial conditions of the) system itself.
    http://procrustes.blogtownhall.....tion.thtml

    further notes:

    The Center Of The Universe Is Life – General Relativity, Quantum Mechanics, Entropy and The Shroud Of Turin – video
    http://www.metacafe.com/w/5070355

    A Quantum Hologram of Christ’s Resurrection? by Chuck Missler
    Excerpt: “You can read the science of the Shroud, such as total lack of gravity, lack of entropy (without gravitational collapse), no time, no space—it conforms to no known law of physics.” The phenomenon of the image brings us to a true event horizon, a moment when all of the laws of physics change drastically.
    http://www.khouse.org/articles/2008/847

    Turin Shroud Enters 3D Age – Pictures, Articles and Videos
    https://docs.google.com/document/pub?id=1gDY4CJkoFedewMG94gdUk1Z1jexestdy5fh87RwWAfg

  301.

    Bornagain77,

    That is all very interesting, but how do you differentiate a teleological Darwinian process from one that isn’t?

  302. jon specter, Isn’t it obvious??? As I pointed out, if you want to answer that question you must look at the basis of reality!!! Since the basis of reality is found to be teleological/Theistic (logos; information theoretic) in nature, that answers the question!!! If a ‘Darwinian’ process of a gradual increase of functional information ever did occur, it would ultimately have to be attributable to a teleological process, since reality is shown to be Theistic, not ‘materialistic’, in its basis;

    notes;

    falsification of reductive materialism;

    The Failure Of Local Realism – Materialism – Alain Aspect – video
    http://www.metacafe.com/w/4744145

    Physicists close two loopholes while violating local realism – November 2010
    Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview.
    http://www.physorg.com/news/20.....alism.html

    Ions have been teleported successfully for the first time by two independent research groups
    Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was destroyed),,,
    http://www.rsc.org/chemistrywo.....ammeup.asp

    Atom takes a quantum leap – 2009
    Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,,
    “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.
    http://www.freerepublic.com/fo.....1769/posts

    Quantum no-hiding theorem experimentally confirmed for first time
    Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. (This experiment provides experimental proof that the teleportation of quantum information must be instantaneous in this universe.)
    http://www.physorg.com/news/20.....tally.html

    etc.. etc.. etc..

    Falsification of non-reductive materialism;

    THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010
    Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel’s critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes.
    http://www.faqs.org/periodical.....27241.html

    Gödel’s Incompleteness: The #1 Mathematical Breakthrough of the 20th Century
    Excerpt: Gödel’s Incompleteness Theorem says:
    “Anything you can draw a circle around cannot explain itself without referring to something outside the circle – something you have to assume to be true but cannot prove “mathematically” to be true.”
    http://www.cosmicfingerprints......pleteness/

    The following site is an easy-to-use and easy-to-understand interactive website that takes the user through what is termed ‘Presuppositional apologetics’. The website clearly shows that our use of the laws of logic, mathematics, science and morality cannot be accounted for unless we believe in a God who guarantees our perceptions and reasoning are trustworthy in the first place.

    Proof That God Exists – easy to use interactive website
    http://www.proofthatgodexists.org/index.php

    Nuclear Strength Apologetics – Presuppositional Apologetics – video
    http://www.answersingenesis.or.....pologetics

    Materialism simply dissolves into absurdity when pushed to extremes, and certainly offers us no guarantee that our perceptions and reasoning within science are trustworthy in the first place:

    Dr. Bruce Gordon – The Absurdity Of The Multiverse & Materialism in General – video
    http://www.metacafe.com/watch/5318486/

    BRUCE GORDON: Hawking’s irrational arguments – October 2010
    Excerpt: The physical universe is causally incomplete and therefore neither self-originating nor self-sustaining. The world of space, time, matter and energy is dependent on a reality that transcends space, time, matter and energy. This transcendent reality cannot merely be a Platonic realm of mathematical descriptions, for such things are causally inert abstract entities that do not affect the material world. Neither is it the case that “nothing” is unstable, as Mr. Hawking and others maintain. Absolute nothing cannot have mathematical relationships predicated on it, not even quantum gravitational ones. Rather, the transcendent reality on which our universe depends must be something that can exhibit agency – a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. This is what “breathes fire into the equations and makes a universe for them to describe.,,, the evidence for string theory and its extension, M-theory, is nonexistent; and the idea that conjoining them demonstrates that we live in a multiverse of bubble universes with different laws and constants is a mathematical fantasy. What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse – where it is alleged that anything can spontaneously jump into existence without cause – produces a situation in which no absurdity is beyond the pale.
    For instance, we find multiverse cosmologists debating the “Boltzmann Brain” problem: In the most “reasonable” models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science.
    http://www.washingtontimes.com.....arguments/

    etc.. etc.. etc..

    Perhaps, jon, a simpler way for you to understand this is to ask: can you please tell me how a Darwinian process could possibly be construed as ‘non-teleological’ in a Theistic universe?

  303. jon specter, Isn’t it obvious??? As I pointed out, If you want to answer that question You must look at the basis of reality!!! Since the basis of reality is found to be teleological/Theistic (logos; information theoretic) in nature, then that answers the question!!! If a ‘Darwinian’ process of a gradual increase of functional information ever did occur, it would ultimately have to be attributable to a teleological processes since reality is shown to be Theistic in its basis,,, shown not to be ‘materialistic’ in its basis;

    notes;

    falsification of reductive materialism;

    The Failure Of Local Realism – Materialism – Alain Aspect – video
    http://www.metacafe.com/w/4744145

    Physicists close two loopholes while violating local realism – November 2010
    Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview.
    http://www.physorg.com/news/20.....alism.html

    Ions have been teleported successfully for the first time by two independent research groups
    Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was destroyed),,,
    http://www.rsc.org/chemistrywo.....ammeup.asp

    Atom takes a quantum leap – 2009
    Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,,
    “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.
    http://www.freerepublic.com/fo.....1769/posts

    Quantum no-hiding theorem experimentally confirmed for first time
    Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. (This experiment provides experimental proof that the teleportation of quantum information must be instantaneous in this universe.)
    http://www.physorg.com/news/20.....tally.html

    etc.. etc.. etc..

    Falsification of non-reductive materialism;

    THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010
    Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel’s critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes.
    http://www.faqs.org/periodical.....27241.html

    Gödel’s Incompleteness: The #1 Mathematical Breakthrough of the 20th Century
    Excerpt: Gödel’s Incompleteness Theorem says:
    “Anything you can draw a circle around cannot explain itself without referring to something outside the circle – something you have to assume to be true but cannot prove “mathematically” to be true.”

    This following site is a easy to use, and understand, interactive website that takes the user through what is termed ‘Presuppositional apologetics’. The website clearly shows that our use of the laws of logic, mathematics, science and morality cannot be accounted for unless we believe in a God who guarantees our perceptions and reasoning are trustworthy in the first place.

    Proof That God Exists – easy to use interactive website
    http://www.proofthatgodexists.org/index.php

    Nuclear Strength Apologetics – Presuppositional Apologetics – video
    http://www.answersingenesis.or.....pologetics

    Materialism simply dissolves into absurdity when pushed to extremes and certainly offers no guarantee to us for believing our perceptions and reasoning within science are trustworthy in the first place:

    Dr. Bruce Gordon – The Absurdity Of The Multiverse & Materialism in General – video
    http://www.metacafe.com/watch/5318486/

    BRUCE GORDON: Hawking’s irrational arguments – October 2010
    Excerpt: The physical universe is causally incomplete and therefore neither self-originating nor self-sustaining. The world of space, time, matter and energy is dependent on a reality that transcends space, time, matter and energy. This transcendent reality cannot merely be a Platonic realm of mathematical descriptions, for such things are causally inert abstract entities that do not affect the material world. Neither is it the case that “nothing” is unstable, as Mr. Hawking and others maintain. Absolute nothing cannot have mathematical relationships predicated on it, not even quantum gravitational ones. Rather, the transcendent reality on which our universe depends must be something that can exhibit agency – a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. This is what “breathes fire into the equations and makes a universe for them to describe.,,, the evidence for string theory and its extension, M-theory, is nonexistent; and the idea that conjoining them demonstrates that we live in a multiverse of bubble universes with different laws and constants is a mathematical fantasy. What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse – where it is alleged that anything can spontaneously jump into existence without cause – produces a situation in which no absurdity is beyond the pale.
    For instance, we find multiverse cosmologists debating the “Boltzmann Brain” problem: In the most “reasonable” models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science.

    etc.. etc.. etc..

    Perhaps, jon, a simpler way for you to understand this is: can you please tell me how a Darwinian process could possibly be construed as ‘non-teleological’ in a Theistic universe?

  304.

    Isn’t it obvious???

    Well, no. You questioned what Darwinism can or can’t do, then went on about mind-body duality, quantum mechanics, the second law of thermodynamics, the big bang, gravity, a supposed refutation of front loading, and finally The Shroud of Turin.
    It was a masterpiece of prose, but it did not address what I thought was a relatively simple question, specifically the technique used to differentiate a teleological Darwinian process from a non-teleological event.

    In your latest response, you revisited several of those themes, discussed Gödel’s Incompleteness Theorem and some po-mo stuff about whether our perceptions and observations can be trusted, and attacked the idea of a multiverse. It was another tour de force, to be sure, but it still didn’t answer my questions.

    Your final paragraph came the closest:

    Perhaps, jon, a simpler way for you to understand this is: can you please tell me how a Darwinian process could possibly be construed as ‘non-teleological’ in a Theistic universe?

    First of all, as I have said previously, I am not a Darwinist, so I am under no obligation to provide proof of what I don’t subscribe to. However, your final phrase “how a Darwinian process could possibly be construed as ‘non-teleological’ in a Theistic universe?” seems to suggest that there are, ultimately, no non-teleological processes. If that is the case, it seems to me that the idea of CSI is completely superfluous. If everything is designed, a tool to differentiate design from non-design is unnecessary because it will only ever yield one answer.

  305. jon specter, perhaps you find establishing the Theistic basis of reality to be irrelevant, yet none-the-less, to answer the question you put forth (How does one differentiate a teleological Darwinian process from one that isn’t?) it is necessary to establish precisely that point, to answer whether reality is materialistic or theistic in its basis! As for you saying that CSI is completely superfluous, that is only true if one does not care to see or know where and if God has interceded further in the universe He created and sustains!

  306. vjtorley #283

    I am quite confused by your comment. You write:

    I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs.

    It appears that you believe that CSI is sometimes a useful way to detect design and sometimes not. Is that right?

    If so, if you come across some CSI how do you know if it is a sign of design?

  307.

    Bornagain,

    First of all, let us get one thing out of the way. I don’t appreciate you questioning my relationship with God. That is a matter between Him and me and you would be better served remembering about motes and beams.

    Second, you have twisted my words when you accuse me of saying CSI is superfluous. I said that CSI would be superfluous if everything is designed, which seems to be your argument, as best as I can tell. If everything is designed, you don’t need a design detector.

    At this point, you have committed two slanders against me by questioning my relationship with our Father and misrepresenting my question. I believe you need to make amends if we are going to proceed further.

  308. jon what in the world are you talking about??? Questioning your relationship with God????

    I stated this,,,

    ‘perhaps you find establishing the Theistic basis of reality to be irrelevant, yet none-the-less, to answer the question you put forth (How does one differentiate a teleological Darwinian process from one that isn’t?) it is necessary to establish precisely that point, to answer whether reality is materialistic or theistic in its basis! As for you saying that CSI is completely superfluous, that is only true if one does not care to see or know where and if God has interceded further in the universe He created and sustains!’,,,

    I cannot see where I directly accused you of being a neo-Darwinist/atheist anywhere in that post!!!! As for my comment on CSI being important for Design Detection ON TOP of what God has already created and sustains, I solidly maintain that CSI is very important if one is concerned with detecting design on top of the design ALREADY established in the universe. Perhaps it’s just as well that you go so far off base and accuse me of slander where there was no malicious intent, for this entire thread has been extremely bizarre with respect to reason, and I will just as gladly cease from participating in it.

  309.

    bornagain,

    This statement (especially the bolded part):

    As for you saying that CSI is completely superfluous, that is only true if one does not care to see or know where and if God has interceded further in the universe He created and sustains

    is a backhanded statement about my faith and my interest in understanding intelligent design.

    I solidly maintain that CSI is very important if one is concerned with detecting design on top of the design ALREADY established in the universe.

    This statement is practically nonsensical, but I’ll play along. How does one differentiate between a Darwinian process that only has one level of design and one that has two levels of design?

  310. Markf and Mathgrrl:

    Thank you both for your posts. Sorry for not replying sooner, but I was unable to get home last night, as the earthquake in Japan stopped train services. I spent the night sleeping (or trying to sleep) in a shopping mall near Yokohama station. One thing I learned: newspaper wrapped around your legs does not keep you very warm. Another thing: cold floors are very difficult to sleep on. It’s a good thing they had the air conditioners on, or I would have frozen. The relief workers finally handed out blankets and cardboard for us to sit on around 5 a.m. Everyone remained very calm, and the emergency relief workers did a very professional job. Further north, of course, the tragedy was much, much worse: over 1,000 people died.

    I would invite anyone who thinks ill of homeless people to think again. They don’t have air conditioning, like I did. They really have it rough, night after night after night. I can think of no group which is more deserving of our charity than the homeless.

    Markf, you asked an excellent and very probing question in your last post (#305), which followed up on my post in #283. You wrote:

    It appears that you believe that CSI is sometimes a useful way to detect design and sometimes not. Is that right?

    Actually, what I suspect is that IF the mathematics in my previous post in #283 are correct (and I’m not sure about that), then the definition of CSI may have to be revised somewhat. Right now, I’m asking some people I know to check my calculations, so I’ll get back to you on that one. Stay tuned.

    You also wrote:

    If so, if you come across some CSI how do you know if it is a sign of design?

    A very good question. See above.

    MathGrrl (#295):

    You wrote:

    Why are you using “x2″ instead of the actual sequence? Using the “two to the power of the length of the sequence” definition of CSI, we should be calculating based on the actual length.

    Good question. The “x2″ refers to the semiotic description. Let me put it another way, borrowing an example from the old joke about what dogs understand when their owners are talking: “Blah Blah Blah Blah Ginger Blah Blah” – except that in this case the “Blah” is not repetitive. In the original, it’s a long random string, then the gene that gets duplicated, and then more random stuff. And the gene that gets duplicated is itself a random string. To make things easier to visualize, I imagined that the duplicated gene was right at the end. I wrote the random stuff as “!@#$%^” even though of course it’s all A’s G’s, T’s and C’s. I wrote the gene itself as (AGTCGAGTTC), even though a real gene has about 100,000 bases (and of course it’s random too). Thus after the gene duplication, the simplest semiotic description is not !@#$%^(AGTCGAGTTC)(AGTCGAGTTC), but !@#$%^(AGTCGAGTTC)x2, which is much more economical.

    I hope that explains where I’m coming from. As I said to markf, I’m having the math checked out at the moment, so I’ll get back to you when I can.
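    To make the description-length point above concrete, here is a toy sketch (my own, with made-up strings, not vjtorley’s actual figures): once a gene is duplicated, the repeat can be referenced instead of restated, so the simplest description of the sequence grows by much less than the sequence itself does.

    ```python
    # Toy comparison of two ways to describe a genome after gene duplication:
    # spelling every copy out in full vs. referencing the repeat as "(GENE)xN".

    def literal_description(prefix: str, gene: str, copies: int) -> str:
        """Spell out every copy of the gene in full."""
        return prefix + gene * copies

    def semiotic_description(prefix: str, gene: str, copies: int) -> str:
        """Describe the repeats economically, e.g. '...(GENE)x2'."""
        return f"{prefix}({gene})x{copies}"

    prefix = "!@#$%^"        # stands in for the surrounding random sequence
    gene = "AGTCGAGTTC"      # stands in for the gene that gets duplicated

    before = literal_description(prefix, gene, 1)       # genome before duplication
    after_literal = literal_description(prefix, gene, 2)
    after_compressed = semiotic_description(prefix, gene, 2)

    print(len(before))            # → 16
    print(len(after_literal))     # → 26
    print(len(after_compressed))  # → 20
    ```

    The gap widens as the copy count grows: three trillion literal copies would be astronomically long, while the “xN” description only needs a few more digits.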

  311. Indium:
    “Surprising and very promising. Everybody (except for Joseph who seems to run out of arguments) agrees that evolution can generate CSI. And this on an ID blog! This thread will be a great reference for the future. Maybe you should put this conclusion into the “Glossary” on the front page?”

    Technically, we (including Joseph) have been stating that evolution can “unfold” previously existing CSI — ie: from the CSI content of the structure of an EA. Evolutionary Algorithms can definitely produce CSI as an output, but here’s the problem …

    (from the last para of my last post above)

    …”Intelligence can use whatever means are at its disposal, including evolutionary algorithms, to generate CSI. The problem is whether or not chance and law, absent intelligence, can generate evolutionary algorithms that produce CSI. According to the recent work done by Dembski and Marks, the answer is that the EA itself is at least as improbable as the output of that EA. Therefore, if chance and law can’t generate CSI from scratch, then it won’t be able to generate the evolutionary algorithm that can produce that CSI. Again, I have discussed this in the links I provided earlier.”

    IOW, if an evolutionary algorithm produces CSI as an output, the EA was intelligently designed with foresight of how the search space constraints relate to the target function (as per the NFLT). That is basically how useful EAs, which can produce CSI patterns such as an efficient antenna, operate. Can anyone show law+chance absent intelligence either producing CSI or an EA that can produce CSI? That is, will a random set of laws and initial conditions derived from a source of statistically random data (ie: Random.org) — so as to remove interfering intelligent/foresighted input — generate CSI or an EA which then produces CSI?

    Until someone shows that foresight is not required to build an EA, by providing evidence that answers the previous question in the affirmative, the present mathematical and observational evidence shows that evolution can only be seen as a process requiring intelligence.
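    For readers who have not seen one, here is a minimal sketch of the kind of evolutionary algorithm being discussed (a toy hill-climber of my own, not Dembski and Marks’s code). Note where the programmer’s input enters: the target string and the fitness function are both written before the “evolution” ever runs.

    ```python
    import random

    random.seed(0)

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    TARGET = "EFFICIENT ANTENNA"   # hypothetical goal, chosen by the programmer

    def fitness(s: str) -> int:
        # The fitness function is supplied up front: it scores a string by
        # how many positions match the programmer-chosen target.
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s: str, rate: float = 0.05) -> str:
        # Each character has a small chance of being replaced at random.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    # Simple hill climb: keep a mutant only if it is at least as fit as its parent.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    initial_fitness = fitness(parent)
    for _ in range(50000):
        child = mutate(parent)
        if fitness(child) >= fitness(parent):
            parent = child

    print(initial_fitness, "->", fitness(parent))
    ```

    By construction the score never decreases, so the run climbs toward the target; whether that counts as law+chance generating CSI, or merely unfolding the foresight built into `fitness`, is exactly the question disputed in this thread.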

    P.S.

    Just in case anyone attempts to use the “weather simulation argument” as some sort of argument that “you can’t say that a simulation — including evolutionary algorithms — requires intelligence or else we would have to state that all weather events are intelligently designed,” I will point out that …

    -Weather *forecasts* definitely require an intelligently designed set of rules. That, I’m sure, is indisputable. This only tells us that the forecast itself is intelligently designed.

    -However, as to weather *patterns:* The difference between a “weather pattern simulation” and a “simulation of an evolutionary algorithm producing CSI” is that weather patterns can be arrived at from a random set of laws and initial conditions (law+chance absent intelligence), producing chaotic patterns which are mathematically indistinguishable from the types of patterns which make up “weather.” However, no one has shown that CSI or an EA that outputs a CSI pattern can be generated from a random set of laws and initial conditions (law+chance absent intelligence).

  312. @vjtorley

    It’s good to know you’re alright buddy.

  313. Has anyone here heard of
    “General Intelligent Design”?

  314. #310

    The problem is whether or not chance and law, absent intelligence, can generate evolutionary algorithms that produce CSI. According to the recent work done by Dembski and Marks, the answer is that the EA itself is at least as improbable as the output of that EA.

    You are presumably referring to the “law of conservation of information”. You write as though it were an accepted law. It is highly controversial and, in my view, fallacious. I wrote a small piece about it some time ago. But I am far from alone.

  315. Dr. Torley, Glad you’re OK.

    Video – A Prayer After the Earthquake
    http://www.godtube.com/watch/?v=9C2EEMNU

  316. #309 vjtorley

    I am delighted to know you came through this awful event OK. As a matter of interest, did it in any degree weaken your confidence in an all powerful and loving God?

  317. markf, you said that the law of conservation of information is fallacious;

    Well prove it and collect 1 million dollars,,,

    “The Origin-of-Life Prize” ® (hereafter called “the Prize”) will be awarded for proposing a highly plausible natural-process mechanism for the spontaneous rise of genetic instructions in nature sufficient to give rise to life. The explanation must be consistent with empirical biochemical, kinetic, and thermodynamic concepts as further delineated herein, and be published in a well-respected, peer-reviewed science journal(s).
    http://www.us.net/life/

    markf, you will probably say that it is unfair to use ‘simple life’ as a benchmark; well, I would be satisfied if you could just falsify Dr. Abel’s null hypothesis for the generation of prescriptive information. Although I don’t have a million dollars to give you if you do, at least you will have proven your point and falsified the law of Conservation of Information with actual empirical evidence instead of rhetoric!

  318. @markf

    -”As a matter of interest, did it in any degree weaken your confidence in an all powerful and loving God?”

    A major tragedy occurs and the first thing you ask about is whether a religious person’s faith has faltered?

    *sigh*

    For the record, as far as THIS religious person goes, seeing as we believe everything works together for the good, I’m not sure why you’d even bother to ask this. Especially when it has nothing to do with the topic at hand.

    I wonder sometimes what the irreligious are thinking when they type stuff like this. Just sad.

    - Sonfaro

  319. markf:
    “You are presumably referring to the “law of conservation of information”. You write as though this was an accepted law. It is highly controversial and in my view fallacious. I wrote a small piece about it some time ago. But I am far from alone.”

    I’ll check out your piece when I have some extra time, but so far I have not read any near convincing “rebuttals.”

    I only referred to the conservation of information as a hypothesis that is founded upon presumably correct mathematics, and which is consistent with both observation and experimentation with EAs.

    Furthermore, if you wish to refute anything I’ve stated here with regard to the conservation theorems or CSI, then just provide evidence that law+chance, absent intelligence, will produce either CSI from scratch or an EA which produces CSI. As I pointed out in my last comment …

    “IOW, if an evolutionary algorithm produces CSI as an output, the EA was intelligently designed with foresight of how the search space constraints relate to the target function (as per the NFLT). That is basically how useful EAs, which can produce CSI patterns such as an efficient antenna, operate. Can anyone show law+chance absent intelligence either producing CSI or an EA that can produce CSI? That is, will a random set of laws and initial conditions derived from a source of statistically random data (ie: Random.org) — so as to remove interfering intelligent/foresighted input — generate CSI or an EA which then produces CSI?

    Until someone shows that foresight is not required to build an EA, by providing evidence that answers the previous question in the affirmative, the present mathematical and observational evidence shows that evolution can only be seen as a process requiring intelligence.”

  320.

    Bornagain,

    Since you are participating again at 316, despite saying you were done at 307, can I assume that you will honor me with an answer to my question:

    How does one differentiate between a Darwinian process that only has one level of design and one that has two levels of design?

  321. jon, though that is not how you originally framed the question, I’ve already answered your question (as it was originally framed) and it is apparently one that you have failed to grasp the simplicity of, or it is an answer that you do not want to hear, for whatever reason. Furthermore, it is an answer over which you accused me of questioning your relationship with God?!? Thus since I have better things to do than rehash what is blatantly obvious and be accused of slandering a man’s relationship with God, though I intended no such thing, I think it best I not try to answer any of your questions, since, one, I feel it would be futile, and, two, lest I be misconstrued and accused of questioning the integrity of your faith once again.

  322. Sonfaro, bornagain77 and markf

    Thanks for your kind thoughts.

    markf: In answer to your question, the quake didn’t weaken my faith in God, but it reinforced my views on the silliness of theodicies which rationalize suffering.

    One answer I’ve received on gene duplication and CSI:

    With regard to your question, Phi_s(T) is the ‘number of patterns for which ……’ (as you stated). I take this to be the number of different patterns, or number of different sequences that will perform the same function. The key word here is ‘different’. It is nonsense to try to measure CSI by arbitrary duplication. You will get a different answer every time, depending upon whether you have duplicated the gene three times, or three trillion times.

    The only way gene duplication will increase CSI is if the two genes perform a function that one gene alone will not perform. In that case, the double gene forms a single functional pattern.

    Got to go now. Talk to you soon.

  323. #317 Sonfaro

    A major tragedy occurs and the first thing you ask about is whether a religious person’s faith has faltered?

    *sigh*

    It may seem tactless but vjtorley had already given an excellent explanation of his own experiences. I respect and like him so this was of great interest – but I had nothing more to ask. The problem of theodicy is an important one but it is often discussed in theory by academics with little personal experience of the worst things nature can do. Here is someone I respect who has just personally experienced nature at its worst.

    For the record, as far as THIS religious person goes, seeing as we believe everything works together for the good, I’m not sure why you’d even bother to ask this.

    I can only point to vjtorley’s own response:

    the quake didn’t weaken my faith in God, but it reinforced my views on the silliness of theodicies which rationalize suffering.

  324. #318 CYJman

    Furthermore, If you wish to refute anything I’ve stated here with regard to the conservation theorems or CSI then just provide evidence that law+chance, absent intelligence will produce either CSI from scratch or an EA which produces CSI.

    vjtorley has agreed above that gene duplication will produce CSI.

  325. #321 vjtorley

    I don’t know who provided you with the “answer” on gene duplication, but it brings us right back to the question of how you recognise/measure CSI. The person who wrote it is relating CSI to function. This is not what Dembski was doing in his paper. He was trying to identify a generic mathematical property of patterns which implies design without reference to function or any other external specification. I am sure he failed – but that is another matter. If you tie CSI back to function then you cannot detect design by just looking at a pattern – you have to understand how that pattern participates in a living thing.

    I don’t think the ID community realises the amount of confusion among its own members surrounding CSI, FSCI, dFSCI etc. E.g., when I debated this with Gpuccio some time ago, he agreed that one of his criteria for dFSCI was the exact opposite of one of Dembski’s (he saw Kolmogorov complexity as a sign of design; Dembski sees absence of Kolmogorov complexity as a sign of design).
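    The Kolmogorov-complexity disagreement can at least be made concrete: a general-purpose compressor gives a crude, computable upper bound on a string’s descriptive complexity (true Kolmogorov complexity is uncomputable). This toy sketch of mine shows an ordered sequence compressing far better than a random-looking one of the same length:

    ```python
    import random
    import zlib

    random.seed(1)

    # A rough, computable stand-in for Kolmogorov complexity: the length of a
    # string's zlib-compressed encoding. A compressor only gives an upper
    # bound, but it separates ordered from random-looking strings clearly.
    def compressed_size(s: str) -> int:
        return len(zlib.compress(s.encode()))

    ordered = "AGTC" * 250                                     # repetitive, 1000 bases
    random_seq = "".join(random.choice("AGTC") for _ in range(1000))

    print(compressed_size(ordered))      # small: the repeat is easy to describe
    print(compressed_size(random_seq))   # much larger: no pattern to exploit
    ```

    Whether high compressibility (order) or incompressibility (randomness) should count toward design is exactly the point on which, per markf, gpuccio and Dembski diverge.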

  326.

    bornagain,

    though that is not how you originally framed the question, I’ve already answered your question (as it was originally framed)

    I originally asked the question about “how do you differentiate a teleological Darwinian process from one that isn’t?”. You responded with a comment covering a wide range of subjects, with nary a formula or mathematical procedure in sight. The gist of what you were saying was that the entire universe was designed.

    So, I asked, if everything is designed, what use was a design detector. To which you responded, at 307, that you “solidly maintain that CSI is very important if one is concerned with detecting design on top of the design ALREADY established in the universe.” So, I reframed the question in hopes that you would now provide that formula or procedure. Hopes that, I see, have still gone unfulfilled.

    it is apparently one that you have failed to grasp the simplicity of, or it is an answer that you do not want to hear, for whatever reason.

    Yes, indeed, I have failed to see how the Shroud of Turin is relevant to detecting design in a genetic sequence. And let me state that adding “for whatever reason” at the end doesn’t change the fact that you have slandered me again in the part that I bolded.

    I think it best I not try to answer any of your questions, since one, I feel it would be futile, and two, lest I be misconstrued and be accused of questioning the integrity of your faith once again.

    I asked a simple question that would be directly answered by providing a formula or mathematical procedure. You threw a whole bunch of stuff against the wall, apparently hoping something would stick. But there was not a formula or procedure in sight. I can only conclude that the answer is, three, you don’t know what that formula is, but are too proud to admit it.

  327. -”It may seem tactless but vjtorley had already given an excellent explanation of his own experiences. I respect and like him so this was of great interest – but I had nothing more to ask. The problem of theodicy is an important one but it is often discussed in theory by academics with little personal experience of the worst things nature can do. Here is someone I respect who has just personally experienced nature at its worst.”

    Regardless of the why’s, of all the questions you thought to ask – that was it? It was off-topic, unwarranted, and indeed tactless.

    Well, you got your answer anyway, I suppose (from two sides of the fence, no less). I hope the discussion will move forward now. The topic is of interest.

    - Sonfaro

  328. Jon, slandering me and then accusing me of slandering you again? How come it feels so much like a cheap daytime soap opera with you instead of science???

    I really don’t trust the way you twist stuff around,,, But to defend against your accusations,,,, Though I am not a mathematician, CJYman, at 270, has assured me that this formula, which I have referenced for years, is a rough measure of CSI:

    Functional information and the emergence of bio-complexity:
    Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak:
    Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define ‘functional information,’ I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex)= -log2 [F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of function.
    http://genetics.mgh.harvard.ed.....S_2007.pdf

    Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video
    http://www.metacafe.com/watch/3995236
    Measuring the functional sequence complexity of proteins – Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors – 2007
    Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,,
    http://www.tbiomed.com/content/4/1/47

    ———–

    I have briefly looked over Szostak’s method for calculating functional information and upon first inspection it actually appears to be very similar to Dembski’s calculation for CSI.

    However, I think Dembski’s calculation is a little more detailed, since it measures functional information against both sequence space (as Szostak does) and a universal probability bound.
    http://www.uncommondescent.com.....ent-373694

    Three subsets of sequence complexity and their relevance to biopolymeric information – Abel, Trevors
    Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).
    http://www.ncbi.nlm.nih.gov/pm.....MC1208958/
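    For what it’s worth, the Hazen/Szostak measure quoted above, I(Ex) = -log2 [F(Ex)], is simple enough to work through on a toy example (my own construction with a made-up motif; the paper’s own examples use letter sequences, artificial life, and biopolymers):

    ```python
    import math
    from itertools import product

    ALPHABET = "ACGT"
    MOTIF = "ACGTA"   # hypothetical "functional" sequence of length 5

    def degree_of_function(seq) -> int:
        # Toy degree of function: number of positions matching the motif.
        return sum(a == b for a, b in zip(seq, MOTIF))

    def functional_information(ex: int) -> float:
        # I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all possible
        # configurations whose degree of function exceeds Ex.
        total = 0
        functional = 0
        for seq in product(ALPHABET, repeat=len(MOTIF)):
            total += 1
            if degree_of_function(seq) > ex:
                functional += 1
        f_ex = functional / total
        return -math.log2(f_ex)

    # Requiring a perfect match (degree > 4 out of 5) picks out 1 of the
    # 4^5 = 1024 sequences, so I = -log2(1/1024) = 10 bits.
    print(functional_information(4))   # → 10.0
    ```

    Loosening the required degree of function lowers the information: demanding only degree > 3 admits 16 of the 1024 sequences, giving 6 bits, which is the stepwise behaviour the abstract describes.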

  329.

    Jon, slandering me and then accusing me of slandering you again?

    Actually, I recapped our discussion to date, noting that you hadn’t provided a formula and wrapping up with my conclusion that you really don’t know what the formula is. Is that what you consider slander? Okay, now that you have provided a formula that you have been assured is the correct one, we can resolve this fairly easily.

    I have looked at that formula and it is not apparent to me how it differentiates between the base level of design you refer to and the second, additional level of design. If you can work through an example of each for a gene with duplications, I’ll gladly withdraw my opinion.

  330. Jon, since I think we have both grossly misunderstood each other in our approaches, let us try to start off on a new foot and turn the other cheek to what we both feel have been slanders by the other.,,, I am not a mathematician, thus I have to rely on others who are qualified in that area. As well, I have to rely directly on empirical evidence to refute the baseless claims of neo-Darwinists.,,, I can point you to someone who is very well versed in the mathematics for your gene duplication question (he has responded to a sincere question before), and I can cite the evidence that brings into question the whole gene duplication scenario,,,

    notes:

    First this man for the math;

    The GS (genetic selection) Principle – David L. Abel – 2009
    Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function.
    No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.”
    http://www.bioscience.org/2009.....6/3426.pdf

    Furthermore, Dr. Abel’s observations on the inadequacy of necessity and chance to produce functional prescriptive information, even by gene duplication, are borne out by empirical evidence:

    Response from Ralph Seelke to David Hillis Regarding Testimony on Bacterial Evolution Before Texas State Board of Education, January 21, 2009
    Excerpt: He has done excellent work showing the capabilities of evolution when it can take one step at a time. I have used a different approach to show the difficulties that evolution encounters when it must take two steps at a time. So while similar, our work has important differences, and Dr. Bull’s research has not contradicted or refuted my own.
    http://www.discovery.org/a/9951

    Behe and Snoke go even further, addressing the severe problems with the Gene Duplication scenario in this following study:

    Simulating evolution by gene duplication of protein features that require multiple amino acid residues: Michael J. Behe and David W. Snoke
    Excerpt: The fact that very large population sizes—10^9 or greater—are required to build even a minimal [multi-residue] feature requiring two nucleotide alterations within 10^8 generations by the processes described in our model, and that enormous population sizes are required for more complex features or shorter times, seems to indicate that the mechanism of gene duplication and point mutation alone would be ineffective, at least for multicellular diploid species, because few multicellular species reach the required population sizes.
    http://www.pubmedcentral.nih.g.....id=2286568

    Interestingly Fred Hoyle arrived at the same conclusion, of a 2 amino acid limit, years earlier from a ‘mathematical’ angle:
    http://www.uncommondescent.com.....ent-367658

    The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations – Douglas D. Axe* – December 2010
    quote of note: ,, the most significant implication comes not from how the two cases contrast but rather how they cohere—both showing severe limitations to complex adaptation. To appreciate this, consider the tremendous number of cells needed to achieve adaptations of such limited complexity. As a basis for calculation, we have assumed a bacterial population that maintained an effective size of 10^9 individuals through 10^3 generations each year for billions of years. This amounts to well over a billion trillion (10^21) opportunities (in the form of individuals whose lines were not destined to expire imminently) for evolutionary experimentation. Yet what these enormous resources are expected to have accomplished, in terms of combined base changes, can be counted on the fingers.
    http://bio-complexity.org/ojs/.....O-C.2010.4

    Evolution by Gene Duplication Falsified – December 2010
    Excerpt: The various postduplication mechanisms entailing random mutations and recombinations considered were observed to tweak, tinker, copy, cut, divide, and shuffle existing genetic information around, but fell short of generating genuinely distinct and entirely novel functionality. Contrary to Darwin’s view of the plasticity of biological features, successive modification and selection in genes does indeed appear to have real and inherent limits: it can serve to alter the sequence, size, and function of a gene to an extent, but this almost always amounts to a variation on the same theme—as with RNASE1B in colobine monkeys. The conservation of all-important motifs within gene families, such as the homeobox or the MADS-box motif, attests to the fact that gene duplication results in the copying and preservation of biological information, and not its transformation as something original.
    http://www.creationsafaris.com.....#20110103a

    further notes:

    Dr. Don Johnson explains the difference between Shannon Information and Prescriptive Information, as well as explaining ‘the cybernetic cut’, in this following Podcast:

    Programming of Life – Dr. Donald Johnson interviewed by Casey Luskin – audio podcast
    http://www.idthefuture.com/201....._life.html

  331. further note:

    Waiting Longer for Two Mutations – Michael J. Behe
    Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that ‘for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years’ (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless “using their model” gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model.
    http://www.discovery.org/a/9461

    Whale Evolution Vs. Population Genetics – Richard Sternberg PhD. in Evolutionary Biology – video
    http://www.metacafe.com/watch/4165203

    Darwinism Vs. Whale Evolution – Part 1 – Richard Sternberg PhD. – SMU talk – video
    http://www.metacafe.com/watch/5263733

    Michael Behe, The Edge of Evolution, pg. 162 Swine Flu, Viruses, and the Edge of Evolution
    “Indeed, the work on malaria and AIDS demonstrates that after all possible unintelligent processes in the cell–both ones we’ve discovered so far and ones we haven’t–at best extremely limited benefit, since no such process was able to do much of anything. It’s critical to notice that no artificial limitations were placed on the kinds of mutations or processes the microorganisms could undergo in nature. Nothing–neither point mutation, deletion, insertion, gene duplication, transposition, genome duplication, self-organization nor any other process yet undiscovered–was of much use.”
    http://www.evolutionnews.org/2....._edge.html

    Again I would like to emphasize, I’m not arguing Darwinism cannot make complex functional systems; the data on malaria, and the other examples, are an observation that it does not. In science observation beats theory all the time. So Professor (Richard) Dawkins can speculate about what he thinks Darwinian processes could do, but in nature Darwinian processes have not been shown to do anything in particular.
    Michael Behe – 46 minute mark of video lecture on ‘The Edge of Evolution’ for C-SPAN
    http://www.uncommondescent.com.....ent-361037

  332. This following paper clearly reveals that there is a ‘cost’ to duplicate genes that further precludes the scenario from being plausible:

    Experimental Evolution of Gene Duplicates in a Bacterial Plasmid Model
    Excerpt: In a striking contradiction to our model, no such conditions were found. The fitness cost of carrying both plasmids increased dramatically as antibiotic levels were raised, and either the wild-type plasmid was lost or the cells did not grow. This study highlights the importance of the cost of duplicate genes and the quantitative nature of the tradeoff in the evolution of gene duplication through functional divergence. http://www.springerlink.com/co.....4014664w8/

    This recent paper also found the gene duplication scenario to be highly implausible:

    The Extinction Dynamics of Bacterial Pseudogenes – Kuo and Ochman – August 2010
    Excerpt: “Because all bacterial groups, as well as those Archaea examined, display a mutational pattern that is biased towards deletions and their haploid genomes would be more susceptible to dominant-negative effects that pseudogenes might impart, it is likely that the process of adaptive removal of pseudogenes is pervasive among prokaryotes.”
    http://www.evolutionnews.org/2.....37581.html

    Further comments on the implausibility of the gene duplication scenario:
    https://docs.google.com/document/pub?id=1u-mn_eUVxx5aSv_iz6xkRXbqJri_ZxJLY2Q9Hx02-X4

    ————-

    It should also be noted that neo-Darwinists have extreme difficulty proving that purely material processes can generate any functional information whatsoever, even though life is packed to the brim with information:

    “The manuals needed for building the entire space shuttle and all its components and all its support systems would be truly enormous! Yet the specified complexity (information) of even the simplest form of life – a bacterium – is arguably as great as that of the space shuttle.”
    J.C. Sanford – Geneticist – Genetic Entropy and the Mystery Of the Genome

    “The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”
    Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894

    of note: The 10^12 bits of information number for a bacterium is derived from entropic considerations, which, due to the tightly integrated relationship between information and entropy, is considered the most accurate measure of the transcendent information present in a ‘simple’ life form. For calculations please see the following site:

    Molecular Biophysics – Information theory. Relation between information and entropy:
    https://docs.google.com/document/pub?id=18hO1bteXTPOqQtd2H12PI5wFFoTjwg8uBAU5N0nEQIE

    Systems biology: Untangling the protein web – July 2009
    Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. “Combine all this and you can start to think that maybe some of the information flow can be captured,” he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. “The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent,” he says. “The simple pathway models are a gross oversimplification of what is actually happening.”
    http://www.nature.com/nature/j.....0415a.html

    etc.. etc.. etc…

  333.

    You know that is all very interesting, but none of it even comes close to answering my question to you. I think this is unlikely to go anywhere. At any rate, we can end on one point of agreement as it relates to your first paragraph in comment 329.

  334. Jon, Sorry I could not help, it appears we are not even speaking the same language though we both speak English!

    ————–

    Though we are apart in our language, maybe we can both appreciate this song, for I believe music is called the ‘universal language’:

    Christy Nockels – Waiting Here For You
    http://www.godtube.com/watch/?v=9CFF01NU

  335. Markf

    I think this will have to be my last post on this thread, as I’m having trouble bringing it up on my PC – it seems to be eating into my computer’s virtual memory. And while I’d like to start a new thread, I’m currently working on an interesting one I promised I’d do for my next post.

    I’ll just make a few general comments. First, I agree that a high degree of CSI, as originally defined by Professor Dembski in his 2005 paper “Specification: The Pattern That Signifies Intelligence”, is not sufficient by itself to warrant a design inference. I’ve been thinking about this in connection with gene duplication. Perhaps we can visualize it more clearly if we think of “base duplication” rather than gene duplication, as a gene still contains a random string of bases (A, G, T and C). Suppose one of the bases on a gene is substituted for the base next to it, and the next one, and so on, until eventually the entire gene consists of only that base (and its partner on the other helix). Now let the process continue all the way along a genome, until it’s all A’s on one helix (and all T’s on the other helix) – a very boring genome. We’ve now got a sequence which is very simply described (low descriptive complexity) but also very unlikely to arise by pure chance (high probabilistic complexity). But is a design inference warranted here? Surely not. Abel would call this an example of order rather than complexity. And I think he would be right. This sequence doesn’t have a function or a meaning. There seems nothing mind-like about it.

    Should we then give up on CSI, and say that only FCSI warrants a design inference? I think not. Abel himself acknowledges that not only functional information, but also meaningful but non-functional information, can warrant a design inference. Think of Carl Sagan’s “Contact”: if we detected a sequence of digits corresponding to the first 100 prime numbers coming from outer space, then we’d certainly infer design. Ditto if we detected a sequence of digits corresponding to pi or e. But what’s interesting about these cases is that the actual sequence of digits is not compressible. Pi and e are irrational. So I’m inclined to think that the combination of low descriptive complexity, high probabilistic complexity AND a high level of Shannon information (non-compressibility), taken together, warrants a design inference. If these three conditions are met, then you CAN infer design just by looking at a pattern, from its mathematical properties alone (independently of function). Thus CSI (suitably redefined) CAN warrant a design inference. I haven’t written down a precise mathematical formula expressing this idea – I’ll have to turn that over in my head for a while. But I hope you and MathGrrl can see what I’m getting at.

    Regarding gene duplication: since the low descriptive complexity comes at the expense of Shannon complexity, I think a design inference is ruled out.

    I do sympathize with your confusion at the inconsistent statements made by some ID proponents on the criteria that warrant a design inference. For instance, I think that the confusion over Kolmogorov complexity stems from the fact that some patterns which are low in Kolmogorov complexity are also easily compressible (e.g. ababab). That’s why I added the condition of high Shannon complexity. I think everyone in the ID community agrees, though, that a high degree of FCSI warrants a design inference. The non-functional cases are a bit harder to pin down, and this is a fairly new area of mathematics, so it’s hardly surprising that formulations attempting to lay down the criteria for inferring design have to be modified when counter-instances are put forward. Still, I see this as a healthy sign. At least we’re making progress.

    Regarding the futility of theodicies, I’d like to quote the closing two paragraphs of David Bentley Hart’s article, Tsunami and Theodicy , which originally appeared in the March 2005 issue of “First Things”, after the Boxing Day tsunami of December 2004, and which was reprinted in the January 2010 edition of “First Things”:

    I do not believe we Christians are obliged – or even allowed – to look upon the devastation visited upon the coasts of the Indian Ocean and to console ourselves with vacuous cant about the mysterious course taken by God’s goodness in this world, or to assure others that some ultimate meaning or purpose resides in so much misery. Ours is, after all, a religion of salvation; our faith is in a God who has come to rescue His creation from the absurdity of sin and the emptiness of death, and so we are permitted to hate these things with a perfect hatred. For while Christ takes the suffering of his creatures up into his own, it is not because he or they had need of suffering, but because he would not abandon his creatures to the grave. And while we know that the victory over evil and death has been won, we know also that it is a victory yet to come, and that creation therefore, as Paul says, groans in expectation of the glory that will one day be revealed. Until then, the world remains a place of struggle between light and darkness, truth and falsehood, life and death; and, in such a world, our portion is charity.

    As for comfort, when we seek it, I can imagine none greater than the happy knowledge that when I see the death of a child I do not see the face of God, but the face of His enemy. It is not a faith that would necessarily satisfy Ivan Karamazov, but neither is it one that his arguments can defeat: for it has set us free from optimism, and taught us hope instead. We can rejoice that we are saved not through the immanent mechanisms of history and nature, but by grace; that God will not unite all of history’s many strands in one great synthesis, but will judge much of history false and damnable; that He will not simply reveal the sublime logic of fallen nature, but will strike off the fetters in which creation languishes; and that, rather than showing us how the tears of a small girl suffering in the dark were necessary for the building of the Kingdom, He will instead raise her up and wipe away all tears from her eyes – and there shall be no more death, nor sorrow, nor crying, nor any more pain, for the former things will have passed away, and He that sits upon the throne will say, “Behold, I make all things new.”

    That pretty well sums up my sentiments, for earthquakes and tsunamis anywhere.

    I hope this answers your questions.

  336. Hey vjtorley,

    Thanks for that note on theodicy’s. It’ll be something for me to ponder on.

    - Sonfaro

  337. CJYman,

    In the end, programming a CSI calculator would not be difficult as the only calculations required are multiplication and logarithm. Asking the right questions for a user to input the proper values in the proper locations might be difficult, though.

    I agree, which is why I find CSI as currently defined to be non-rigorous, mathematically.

    I would be very interested in seeing how you would calculate CSI for the four scenarios I describe in 117 above.

    Furthermore, if you will read through my provided links above again, you will notice that I have no qualm with evolutionary mechanisms generating CSI.

    I don’t want to put words in your mouth, so I’d like to ask one question about this statement in particular. Do you therefore agree that software simulations of evolutionary mechanisms are capable of generating CSI without intelligent intervention? That is, do you agree that CSI can arise solely from mechanisms such as mutation and crossover combined with differential reproductive success without any interference in the simulation once it is running?

    Intelligence can use whatever means are at its disposal, including evolutionary algorithms, to generate CSI. The problem is whether or not chance and law, absent intelligence, can generate evolutionary algorithms that produce CSI.

    I don’t understand this statement. The real world exists (unless you want to argue for nihilism or solipsism, in which case I’ll leave you to it). The physics and chemistry we observe leads to the evolutionary mechanisms we observe (I’m leaving aside abiogenesis for the moment). Unless you’re claiming that an unobserved intelligent agent of some sort is involved in all the biochemistry performed in all the labs around the world, it seems that we can see evolutionary algorithms arising from natural processes.
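As an aside on comment 337’s point that a CSI calculator would involve “only multiplication and logarithm”: here is a minimal, hedged sketch of Dembski’s 2005 formula in Python. The function name and the illustrative inputs are mine, and nothing in this sketch resolves the disputed question of how Phi_s(T) and P(T|H) are to be determined for a real biological pattern.

```python
import math

def chi(phi_s, p_t_given_h, resources=10**120):
    """Specified complexity per Dembski (2005):
    Chi = -log2(resources * Phi_s(T) * P(T|H)).

    phi_s       -- number of patterns described at least as simply as T
    p_t_given_h -- probability of T under the chance hypothesis H
    resources   -- Dembski's universal probabilistic resources, 10^120
    """
    return -math.log2(resources * phi_s * p_t_given_h)

# Invented numbers, purely to show the arithmetic:
# phi_s = 10^5 and P(T|H) = 2^-500 give Chi of roughly 84.8 bits.
print(chi(10**5, 2**-500))
```

On this sketch, a positive Chi is read, on Dembski’s criterion, as exceeding the universe’s probabilistic resources; whether the inputs can ever be fixed objectively is precisely what is in dispute in this thread.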

  338. vjtorley,

    Sorry for not replying sooner, but I was unable to get home last night, as the earthquake in Japan stopped train services. I spent the night sleeping (or trying to sleep) in a shopping mall near Yokohama station.

    My thoughts are with you. I hope the planned rolling blackouts aren’t too much of a hardship and that you are upwind of the nuclear plants.

    Good luck and be careful.

  339. vjtorley,

    Why are you using “x2” instead of the actual sequence? Using the “two to the power of the length of the sequence” definition of CSI, we should be calculating based on the actual length.

    Good question. The “x2” refers to the semiotic description. Let me put it another way, borrowing an example from the old joke about what dogs understand when their owners are talking: “Blah Blah Blah Blah Ginger Blah Blah” – except that in this case the “Blah” is not repetitive.

    I agree with your essential point, I believe, hence my mention of Kolmogorov Chaitin complexity previously. This is one of several reasons why I also agree with your previous statement that “Actually, what I suspect is that IF the mathematics in my previous post in #283 are correct . . . then the definition of CSI may have to be revised somewhat.” My only disagreement is that I think that the required revision might be larger than you suggest.

    Would you agree that until such a revision is available and demonstrated to objectively and unambiguously measure the involvement of intelligent agency, CSI cannot be claimed to be a useful metric for that task?

  340. CJYman,

    IOW, if an evolutionary algorithm produces CSI as an output, the EA was intelligently designed with foresight of how the search space constraints relate to the target function (as per the NFLT).

    That does not follow from the No Free Lunch theorems. All those theorems say, in layman’s terms, is that averaged over all search spaces, no algorithm performs better than a random search. For particular search spaces, some algorithms can perform dramatically better than others.

    This was raised on one of the threads you mentioned earlier. Without going back to it, I remember several people pointing out that the world we inhabit is one “search space” in your model (the scare quotes are because there are several other issues with modeling evolution as a search). It’s not surprising that some algorithms are better able than others to traverse that space. It’s even less surprising that the evolutionary mechanisms we observe are components of those algorithms. If they didn’t work in this “search space”, we wouldn’t observe them.

    However, as to weather *patterns:* The difference between a “weather pattern simulation” and a “simulation of an evolutionary algorithm producing CSI” is that weather patterns can be arrived at from a random set of laws and initial conditions (law+chance absent intelligence), producing chaotic patterns which are mathematically indistinguishable from the types of patterns which make up “weather.” However, no one has shown that CSI or an EA that outputs a CSI pattern can be generated from a random set of laws and initial conditions (law+chance absent intelligence).

    Actually, that’s exactly what is shown by some real biological systems, as calculated by vjtorley above. Computer simulations have shown the same; consider Schneider’s ev, which demonstrated exactly the same behavior he observed in real biological systems. This qualifies as “mathematically indistinguishable from the types of patterns” we see in natural processes.

    If you disagree that these systems generate CSI, please show me how you would calculate CSI for the four scenarios I describe in my post 177 in this thread.

  341. vjtorley,

    One answer I’ve received on gene duplication and CSI:

    With regard to your question, Phi_s(T) is the ‘number of patterns for which ……’ (as you stated). I take this to be the number of different patterns, or number of different sequences that will perform the same function. The key word here is ‘different’. It is nonsense to try to measure CSI by arbitrary duplication. You will get a different answer every time, depending upon whether you have duplicated the gene three times, or three trillion times.

    The only way gene duplication will increase CSI is if the two genes perform a function that one gene alone will not perform. In that case, the double gene forms a single functional pattern.

    This is why I used the production of a certain amount of a protein as the specification in my scenario. It seems to meet the criteria of your correspondent, so it appears your calculations remain basically correct.

  342. vjtorley,

    I think this will have to be my last post on this thread, as I’m having trouble bringing it up on my PC – it seems to be eating into my computer’s virtual memory.

    It’s a long one. Perhaps you’d consider joining us over on Mark Frank’s blog (apologies to Mark for my presumption)?

    First, I agree that a high degree of CSI, as originally defined by Professor Dembski in his 2005 paper “Specification: The Pattern That Signifies Intelligence”, is not sufficient by itself to warrant a design inference.

    I hadn’t seen this when I posted my most recent question to you in this thread. Please ignore it, since this answers it fully.

    I look forward to further discussions with you. Keep safe.

  343. #341

    It’s a long one. Perhaps you’d consider joining us over on Mark Frank’s blog (apologies to Mark for my presumption)?

    That would be a pleasure. I have started a new thread specifically for you and CSI.

  344. MathGrrl and Dr. Torley, if you can still load this page:

    I would like to point this Gene Duplication study out which confirms that Genetic Entropy has not ever been violated, not even by gene duplication;

    Is gene duplication a viable explanation for the origination of biological information and complexity?
    Abstract; All life depends on the biological information encoded in DNA with which to synthesize and regulate various peptide sequences required by an organism’s cells. Hence, an evolutionary model accounting for the diversity of life needs to demonstrate how novel exonic regions that code for distinctly different functions can emerge. Natural selection tends to conserve the basic functionality, sequence, and size of genes and, although beneficial and adaptive changes are possible, these serve only to improve or adjust the existing type. However, gene duplication allows for a respite in selection and so can provide a molecular substrate for the development of biochemical innovation. Reference is made here to several well-known examples of gene duplication, and the major means of resulting evolutionary divergence, to examine the plausibility of this assumption. The totality of the evidence reveals that, although duplication can and does facilitate important adaptations by tinkering with existing compounds, molecular evolution is nonetheless constrained in each and every case. Therefore, although the process of gene duplication and subsequent random mutation has certainly contributed to the size and diversity of the genome, it is alone insufficient in explaining the origination of the highly complex information pertinent to the essential functioning of living organisms.
    http://onlinelibrary.wiley.com.....5/abstract

  345. MathGrrl, you state:

    ‘The physics and chemistry we observe leads to the evolutionary mechanisms we observe’

    Please, MathGrrl, do tell of any ‘evolutionary’ example whatsoever that has been ‘observed’ passing ‘the fitness test’:

    For a broad outline of the ‘Fitness test’, required to be passed to show a violation of the principle of Genetic Entropy, please see the following video and articles:

    Is Antibiotic Resistance evidence for evolution? – ‘The Fitness Test’ – video
    http://www.metacafe.com/watch/3995248

    Testing the Biological Fitness of Antibiotic Resistant Bacteria – 2008
    http://www.answersingenesis.or.....-drugstore

    Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology
    Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution.
    http://www.evolutionnews.org/2.....s_wro.html

    List Of Degraded Molecular Abilities Of Antibiotic Resistant Bacteria:
    http://www.trueorigin.org/bacteria01.asp

    The following study surveys four decades of experimental work, and solidly backs up the preceding conclusion that there has never been an observed violation of genetic entropy:

    “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
    Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.(that is a net ‘fitness gain’ within a ‘stressed’ environment i.e. remove the stress from the environment and the parent strain is always more ‘fit’)
    http://behe.uncommondescent.co.....evolution/

    Michael Behe talks about the preceding paper on this podcast:

    Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time – December 2010
    http://intelligentdesign.podom.....3_46-08_00

  346. Mathgrrl and markf

    Thank you for your posts. This will be my very last one on this thread.

    Regarding the nuclear reactor problems in Japan, you can find out what’s happening by checking here: http://bravenewclimate.com . I have to say that the media is sensationalizing the reactor problems, and the headlines on the Drudge Report are wildly over the top. This isn’t even as serious as Three Mile Island, let alone Chernobyl. See here: http://au.news.yahoo.com/thewe.....ow-abroad/ .

    Fortunately, I live quite a long way from Fukushima, so my family is safe. Anyway, thank you for your concern, Mathgrrl.

    Regarding CSI: on page 24 of his paper, Dembski defines the specified complexity Chi (minus the context sensitivity) as -log2[(10^120).Phi_s(T).P(T|H)], where T is the pattern in question, H is the chance hypothesis and Phi_s(T) is the number of patterns for which agent S’s semiotic description of them is at least as simple as S’s semiotic description of T.

    Here’s how I would amend the definition:

    Chi=-log2[(10^120).(SC/KC).PC], where SC is the Shannon complexity, KC is the Kolmogorov complexity (here defined as the length of the minimum description needed to characterize a pattern) and PC is the probabilistic complexity, defined as the probability of the pattern arising by natural non-intelligent processes. I envisage PC as a summation, where we consider all natural non-intelligent processes that might be capable of generating the pattern, calculate the probability of each process actually doing so over the lifespan of the observable universe and within the confines of the observable universe, and then sum the probabilities for all processes. Thus PC would be Sigma[P(T|H_i)], where H_i is the hypothesis that the pattern in question, T, arose through some naturalistic non-intelligent process (call it P_i). In reality, a few processes would likely dwarf all the others in importance, so PC could be simplified by ignoring the processes that had a very remote chance of generating T, relatively speaking.

    According to my definition, a string having a high ratio of Shannon complexity to Kolmogorov complexity (here defined as the length of the minimum description needed to characterize a pattern) is more likely to be a product of design – especially if its probabilistic complexity is low. The (10^120) factor covers all events happening in the lifespan of the observable universe. Thus we can say that if Chi=-log2[(10^120).(SC/KC).PC] is greater than 1, then it is reasonable to conclude that T was designed.

    Can you think of any plausible counter-examples?
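For concreteness, the amended definition above can be set down as a short sketch. This is a hypothetical illustration only: sc, kc and pc must all be supplied by hand (Kolmogorov complexity is uncomputable in general, and PC requires enumerating the chance hypotheses), and the numbers in the example are invented.

```python
import math

def amended_chi(sc, kc, pc, resources=10**120):
    """Sketch of the amended formula above:
    Chi = -log2(resources * (SC/KC) * PC)

    sc -- Shannon complexity of the pattern, in bits
    kc -- Kolmogorov complexity: bit-length of the minimum
          description of the pattern (an estimate in practice)
    pc -- probabilistic complexity: Sigma_i P(T|H_i), summed over
          the relevant natural non-intelligent processes
    """
    return -math.log2(resources * (sc / kc) * pc)

# Invented example: a 1000-bit pattern describable in 50 bits,
# with a summed chance probability of 2^-600.
print(amended_chi(1000, 50, 2**-600))
```

By the criterion above, a result greater than 1 would warrant a design inference; the sketch makes no claim that such inputs can actually be obtained for real biological sequences.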

  347. P.S. When I wrote “Shannon complexity” in my previous post, I should have been a little clearer about what I meant. I simply meant: the length of the string after being compressed in the most efficient manner possible.
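The “most efficient manner possible” is uncomputable, but an off-the-shelf compressor (zlib here, used as a crude stand-in) gives a rough, hedged illustration of the idea, contrasting the all-A genome of comment 335 with a random string of the same length:

```python
import os
import zlib

ordered = b"A" * 10000        # the "very boring genome" of comment 335
random_s = os.urandom(10000)  # a random string of the same length

# zlib only upper-bounds the true minimum compressed length.
print(len(zlib.compress(ordered)))   # tiny: highly ordered
print(len(zlib.compress(random_s)))  # near 10000: nearly incompressible
```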

  348. Jon Specter,
    If you are still here, I would like to try to answer your question: how do you detect design in the universe if the universe itself is designed?
    I think it comes down to the question: designed for what?
    I believe the evidence is strong that the universe was designed to SUPPORT and SUSTAIN life. The laws of physics and the physical constants make it possible for stars, planets, and galaxies to exist, as well as ordinary matter and life itself. That being said, physics and chemistry themselves have been shown to be totally inadequate at producing functionally complex organization and information, not to mention life.
    Empirical science has shown that nature all by itself cannot PRODUCE life. The law of biogenesis confirms that. So even though the physical universe is designed to HOUSE complex life, it is not designed to MAKE complex life. So we have to look at the only other agency that is able to produce information and functionally complex organization, and that is intelligence. That is what ID is: figuring out what can reasonably be attributed to natural processes and what cannot, through an explanatory filter. A random jagged mountainside can be attributed to natural processes, but an iPhone can’t.
    Do you understand?

  349. Hello MathGrrl,

    I really wish I had more to time to go over this with you, but alas, edumacation is quite busy right now.

    It appears that we would need to hash through almost as many posts as I originally linked for you to cover these concepts in depth and what they actually mean, how CSI is related to NFLT and the work done by Dembski and Marks on active info, and how this all applies to ID Theory.

    For now, I’ll reply to your last few comments. Please respond back, since I may have some extra time to respond again. However, it may also be that we’ll have to carry on this discussion at another time.

    I just really hope that, for now, you finally realize that your assertions through more than half of this thread, namely that CSI is non-rigorous, were based on a misunderstanding of its calculation and are completely incorrect.

    Earlier I stated:
    “IOW, if an evolutionary algorithm produces CSI as an output, the EA was intelligently designed with foresight of how the search space constraints relate to the target function (as per the NFLT).”

    MathGrrl, you responded:
    “That does not follow from the No Free Lunch theorems. All those theorems say, in layman’s terms, is that averaged over all search spaces, no algorithm performs better than a random search. For particular search spaces, some algorithms can perform dramatically better than others.”

    Yes, and it is Dembski and Marks’s further work which shows that matching that specific algorithm to the search space is just as difficult as finding the output of that EA in the first place.

    If you do not agree, simply provide evidence that chance+law, absent intelligence, will produce either CSI from scratch or the EA to produce CSI, in the form of the experiment utilizing Random.org that I mentioned.

    MathGrrl:
    “This was raised on one of the threads you mentioned earlier. Without going back to it, I remember several people pointing out that the world we inhabit is one “search space” in your model (the scare quotes are because there are several other issues with modeling evolution as a search). It’s not surprising that some algorithms are better able than others to traverse that space. It’s even less surprising that the evolutionary mechanisms we observe are components of those algorithms. If they didn’t work in this “search space”, we wouldn’t observe them.”

    Of course, and if law+chance won’t produce the CSI observed in this universe from scratch without evolution, then — barring the use of an infinite multiverse in probability calculations for this universe, which then arbitrarily destroys the foundation for all probability-based science — law+chance won’t produce the EA (structure of life, laws of our universe, and the match between the two) required within this universe to produce CSI.

    In reference to the multi-verse, even if it does exist, as I’ve stated on another thread … “The best attempt to try to explain such organization so far is to throw vast excess of probabilistic resources at the problem in order to “allow” chance to do the dirty work of generating these patterns that are routinely observed to require intelligent systems utilizing their foresight. However, along with multiple universes there comes no non-arbitrary cutoff point as to what infinite probabilistic resources are to be used to explain. Infinite probabilistic resources can be used to explain away every pattern in existence and thus science stops since no further explanation is required. Even the infamous camera found on a planet on the other side of the universe, those hypothetical radio signals from ETI, and the orbit of the planets around the sun, and the arrangement of crystals could be explained by chance if infinite probabilistic resources are given.” Thus, explanations observed to require either intelligence or law become arbitrarily superfluous. Science then grinds to a halt.

    I also stated:
    “However, as to weather *patterns:* The difference between a “weather pattern simulation” and a “simulation of an evolutionary algorithm producing CSI” is that weather patterns can be arrived at from a random set of laws and initial conditions (law+chance absent intelligence), producing chaotic patterns which are mathematically indistinguishable from the types of patterns which make up “weather.” However, no one has shown that CSI or an EA that outputs a CSI pattern can be generated from a random set of laws and initial conditions (law+chance absent intelligence).”

    MathGrrl, you replied with:
    “Actually, that’s exactly what is shown by some real biological systems, as calculated by vjtorley above. Computer simulations have shown the same; consider Schneider’s ev, which demonstrated exactly the same behavior he observed in real biological systems. This qualifies as “mathematically indistinguishable from the types of patterns” we see in natural processes.”

    You have completely misrepresented and twisted my argument. This may have been why KF left the discussion. You are continuing to leave out important parts of our arguments. You did it with the calculation for CSI for many comments, despite having three sources explain it to you in detail with calculations, and now you are at it again. I really want to give you the benefit of the doubt on this one, but it is starting to get frustrating.

    The important part that you left out is that a random set of laws and initial conditions (that is, law+chance) will produce patterns which are mathematically indistinguishable from weather patterns; however, a random set of laws and initial conditions will produce neither CSI nor an evolutionary algorithm that outputs CSI.

    For an EA to work, the law structure has to be anything but random, since it must match the search space of the problem which needs to be optimized.

    In order for you to disagree with any of this, you will need to provide an example where a program was written by law+chance — the laws and initial conditions were chosen from a random source such as Random.org — and it either developed CSI from scratch or it designed an EA which then output CSI.

    Again, as I’ve already explained,
    “… if an evolutionary algorithm produces CSI as an output, the EA was intelligently designed with foresight of how the search space constraints relate to the target function (as per the NFLT). That is basically how useful EAs, which can produce CSI patterns such as an efficient antenna, operate.

    Until someone shows that foresight is not required to build an EA, by providing evidence that answers the previous question [referencing a random set of laws -- law+chance -- producing CSI or an EA that produces CSI] in the affirmative, the present mathematical and observational evidence shows that evolution can only be seen as a process requiring intelligence.

  350. MathGrrl:
    “If you disagree that these systems generate CSI, please show me how you would calculate CSI for the four scenarios I describe in my post 177 in this thread.”

    I don’t disagree that an EA can produce CSI, I disagree that law+chance can write an EA that produces CSI.

    I’ve already provided calculations for CSI. It took a while to research the variables. I don’t have the time to do this again right now. It is now your turn to back up your position and show that law+chance will either produce CSI or an EA that produces CSI relying only on random data from a source such as Random.org. The reliance on random data is required to remove any possible foresighted element to the construction of the program and the creation of and matching of search algorithm to search space (akin to the construction and matching of the values for the natural laws and the structure of life itself).

    Obviously a programming environment would be allowed. This is akin to granting that an environment within which any laws can operate can exist without intelligence.

  351.

    Kuartus, I understood all that the first time, when bornagain said it.

  352. dang jon, I was admiring how much more clearly Kuartus stated it than I did.

  353. vjtorley,

    Thank you for your posts. This will be my very last one on this thread.

    Thanks for all your work on actually computing CSI. I hope you’ll continue to participate on Mark Frank’s blog.

    Regarding CSI: on page 24 of his paper, Dembski defines the specified complexity Chi (minus the context sensitivity) as -log2[(10^120).Phi_s(T).P(T|H)], where T is the pattern in question, H is the chance hypothesis and Phi_s(T) is the number of patterns for which agent S’s semiotic description of them is at least as simple as S’s semiotic description of T.

    Here’s how I would amend the definition:

    Chi=-log2[(10^120).(SC/KC).PC], where SC is the Shannon complexity, KC is the Kolmogorov complexity (here defined as the length of the minimum description needed to characterize a pattern) and PC is the probabilistic complexity, defined as the probability of the pattern arising by natural non-intelligent processes.
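For concreteness, that amended formula can be sketched in a few lines of code. This is my own illustration, and the input values below are made up; as noted below, KC and PC are not actually computable in practice, so any real numbers plugged in here are assumptions.

```python
import math

def chi_amended(sc_bits, kc_bits, pc):
    """Amended metric: Chi = -log2[(10^120) * (SC/KC) * PC], where SC is
    the Shannon complexity, KC is the Kolmogorov complexity (minimum
    description length), and PC is the probability of the pattern arising
    by natural non-intelligent processes."""
    return -math.log2((10 ** 120) * (sc_bits / kc_bits) * pc)

# Made-up example: a 500-bit, fully incompressible pattern (SC = KC = 500)
# with an assumed natural-process probability of 2^-500.
print(round(chi_amended(500, 500, 2.0 ** -500), 1))  # 101.4
```

Positive Chi plays the same role here as in Dembski's original formulation: values above the cutoff are taken to signify design, while a PC near 1 drives Chi strongly negative.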

    While I understand your motivation for using Kolmogorov-Chaitin complexity rather than the simple string length, the problem with doing so is that KC complexity is uncomputable. For most sequences, the most that can be said is that the minimal description is no more than the length of the string plus a constant related to the language being used to describe it. That raises the same issues related to the use of the length of the sequence in Dembski’s formulation.

    I envisage PC as a summation, where we consider all natural non-intelligent processes that might be capable of generating the pattern, calculate the probability of each process actually doing so over the lifespan of the observable universe and within the confines of the observable universe, and then sum the probabilities for all processes.

    This is another term that is impossible to calculate, although in this case it is a practical rather than a theoretical limitation. We simply don’t know the probabilities that make up PC. We don’t even know all the processes — that’s why we continue to do research.

    Computing PC based on known processes and assumed probabilities will certainly lead to many false positives. This version of CSI is therefore more a measure of our ignorance than of intelligent agency, just as Dembski’s is.

    Can you think of any plausible counter-examples?

    That’s not how science works. If you’re proposing a new metric, you need to clearly and rigorously define it, which you’ve made a good start at, and show how it actually measures what you claim it measures with some worked examples. Personally, I’d like to see it applied to my four scenarios.

    One problem you’ll immediately encounter is identifying artifacts that are not designed, so that you can show that your metric doesn’t give false positives. That’s a metaphysical question that is sure to raise challenges from some ID proponents, no matter what artifacts you choose.

  354. CJYman,

    I just really hope that, for now, you finally realize that your assertions throughout more than half of this thread, namely that CSI is non-rigorous, are based on a misunderstanding of its calculation and are completely incorrect.

    On the contrary, I think my claim that CSI is not rigorously defined is well supported by the fact that several people compute it differently. Even vjtorley, who made an admirable effort to actually use Dembski’s definition, had to interpret certain terms.

    I still haven’t seen any ID proponent attempt to calculate CSI for the last three of the four scenarios described in 177 above, and the calculations for the first scenario all show that evolutionary mechanisms are, in fact, capable of generating CSI.

    “That does not follow from the No Free Lunch theorems. All those theorems say, in layman’s terms, is that averaged over all search spaces, no algorithm performs better than a random search. For particular search spaces, some algorithms can perform dramatically better than others.”

    Yes, and it is Dembski and Marks’s further work which shows that matching that specific algorithm to the search space is just as difficult as finding the output of that EA in the first place.

    This is exactly why modeling evolution as a search can confuse the issue. There is no process to “match that specific algorithm to the search space”. We inhabit a universe with particular characteristics. There aren’t any other “search spaces”, although the Earth’s environment is constantly changing. The No Free Lunch theorems have absolutely no applicability to a single “search space”. It is profoundly unsurprising that mechanisms that can be modeled as algorithms that work in search spaces modeled on the real world are observed in the real world.
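As a toy illustration of the averaging claim in the No Free Lunch theorems (my own sketch, not drawn from either side of this exchange): enumerate every possible fitness function on a tiny domain and compare two fixed query orders. On a particular function one order can beat the other decisively, but averaged over all functions they perform identically.

```python
from itertools import product

def queries_to_find_target(f, order):
    """Queries a fixed search order needs to find a point where the
    fitness function f equals 1 (cost = len(order) if no such point)."""
    for i, x in enumerate(order, start=1):
        if f[x] == 1:
            return i
    return len(order)

def average_cost(order, n=4):
    """Average cost over ALL 2^n fitness functions f: {0..n-1} -> {0,1}."""
    funcs = list(product([0, 1], repeat=n))
    return sum(queries_to_find_target(f, order) for f in funcs) / len(funcs)

# On one particular function the second order wins decisively...
print(queries_to_find_target((0, 0, 0, 1), [0, 1, 2, 3]))  # 4 queries
print(queries_to_find_target((0, 0, 0, 1), [3, 1, 0, 2]))  # 1 query

# ...but averaged over every possible fitness function they tie exactly.
print(average_cost([0, 1, 2, 3]) == average_cost([3, 1, 0, 2]))  # True
```

This is the sense in which "no algorithm performs better than another averaged over all search spaces"; it says nothing about performance on the one fitness landscape we actually inhabit.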

    If you do not agree, simply provide evidence that chance+law, absent intelligence, will produce either CSI from scratch or the EA to produce CSI, in the form of the experiment utilizing Random.org that I mentioned.

    That’s not how science works. If you make a claim, you have to support it. Thus far the claim that CSI is an indicator of the involvement of an intelligent agent has not been supported.

    “This was raised on one of the threads you mentioned earlier. Without going back to it, I remember several people pointing out that the world we inhabit is one “search space” in your model (the scare quotes are because there are several other issues with modeling evolution as a search). It’s not surprising that some algorithms are better able than others to traverse that space. It’s even less surprising that the evolutionary mechanisms we observe are components of those algorithms. If they didn’t work in this “search space”, we wouldn’t observe them.”

    Of course, and if law+chance won’t produce the CSI observed in this universe from scratch without evolution, then — barring the use of an infinite multiverse in probability calculations for this universe, which then arbitrarily destroys the foundation for all probability-based science — law+chance won’t produce the EA (structure of life, laws of our universe, and the match between the two) required within this universe to produce CSI.

    You’re confusing two claims. You seem to accept that, in this universe and in particular on this planet, evolutionary mechanisms can generate CSI, however loosely defined. That’s the only point I was trying to clarify on this thread.

    Your second claim seems to be based on the anthropic principle. That’s a whole separate discussion with a rich background. Frankly, I find it pretty unconvincing but, with all due respect to your views, also pretty uninteresting. There’s just not enough math in it. ;-)

    If you do, in fact, accept that CSI can be generated via known evolutionary mechanisms, I’ll leave the anthropic principle arguments to others.

  355. MathGrrl and Dr. Torley, if you can load it, JonathanM commented here on the inability of ‘gene duplication’, operating per neo-Darwinian processes, to generate any non-trivial functional complexity;

    Michael Behe Hasn’t Been Refuted on the Flagellum!
    Excerpt: Douglas Axe of the Biologic Institute showed in one recent paper in the journal Bio-complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalization. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is up to six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not inclusive of the duplication itself).
    http://www.evolutionnews.org/2.....44801.html

    I looked up Dr. Axe’s paper here and found

    The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations Douglas D. Axe*
    Excerpt: In particular, I use an explicit model of a structured bacterial population, similar to the island model of Maruyama and Kimura, to examine the limits on complex adaptations during the evolution of paralogous genes—genes related by duplication of an ancestral gene. Although substantial functional innovation is thought to be possible within paralogous families, the tight limits on the value of d found here (d ≤ 2 for the maladaptive case, and d ≤ 6 for the neutral case) mean that the mutational jumps in this process cannot have been very large.
    http://bio-complexity.org/ojs/.....O-C.2010.4

    Though the math is technical, and over my head, I do know that this study lines up extremely well with the empirical evidence that shows severe limits for the gene duplication scenario. A scenario that has never ‘empirically’ violated the principle of Genetic Entropy, i.e. all examples put forth by neo-Darwinists fail after careful scrutiny!

  356.

    Mathgrrl,

    You have repeatedly stated that evolutionary mechanisms can create CSI.

    I am completely comfortable being the odd man out here, so again, I must insist. CSI is an acronym for complex specified information. That is a noun with two very deliberate modifiers in front of it. That particular noun has certain characteristics that make it distinguishable among other words. One of the primary characteristics is that information only exists by means of semiotic representations (symbols and rules). There are no examples of it existing by any other means anywhere in the cosmos. This is simply an observation of reality.

    Many people have been tempted here to ignore the fact that Shannon specifically noted two distinct characteristics of a given signal (that which is meaningful and that which is noise). This distinction is a logical result of his engineering point of view (i.e., that within the transmission and reception of a signal, some part of that signal could be information and some part could be noise). In ignoring this fact, one could temporarily ignore the fact that information does indeed require symbols in order to exist, while noise does not have that requirement. The conflation of the two is rampant (and often deliberate). However, in the presence of the two modifiers (complex and specified), relying on that kind of ignorance is not available. CSI does in fact exist (as does all other meaningful information) as a result of symbols and rules.

    So when you repeatedly claim that evolutionary mechanisms can “create” or “produce” complex specified information, you are making an absolutely unsupported statement. For a mechanism to “create” information (by definition) that mechanism would first have to have the capacity to create symbols and rules. Otherwise, the very most that could be said is that a mechanism has the capacity to alter or manipulate the symbols and rules that it had already been given – since it doesn’t have the capacity to create them itself. You might even say that evolutionary mechanisms can manipulate information within an existing intelligent system. That would be far more accurate (and convincing) than the claim you make.

    In essence, you are removing from the table the key characteristic of meaningful (complex specified) information, and then you make claims as to how it can be produced. I know you fully understand this rather significant problem, or else you wouldn’t try so hard to dismiss it. Nonetheless, it is a fact.

    - – - – - -

    Having witnessed your defense of an unsupported assertion, I make this observation on behalf of the gallery. I also make it in opposition to what other proponents of ID may say and think.

    If you care to respond, Mathgrrl, why not this time actually address the issue instead of providing your customary uninterested dismissal. I am more than happy to get down to the level of the symbol itself. We can remove the system you take for granted, and we will see if your position holds up.

  357. Jon Specter, I don’t get it. If you understood, how come you said that the concept of CSI was superfluous and that detecting design in a designed universe was practically nonsense? It seems to me that you didn’t understand, or else I don’t think you would have had a problem with it. I take it you don’t disagree with my response?

  358. MathGrrl,

    The fact that some people don’t understand CSI, or calculate for it from different givens (which can be done but the calculation changes to the context-dependent form of CSI), makes no difference to mine and KF’s argument for CSI based explicitly on the math explained by Dembski in “Specification: the patterns which signify intelligence.”

    If I had more time, we could argue about different people’s interpretations of it, but the links I provided for you, as well as the info KF provided, show that how I have calculated for CSI does indeed provide a specific measure of probability based on resources and search space. If my calculations are correct, and based on Dembski’s CSI as I have defended in the links I provided for you, then CSI is indeed a rigorous mathematical concept. Just because you continue to misunderstand it, or others disagree with how I have calculated it, does not make it any less rigorous. I state again, I have defended how I calculate it in the provided links, with direct quotes and examples from Dembski’s paper.

    Either way, disagreement or not, I have provided a way to calculate for CSI and my argument is based on that calculation. Can you show that CSI based on my calculation and KF’s explanation of the concept will be produced by law+chance?

    KF has already, much earlier in this thread, provided you with our comments here as an example of a pattern requiring intelligence in its generation and also exhibiting CSI. The only thing that your last comment tells me is that you can provide no example of law+chance absent intelligence generating either CSI from scratch or an EA which produces CSI. Remember, the laws and initial conditions of the program must be derived strictly from a random source such as Random.org to remove any potential foresight of an intelligent agent.

    So the ID hypothesis of CSI — at least as KF and I have explained it and calculated for it — as a no-go for law+chance absent intelligence can so far be seen to be correct.

  359. Upright BiPed,

    Well stated at 356!

    I agree completely. It is obvious that EAs shuffle around CSI (from the CSI in the structure of its programming — the probability of matching search algorithm to search space — to the CSI in the structure of its output) but do not generate CSI.

    There does seem to be a lot of confusion about this.

  360.

    kuartus:

    Jon Specter, I dont get it. If you understood, how come you said that the concept of CSI was superfluous and that detecting design in a designed universe was practically nonsense?

    Understanding what you are saying and agreeing that it is meaningful are two different questions. I understood what you were saying, but I do not find it meaningful. A design detector should be able to differentiate between designed and non-designed items. If the entire universe exhibits design, then there are no non-designed items to detect.

    On a lark, I stayed up last night writing a computer program for a design detector under those assumptions. Feel free to use it. No charge.

    Here is the code:

    Design = 1

    It is a model of parsimonious code, if I do say so myself.

  361. Jon Specter, it seems you have no idea what you are talking about. I believe I made it pretty clear for you. Design detection is about differentiating between what agents with foresight and purpose-driven actions can do as opposed to agencies which don’t have those qualities. Could natural processes which don’t have intelligence make an iPhone, or even something as simple as a notebook? A mind can.
    You are confusing two different things. Just because the universe as a whole item has an intelligent source, it does not mean it is intelligent in and of itself. A notebook is not intelligent. It’s as dumb as a rock. It can’t carry out purpose-driven actions all by itself. You also seem to imply that you can’t differentiate between designed items.
    Can you not tell the difference between a computer and a bicycle? They are both designed items. Yet they are designed for different things. The universe is designed to sustain life. Yet just because the conditions are consistent with life, it is not enough. For example, just because you have all the parts necessary for making a bike, you won’t have a bike without assembling it. Again, design detection is about figuring out what needs FURTHER assembly within the universe, after realizing that the physical universe itself would have been causally inadequate to account for its origin. There, I made it as simple as I could.

  362. Jon, I think I see your problem now.
    You believe that as a result of the universe being designed, everything IN the universe must also be designed. But that is not the case. When we say the universe is designed, we mean the CONDITIONS in the universe, not necessarily everything in it: the laws of physics, the physical constants, the properties of the earth, and so forth. For example, I don’t believe, and I don’t think anyone believes, the Ceres asteroid or Mount Everest are designed. I hope this clears things up.

  363. Upright BiPed,

    So when you repeatedly claim that evolutionary mechanisms can “create” or “produce” complex specified information, you are making an absolutely unsupported statement. For a mechanism to “create” information (by definition) that mechanism would first have to have the capacity to create symbols and rules.

    Your objections to vjtorley’s calculation of CSI are an excellent demonstration of why rigorous mathematical definitions and example calculations are essential to making progress in this discussion. Based on Dembski’s discussion of CSI in Specification: The Pattern That Signifies Intelligence, vjtorley clearly demonstrated that gene duplication, an event for which there is empirical evidence, generates CSI.

    If you wish to refute this, you need to provide a similar level of detail. That means rigorously defining your version of CSI and demonstrating how to calculate it for the same scenario. Unless and until you do so, any claims you make about your metric are unsupported.

  364. CJYman,

    The fact that some people don’t understand CSI, or calculate for it from different givens (which can be done but the calculation changes to the context-dependent form of CSI), makes no difference to mine and KF’s argument for CSI based explicitly on the math explained by Dembski in “Specification: the patterns which signify intelligence.”

    The paper’s title is Specification: The Pattern That Signifies Intelligence, and vjtorley used the discussion of CSI in it to demonstrate that gene duplication that results in increased production of a protein does, by that definition, generate CSI.

    Either way, disagreement or not, I have provided a way to calculate for CSI and my argument is based on that calculation. Can you show that CSI based on my calculation and KF’s explanation of the concept will be produced by law+chance?

    I found your calculation related to titin to be confusing, frankly. You didn’t provide a mathematically rigorous definition of CSI that I saw and you didn’t go into as much detail as did vjtorley.

    If you believe that your version of CSI is equivalent to what Dembski has published and you further believe that it is a reliable indicator of intelligent agency, please provide your rigorous definition and demonstrate how you arrive at a different answer than did vjtorley for the scenario he analyzed. Applying your definition to the other three scenarios I described would also be very helpful to others attempting to recreate your calculations.

    Let’s get right down to the math, right here in this thread.

  365.

    Mathgrrl, despite me asking you to do otherwise, you simply talked past my point without addressing any of it.

    In strategic parlance this is referred to as a flank. It’s a maneuver specifically intended to avoid the front of a defended position. As a tactic in debate, it is intended to draw attention away from the defended position and engage elsewhere – where the actual strength of an argument can be avoided at all costs.

    Your continued avoidance is therefore duly noted.

  366. Darwinist: “Random variation and natural selection can jolly well produce CSI.”

    ID scientist: “I doubt it.”

    ID Scientist:”Intelligent agency is the only known cause for CSI.”

    Darwinist: “What is CSI?”

    You’ve got to love it!

  367.

    By the way, the demonstrable evidence of a flank can be seen in the text of Mathgrrl’s post.

    She begins by quoting me:

    So when you repeatedly claim that evolutionary mechanisms can “create” or “produce” complex specified information, you are making an absolutely unsupported statement. For a mechanism to “create” information (by definition) that mechanism would first have to have the capacity to create symbols and rules.

    Then she makes a statement that literally has nothing whatsoever to do with my post:

    Your objections to vjtorley’s calculation of CSI are an excellent demonstration of why rigorous mathematical definitions and example calculations are essential to making progress in this discussion. Based on Dembski’s discussion of CSI in Specification: The Pattern That Signifies Intelligence, vjtorley clearly demonstrated that gene duplication, an event for which there is empirical evidence, generates CSI.

    The alert observer will notice that her statement makes no reference whatsoever to anything I said in my post.

    For those who have been following this thread, please feel free to return to ANY exchange between her and me on this topic, and you will see the exact same pattern.

  368. Upright BiPed,

    Mathgrrl, despite me asking you to do otherwise, you simply talked past my point without addressing any of it.

    Actually, anyone reviewing this thread would find me justified in saying that about you. Throughout this conversation I have focused solely on obtaining a rigorous mathematical definition of CSI and some example calculations to learn how it works. I have made it very clear that my goal is to test the ID claim that CSI is a reliable indicator of intelligent agency. You have never provided a definition nor any example calculations.

    I look forward to discussing CSI with you when you decide to define it and show some calculations.

  369. Dr Torley

    Been busy elsewhere, but must pause to wish you and Japan well in the face of a devastating disaster.

    Participants and onlookers:

    I passed by UD just now for the first time in some days, having been busy elsewhere.

    I see this thread still goes in circles driven by MG’s refusal to acknowledge what is in front of her.

    Digitally coded, functionally specific, complex information is a commonplace entity, it is the base of software, modern communications and related fields. It also happens to be what we express in written language [and in phonemes], as well as music. It is naturally measured in bits [a metric of chained, contextually effective yes/no decisions], which are of course functionally specific.

    To date, we have literally trillions of cases in point of such dFSCI. We know how to convert to a bit metric, and routinely do so. We buy and sell hard drives, CDs, DVDs, SD cards, and memory sticks and chips by their capacity in [functionally specific] bits. Bits at work, in ways familiar from or materially similar to the familiar technologies of C21 life and work and play.

    Anyone who tries to tell me that this is not an adequately defined concept and metric, is immediately in utter disconnect from digital reality in C21. Indeed, I am immediately suspicious that we are seeing willful selective hyperskepticism.

    The only effective answer to that is to point out the absurdity of using computer technology to deny the fundamental reality of such technologies: bits at work, under intelligent direction.

    MG, sorry if that cap fits, but it plainly does.

    Now, here is a simple threshold-based metric for such digitally expressed FSCI, long since presented in the UD weak argument correctives [top right this and every UD page], no 28:

    For practical purposes, once an aspect of a system, process or object of interest has at least 500 – 1,000 bits or the equivalent of information storing capacity, and uses that capacity to specify a function that can be disrupted by moderate perturbations, then it manifests FSCI, thus CSI. This also leads to a simple metric for FSCI, the functionally specified bit; as with those that are used to display this text on your PC screen. (For instance, where such a screen has 800 x 600 pixels of 24 bits, that requires 11.52 million functionally specified bits. This is well above the 500 – 1,000 bit threshold.)

    On massive evidence, such cases are reliably the product of intelligent design, once we independently know the causal story. So, we are entitled to (provisionally of course; as per usual with scientific work) induce that FSCI is a reliable, empirically observable sign of design . . .

    Or, if you want that boiled down to a formula, let us do so as I do in my always linked:

    _______________

    >> we can construct a rule of thumb functionally specific bit metric for FSCI:

    a] Let contingency [C] be defined as 1/0 by comparison with a suitable exemplar [i.e. this is an operational definition on family resemblance], e.g. a tossed die that on similar tosses may come up in any one of six states: 1/ 2/ 3/ 4/ 5/ 6. That is, diverse configurations of the component parts or of outcomes under similar initial circumstances must be credibly possible.

    b] Let specificity [S] be identified as 1/0 through specific functionality [FS] or by compressibility of description of the specific information [KS] or similar means that identify specific target zones in the wider configuration space. [Often we speak of this as "islands of function" in "a sea of non-function." (That is, if moderate application of random noise altering the bit patterns will beyond a certain point destroy function [notoriously common for engineered systems that require working parts mutually co-adapted at an operating point, and also for software and even text in a recognisable language] or move it out of the target zone, then the topology is similar to that of islands in a sea.)]

    c] Let degree of complexity [B] be defined by the quantity of bits to store the relevant information, with 500 – 1,000 bits serving as the threshold for “probably” to “morally certainly” sufficiently complex to meet the FSCI/CSI threshold.

    d] Define the vector {C, S, B} based on the above [as we would take distance travelled and time required, D and t: {D, t}], and take the element product C*S*B [as we would take the element ratio D/t to get speed].

    e] Now we identify the simple FSCI metric, X:

    C*S*B = X,

    the required FSCI/CSI-metric in [functionally] specified bits.

    Once we are beyond 500 – 1,000 functionally specific bits, we are comfortably beyond a threshold of sufficiently complex and specific functionality that the search resources of the observed universe would by far and away most likely be fruitlessly exhausted on the sea of non-functional states if a random walk based search (or generally equivalent process) were used to try to get to shores of function on islands of such complex, specific function. >>
    _______________
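    For readers who think in code, the a] to e] steps above boil down to a few lines. (A reader's sketch: the function name and helper are mine, not from the original, and the threshold figure is the one quoted above.)

    ```python
    # Sketch of the rule-of-thumb metric X = C * S * B from steps a] to e].

    def fsci_metric(contingent: bool, specific: bool, bits: int) -> int:
        """Return X = C * S * B in functionally specified bits.

        C = 1 if the aspect is contingent (could credibly be otherwise), else 0.
        S = 1 if it is functionally specific (an island of function), else 0.
        B = information storage capacity in bits.
        """
        C = 1 if contingent else 0
        S = 1 if specific else 0
        return C * S * bits

    THRESHOLD = 1000  # upper end of the 500 - 1,000 bit threshold used above

    # The 800 x 600 pixel, 24-bit screen example from the correctives:
    X = fsci_metric(contingent=True, specific=True, bits=800 * 600 * 24)
    print(X, X > THRESHOLD)  # 11520000 True
    ```

    Note that C and S act as all-or-nothing gates: if either is 0, X is 0 regardless of the bit count, which is exactly the claim that mere storage capacity without contingency and functional specificity does not count as FSCI.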

    If you want more sophisticated metrics, they have been provided aplenty above, and have all been brushed aside in the haste to hyperskeptical dismissal. That includes the Durston et al metric, which put meat on the islands of function (hot zone, target zone) concept used by Dembski, publishing FITS values for 35 protein families.

    VJT has provided straight and modified versions of a calculation on the Dembski model.

    But we do not need to do that.

    All we need to do is to challenge MG to provide a case where, beyond 1,000 bits:

    1 –> Symbolic codes — glyphs and rules for meaningful and functional combinations — originate by undirected forces of chance and necessity.

    2 –> Algorithms, program statements, data structures and correlated physical implementing machinery to cause function similarly originated by chance plus blind necessity.

    3 –> Consequently, cybernetic functionality emerged without intelligent direction and control.

    __________

    There are of course no such cases, that is why MG is resorting to selective hyperskepticism.

    But, we literally have billions of cases where such systems originate through intelligent direction and control (including the sort of so-called genetic or evolutionary algorithms she is still trying to throw up as an objection).

    So, on inference to best explanation [the underlying epistemological frame of origins science], it is clear that we have a best and empirically reliable explanation for cases of dFSCI.

    Now, simply look at a cell based organism, and observe the genes and associated regulatory networks. dFSCI at work, in a cybernetic code based system.

    On inference to best explanation, backed up by billions of test cases, design. QED.

    G’day

    GEM of TKI

  370.

    Actually, anyone reviewing this thread would find me justified in saying that about you.

    My comments to you have centered around the singular statement you repeatedly make. That being that “evolutionary mechanisms” have the ability to “create” CSI. At this point I have probably made it stupidly clear that I refute that conclusion as a matter of empirical observation. “Evolutionary mechanisms” cannot make CSI. I have made my argument on that point abundantly clear. You steadfastly refused to engage that argument because it’s a no winner for you.

    That is what readers will see.

  371. Upright BiPed,

    My comments to you have centered around the singular statement you repeatedly make. That being that “evolutionary mechanisms” have the ability to “create” CSI. At this point I have probably made it stupidly clear that I refute that conclusion as a matter of empirical observation. “Evolutionary mechanisms” cannot make CSI.

    Since you continue to fail to provide a rigorous mathematical definition of CSI, I can only go by the one provided by Dembski in Specification: The Pattern That Signifies Intelligence. vjtorley, an ID proponent of impeccable credentials, demonstrated how Dembski’s own words lead to the conclusion that CSI can be generated by known evolutionary mechanisms.

    If you want to refute that conclusion, you’re going to need to provide a clear definition of your terms and detailed calculations. Bluster, bloviation, and incivility are not adequate substitutes.

  372. kairosfocus,

    Your post suffers from the same problems as those by Upright BiPed: No math.

    Provide your rigorous mathematical definition of CSI, show how vjtorley’s calculations are incorrect, and provide example calculations for the four scenarios I described and we’ll be able to have a rational conversation about whether or not CSI is a reasonable metric for identifying intelligent agency.

    Until you do that, any claims you make about CSI are quite literally meaningless.

    ID is supposed to be a scientific theory. Let’s work together to provide the mathematical basis to make it testable.

  373. Mathgirl, I think you are hilarious.
    This is what I think you should do. Send Dr. Dembski an email telling him to provide a mathematically rigorous definition of CSI for you.
    There, problem solved.

  374. MathGrrl, in case you are wondering if evolution has been ‘tested’ with mathematical rigor (you know, to avoid being one-sided): it has been tested and failed:

    Whale Evolution Vs. Population Genetics – Richard Sternberg PhD. in Evolutionary Biology – video
    http://www.metacafe.com/watch/4165203/

    Waiting Longer for Two Mutations – Michael J. Behe
    Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that ‘for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years’ (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless “using their model” gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model.
    http://www.discovery.org/a/9461

    This following calculation by geneticist John Sanford for ‘fixing’ a beneficial mutation, or for creating a new gene, in humans, gives equally absurd numbers that once again render the Darwinian scenario of humans evolving from apes completely false:

    Dr. Sanford calculates it would take 12 million years to “fix” a single base pair mutation into a population. He further calculates that to create a gene with 1000 base pairs, it would take 12 million x 1000 or 12 billion years. This is obviously too slow to support the creation of the human genome containing 3 billion base pairs.
    http://www.detectingtruth.com/?p=66

    Indeed, math is not kind to Darwinism in the least when considering the probability of humans ‘randomly’ evolving:

    In Barrow and Tippler’s book The Anthropic Cosmological Principle, they list ten steps necessary in the course of human evolution, each of which, is so improbable that if left to happen by chance alone, the sun would have ceased to be a main sequence star and would have incinerated the earth. They estimate that the odds of the evolution (by chance) of the human genome is somewhere between 4 to the negative 180th power, to the 110,000th power, and 4 to the negative 360th power, to the 110,000th power. Therefore, if evolution did occur, it literally would have been a miracle and evidence for the existence of God. William Lane Craig

    William Lane Craig – If Human Evolution Did Occur It Was A Miracle – video
    http://www.youtube.com/watch?v=GUxm8dXLRpA

    Along that same line:

    Darwin and the Mathematicians – David Berlinski
    “The formation within geological time of a human body by the laws of physics (or any other laws of similar nature), starting from a random distribution of elementary particles and the field, is as unlikely as the separation by chance of the atmosphere into its components.”
    Kurt Gödel, was a preeminent mathematician who is considered one of the greatest to have ever lived. Of Note: Godel was a Theist!
    http://www.evolutionnews.org/2.....cians.html

    “Darwin’s theory is easily the dumbest idea ever taken seriously by science.”
    Granville Sewell – Professor Of Mathematics – University Of Texas – El Paso

  375.

    Mathgrrl,

    Once again, you post a quote of mine, then follow it with a statement that addresses absolutely nothing whatsoever of the case I have made against your position.

    This all stands to reason of course – the case I’ve presented is not something you can address, for to do so would immediately eliminate your position. In other words, the strength of the evidence against your claim is in direct proportion to the willful ignorance you’ve put on display.

    Given the situation where your claim is at the mercy of the evidence, this is not likely to change.

  376. UB

    Your argument seems to be that CSI (indeed all information) involves symbols and only a designer can create symbols. I don’t think there is a single neat definition of “symbol”. But perhaps you can clarify this with an example.

    What are the symbols in the haemoglobin molecule and what do they symbolise?

  377. MathGrrl, I think your question may be malformed. Maybe I don’t understand this well enough, and please accept my apologies if I am wrong. However, I thought “specified” is not measurable in the same way that “information” is. I thought information was either specified or non-specified. In other words, you measure the same thing in either case, you just determine first whether it’s specified or not. The same may be true with complexity, as suggested by the term CSI. After all, “complex” and “specified” are characteristics of the thing being measured — information. They’re either there or they’re not.

  378.

    Kuartus:

    You believe that as a result of the universe being designed, then everything IN the universe must also be designed.

    Well, that was essentially the argument that bornagain was using to wave off my questions. If you are not making that argument, then perhaps we can make progress here.

    I hope this clears things up.

    It clears up the fact that you are not making the same argument as bornagain. However, it still does not address my original question to bornagain of how one differentiates between a teleological Darwinian process and a non-teleological process. Can you take a run at that?

  379. jon, though the universe is shown to be Theistic in its basis from quantum mechanics, as opposed to Deistic or materialistic, as the universe could have been found to be, that does not preclude one from seeing design that was further implemented into the universe by God. The ONLY thing that acknowledging the truth of a Theistic universe does is to show that to even ask if materialistic ‘non-teleological’ processes were ever involved in creating the unparalleled levels of design we find in life is completely nonsensical, since materialism is in fact falsified as an explanation for reality in the first place. Furthermore, Jon, all normal ‘non-teleological’ adaptations we observe in life, in which God is merely sustaining life and the universe, always come at a cost to the information that was originally encoded in the life form. If we ever did see the functional complexity of a life form increase over its original ‘optimal’ form, we can be sure that God intervened, since the universe is shown to be theistic in its basis!

  380. MG:

    Do you understand the absurdity of:

    1 –> Using a digital computer [even if a smartphone etc, that is what it is . . .] to compose and post an alphanumeric textual message in contextually evasive English

    2 –> Thus producing 588 7-bit [128 state] functionally specific ASCII characters, of ~1.1 *10^1,239 possible configs, vastly beyond the search capacity of the observed cosmos (but easily within the reach of mind, per massive observation)

    3 –> Which can be described specifically, analysed, calculated upon and the like using standard, routinely used simple digital communication techniques (as has just been done)

    4 –> Where, in light of say the functionality and structure of DNA and its genetic code, such dFSCI is the materially relevant subset of complex, specified information [CSI],

    5 –> CSI here being used as a general description that is specified quantitatively in various models, such as the simple FSCI metric already presented above, by Durston et al in their FITS metric for FSC, and Dembski’s metric from his Specification paper (and other models) as well as other adaptations [metrics in real world contexts may have different approaches and models that are good enough once fit for purpose]

    6 –> Where all along the Dembski metric on CSI has been in the UD WAC’s top right this and every UD page, no 27:

    >> A more general approach to the definition and quantification of CSI can be found in a 2005 paper by Dembski: “Specification: The Pattern That Signifies Intelligence”.

    For instance, on pp. 17 – 24, he argues:

    define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [X] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 logarithm of the conditional probability P(T|H) multiplied by the number of similar cases pS(t) and also by the maximum number of binary search-events in our observed universe 10^120]

    X = – log2[10^120 ·pS(T)·P(T|H)].

    To illustrate, consider a hand of 13 cards, all spades, which is unique. 52 cards allow some 635 *10^9 possible 13-card hands, giving odds of 1 in 635 billion as P(T|H). Also, there are four similar all-of-one-suit hands, so pS(T) = 4. Calculation yields X = -361, i.e. < 1, so that such a hand is not improbable enough that the – rather conservative – χ metric would conclude “design beyond reasonable doubt.” (If you see such a hand in the narrower scope of a card game, though, you would be very reasonable to suspect cheating.)

    Debates over Dembski’s models and metrics notwithstanding, the basic point of a specification is that it stipulates a relatively small target zone in so large a configuration space that the reasonably available search resources — on the assumption of a chance-based information-generating process — will have extremely low odds of hitting the target. So low, that random information generation becomes an inferior and empirically unreasonable explanation relative to the well-known, empirically observed source of CSI: design. >>

    7 –> Where also, biologically relevant cases and onward adaptations have been given above in this thread and elsewhere.

    [ . . . ]
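    For concreteness, the card-hand figures in the quoted WAC can be checked numerically. (The numbers come from the quote; the code itself is a reader's spot-check, not anything from Dembski's paper.)

    ```python
    import math

    # X = -log2(10^120 * pS(T) * P(T|H)) for an all-spades bridge hand.

    p_T_given_H = 1 / math.comb(52, 13)   # 13-card hands: ~635 * 10^9, so P ~ 1/6.35e11
    phi_S = 4                             # four similar all-of-one-suit hands
    X = -(120 * math.log2(10)             # log2 of the 10^120 bound
          + math.log2(phi_S)
          + math.log2(p_T_given_H))

    print(round(X))  # -361: well short of the design threshold, as the quote says
    ```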

  381. All this, in order to make compounding protests on the alleged lack of adequate definition of CSI!

    I am sorry, MG, but your remarks above are patently self-referentially absurd [using what you deny in order to object to it], and reflect willful obtuseness and selective hyperskepticism. This, you have sustained for WEEKS.

    I don’t doubt that elsewhere you are trespassing on the patience and efforts above to try to make Dr Torley and others seem not to know what they are talking about, and onward that the core concepts of design theory are ill defined nonsense.

    I shall be direct: SUCH IS A SELF-REFERENTIALLY ABSURD STRAWMAN OF YOUR OWN MAKING.

    I think the time has more than come for you to start from basics and get your own concepts right:

    a: what is a digital vs an analogue quantity?

    b: what is a bit or binary digit?

    c: for a cluster of n bits, how many possible states or configurations are there?

    d: For 1,000 bits how many are there?

    e: How does this compare to the ~ 10^150 Planck time states for the 10^80 atoms of our observed cosmos across a working life of 10^25 s or about 50 mn times the time said to have elapsed since the big bang?

    f: What is a symbolic code, and how does it work?

    g: For a complex, digital coded symbolic, linguistic system like this post, what would happen very rapidly to function if more and more random changes are introduced in the coded characters? For algorithmically functional coded systems [i.e. with descriptive and prescriptive information that makes an executing machine do something]?

    h: Thus, does it make sense to speak of islands of function in the space of possible configs? Why or why not?

    i: Once we are past 1,000 bits, is it reasonable that any undirected process on the gamut of our cosmos would generate dFSCI?

    j: Has it ever been observed that a process of chance plus mechanical necessity starting from an arbitrary initial configuration has constructed an algorithmically or linguistically functional system beyond 1,000 bits storage capacity? (Systems that start within an island of function and per an algorithm hill climb to better performance, are NOT cases in point.)

    k: Have we seen intelligent beings create such dFSCI rich systems?

    l: Is this the routine and empirically reliable source for such systems, once we directly know the source?

    m: taking the infinite monkeys theorem, and in light of the statistical thermodynamics principles thereby illustrated, is it analytically reasonable to expect that the pattern just outlined, that the routine reliable source of dFSCI is intelligence, will be overturned observationally?

    n: Is such dFSCI then a reliable sign of design? Why or why not?

    o: Other complex functional [parts brought together to achieve function] entities that do not use codes directly, can be reduced to such, often by a nodes, interfaces and arcs mesh with specifications, where the structure of yes/no decisions to construct the entity gives a bit metric of the blueprint. Can these be seen as represented by such structured codes? Why or why not?

    p: is or is not this broader FSCI — where the specificity is constrained by and relates to the function [i.e. if the specificity and the function are not coupled, the entity is not FSCI] a sign of design? Why or why not?

    q: When therefore you see such dFSCI AND related functional, specific complex organisation in the living cell, is or is this not a sign pointing to design as the best abductive explanation? Why or why not?
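    The arithmetic behind items c through e can be illustrated directly. (These are my own quick figures, not part of the original list; the ~10^150 Planck-time states figure is the one cited in item e.)

    ```python
    import math

    # States for a cluster of n bits, vs the ~10^150 Planck-time atomic
    # states cited for the working life of the observed cosmos.

    def n_states(n_bits: int) -> int:
        """A cluster of n bits has 2^n possible configurations."""
        return 2 ** n_bits

    print(math.log10(n_states(500)))    # ~150.5, i.e. 2^500 ~ 3.3 * 10^150
    print(math.log10(n_states(1000)))   # ~301.0, i.e. 2^1000 ~ 1.1 * 10^301
    # So 1,000 bits gives ~10^301 configs, dwarfing the ~10^150 states figure.
    ```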

    GEM of TKI

  382. F/N: It is helpful to remind ourselves, again, on Orgel on CSI — i.e. the general descriptive term [and notice how that term in its root OOL context naturally invites a focus on FSCI as key subset] is prior to the mathematical models, and antedates Dembski’s involvement by 20 or more years:

    ____________

    >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.] >>

  383.

    Mark at 376

    The symbols I am referring to are those contained within nucleic sequencing (genetic code), I think that was fairly obvious from my comments.

  384. F/N 2: MG, are you willing to assert that Orgel’s remarks above — which are the basis for both the broader term CSI and the more focussed one FSCI; just notice: “Organization, then, is functional[ly specific] complexity and carries information” — are “meaningless”?

    Why or why not?

  385. QuiteID,

    I thought “specified” is not measurable in the same way that “information” is. I thought information was either specified or non-specified. In other words, you measure the same thing in either case, you just determine first whether it’s specified or not. The same may be true with complexity, as suggested by the term CSI. After all, “complex” and “specified” are characteristics of the thing being measured — information. They’re either there or they’re not.

    You’ve put your finger on a couple of the parts of the definition of CSI that I find least mathematically rigorous. These issues are a big part of why I raised the questions I have on this thread. If ID proponents are going to claim, as they do, that CSI is a reliable metric for detecting the intervention of intelligent agency, they must define that metric with sufficient rigor that it can be objectively measured by anyone interested in doing so.

    My goal, as I’ve explained repeatedly throughout this discussion, is to actually test the claims made about CSI. The four scenarios I described in my post 177 above are my attempt to get enough information to be able to perform that calculation.

    If you are willing to provide the level of detail that vjtorley did, I’m very interested in looking at your calculations.

  386. kairosfocus,

    Hundreds of words are not as valuable in this context as a single calculation. You can explain for as long as you like how wonderful CSI is as a concept, but until you go to the level of effort that vjtorley did to actually clarify the definition and show how to compute it, your claims are unfounded.

    All I’ve been asking for throughout this discussion is a mathematically rigorous definition of CSI and some detailed examples of how to calculate it. That is not an unreasonable request. As I noted in my post 238, if I asked for similar detail about a metric proposed by one of my colleagues, she’d fill whole whiteboards with far more than I requested.

    vjtorley has set the bar here. Are you willing to try to clear it?

  387.

    KF, thank you for standing in the face of absurdity, and pointing it out. You and others have repeatedly engaged the mathematical concepts of CSI in addressing Mathgrrl’s questions. My problem, however, doesn’t come from the mathematics of CSI, but instead comes from Mathgrrl’s ultimate conclusion that “evolutionary mechanisms can create CSI”.

    I have made my case, but she refuses to address it.

  388. F/N 3: To see how applicable that nodes and arcs view is, let us think about sound, speech and alphabetic writing.

    Sound is analogue, compressions and rarefactions of the air or something like that, that propagates. To get to speech, we have sufficiently distinct vocal tract sounds, that can be combined as clusters of phonemes.

    Phonemes are then represented by essentially arbitrary symbols; what became our A, I gather, started out as a stylised ox-head or something like that. Sets of distinct symbols, chained in space as a string, then represented phonemes, which are discretised from sounds: w-o-r-d-s.

    Such strings of symbols then can be transformed into bits, by coding schemes such as ASCII.

    But here we are, digital from analogue, strung together in string structures. And we then focus analytically on the strings.

    Do you see why this can also be used for the wire-mesh for a 3-d object, and for the exploded diagram that shows how components are to be integrated to form a functional entity?

    Then, by specifying rules and symbols, we can describe the blueprint as a nodes and arcs mesh [and cluster of vectors, i.e. a matrix].

    So now we have a way to use the dFSCI analysis to address complex organised functional entities.
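    The chain just described — text chunked into discrete symbols, then rendered as bits — can be made concrete in a few lines. (The sample word and helper are mine; the 7 bits per character matches the 128-state ASCII figure used earlier in the thread.)

    ```python
    # Text -> chained symbols -> 7-bit ASCII code groups.

    def ascii_bits(text: str) -> str:
        """Render a string as chained 7-bit ASCII code groups."""
        return " ".join(format(ord(ch), "07b") for ch in text)

    word = "words"
    print(ascii_bits(word))   # 1110111 1101111 1110010 1100100 1110011
    print(len(word) * 7)      # 35 bits of storage capacity
    ```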

  389. F/N 4: Since much of the above is wranglings about definitions, here is Wiki in the guise of admission against interest:

    ________________

    >> A definition is a passage that explains the meaning of a term (a word, phrase or other set of symbols), or a type of thing. The term to be defined is the definiendum (plural definienda). A term may have many different senses or meanings. For each such specific sense, a definiens (plural definientia) is a cluster of words that defines that term . . . .

    Like other words, the term definition has subtly different meanings in different contexts. A definition may be descriptive of the general use meaning, or stipulative of the speaker’s immediate intentional meaning. For example, in formal languages like mathematics, a ‘stipulative’ definition guides a specific discussion. A descriptive definition can be shown to be “right” or “wrong” by comparison to general usage, but a stipulative definition can only be disproved by showing a logical contradiction [3].

    A precising definition extends the descriptive dictionary definition (lexical definition) of a term for a specific purpose by including additional criteria that narrow down the set of things meeting the definition . . . .

    An intensional definition, also called a connotative definition, specifies the necessary and sufficient conditions for a thing being a member of a specific set. Any definition that attempts to set out the essence of something, such as that by genus and differentia, is an intensional definition.

    An extensional definition, also called a denotative definition, of a concept or term specifies its extension. It is a list naming every object that is a member of a specific set.

    So, for example, an intensional definition of ‘Prime Minister’ might be the most senior minister of a cabinet in the executive branch of government in a parliamentary system. An extensional definition would be a list of all past, present and future prime ministers.

    One important form of the extensional definition is ostensive definition. This gives the meaning of a term by pointing, in the case of an individual, to the thing itself, or in the case of a class, to examples of the right kind. So you can explain who Alice (an individual) is by pointing her out to me; or what a rabbit (a class) is by pointing at several and expecting me to ‘catch on’ . . . .

    a genus of a definition provides a means by which to specify an is-a relationship, and the non-genus portions of the differentia of a definition provides a means by which to specify a has-a relationship.

    When a system of definitions is constructed with genera and differentiae, the definitions can be thought of as nodes forming a hierarchy or—more generally—a directed acyclic graph; a node that has no predecessors is a most general definition; each node along a directed path is more differentiated (or more derived) than its predecessors, and a node with no successors is a most differentiated (or a most derived) definition. When a definition, S, is the tail of all of its successors (that is, S has at least one successor and all of the direct successors of S are most differentiated definitions), then S is often called a species and each of its direct successors is often called an individual or an entity; the differentia of an individual is called an identity. >>
    ________________

    In short, not all definitions are of the same order, and different types of definition have both meaningfulness and practical or analytical utility. As well, concepts and cases come before precising definitions and descriptive models, which is where the mathematical model comes from.

    And, in our context, the CSI, FSC and FSCI models given above are just that: models responsive to a reality — function + specificity + complexity in an organisation that has to meet a criterion that is observably functional and specifying — commonly encountered in language, in technology and in the living cell.

    This also brings up the further factor: all of this is based on our experience of the world as active, intelligent observers and designers. So, we can begin form that base of experience in developing descriptions, models, definitions, theories etc.

  390. UB at 383

    The symbols I am referring to are those contained within nucleic sequencing (genetic code), I think that was fairly obvious from my comments

    So does that mean a protein such as haemoglobin does not contain information? Is DNA the only part of life that contains information?

  391.

    bornagain,

    Your latest comment to me isn’t any more comprehensible than your previous efforts. You have already admitted that you are not a mathematician and don’t fully understand the concepts you are speaking so confidently about. Instead, you state that you rely on the expert opinion of others. Yet, as this comment thread lays bare, those experts don’t even agree among themselves what CSI is (some even have completely different acronyms), how it is calculated, or even if normal non-teleological biological processes can generate CSI. And none, save vjtorley (who seems to agree that non-teleological Darwinian processes can create CSI), have even attempted a calculation.

    This whole thread ought to be disconcerting for the ID supporter. It certainly is for me.

  392. MathGrrl, I’m no mathematician. I just wanted to clarify what’s being defined. If specification is not a quantity but an either/or property, then the question is whether information can evolve from a non-specified to a specified form, correct? I don’t think it can, but I wouldn’t know how to show that mathematically.

    Speaking more philosophically, if specification is not a quantity, then information should not be able to be “kind of” or “partly” specified. That would be a big obstacle to any evolutionary model.

  393. MF: Plainly, the info in a protein is a copy, through mRNA, of the info in DNA. DNA is the primary info source, and proteins are the functional, working expression of that info. They of course back-encode to the DNA code that specified them [up to a certain degree of redundancy], but that is not additional info.
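    The redundancy in view here can be illustrated with a minimal sketch using a few entries of the standard genetic code (the codon-to-amino-acid assignments below are standard; the tiny table and example strings are purely illustrative):

```python
# A small, deliberately incomplete slice of the standard genetic code.
# Several DNA codons map to the same amino acid, which is why a protein
# sequence back-encodes to DNA only up to this redundancy.
CODON_TABLE = {
    "TTA": "Leu", "TTG": "Leu", "CTT": "Leu", "CTC": "Leu",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "ATG": "Met",  # Met is encoded by only one codon
}

def translate(dna):
    """Translate a DNA string (length a multiple of 3) using the toy table."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

print(translate("ATGTTAGGT"))  # ['Met', 'Leu', 'Gly']
print(translate("ATGTTGGGC"))  # ['Met', 'Leu', 'Gly'] -- same protein, different DNA
```

    Two different DNA strings yield the same protein, so the protein carries no information beyond what the DNA specified.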

  394.

    Mark,

    Haemoglobin is the product of information, in the same way a gear is. It is produced through information in order to serve a function.

  395. MG, 386:

    There you go again.

    You have had history [Orgel et al], you have had concepts, you have had verbal definitions, you have had quantitative metrics and calculations, you have been shown how your own posts instantiate the phenomenon, and you simply sweep them away as “hundreds of words.”

    I am sorry, I now conclude this is a case of none being so blind as one who WILL not see.

    Your problem — pardon directness — is not want of adequate concepts and models for CSI; it is the fallacy of the closed, ideologised mind. I suggest you start here to fix it.

    I have a constitutional crisis brewing, an economic mess, an upcoming budget issue, and a regional sustainable energy challenge now being compounded by the implications of the mess playing out in Japan and how it is coming across on our TV and other screens.

    Dr Torley (who graciously gave up hours of his time on end to try to help you) is IN Japan.

    I think after nearly 400 posts, a lot of effort has been expended to try to help you, including exactly the sorts of definitions and calculations you demand again. The evidence is, you don’t want to be helped, you only want to throw up selectively hyperskeptical objections to comfort yourself with the idea that CSI is ill defined and meaningless.

    I notice that, after dismissing the concept of CSI as meaningless, and being confronted with Orgel’s presentation of the same concepts, you have ducked the challenge of explaining to us whether or not Orgel was meaningless in his remarks, and why.

    That tells me all I need to know . . . and I don’t need to use the T-word.

    Good day, madam.

    GEM of TKI

  396. #394 UB

    1) So can you confirm that, as far as you are concerned, the only part of life to contain information and therefore CSI is DNA? The bacterial flagellum and the immune system are not examples of CSI.

    2) Assuming that is true, I assume the symbols in a string of DNA are the bases. What do they symbolise?

  398. And Jon, exactly what physical evidence has been presented to suggest that information has increased over and above what was already present in life? NONE! If you do know of any unambiguous cases of the functional complexity of an organism increasing above the two protein-protein binding site limit of Dr. Behe, please do tell.

    For me the proof is in the pudding, so MathGrrl can hypothesize all day long in her imaginary world of evolutionary algorithms (which were designed by humans, by the way), but that bothers me not in the least, for I know of the extreme poverty of evidence she faces in real life for actually demonstrating Darwinian precepts to be true. For me, empirical evidence overrules imagination all day long, as it should for anyone.

    Ask yourself, Jon: if Darwinian evolution were true, why in blue blazes are we not flooded with thousands upon thousands of examples when we request proof? Please, Jon, tell me exactly why MathGrrl is reduced to arguing for extremely trivial gains in functional information within human-engineered evolutionary algorithms? Does it not strike you in the least bit odd that she would even have to argue from such a diminished position in the first place? Should she not instead be arguing from countless examples in the real world that she wishes she could produce if Darwinism were true?

  398. MathGrrl:

    MathGrrl:
    “I found your calculation related to titin to be confusing, frankly. You didn’t provide a mathematically rigorous definition of CSI that I saw and you didn’t go into as much detail as did vjtorley.”

    Seriously? Which part did you have trouble with, as compared to the definition of CSI that was provided by Dembski in his “Specification …” paper? Did you actually read through all the comments I provided in those links? Do you have any questions that were brought up in those links that I didn’t provide answers to?

    BTW, for the sake of argument, since I haven’t had the time to go through vjtorley’s calculations, I accept his conclusion, since it is perfectly consistent with what I have been trying to explain to you. Evolutionary Algorithms will indeed produce CSI, but only if CSI previously exists. But EAs will not generate CSI *where none exists.* Now what complaint do you have?

    MathGrrl:
    “If you believe that your version of CSI is equivalent to what Dembski has published and you further believe that it is a reliable indicator of intelligent agency, please provide your rigorous definition and demonstrate how you arrive at a different answer than did vjtorley for the scenario he analyzed.”

    1. I would probably arrive at the same answer as vjtorley … and if not the exact same answer, at least the same conclusion. That should have been obvious to you if you actually read through the links I provided for you, as you seemed to state you did.

    2. I defended, in greater depth than anyone here, my calculation of CSI as being the same as Dembski’s calculation in one of the Telic Thoughts Threads that I linked to. I provided exact quotes from Dembski’s paper along with his examples, comparing them to my own, showing how I have calculated for CSI in the same way. Did you or did you not read through that link? If so, what problems do you have?

    MathGrrl:
    “Applying your definition to the other three scenarios I described would also be very helpful to others attempting to recreate your calculations.”

    Ask me again this summer, if you are truly interested, and we will go through them together. At the moment, I don’t have time for further calculations. In fact, I’ve already provided at least one (of the protein Titin) with detailed explanation, and it is now your turn.
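    For readers following along, the most naive version of an information figure for a protein of titin’s size can be sketched as follows. This assumes a uniform 1-in-20 chance per amino acid position and a length of roughly 34,350 residues; both are simplifying assumptions, and the detailed calculation linked above may well use a different model:

```python
import math

def naive_info_bits(num_residues, alphabet_size=20):
    """Information content of one specific sequence under a uniform
    per-site model: -log2((1/alphabet_size)**N) = N * log2(alphabet_size)."""
    return num_residues * math.log2(alphabet_size)

# Titin is on the order of 34,350 residues (human isoform lengths vary).
print(round(naive_info_bits(34350)))  # prints 148458
```

    Under this toy model the figure comes out near 148,000 bits; more careful treatments adjust for the redundancy of the code and the functional tolerance of each position.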

    You are actually starting to sound like a “creationist”: no matter how many examples I and others such as KF, vjtorley, and Dembski give, it “isn’t good enough.” I don’t have time for those games.

    MathGrrl:
    “Let’s get right down to the math, right here in this thread”

    Sure, feel free to bring up the problems you have *that I have not already responded to in those links* with my previous calculation.

    In the end, no one here has shown the origination of CSI without previous CSI. That is the point that I have been attempting to get you to understand.

    I am not arguing against anyone showing that evolution can produce a pattern that can be measured as containing CSI. I agree, and have told you on at least a few occasions, that an EA can produce a CSI pattern as an outcome.

    However, that CSI is only produced from a sufficiently complex program that itself, as has already been explained to you by myself and others (especially KF), also contains CSI, since its structure is at least on the same level of complex specificity as our comments on this blog.

    If you disagree with me, then in order to show a flaw in my argument you will have to actually show a situation where CSI was produced from scratch, or where an EA was generated by only law+chance, absent intelligent input or any previous CSI, which then in turn produced CSI. My aforementioned experiment will nicely test this concept of the inability of law+chance to *generate* CSI when none existed previously.

    If you really had a case, either for your position or against mine, you would be pulling out the evidence of such a simulation and showing the calculations, just like the rest of us have been providing calculations.

    The fact that you refuse to do so after the relevant concepts have been explained to death and measurements provided by at least 3 sources, shows me that all you are interested in is simple dismissal of our arguments and the continual propagation of misinformation.

    I’m done here unless you can actually bring a critique to the table that you are willing to defend with calculations and the experiment that I suggested, or unless you are willing to articulate an actual concern with any of my explanations or calculations that I haven’t already covered.

    In fact it appears that you need to, for the second time, seriously read through those links I provided for you before you come back here to continue to “slough off” and simply dismiss and ignore almost everything that myself and others have explained and continue to explain.
