Home » Intelligent Design » Lobbing a grenade into the Tetrapod Evolution picture

Lobbing a grenade into the Tetrapod Evolution picture

A year ago, Nature published an educational booklet titled 15 Evolutionary Gems (as a resource for the Darwin Bicentennial). Gem number 2 is Tiktaalik, a well-preserved fish that has been widely acclaimed as documenting the transition from fish to tetrapod. Tiktaalik was an elpistostegalian fish: a large, shallow-water-dwelling carnivore with tetrapod affinities yet possessing fins. Unfortunately, until Tiktaalik, most elpistostegid remains were poorly preserved fragments.

“In 2006, Edward Daeschler and his colleagues described spectacularly well preserved fossils of an elpistostegid known as Tiktaalik that allow us to build up a good picture of an aquatic predator with distinct similarities to tetrapods – from its flexible neck, to its very limb-like fin structure. The discovery and painstaking analysis of Tiktaalik illuminates the stage before tetrapods evolved, and shows how the fossil record throws up surprises, albeit ones that are entirely compatible with evolutionary thinking.”

Just when everyone thought that a consensus had emerged, a new fossil find is reported – throwing everything into the melting pot (again!). Trackways of an unknown tetrapod have been recovered from rocks dated 10 million years earlier than Tiktaalik. The authors say that the trackways occur in rocks that: “can be securely assigned to the lower-middle Eifelian, corresponding to an age of approximately 395 million years”. At a stroke, this rules out not only Tiktaalik as a tetrapod ancestor, but also all known representatives of the elpistostegids. The arrival of tetrapods is now considered to be 20 million years earlier than previously thought and these tetrapods must now be regarded as coexisting with the elpistostegids. Once again, the fossil record has thrown up a big surprise, but this one is not “entirely compatible with evolutionary thinking”. It is a find that was not predicted and it does not fit at all into the emerging consensus.

“Now, however, Niedźwiedzki et al. lob a grenade into that picture. They report the stunning discovery of tetrapod trackways with distinct digit imprints from Zachełmie, Poland, that are unambiguously dated to the lowermost Eifelian (397 Myr ago). This site (an old quarry) has yielded a dozen trackways made by several individuals that ranged from about 0.5 to 2.5 metres in total length, and numerous isolated footprints found on fragments of scree. The tracks predate the oldest tetrapod skeletal remains by 18 Myr and, more surprisingly, the earliest elpistostegalian fishes by about 10 Myr.” (Janvier & Clément, 2010)

The Nature Editor’s summary explained: “The finds suggest that the elpistostegids that we know were late-surviving relics rather than direct transitional forms, and they highlight just how little we know of the earliest history of land vertebrates.” Henry Gee, one of the Nature editors, wrote in a blog:

“What does it all mean?
It means that the neatly gift-wrapped correlation between stratigraphy and phylogeny, in which elpistostegids represent a transitional form in the swift evolution of tetrapods in the mid-Frasnian, is a cruel illusion. If – as the Polish footprints show – tetrapods already existed in the Eifelian, then an enormous evolutionary void has opened beneath our feet.”

For more, go here:
Lobbing a grenade into the Tetrapod Evolution picture

http://www.arn.org/blogs/index.php/literature/2010/01/09/lobbing_a_grenade_into_the_tetrapod_evol

Additional note: The Henry Gee quote is interesting for the words “elpistostegids represent a transitional form”. In some circles, transitional forms are ‘out’ because Darwinism presupposes gradualism and every form is no more and no less transitional than any other form. Gee reminds us that in the editorial office of Nature, it is still legitimate to refer to old-fashioned transitional forms!


415 Responses to Lobbing a grenade into the Tetrapod Evolution picture

  1. Mark Frank stated here yesterday morning:

    I don’t think anyone is suggesting that Tiktaalik is the direct ancestor of all quadrupeds. This is a well-known confusion about the status of transitional fossils. It would be a stunning coincidence to come across a fossil of a direct ancestor. Tiktaalik shows transitional features between fish and quadrupeds which would almost certainly have been shared by other species both before and after (indeed some of these features are shown by some current species). There is no contradiction in quadrupeds existing tens of millions of years before Tiktaalik.

    I think that covers the bases here.

    Tiktaalik is almost exactly what you want in terms of a key “transitional”; that it is a side-branch is irrelevant. Unless the fossil is fake, it is a “crocoduck”. Similarly, the platypus is a transitional too (unless you wish to see it as chimeric).

    The question for ID is, in what way was the designer involved in early tetrapods? Fossils being discovered in the “wrong order” has nothing to do with ID unless that’s something ID predicts. But for what compelling reason would the designer do that, if they could just as easily work through conventional common descent?

    Ultimately, I guess the whole point of trashing transitionals is to trash CD… so I must ask, is common descent bogus or not? At this point, I take the ID position on CD to be “CD is true, but in our heart of hearts, we wish it weren’t.”

    (In asking this, I consider it a separate question from that of saltation. Perhaps Tiktaalik was saltated from its egg, but it would still be part of a — much smaller — TOL.)

  2. Lenoxus

    Unless the fossil is fake, it is a “crocoduck”. Similarly, the platypus is a transitional too (unless you wish to see it as chimeric).

    If the platypus is a fossil then the wolf is a fossil as well. You should perhaps write more on that. Who is the ancestor of the “transitional platypus” and who is its descendant?

  3. Lenoxus: Similarly, the platypus is a transitional too (unless you wish to see it as chimeric).

    The better terminology is to say the platypus has intermediate features. It’s not chimeric. The platypus “beak” is not a true beak, but a spongy, sensitive organ. Egg-laying is the primitive condition.

  4. VMartin:

    If the platypus is a fossil then the wolf is a fossil as well. You should perhaps write more on that. Who is the ancestor of the “transitional platypus” and who is its descendant?

    There are fossil platypuses, but I was referring to current living ones. “Transitional”, as I’m using the term, does not mean “descending from one known specific species and ancestral to another”. It simply means “retaining the features of two distantly related groups”. Because of the “bushy” nature of evolution, one will only find closer and closer approximations to an exact “missing link”, in the form of cousin species. Tiktaalik is a cousin of whatever fishapod is our ancestor, but that exact fishapod may never be known.

    I was just using the platypus because it’s a striking example of an overlap of mammals and reptiles, exactly as the evolutionary TOL would have it. (Some IDers believe its DNA and morphology also show evidence of otherwise “unrelated” species, indicating some sort of “gene splicing” in its ancestry.)

    Regarding whether it’s “still legitimate to refer to old-fashioned transitional forms”… One reason it’s not so contradictory is that some clades are strikingly different and the intermediate between them has been lost forever. If there were lots of monotreme species alive today, instead of just two, the bridge between mammals and reptiles would be much more “solid”, and in that sense, the platypus and echidna would be “less” transitional than they are. Likewise with Tiktaalik and its footprint-making relative here.

  5. Correction: I should have said “intermediates between them have been lost”.

  6. There was something about this whole story that really caught my attention – the sheer level of confusion.

    Sometimes, people try to get away with saying “science is self-correcting” – but the trouble is, we are expected to believe provisionally whatever is barked at this moment – only to be told it’s wrong later – And no, wait – that other thing is also wrong, and – now the next thing turns out to be wrong too.

    There isn’t enough certainty here to justify anyone other than a heavy duty specialist spending a lot of time on it, let alone lobbies going to court to force kids in public schools to learn it.

    Never mind pretentious social nonsense like Michael Bloomberg hailing the now-discredited Ida fossil.

    The scary part is that a lot of these people don’t seem to get the fact that they are losing credibility because they are not credible.

    “Self-correcting” should mean something more robust than constantly wiping the egg off one’s face.

    I don’t know what to think about common descent. I suppose it’s true. It’s certainly plausible. But if my family history had looked anything like this, I would certainly place no faith whatever in genealogy.

    The problem, in my view, is overinflated claims aimed at supporting the Darwin industry. Take the Darwin hype away and we can just say at certain points, “We don’t know what happened.”

    That would do a considerable amount to restore credibility to the whole enterprise.

  7. Quote of the day – no doubt!

    “Self-correcting should mean something more robust than constantly wiping the egg off one’s face.”

  8. (From Dr. Tyler’s ARN post):

    Second, the evolution of tetrapods is an important test case for the relevance of design thinking – we ask the question whether tetrapods are here by Design or whether Law+Chance processes are sufficient explanation. Research is proceeding assuming the latter option, but the new discovery suggests that pursuing multiple working hypotheses (including design-based options) might be more prudent.

    Which raises the obvious question – how would one go about testing the ID hypothesis, or at least using it to develop our understanding of these fossils?

  9. Lenoxus -

    You asked a great question regarding ID and common descent. This is a common confusion, so I’ll try to clarify it for you. I’ll refer to ID with Common Descent as IDCD.

    ID is consistent with both common descent and special creation. However, the ID form of common descent is very different from the Darwinian form. The Darwinian form of common descent says that new features must have arisen slowly, with each added piece being a selected accident. Therefore, Darwinian evolution assumes that change is normally gradual and normally parsimonious (that is, a complex feature doesn’t usually evolve twice).

    IDCD, however, doesn’t have to obey those principles. IDCD holds that the information was embedded in the first organism that guides future evolution. Most people don’t know that the Amoeba contains 100x more base pairs than humans. Therefore, it is not inconceivable that the original organism had lots of genetic material to contribute to evolution.

    Therefore, because of this front-loading of information, there is no reason to think that there is a single lineage that became the tetrapods. There is no reason to think that a given feature had to have gradually evolved. Highly complex, specialized tissue can appear “all-of-a-sudden” because the organism had been carrying around that information the whole time, but only expressed it based on a certain timing or event or even as a stochastic combination of existing features.

    So, IDCD is neither gradualistic, nor does it necessarily lend itself to a very parsimonious phylogeny. It often “looks like” special creation to an outside observer because it has many of the same issues with Darwinism that special creation does. They both have different forms of life beginning abruptly with fully-formed features. The difference is that special creation often uses an immediate mode of creation, while IDCD often uses secondary causes from the first life. There are also mixes of the two, where there are multiple created kinds, but each created kind was front-loaded with latent information (this is my personal view).

    Hopefully that clears it up for you.

  10. Heinrich -

    One thing which I think ID can contribute to any historical aspect of earth history is shaving off hypothetical creatures. While there are certainly many creatures which haven’t yet been found, and I’m sure many of these creatures include chimeras of features found in existing creatures, there is no reason to believe that there must be creatures where none have been found or evidenced. Darwinism has a bad habit of perpetually adding dashed lines in between creatures where it expects to find relationships. Instead, ID says that perhaps we can just take the fossil record as we find it. Perhaps what we need to be doing is measuring, say, the average known time fossils go missing from the fossil record, and use that plus statistical completeness estimates to estimate the error bounds of the fossil record. Instead, Darwinists substitute a narration of what they think happened in the past for 99% of earth history, rather than simply looking at what’s there.

    Here’s a simple example – extinction estimates. Darwinists will say that 99.99% of species that have ever lived have gone extinct. Well, that’s actually a bunch of B.S. There are roughly 250,000 species that have been identified in the fossil record, and well over 1,000,000 species that exist today. Taken at face value, even if every species in the fossil record has gone extinct (which they haven’t), that means that 80% of species that ever existed ARE STILL ALIVE. That’s quite a stretch. So where do Darwinists get their number? By assuming that innumerable species existed in the transitional spaces. Why? Because they _must_ have existed there for their theory to be true.
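    The face-value arithmetic in the paragraph above can be checked directly. This is a minimal editorial sketch: the 250,000 and 1,000,000 figures are the round numbers quoted in the comment, not authoritative counts.

```python
# Back-of-envelope check of the extinction figures quoted above.
# Assumption (worst case, as in the comment): every species identified
# in the fossil record is extinct.
fossil_species = 250_000   # species identified in the fossil record (quoted figure)
living_species = 1_000_000  # species alive today (quoted lower bound)

total = fossil_species + living_species
extinct_fraction = fossil_species / total
alive_fraction = living_species / total

print(f"extinct: {extinct_fraction:.0%}, alive: {alive_fraction:.0%}")
# prints: extinct: 20%, alive: 80%
```

    On these numbers taken at face value, at most 20% of known species are extinct, which is the 80%-still-alive figure the comment contrasts with the 99.99% extinction estimate.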

    ID says that Darwinism is simply an unnecessary hypothesis. We should take the fossil record as it comes to us, measure its completeness on its own terms, and determine its limits as we can determine apart from Darwinism.

    After doing so, we might find certain features of the fossil record to be consistent with Darwinism, or we might not. The problem is that the Darwinists distort what they see to fit into their picture of Darwinism.

    There is also a set of Silurian trackways which were thought to be arthropod tracks. Why? Because it was thought that tetrapods hadn’t existed yet.

    Basically, Darwinism has been forcing the way in which we view the fossil record and earth history. When it is in conflict with the data, over and over again, the data gets modified to fit with Darwinism. ID makes a clean break with the Darwinistic picture, and would allow us to take the animal distributions within the fossil record much more on their own terms.

    Lenoxus,

    Because of the “bushy” nature of evolution, one will only find closer and closer approximations to an exact “missing link”, in the form of cousin species.

    LOL! “bushy” = without pattern. The pattern is imposed by the imagination of the Darwinist. It is like a Rorschach test. No matter the sequence, the Darwinist thinks he sees a pattern.

  12. One thing which I think ID can contribute to any historical aspect of earth history is shaving off hypothetical creatures.

    How will ID do that, other than by fiat (i.e. saying they don’t exist)? How will it build an argument that these hypotheticals don’t exist?

    There are roughly 250,000 species that have been identified in the fossil record, and well over 1,000,000 species that exist today. Taken at face value, even if every species in the fossil record has gone extinct (which they haven’t), that means that 80% of species that ever existed ARE STILL ALIVE

    Eh? You’re assuming that every species there has been has been found fossilised. I’m, well, sceptical about this – do tardigrades fossilise well, for example?

  13. Lenoxus,

    It appears that you do not understand the meaning of “transitional”.

    Transitionals do not just occupy side branches.

    Transitionals are supposed to be the link between two other forms.

    Also this is not to trash Common Descent- just the “science” behind it.

  14. Heinrich -

    The reason ID allows for shaving off of hypothetical creatures from the fossil record is that ID does not need them to exist – information is a sufficient explanation. Darwinism needs a certain pattern of organisms to exist.

    As for my comparison, I agree that the fossil record probably does not hold every species ever to exist. My point was to show the dramatic difference between available evidence and Darwinian assumptions, and just to what degree they differ. Without Darwinian assumptions, 80% of creatures who ever existed might still be alive, but with Darwinian assumptions added, 99.99% of them are extinct! That’s quite a difference in scale. My 80% number is probably not the true one, and there have probably been more than 250,000 species in history (though many of them are probably still alive). The point is that we should let the fossil record itself speak to its own completeness, rather than using Darwinism to tell us what is missing.

  15. In comment 7 Cabal links to PZ-

    PZ doesn’t seem to understand the concept of transitionals.

    In order to be a transitional it must appear IN THE LINEAGE.

    Otherwise it is known as a mosaic- as is the platypus.

  16. Jehu: LOL! “bushy” = without pattern.

    Bushy is a pattern, of course. It’s a nested hierarchy with many stems along short branches. The nested hierarchy is a mathematical pattern, and phylogeny makes very specific empirical predictions, i.e. correlations between various traits.

  17. johnnyb: Without Darwinian assumptions, 80% of creatures who ever existed might still be alive, but adding Darwinian assumptions 99.99% of them are extinct!

    There is not only a succession of species, but a succession of ecosystems. Most species around today did not exist in the time of the dinosaurs. There were no Panthera or Ursidae. No Bovidae or Hominidae. But they did have ancestors! Then consider earlier eras! As always, we have to start with Common Descent and establish the overall historical pattern.

  18. Joseph -

    Actually, the current definition of “transitional” is so muddled that it includes anything that provides information (whatever that means) about how a transition might have occurred.

  19. JohnnyB,

    The definition of “transition” is very clear.

    Methinks it is the evolutionists who are muddled.

  20. johnnyb: Actually, the current definition of “transitional” is so muddled that it includes anything that provides information (whatever that means) about how a transition might have occurred.

    The terms “transitional” and “intermediate” are often used interchangeably, but transitional implies being closer to the common ancestor, while intermediate is derived. Rarely can it be known if a particular individual fossil is on the direct lineage to later organisms.

  21. Even if Tiktaalik were a perfect transitional form, it would not constitute evidence for Darwinism, and it would not tell us how fish evolved into amphibians. An old post by Denyse (dated 24 December 2007) explains why. It highlights the difference between intelligent design thinking and Darwinist thinking so well that I cannot resist quoting an extract:

    Tiktaalik helps illustrate the difference between the approach of scientists who are convinced Darwinists and that of scientists who view the problems of evolution primarily in terms of information theory (intelligent design). The Darwinist says, There! – we have found a missing link, so now we KNOW! what happened….

    The information theorist I consulted had an entirely different approach to the problem. He said that we do not know what happened, because we do not know how the information that produced this change came to be in the system. We have observed only the change itself, not the arrival of the information. The problem, he said, is not with the changes that are required to produce an amphibian but with determining exactly how such changes came about:

    There are two ways to look at the problem in going from fish to Tiktaalik and then from Tiktaalik to amphibian. If we merely look at a body plan then it doesn’t take too much imagination to say that the fish evolved into Tiktaalik and then the Tiktaalik evolved into the amphibian through natural selection. That is a very crude way of looking at evolution, however …. very 19th centuryish.

    It’s kind of like looking at a laptop computer and a pizza box … there are similarities in structure and an argument could be made (provided one did not open either the laptop or the pizza box) that natural processes had turned the pizza box into a shape of a laptop. The other approach (more appropriate to the 21st century) is to look at the coding changes in the genome that would be required to go from a fish to Tiktaalik and from Tiktaalik to an amphibian.

    It never ceases to amaze me that Darwinists can so blithely create a scenario based on morphology and not so much as breathe one sentence about the massive coding changes that would be required. Have they no concept of 21st century information requirements? Of course it is true that we do not have the DNA of Tiktaalik, but we do have DNA for the coelacanth and for amphibians. Decent science would require, at the very least, a careful analysis of the coding difference between the coelacanth and some representative amphibian before a scientist would go out on a limb and announce that the evolutionary change (without any ID input) is even feasible, let alone represented by Tiktaalik.

    One reason why the intelligent design controversy is intractable is that many scientists are stuck in the primitive mode of discovering that a change occurred and declaring that “Darwin’s theory explains this!” They then make up stories to show how Darwin’s theory could explain it.

    You know the sort of thing: “Tiktaalik survived by propping itself up and eating small fish in shallow waters.” Maybe Tiktaalik did that. But that does not explain how he found himself to be in any position to do it.

    That kind of just-so story will not wash any more, because the enormous inner complexity of the design of life forms requires a more detailed accounting than that.

    Darwin’s theory is simply not adequate to the task, and the resources spent propping it up would be better used otherwise. (Emphases mine – VJT.)

    Need I say more?

  22. Heinrich @ 9 and 13

    johnnyb has provided some helpful thoughts. The issue here is not to win your agreement with the points made, but to allow you to understand the ID paradigm. Many of the claims of Darwinians (for example, that over 99% of species are unrepresented in the fossil record) are inferences from their gradualistic presuppositions. Those of us who have not bought into the Darwinian mindset are therefore totally unimpressed by their conclusions.

  23. David Tyler: Many of the claims of Darwinians (for example, that over 99% of species are unrepresented in the fossil record) are inferences from their gradualistic presuppositions.

    A scientific hypothesis is a tentative assumption made in order to draw out and test its empirical consequences. The Theory of Evolution posits that changes will occur in viable stages. Finding an intermediate fossil is a confirmation of that hypothesis. That’s why scientists will spend years looking for such evidence.

    You might say that Intelligent Design is consistent with this finding. But Intelligent Design is not specific enough to yield clear entailments. Indeed, that’s why Intelligent Design Advocates hardly ever bother with the messy details of biological research.

  24. Mr Vjtorley,

    I think that Tiktaalik provides evidence for Darwinism by being found where it was predicted to be found.

    You cite Mrs O’Leary:

    The Darwinist says, There! – we have found a missing link, so now we KNOW! what happened….

    I think you know as well as I that the only group that claims absolute knowledge of past events is the YEC community. Mrs O’Leary is engaging in agit-prop.

    Further, citing an unnamed information theorist:

    The problem, he said, is not with the changes that are required to produce an amphibian but with determining exactly how such changes came about:

    He then goes on to suggest genomic comparisons.

    I think that is an excellent suggestion. While it seems that coelacanth mitochondrial DNA has been sequenced, the whole genome has not been.

    But let us say for the sake of argument that genomic comparisons led to a hypothesis that ancient genome F could be transformed into ancient genome T and then A via a series of duplications and substitutions. This would still be just “the changes”, but not “how they got there”.

    Would you or this theorist actually be satisfied?

    More interestingly, will you apply the same test – exactly how such changes came about – to an intelligent-design-based hypothesis, if one is ever advanced?

    Mrs O’Leary (and you do quote approvingly) seems to think that a distinction has been made here between Darwinist and ID modes of thinking. What is it?

    The Darwinist is interested in how the information arrived in the system also, and is going to answer that material changes in the material representation of the information are a sufficient explanation. By chance, a gene was duplicated. By chance, a base was substituted, inserted or deleted. By chance, a virus inserted DNA from another entity. Information arrives in the system by chance.

    I’ve never heard that ID thinking provides a different set of mechanisms than these, merely an argument that some set of them is improbable within certain resource constraints. But that distinction of improbability is not present in what you quote. So yes, I think you do need to say more.

    The reason ID allows for shaving off of hypothetical creatures from the fossil record is that ID does not need them to exist – information is a sufficient explanation.

    Sorry, I don’t follow this – how does information cause creatures to come into existence? “In the Beginning was the Word, and the Word was Information”?

    Sorry, I got carried away. Anyway, I’m afraid I don’t understand what you mean by information being a sufficient explanation – something very different to what I was thinking?

    The point is that we should let the fossil record itself speak to its own completeness, rather than using Darwinism to tell us what is missing.

    Here is a nice post that provides a back-of-an-envelope calculation which suggests that there might still be a lot of fossils to find. It doesn’t use any evolutionary theory.

  26. Zachriel,

    Indeed, that’s why Intelligent Design Advocates hardly ever bother with the messy details of biological research.

    This is just flat wrong. Not to mention Michael Behe and Scott Minnich.

  27. johnnyB, #11, excellent!

  28. Zachriel: Indeed, that’s why Intelligent Design Advocates hardly ever bother with the messy details of biological research.

    Clive Hayden: This is just flat wrong.

    That’s it? Looking at their selected publications, most don’t even test a valid Intelligent Design hypothesis. Granted, the Intelligent Design community is rather tiny, but that just emphasizes the fact that they have had little success in attracting additional researchers. There’s just no clear entailments that generate significant research such as are found in mainstream scientific journals.

  29. Zachriel,

    That’s it? Looking at their selected publications, most don’t even test a valid Intelligent Design hypothesis.

    Sure they do, and that link shows plenty of research in ID. And that is of course not all: that is just the Biologic Institute, not Scott Minnich and Michael Behe. I suppose your next comment will be that it is not “enough” research to satisfy “you”, or perhaps you’ll compare it to science as a whole or to Darwinian biology in particular. But your initial assertion was not relative to anything else; it was, quote, “Indeed, that’s why Intelligent Design Advocates hardly ever bother with the messy details of biological research,” which pertains to the ID research itself – not a comparison with anything Darwinian, but a statement of what research ID advocates do themselves. And in this light, the statement that “Intelligent Design Advocates hardly ever bother with the messy details of biological research” is flat out false.

  30. Nakashima,

    I think you know as well as I that the only group that claims absolute knowledge of past events is the YEC community. Mrs O’Leary is engaging in agit-prop.

    This is factually wrong, the Darwinitwits have most certainly, over and over, claimed knowledge of the past to “evidence” their fancy of evolution. Mrs O’Leary is not engaging in agit-prop, and quite frankly, I don’t appreciate the accusation. There was nothing propagandistic in what she said. I am going to ask you to apologize, or you will no longer comment here.

  31. Clive Hayden: Sure they do, and that link shows plenty of research in ID.

    It did? Though not having read every article, most of them appear to be studies of patterns without regard to any stated entailments of ‘Intelligent Design.’ (And if that is the best, then it’s pretty meager at that.)

    Clive Hayden: … but your initial assertion was not “relative” to anything else …

    The qualifier was “hardly ever.” As most of the studies you pointed to don’t state and test an entailment of ‘Intelligent Design,’ it isn’t clear you have justified your case.

  32. Joseph,

    Lenoxus,
    It appears that you do not understand the meaning of “transitional”.
    Transitionals do not just occupy side branches.
    Transitionals are supposed to be the link between two other forms.
    Also this is not to trash Common Descent- just the “science” behind it.
    +
    In comment 7 Cabal links to PZ-
    PZ doesn’t seem to understand the concept of transitionals.
    In order to be a transitional it must appear IN THE LINEAGE.
    Otherwise it is known as a mosaic- as is the platypus.

    I beg to differ. What a find like, for instance, Tiktaalik shows is that there were species with just the right anatomy, as expected/predicted, living at a certain time period in a specific location.
    How many sibling species may have been around at the same time?

    I presume it will take more than just a few fossils covering a transition lasting millions of years to determine which are in the direct lineage and which belong in theoretical dead-end branches.
    It is like we go to a cemetery and dig up all the bodies. How can we determine who left descendants and who didn’t?

    But some of them will most likely be in the LINEAGE?

    Think it over.

  33. Zachriel,

    The qualifier was “hardly ever.” As most of the studies you pointed to don’t state and test an entailment of ‘Intelligent Design,’ it isn’t clear you have justified your case.

    All of the studies do indeed “entail” ID. As well as everything else from Dr. Marks and Dr. Dembski in the Evo. Informatics Lab, and Michael Behe, Scott Minnich, Jonathan Wells, Stephen Meyer, Richard Sternberg, Ann Gauger, Charlie Thaxton, Douglas Axe, Gerald Schroeder, John Lennox, Michael Denton, David Berlinski, etc.

    Predicting swamp creatures will be found in a swamp… wow, Mr. Nakashima, I see you’re still peddling strawman arguments.

    I have not missed much.

    Next thing you know, you’ll show us where Darwinist predicted fish in a lake.

    Once again, the fictional stories of Tiktaalik are overrated and prove nothing. It is not a transitional fossil. Busted yet again.

    Maybe next you can wow us with Darwin’s prediction of bears turning into whales. After all, this is how Darwinism works: imagine it happened, much like Hollywood imagines Global Warming.

    Anyone seen a bear-whale lately? All they have to do is swim a long time in the water, nipping at bugs and stuff, lol.

    There are good arguments put forward here about the stamp-collecting process of Darwinists.

    And there is ample rebuttal to the BS fodder statement by Zachriel insisting IDists do not do the dirty work. What a load of Darwinian bear-whale fallacy.

    And let’s not just limit this to IDists, but to anyone who doubts the gradualistic processes of the failed Darwinian paradigm.

    Dr. John Sanford, inventor of the genetic gene gun and holder of many patents, and many more like him, have said the evolutionary fictional storytelling was and is a great waste of time and money.

    Gradualism has failed. Only in the fairy-tale fantasies of Dawkins and Dawkinite brights, where new creatures replace the Darwinian bear, is the failed paradigm still pushed.

    Imagining the new creature is any better than a bear-whale story is pure Disney fantasy. That this bilge is pushed on the public year after year is a travesty.

    Just like Al Gore’s kickback, business-funded adventure scheme of Global Warming.

    http://www.IceCap.us

    And just like Climate-gate, Darwin-gate will be busted open one day. Better trash all your emails, Darwinists.

  35. DATCG:

    Predicting swamp creatures will be found in swamp… wow, Mr. Nakashima, I see you’re still peddling strawman arguments.

    Actually, Shubin did more than just look in any old swamp. He was looking for exposed Devonian rocks of a specific age (~375 mya) formed in a freshwater river delta.

  36. Clive Hayden

    I am going to ask you to apologize, or you will no longer comment here.

    I don’t know why Darwinian commenters are permitted a voice here at all; all they do is shift the issue to the validity of evolutionary theory. This obscures the real argument: the positive argument for design!

  37. How to be a Darwinist…

    oh gee… they didn’t like that story… bear to whale… so let’s give them a new story. We can point to an artist’s drawing and say it represents the “transitional” fossil today. Dawkins is such a good TV Darwinist… look – see – this creature could be the ancestor of whales; it once lived on land. Evidence? None. Zilch. Only imagination of what may have happened in “transitions”.

    There are no scientific facts giving details of how it happened, only the assertion that it is “factual” that it did. There are fictional accounts, with many “maybe,” “could have,” “might,” and “probable” statements.

    As sorted above and reviewed well by others, this is the problem with Darwinist fictional accounts. It is constant confusion, not science; it is constant storytelling, not science.

    At best, you can infer, but it is sold as if it were rocket science, when in fact it is glorified science fiction, updated in so many instances with more glorified science fiction.

    Same fiction, just different fossils each time. And sure, the scientists have PhDs. But that is meaningless if they cannot be there or reproduce it. As a result, we get science fiction, very high caliber, yes, indeed – for children, teens, and those who refuse to mature and question.

    The debate is not even about evolution, as many IDists believe in some form of Common Descent.

    It’s about the shoddy science and fictional storytelling that Darwinists present to the public year after year on TV and in print as FACTual atheistic views of evolution… when it’s NOT. It is storytelling.

    It is their bible and they worship Darwin in churches today. They have created their own god.

  38. efren ts,

    please, lol… do you really think that’s a prediction? Let me tell you something about the 375-million-year-old river delta… that’s not a prediction, it’s a given.

    You will find fossils like this in every river delta and swamp if you continue to look, with sufficient funds. Looking in known territory for creatures like this is not a prediction. It is not a prediction of Darwinism or Darwinists.

    A real prediction would be if they said they would find something unexpected, not a water creature, lol.

    What a bunch of overblown claims. And I mean that with no disrespect to you. Only to the Darwinians who blow this way out of proportion and then try to make it a “transitional” fossil.

    It clearly is not. Never was. At best, it may be a side branch. So what? That’s nothing new.

    Does it take skill sets to find fossils? Obviously, but that is not proof of Darwinian gradualism, nor is it proof of any prediction related to atheistic Darwinian evolution.

    It is only evidence of common sense. I’ve read where Creationists find fossils all the time because – gee – they predicted they would be there. Does that prove Darwinism? No, of course not. Are you going to turn the table and say that because Creationists knew where to look for fossils, it proves Creation?

    Remember the Coelacanth? It too was claimed to be a transitional.

    At best we know sea creatures exist in seas, swampland creatures exist in river deltas or swamps, and land creatures exist on land.

    But no evidence exists that ties these creatures together. Only storytelling about what is perceived to match an inference by Darwinists.

  39. I must go… the latest evidence appears to be a Forest of Trees.

    Not bushes, not one single long line of common ancestry.

    And I do not think this is very controversial. There are evolutionists putting this forth today as a possible theory.

    And Gould was correct about the fossil record. Not much has changed since his days of honest introspection of the failed Darwinian record.

  40. Zack, I disagree. You need debate.

  41. DATCG: Predicting swamp creatures will be found in swamp…

    Sure, a Nunavut valley is exactly where you would look for a swamp creature.

  42. Zachriel,

    Bushy is a pattern, of course. It’s a nested hierarchy with many stems along short branches. The nested hierarchy is a mathematical pattern, and phylogeny makes very specific empirical predictions, i.e. correlations between various traits.

    The system of nested hierarchies predates Darwinism and was invented by a creationist, Carl Linnaeus, who wrote:

    The Earth’s creation is the glory of God, as seen from the works of Nature by Man alone. The study of nature would reveal the Divine Order of God’s creation, and it was the naturalist’s task to construct a “natural classification” that would reveal this Order in the universe.

    To attempt to credit the success of Linnaeus’ nested hierarchies to Darwinism is disingenuous.

  43. Jehu: LOL! “bushy” = without pattern.

    Zachriel: Bushy is a pattern, of course. It’s a nested hierarchy with many stems along short branches.

    Jehu: The system of nested hierarchies predates Darwinism …

    But what you originally wrote indicated that it wasn’t a pattern. Now you seem to agree it is a pattern.

    Jehu: To attempt to credit the success of Linnaeus’ nested hierarchies to Darwinism is disingenuous.

    The dastards!

    What Darwin did was provide a testable explanation for the pattern. But first, you have to agree that the pattern exists, that it can be objectively derived from character traits.
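
    As an illustrative aside (hypothetical taxa and made-up trait scores; a minimal sketch, not anything claimed in this thread), “objectively derived from character traits” means that each shared trait picks out a subset of the group defined by the previous trait, so the groups nest without any tree being assumed in advance:

```python
# Hypothetical taxa scored for presence (1) / absence (0) of traits.
# Trait columns: vertebrae, four limbs, amniotic egg, fur
taxa = {
    "trout":  (1, 0, 0, 0),
    "frog":   (1, 1, 0, 0),
    "lizard": (1, 1, 1, 0),
    "mouse":  (1, 1, 1, 1),
    "cat":    (1, 1, 1, 1),
}

def having(trait, names):
    """Subset of `names` whose taxa possess the given trait column."""
    return {n for n in names if taxa[n][trait] == 1}

# Each successive trait selects a subset of the previous group:
vertebrates = having(0, taxa)          # all five taxa
tetrapods   = having(1, vertebrates)   # drops trout
amniotes    = having(2, tetrapods)     # drops frog
mammals     = having(3, amniotes)      # drops lizard

# The groups nest purely as a consequence of the trait scores:
# mammals <= amniotes <= tetrapods <= vertebrates.
assert mammals <= amniotes <= tetrapods <= vertebrates
```

    Real character matrices are far messier, of course; the point is only that the nesting falls out of the trait data.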

  44. Zachriel:

    The terms “transitional” and “intermediate” are often used interchangeably, but transitional implies being closer to the common ancestor, while intermediate is derived.

    Excuse me, but just how are you using the word “transition”?

    It doesn’t appear you are using it as it is widely accepted.

    Rarely can it be known if a particular individual fossil is on the direct lineage to later organisms.

    True- What makes it all the more difficult is that no one even knows if the transformations required are even possible.

  45. Zachriel:

    Bushy is a pattern, of course.

    Not just one pattern and not just any pattern.

    It’s a nested hierarchy with many stems along short branches.

    Spewage alert!

    Zachriel- you don’t know what a nested hierarchy is.

    And a bush- the pattern it makes- isn’t a nested hierarchy.

    The nested hierarchy is a mathematical pattern, and phylogeny makes very specific empirical predictions, i.e. correlations between various traits.

    And correlations between traits exist because that is how the design was set-up.

    With descent with modification we should expect to see a hodge-podge of traits- mixing and matching- transitionals and intermediates- a real mess.

    We sure as heck shouldn’t expect to see a nice neat, orderly nested hierarchy.

  46. Cabal:

    I beg to differ. What a find like for instance Tiktallik shows, is that there were species with just the right anatomy as expected/predicted to have been living at a certain time period in a specific location.

    If tetrapods already existed, then why would someone be looking for a transitional form in strata YOUNGER than that?

    BTW Cabal I understand the difficulties in correctly placing fossils.

    That is why I laugh every time I hear of a new “transitional”.

  47. Zachriel:

    Indeed, that’s why Intelligent Design Advocates hardly ever bother with the messy details of biological research.

    Yet that is exactly what irreducible complexity entails.

    As for details it is your position that hardly ever bothers with any.

    And when they do come it is always in support of slight, oscillating variations.

  48. Joseph: Excuse me but just how are you using the word “transition“?

    “Transitional fossil” is a scientific term and is defined as an organism which exhibits traits common to both ancestral and derived groups. Transitional implies being closer to the common ancestor, while an intermediate is more derived.

    Transitional Fossil

  49. Mr Hayden,

    Thank you for your concern.

    This is factually wrong, the Darwinitwits have most certainly, over and over, claimed knowledge of the past to “evidence” their fancy of evolution. Mrs O’Leary is not engaging in agit-prop, and quite frankly, I don’t appreciate the accusation. There was nothing propagandistic in what she said. I am going to ask you to apologize, or you will no longer comment here.

    I have gone back and looked at some of the coverage from the initial publication on Tiktaalik. I found on the popular site Pharyngula, this statement about Tiktaalik’s limbs:

    Those limbs tell us something about the evolution of limbs. Tiktaalik was definitely not a terrestrial animal, but had developed muscular, bony limbs and a strong pectoral girdle that had helped it prop itself up on the substrate, perhaps even holding itself partly out of the water.

    My emphasis added. So it seems that I must stand corrected, Darwinists do sometimes claim absolute knowledge. I apologize to Mrs O’Leary, yourself, Mr Vjtorley for misleading him, and any readers of my previous comment.

  50. Nakashima: I think you know as well as I that the only group that claims absolute knowledge of past events is the YEC community.

    Well, fossils have been used to argue that dinosaurs once roamed the Earth.

  51. Zack, I disagree. You need debate.

    There is plenty of that. My point is that it is a clever strategy to refocus the argument to one about the validity of whatever the current version of Darwinism happens to be.

    The design argument gets sidelined by these endless back-and-forths about nested hierarchies and transitional fossils. I see Timaeus, jerry, vjtorley, kairosfocus and others spending time and effort producing long and detailed refutations of Darwinists. All very well, but as a very occasional commenter, if I weren’t a long-time lurker I might get the impression that this blog is primarily concerned with the shortcomings of darwinism.

    The real, positive argument for design needs to be the focus of debate. People should read Joseph’s posts and learn from them.

  52. Zachriel,

    Your Wikipedia link on transitional fossils does not help you:

    Transitional fossils (popularly termed missing links) are the fossilized remains of intermediary forms of life that illustrate an evolutionary transition.

    Did you get that?

    Do you understand it?

  53. This reminds me of the recent discovery (Ardi?) that showed that humans are not descendants of apes, but that we share a common ancestor with them.

    Mr. Hayden,

    Please do not ban Nakashima. He makes the most reasoned arguments from a Darwin perspective here and so is most helpful in refining ID theories.

  54. Zachriel:

    What Darwin did was provide a testable explanation for the pattern.

    That is false.

    Darwin had to revert to well-timed extinctions to “explain” the pattern.

    But first, you have to agree that the pattern exists, that it can be objectively derived from character traits.

    One thing is certain: it is not a pattern derived via descent with modification.

    Ya see, with descent with modification we should expect to see many transitionals and intermediates.

    These would blur the lines of distinction required by nested hierarchies.

  55. Zachriel:

    Well, fossils have been used to argue that dinosaurs once roamed the Earth.

    And the absence of fossils cannot be used to argue that humans did not exist during the age of dinosaurs.

  56. I wonder if DATCG will ever, ever, ever forget to mention Darwin’s bearlike-origin hypothesis. This confuses me. Why not harp on the silliness of the current hypothesis, whereby whales descend from carnivorous ungulates? A cow-wolf? That’s almost as preposterous as giant flying reptiles…

    Except, of course, for the fossil remains. I’ll just assert this out loud and repeatedly so that no UDer will find it ridiculous any more: WHALES DESCENDED FROM LAND-GOING MAMMALS. WHALES DESCENDED FROM LAND-GOING MAMMALS. Perhaps this descent was aided by non-naturalistic design, perhaps it happened in one generation per new species, but the actual descent is uncontroversially true.

    You know, Darwin made more interesting mistakes than that one — for example, for a while he believed in a mechanism for heredity that he ultimately had to admit was false. Why not pick on him for that? Because it would be silly?

    Joseph:

    With descent with modification we should expect to see a hodge-podge of traits- mixing and matching- transitionals and intermediates- a real mess.

    We sure as heck shouldn’t expect to see a nice neat, orderly nested hierarchy.

    Well, life past and present does demonstrate a remarkable “hodge-podge” in the sense of its diversity. But if you mean that we should expect numerous violations of nested hierarchy, um… what? That would only be the case if every organism had an equal chance of successfully reproducing with every other contemporary organism. Then we would see winged horses and other chimeras. In our universe, however, no outside force is needed to keep horses and birds from mating.

  57. Lenoxus,

    The very definition of “transitional” and “intermediate” says that a nested hierarchy is not expected from descent with modification.

    Darwin “explained” the distinct categories by calling on well-timed extinctions.

    As for whales being descended from land animals- how can we test that?

    How many mutations did it take?

    How can it be measured?

    How many transitions were required?

  58. Joseph, before I go about answering those questions you’ve posed, would you mind answering a few of mine?

    I have a creeping suspicion that you are not a descendant of your proposed grandfather. Unless you can show, with plain and direct evidence, each step in the development process from the fertilization of the egg, all the way through the fetal stages in the womb, and in addition each phase of childhood up through adulthood into this very day, I refuse to believe the silly notion that you are a product of a simple sperm and egg coming together in the body of a female human.

  59. Zachriel,

    What Darwin did was provide a testable explanation for the pattern [nested hierarchies]. But first, you have to agree that the pattern exists, that it can be objectively derived from character traits.

    Nested hierarchies exist in extant species. With regard to the fossil record, the so-called “bush” is a pattern imposed upon the data by the imagination of the Darwinists. This is the “trade secret” of paleontology that Gould referenced in his famous quote.

    The extreme rarity of transitional forms in the fossil record persists as the trade secret of paleontology. The evolutionary trees that adorn our textbooks have data only at the tips and nodes of their branches; the rest is inference, however reasonable, not the evidence of fossils.

    Furthermore, although extant species can be organized into nested hierarchies, they do not resolve into a phylogenetic tree, or bush, except in the fertile imagination of Darwinists.

  60. Leviathan:

    Joseph, before I go about answering those questions you’ve posed, would you mind answering a few of mine?

    If they are relevant.

    I have a creeping suspicion that you are not a descendant of your proposed grandfather.

    That is not a question and I never proposed a grandfather.

    Unless you can show, with plain and direct evidence, each step in the development process from the fertilization of the egg, all the way through the fetal stages in the womb, and in addition each phase of childhood up through adulthood into this very day, I refuse to believe the silly notion that you are a product of a simple sperm and egg coming together in the body of a female human.

    That isn’t a question.

    And why should I care about your ignorance pertaining to reproduction?

  61. It is indeed a question. How can we test your lineage?

    How much development did it take?

    How can it be measured?

    How many transitions were required?

    and I never proposed a grandfather.

    Correct me if I’m wrong, but I take from this that you don’t believe your existence is due to your mother’s and father’s parents on both sides mating to bring about your parents, and then your parents mating to bring about you? You expect everyone here to believe you magically “poofed” into existence as a fully-formed adult male?

  62. Joseph,

    And the absence of fossils cannot be used to argue that humans did not exist during the age of dinosaurs.

    True, and the discovery of 65(+)-million-year-old human bones would be a serious blow against the ToE!

    But OTOH, with all the research and the complete absence of contrary evidence or indications, it seems a safe bet that evidence for neither rabbits in the Cambrian nor humans coexisting with dinosaurs will be found.

    Seems to me that the accent now on the power of genetics to determine evolutionary history – by and large confirming the history built on evidence from geology and paleontology – is largely overlooked and, IMHO, little understood by opponents.

  63. The discussion of transitional forms has some value in clarifying what is needed from the fossil record and what it demonstrates. I’d like to draw attention to a comment by the authors of the research paper that: “the ghost ranges of tetrapods and elpistostegids are greatly extended” (from the text of their Figure 5). Also, Janvier and Clement write: “The temporal mismatch implies the existence of long ‘ghost ranges’ among Devonian tetrapodomorphs” (from the text of their Figure 1). These features complement the characteristic of abrupt appearance in the fossil record. In order to allow time for gradualistic evolution, extended ghost ranges have to be invoked by Darwinists. A design prediction is that these ghost ranges are more an invention of Darwinism than the consequence of an imperfect fossil record. The Darwinian response must be: keep searching! Janvier and Clement write: “[the trackways] are likely to trigger a burst of field investigations into potential tetrapodomorph fish sites of Emsian or earlier age”. We await the findings with interest.

  64. Cabal:

    True, and the discovery of 65(+)-million-year-old human bones would be a serious blow against the ToE!

    I doubt that.

    But OTOH, with all the research and the complete absence of contrary evidence or indications, it seems a safe bet that evidence for neither rabbits in the Cambrian nor humans coexisting with dinosaurs will be found.

    Absence of evidence is not evidence of absence.

    Besides, there are artifacts dated to millions of years.

    Seems to me the accent now being on the power of genetics to determine evolutionary history – by and large confirming the history built on evidence from geology and paleontology – largely is overlooked, and, IMHO, little understood by opponents.

    But there isn’t any genetic data which demonstrates that the transformations required are even possible.

  65. and I never proposed a grandfather.

    Leviathan:

    Correct me if I’m wrong, but I take out of this that you don’t believe your existence is due to your mother and father’s parents on both sides mating in order to bring about your parents, and then your parents mating to bring about you?

    Please TRY to follow along.

    All I am saying is that I never proposed who my grandfather was.

    That was in response to your statement:

    I have a creeping suspicion that you are not a descendant of your proposed grandfather.

    Ya see I never told you who my grandfathers were.

    IOW I never proposed a grandfather.

  66. Joseph,

    IOW I never proposed a grandfather.

    Excellent! You never proposed a particular grandfather, but you are of course not denying that you have had – indeed, must have had – a grandfather?

    Just as we consider Tiktaalik a likely candidate for the role of ‘grandfather’, we are not proposing that she is the ‘grandfather’.

    Fair enough?

    Absence of evidence is not evidence of absence.

    Of course not; what I am saying is that it seems a safe bet that evidence … will not be found. You wanna bet it will be found? I am afraid I won’t be around to cash in…

    But there isn’t any genetic data which demonstrates that the transformations required are even possible.

    You’re entitled to your opinion, but in your own words Absence of evidence is not… (Although I believe there’s plenty of evidence strongly indicative of the possibility.)

  67. Zachriel @ 24 wrote:

    “You might say that Intelligent Design is consistent with this finding. But Intelligent Design is not specific enough to yield clear entailments. Indeed, that’s why Intelligent Design Advocates hardly ever bother with the messy details of biological research.”

    Ah, yes, Zachriel. The messy details of biological research. Like laying out the genetic and developmental steps necessary to produce a camera eye, or an avian lung, or a foot from a fin, or a lung from an air bladder, or a sonar system from nothing, by chance and necessity, without the aid of intelligence. And also, explaining why, if these complex systems are so advantageous, all those creatures who *don’t* possess them haven’t died out. (There’s an “entailment” for you.)

    I agree that scientific theories should deal with “messy details”. The difficulty is that neo-Darwinism is not in a position to lecture anyone about this, as it is so barren of messy details itself.

    Of course, you were given several opportunities, on the December 19th thread, of laying out the “messy biological details” of the evolution of *any organ, organelle, system or organism of your choosing*, but evaded the challenge. And that’s nothing new for Darwinians. Every book currently sitting in the Library of Congress, and every peer-reviewed scientific article in every existing journal of the life sciences, has similarly evaded the challenge. The fact is that Darwinians don’t have the slightest clue how the eye, the lung, etc. evolved — but still have the gall to claim that Darwinian theory should have the monopoly position in biology, and in high school science education. It is absolutely astounding. No scientist in any other field claims so much for ignorance that is so great. Only evolutionary biologists, with the lowest level of explanatory achievement in any science known to man, have this kind of chutzpah. Such little-brother boastfulness must be due to that old “physics envy”, eh, Zachriel?

    Or am I wrong? Have you been preparing that detailed evolutionary map? Are you about to publish it for us here? If you do, I’ll eat my words. If not, I’ll continue to infer what I’ve inferred all along — that neo-Darwinism is storytelling, speculation, bluff, and a disconnected jumble of scientific facts, none of it adding up to a plausible mechanism, as the term “mechanism” is conceived of in adult sciences like physics, chemistry and engineering.

    T.

  68. Are you about to publish it for us here? If you do, I’ll eat my words.

    No worries there. Apparently Zachriel’s commenting privileges have been withdrawn.

  69. Mr Timaeus,

    Judging from comments on another site, Mr Zachriel believes that he has been banned again from UD, and will not be able to respond. Sorry to be the bearer of bad news.

    Until astronomers can provide a detailed account of the movement of every molecule within a star, I will continue to reject their preposterous theories of stellar formation, aging, and death. Anything less is just hand-waving, blithely assuming that the known physical and chemical principles hold throughout the entire star. Who’s to say that stars aren’t empty shells of pure light, provided their energy by the actions of intelligent agency? Why not let the star’s surface — all of a star we can ever see — speak for itself?

    Timaeus:

    Only evolutionary biologists, with the lowest level of explanatory achievement in any science known to man, have this kind of chutzpah.

    If ID were the reigning paradigm, would it suddenly be able to provide those detailed pathways you’re asking for? Yes, the record of life’s history is messy, and very much incomplete. This is a problem for any hypothesis of life’s origin and evolution. Is the Discovery Institute hiding a secret record of the DNA of every organism in a given line of descent?

    If you do, I’ll eat my words.

    No, you and others (like Behe) will say that we don’t know whether that’s how it happened, so it’s a speculative just-so story.

  71. Lenoxus @ 69:

    You don’t know how I’ll react until you produce the goods.

    Regarding the stars, we can duplicate some of the physical processes that go on inside stars here on earth — in hydrogen bombs, for instance. We also know a great deal about atomic physics and atomic chemistry in general. So we have a basis for extrapolation. Regarding macroevolution, we have no empirical experience. We have never seen even the fastest-reproducing species (fruit flies, malaria, etc.) create fundamentally new complex systems, in any number of observed generations. And we understand very little of developmental processes, and are still very much puzzled regarding the relationship between genetics and developmental biology, with the old one-gene, one-trait model proving inadequate and no clear new understanding taking its place. We thus are far too ignorant about the mechanisms of life to have any basis for a confident extrapolation from normal inheritance to macroevolutionary change. This is why evolutionary biologists continue to work backwards, inferring macroevolution from the fossil record, because when they try to work forwards, from first biological principles, they are stymied — they can’t account for how it happened.

    ID’s job isn’t to provide detailed pathways, because ID isn’t a historical theory of origins. Neo-Darwinism is a historical theory of origins. It thus commits itself to the explication of detailed pathways. To the extent that it cannot deliver such pathways, it has failed by its own lights.

    ID is committed only to showing that living systems have informational properties that cannot be explained by chance and necessity alone, but require the input of intelligence. If it can show this, it has succeeded by its own lights.

    Where neo-Darwinism and ID clash is not over “evolution”, but over neo-Darwinism’s claim that the science of evolutionary biology has progressed so far that it can now assure the world that no intelligence was necessary to produce complex, integrated biological systems. ID has asked neo-Darwinism repeatedly to demonstrate this, with specific examples. Neo-Darwinism has come up blank every time. MacNeill of Cornell can’t demonstrate it; Coyne of Chicago can’t demonstrate it; Zachriel of Upper Podunk can’t demonstrate it. Who can?

    T.

  72. Timaeus:

    Ah, yes, Zachriel. The messy details of biological research…Have you been preparing that detailed evolutionary map? Are you about to publish it for us here? If you do, I’ll eat my words.

    Unfortunately you’ll have to go hungry, as you are speaking to an empty chair. Zachriel has been silently banned.

    Perhaps you can obtain an explanation from the moderator. As it is, those of us who are less than sympathetic to ID are left to guess, as we detect neither the application of the explicitly enunciated moderation policy nor any other consistently applied rules or principles. In many instances moderation and/or banning appears to serve the sole purpose of impairing the ability of those who are not sympathetic to ID to press their point of view.

    If the UD moderator has a case to make to the contrary, he should make it. And, BTW, editing or declining to post this message altogether (or banning its author) will not help that case.

  73. Timaeus @ 68:

    Zachriel can’t respond as he has been banned without even an announcement.

  74. critter @ 71:

    I didn’t know that Zachriel had been banned. He was still posting early yesterday afternoon.

    I have no idea why he was banned, but in any case, his ban came over two days after I had posted my last two replies to him on the Dec. 19th thread, repeating my earlier challenge to him to provide details of the Darwinian mechanisms. In the intervening time he had posted several times on other threads, and had not responded to my challenge. I infer that he had decided to drop my challenge before he was banned.

    As for why he dropped my challenge, the most obvious explanation is given in 68 and 70 above. Darwinists always drop the challenge when it gets to the nitty-gritty. They cannot choke the following, entirely true words out of their throats:

    “I cannot provide any details regarding how complex integrated organs, systems, etc. arose by Darwinian means. I just believe that they did.”

    I wonder how the Dover Trial would have gone if all the scientific witnesses for the plaintiffs had been so bluntly honest about the limitations of their scientific knowledge. But of course, the Dover Trial wasn’t about science — it was about politics.

    T.

  75. Timaeus:

    Regarding the stars, we can duplicate some of the physical processes that go on inside stars here on earth — in hydrogen bombs, for instance.

    Hydrogen bombs are products of design, and thus provide only evidence for the design theory of stars. Also, how in the world do we even know stars are anything like hydrogen bombs? We have no idea. You cannot produce the detailed physical evidence. You have nothing.

    We also know a great deal about atomic physics and atomic chemistry in general. So we have a basis for extrapolation.

    Okay, turning off the sarcasm here… this is exactly the case for biology. If genetics is in some sort of horrible disarray, then so is physics, which is always having to revise the Standard Model and whatnot.

    Regarding macroevolution, we have no empirical experience. We have never seen even the fastest-reproducing species (fruit flies, malaria, etc.) create fundamentally new complex systems, in any number of observed generations.

    My understanding of “macroevolution”, at this point, is “any evolution that has not been demonstrated in a lab”. No matter what traits are gained, it isn’t macroevolution, because it’s not “fundamentally” new, or complex, or whatever.

    So why haven’t we yet bred flies that breathe underwater or something? Well, for one thing, if fruit flies had some propensity to satisfactorily “macroevolve” within a few human generations, gaining extra wings or something, we wouldn’t even need a lab to see that.

    Funny that you mentioned malaria, because Behe believes the opposite — that certain malarial developments can only be the product of design.

    IDers constantly claim that all major past transitions, such as from fish to tetrapods, required design, even if it was very subtle, consisting solely of gradual changes with no saltation. Why should observed gradual evolution in a lab be any different? How would we rule out the designer as being behind the mutations involved?

    We thus are far too ignorant about the mechanisms of life to have any basis for a confident extrapolation from normal inheritance to macroevolutionary change.

    Yet we are knowledgeable enough to confidently extrapolate a completely unobserved and vaguely described phenomenon in place of normal inheritance? Are you ultimately saying that no one should subscribe to any theory of origins until we have enough data, or what?

    ID’s job isn’t to provide detailed pathways, because ID isn’t a historical theory of origins. Neo-Darwinism is a historical theory of origins. It thus commits itself to the explication of detailed pathways. To the extent that it cannot deliver such pathways, it has failed by its own lights.

    So even though some sort of history of life probably happened, nobody should be so presumptuous as to think they have a theory of that history; then there’s just too much to explain.

    Where neo-Darwinism and ID clash is not over “evolution”, but over neo-Darwinism’s claim that the science of evolutionary biology has progressed so far that it can now assure the world that no intelligence was necessary to produce complex, integrated biological systems.

    Can anyone demonstrate that “no intelligence was necessary to produce complex, integrated weather systems”? Really? So we have perfect simulations and models of all weather phenomena, down to the molecule, at our fingertips? We know for a fact that there’s absolutely no designer behind evaporation and tornadoes? If not, why the hubris of refusing a designer?

  76. Lenoxus @ 73:

    I don’t know how much science you know, so I don’t know how much I need to explain to you. Have you heard of spectroscopic analysis? It’s used to determine the presence of elements, and is very handy for indicating the presence of elements in places where we can’t go. For example, the element helium was first detected, not on earth, but by spectroscopic analysis of the light of the sun. So we infer something of the composition of the sun working from physics (spectroscopy) learned via experimental science on earth. Our discovery that the sun is largely hydrogen and helium, combined with other things we know about atoms and their nuclei, allows for a reasonable extrapolation to the conclusion that the sun is a huge furnace in which heavier elements are baked out of hydrogen.

    I wouldn’t say that genetics is in horrible disarray. I said that the old one-gene, one-trait model was no longer tenable. Dr. MacNeill, the Darwinian biologist here, has confirmed this. That doesn’t mean that the whole mechanism of genes and traits worked out in the wake of the discovery of DNA is wrong, but it means that it needs supplementation by more complex mechanisms in which the genome and developmental processes are interrelated in very subtle ways. I understand that biologists are working on this now. But until they have worked out how genetics and developmental processes operate in the everyday reproduction that we can observe, study, and experiment with, it’s premature to make grand claims about the mechanisms of evolution operating 500 million years ago in ecosystems which we cannot accurately reconstruct. So evolutionary biologists should be very tentative at this point.

    Unfortunately, tentativeness is not in their nature; from the beginning, evolutionary biology has had delusions of grandeur, and it attracts the sort of person who likes to make sweeping generalities about hypothetical large-scale processes whose details are poorly understood. One camp is sure that “drift” is the big factor; another, “selection”; another is sure that all evolutionary change is slow and incremental; still another, that it proceeds in fits and starts; and none of them is capable of verifying any of these grand claims with decisive observations or experiments. (In contrast, you have the humble research biologists, the kind of people who study, say, metal levels in the tissues of lake trout, who are cautious and modest and limited in both their theoretical and practical claims. These are the kinds of biologist who actually add to reliable scientific knowledge.)

    Your remark about Behe shows a misunderstanding of Behe, of my remarks, or both. I am in complete agreement with the main argument of Behe in *The Edge of Evolution*, and of course his point is that Darwinian processes have been able to accomplish almost nothing with malaria — in the way of building complex new cellular machinery — in millions of generations. How likely is it, then, that such processes — by themselves — could turn shrews into bats, or hyraxes into elephants? I suggest you read Behe’s book; it sounds as if you haven’t.

    In answer to your question, I think people should “subscribe” to any theory of origins that tickles their fancy; but if they go beyond the claim that it tickles their fancy, and assert that a particular theory of origins is an irrefutable truth of science, and that anyone who questions it is an anti-scientific religious fanatic, then they had better have some pretty impressive evidence ready. I have nothing against Darwinism *as speculation*. But it is nowhere near strong enough to compel assent, as, say, Newtonian mechanics or the ideal gas laws are, or as atomic theory is. There is nowhere near enough evidence that Darwinian processes can do what they are alleged to have done. All claims to the contrary are bluffs by frustrated biologists who know that their science is nowhere near as exact as chemistry and physics, and bitterly resent the fact. I’m not impressed by bluffs. My chemistry and physics teachers never bluffed me. They proved things. Dawkins and Coyne, on the other hand, bluff regularly. Why should I trust them? And why should I give their view monopoly status in the school system?

    The complexity of weather systems is a poor analogy to biological complexity. Indeed, weather “system” is a misleading term, because it suggests a kind of structural permanence that meteorological phenomena do not possess. Related to this is the difference between efficient causation and final causation; there is no reason to suppose that final causation is necessary to explain weather systems, whereas the onus is on the Darwinians to show that final causation is not necessary to explain living systems, since all appearance, and structural and functional analysis, points in the opposite direction. And indeed, until the time of Darwin, this was the view of virtually all scientific minds — that living systems were radically end-directed and that such systems could not have come about by chance. Darwin hardly made a dent in the traditional view on the level of detailed argument, though he made a huge splash rhetorically, by offering an alternative broad general concept. But he could not explain any of the details. And we aren’t much further ahead on the details today, despite the fact that we know a thousand times more than Darwin did about genetics, development, etc. This suggests that throwing out final causation may well have been premature. Biologists committed that premature act because they wanted to be just like the physicists. It didn’t occur to them, as it occurred to Aristotle, that biology and physics may require different modes of explanation. That’s where physics envy blinded and misguided the biologists.

    Fortunately, physics itself has now become deeper, and has become aware of “fine tuning” in nature. Physics has moved into the 21st century, whereas biologists like Dawkins are still stuck in the 19th. If the biologists do their physics-aping again (in an effort to prove how scientific they are), they may (years after the physicists, as usual) figure out that living systems, as well as physical constants, are fine-tuned. That will lead them to a fuller understanding of living systems, one which leaves Darwinian thinking in the dustbin of history.

    T.

  77. Cabal:

    Excellent! You never proposed a particular grandfather, but you are of course not denying that you have had – indeed must have had – a grandfather?

    Prove it.

    Just as we consider Tiktaalik a likely candidate for the role of ‘grandfather’, we are not proposing that she actually is the ‘grandfather’.

    Fair enough?

    Tiktaalik could be a grandfather to other Tiktaaliks.

    That is as far as the evidence can go.

    But there isn’t any genetic data which demonstrates that the transformations required are even possible.

    You’re entitled to your opinion,

    It is a fact, not an opinion.

  78. Cabal,

    The point is that you need POSITIVE evidence for your claims.

    To date all you have is the refusal to accept the design inference.

  79. Joseph,

    Tiktaalik could be a grandfather to other Tiktaaliks.

    That is as far as the evidence can go.

    Since your view is equally applicable to all fossils, we may safely assume that they are insignificant? Even when genetic research confirms the relationships established by paleontology?

    Sincerely, is it your opinion that unless proof that you had a grandfather can be produced, you didn’t have a grandfather?

    All we need now is for you to produce proof of intelligent design. An inference, like inferring that you have a grandfather, has already been rejected: you dismissed my inference that you had one by demanding proof.

  80. Cabal

    Even when genetic research confirms the relationships established by paleontology?

    That is an odd question. Phylogenetics in many cases contradicts classical cladistics inferred from morphology alone.

  81. Cabal,

    Haven’t you heard the trade secret of paleontology?

    http://www.uncommondescent.com.....eontology/

  82. Is my account still functional?

  83. Cabal:

    Even when genetic research confirms the relationships established by paleontology?

    Unfortunately for you there isn’t any genetic data which demonstrates the transformations required are even possible.

    Sincerely, is it your opinion that unless proof that you had a grandfather can be produced, you didn’t have a grandfather?

    Do TRY to follow along.

    In comment 59 Lev said:

    I have a creeping suspicion that you are not a descendant of your proposed grandfather.

    So take it up with her- ya see she started this stupidity and hooked you instead of me…

  84. Timaeus:

    I mean this sincerely: that was an excellent read. I feel I have a much better understanding of the ID worldview now; thank you.

    One quibble — I still feel that if it’s presumptuous for biologists to develop theories of species origin, it is presumptuous for anyone, because we all have the same ignorance of how exactly DNA works and exactly which species once inhabited our planet.

    Unless, of course, the teleological explanation should be the default in any scientific endeavor, such as astronomy, until such time as sufficient knowledge allows us to comfortably and non-presumptuously rule it out. But I don’t think you think that; I think you think that biology is in particular ripe for the design inference, due to the nature of its subjects — the extraordinary functional complexity of life forms at every level, from the body plan to the cell. Hence, biologists have an “onus” that meteorologists do not.

    A related issue I have is that I don’t fully grasp the relevance of biology’s inexactness in comparison to physics or chemistry. If this is a problem, it’s a problem for everyone; ID doesn’t have some rigorously detailed mechanism to replace biology’s lack of one. And biology being messy and inexact doesn’t somehow provide evidence against evolution or for design. It just means that whatever happened, it happened messily.

    It seems to me that almost any science could be described as “sweeping generalities about hypothetical large-scale processes whose details are poorly understood”. Physicists are always inventing new hypothetical particles to get their theories to work, and can never agree on which interpretation of quantum mechanics is correct; surely this means that they are desperately doing whatever they can to avoid the telic alternative?

    Anyway, it looks like I was indeed wrong about Behe and malaria. My impression from reading summaries of The Edge of Evolution was that he used malarial resistance to quinine as an example of a far-too-improbable two-mutation event (from which I assumed he felt a designer was necessary).

    But if I understand correctly now, that particular malarial development is in fact being used as an example of what naturalistic evolution perhaps could have done, given enough generations (and that despite its number of generations, malaria hasn’t developed anything truly interesting). I could still be wrong, of course. :)

  85. Lenoxus:

    Thanks for your gracious comments in the previous post.

    Yes, Behe’s point is that Darwinian mechanisms — as far as the empirical data shows — haven’t proved capable of generating complex new cellular machinery, organs, systems, etc. He does not deny that Darwinian mechanisms can do *some* things — they can provide antibiotic resistance, for example. But they don’t seem to be a credible explanation for the sort of large-scale structural changes that macroevolution requires.

    Of course, one could still hold out on general theoretical grounds for the possibility that Darwinian mechanisms could do the trick, but Behe’s point is that our extrapolations about the past must be based on what we can observe in the present, and what we can observe in the present doesn’t suggest that Darwinian mechanisms can do all that much. It is a cautious empiricism, not a religious dogmatism, which undergirds Behe’s argument here.

    Yet Behe believes in macroevolution. So, then, if Darwinian and kindred mechanisms are nowhere near adequate, what is driving macroevolution? Behe does not claim to know. He does say, however, that he thinks design was somehow involved. He has also said in the past that such involvement might not require miracles; design might be built-in somehow, so that new forms “unfold” from the old, according to some pre-arranged scheme, rather than emerge by chance. He throws out possibilities such as this, but doesn’t dogmatize about them one way or the other. His point is only to establish the very great unlikelihood of chance and the very high probability of design. Yet for this, he has been called every name in the book, by professors who should show more open-mindedness to new ideas, and more personal class in debate.

    Regarding the “ID worldview”, I wouldn’t say there is any one single ID view of things. Some ID supporters accept macroevolution from molecules to man; others accept only limited macroevolution, mixed in with supplementary miracles; others reject macroevolution entirely. Some ID supporters allow a limited role for Darwinian processes. Some ID supporters are young earth creationists. Some are old earth creationists. Some are Jewish, some Muslim, some Catholic, some Protestant. Some are even agnostic. Rather than a world-view, what links ID supporters together is a skepticism about the creative powers attributed to chance by most evolutionary biologists, and a sort of engineer’s or architect’s instinct about the significance of what we see in nature. Of course, most ID supporters believe in a creator God, and don’t conceal this fact, but they strive not to make their argument for design depend on a prior personal belief in such a God. They believe that nature, as it were, points beyond itself to God, or something like God. I don’t see this as a world-view, but as an inference, though of course the inference can be used to support various world-views, including a Christian one.

    Regarding physics, you seem to be talking primarily about things at the outer edge of theoretical physics. I have in mind basic physics, basic electromagnetic theory, basic gravitational theory, basic atomic theory, etc. These things are all pretty well nailed down, even if their deeper implications are still up for grabs. We know from our technological applications in these fields that we have got things pretty well right. If we were completely wrong about the existence of atoms, for example, or about the relationship between electricity and magnetism, or about gravity, modern science and engineering would fall apart. But we aren’t wrong — we know what we are doing, in a lot of areas. When you start talking about grand unified theories, cosmology, string theory, multiverses, singularities — OK, I grant you that there is much that is still up for grabs here.

    The comparison I was making with biology is this: we know the laws of gravity and motion well enough to land a spacecraft on Mars within a few yards of the target. We *don’t* know what functions the vast majority of base pairs in the genome perform. We are nowhere near a full explanation of how a human body is formed, from fertilized egg to newborn. Etc. So what are we doing speculating about whether a shrew could have turned into a bat within ten million years, when we don’t have any DNA for the hypothetical primitive shrew, and wouldn’t know how to interpret it if we did, and when we can’t explain in detail how either a bat *or* a shrew forms in the womb today? How can we say what subtle changes would be needed to make the macroevolutionary transitions between hypothetical past animals, when we don’t understand the biology of the creatures that are right in front of us? That would be like saying that we could explain the evolution of the current internal combustion motor, if all we had were three or four stages of that motor out of the dozens that actually existed, and didn’t understand what half of the parts in the current motor were for, or how any of the earlier motors worked? How could we speak of the engineering changes that were made to the motor at each phase, when we don’t know how any of the blasted machines worked? Yet evolutionary biology engages in these speculations daily. Why not put evolutionary theory on hold, and devote the next 50 years to basic biology — discovering the complete account of every part of the genome and every part of the developmental process, and then turn back to the question of how these systems might have evolved?

    I do think that biology is even more ripe for a design inference now than it was in the time of Paley. However, the main point for me is that, in the absence of a full account of how inheritance works, and of how the body parts of creatures are altered in processes like metamorphosis and embryonic development, Darwinian sorts of accounts can never be more than speculative. The only way of proving that what looks pretty obviously like design is not design is to show how genetic and embryonic processes could have produced the apparent design through chance and natural laws, without intelligence. Darwinians assert that this has happened, but won’t say how. So why should we believe them? Why shouldn’t we believe that design was involved, until someone can come along with an exhaustive account that shows that no design was necessary? That was what I was asking Zachriel: where is this exhaustive account, which proves that a hippo can become a whale, or a fish an amphibian, without need for design? And, even before he was banned, he couldn’t come up with one. And on web site after web site, the Darwinians, even those with Ph.D.s in biology, fail, when I ask, to come up with such an account. If Darwinism is as solid an accomplishment as Newtonian physics or atomic theory (as the Darwinians repeatedly claim), such an account should be very nearly available. So where is it?

    T.

  86. Timaeus,

    We are nowhere near a full explanation of how a human body is formed, from fertilized egg to newborn

    Meaning?
    WRT Zachriel, you know he is available if you want to learn more about what he might have to say to you?

  87. Cabal @87:

    I would have thought my meaning was plain to any speaker of English. No biologist can give a step-by-step account of exactly how the human body (or any body of any higher animal) is formed, at the cellular and molecular level — all the divisions, all the specializations, all the molecular triggers, all the relations between genetic and epigenetic factors, etc. And of course, without such knowledge, we cannot specify what alterations would need to be made to the genome of a shrew to produce a bat. We therefore cannot estimate the probability that evolutionary change was accomplished by Darwinian means. We do not have enough knowledge of how the machinery works.

    As for Zachriel, he is free to publish, on any web-site he likes, the genetic and developmental steps necessary to take a clump of light-sensitive tissue in some primitive worm and turn it into a camera eye, and show that these steps could plausibly occur, without any guidance, in a sequence compatible with natural selection and in the amount of time allowed by the fossil record. If he does so, and you alert me to the place, I will go to that site and read his description. But if he can do that, he should not be publishing it on some hobbyist’s web site. He should be publishing it in a book or refereed scientific journal, because it will confirm the truth of Darwinian evolution and win him a Nobel Prize in chemistry or physiology.

    T.

  88. T:

    As for Zachriel, he is free to publish, on any web-site he likes … If he does so, and you alert me to the place, I will go to that site and read his description.

    Actually, there was a comment here the other day pointing to a site he was commenting at subsequent to his banning. But it appears to have disappeared and I didn’t bookmark the link. Did anyone else catch it?

  89. efren ts @89:

    Zachriel posted something at antievolution.org, but it was not an explanation of how Darwinian evolution can generate complex organs and systems. It was merely a note indicating his banishment from this site. So my challenge remains unmet. No neo-Darwinian can give anything close to a detailed pathway. The exact pathway that was followed historically is not demanded — just a biochemically and developmentally possible pathway — but the details must be included, or no prizes will be awarded.

    Neo-Darwinians, however, prefer discussion of “big concepts” (mutation, selection, bottlenecks, etc.) to discussions of mechanical details — a preference which is typical of sciences which have not achieved exactness and precision and are more like philosophical than scientific accounts. I can explain the evolution of every single living creature on earth with grand concepts like selection, mutation, drift and so on; but precisely because these big concepts, combined in proportion according to the taste of the evolutionary theorist, unrestrained by hard numbers or molecular realities of any kind, can account for *any* imaginable evolutionary result, they account for none at all. The only *testable* evolutionary theory is the one that has the intellectual spine to commit itself to particular proposals for the nitty-gritty mechanical details; e.g., how does one get from a hippo to a whale? What has to change, and in what order, and how many years will each change take, and is there enough time in the fossil record for the total set of changes, and even if there is, what guarantees that all the intermediate stages will be viable? (A very tricky question when it comes to the transition to marine lactation, for example.) Absent such a commitment to detail, Darwinian theory is untestable story-telling, and has no claim to be taken seriously as rigorous science.

    T.

  90. Neo-Darwinians, however, prefer discussion of “big concepts” (mutation, selection, bottlenecks, etc.) to discussions of mechanical details

    Well, sure, but as big concepts go, you don’t get much bigger than ‘design.’ Maybe the approach, instead of arguing with Darwinists about what evolution can or can’t do, is to beat them at the game of providing mechanical details?

    Anyways, thanks for the tip about the website Zachriel is at. FWIW, I like the guy and will go check it out.

  91. We therefore cannot estimate the probability that evolutionary change was accomplished by Darwinian means. We do not have enough knowledge of how the machinery works.

    Any recent advances in estimating or maybe even calculating the probability of Intelligent Design? I find it rather bizarre how ID proponents require pathetic details of evolution while presenting no details whatsoever of ID.
    What about connecting some dots, like evolutionists do?

  92. Timaeus

    The exact pathway that was followed historically is not demanded — just a biochemically and developmentally possible pathway — but the details must be included, or no prizes will be awarded.

    I believe much research is ongoing along these lines. I could point you towards some of it, but in order to judge the right level to pitch it at, what qualifications do you have in the relevant field?

    Much of the information is impenetrable to the lay-person and this is perhaps where the confusion lies. It does exist, but perhaps you’ve just not looked at it, or looked at it and not understood it.

  93. 94

    Since the mechanism is enforced as orthodoxy, wouldn’t the relevant research already be in the bag?

    If the answers to these questions are forthcoming, then it would follow that the hardened conclusions which have been unconditionally foisted on the public for the past number of generations could not logically have been based upon evidence which has yet to appear.

    Or, is this kind of simple observation just too obvious to be tolerated?

  94. Upright

    Since the mechanism is enforced as orthodoxy, wouldn’t the relevant research already be in the bag?

    Enforced by whom? I’ve asked friends who work in the field and they have never heard of any such enforcement. Nor have they ever come under pressure to suppress or hide results because they point away from the “orthodoxy”. Having said that, nobody I know who works in the biological sciences has claimed to have evidence for Intelligent Design, so perhaps that’s the reason.

    If the answers to these questions are forthcoming

    Can you be more specific? What questions?

    then it would follow that the hardened conclusions which have been unconditionally foisted on the public

    Has it? Although I’m not a resident of the USA, I believe more people believe that God created life than believe in Darwinism. So how does that square with your statement? It might well be being foisted on the public, but they are not believing it!

    could not have logically been based upon the evidence which has yet to appear

    Much of the evidence is very technical and not suitable for a lay audience.

    What is your normal source of information? Are you subscribed to any research repositories? What journals do you typically read? Do you keep up with developments, and if so, how? What are your qualifications to judge this? Do you work in the biological sciences?

    Or, is this kind of simple observation just too obvious to be tolerated?

    Tolerated? Do you mean tolerated by the same people enforcing the orthodoxy? Or some other group of people?

  95. h.pesoj @93:

    Your reference to qualifications is typical. All specialists, when they are bluffing, say: “You’ll have to take my word for it, because you wouldn’t understand the details.” In fact, any literate scientist who desires to communicate rather than obfuscate can put the main outline of an argument in layman’s language, before getting into all the technical details. And I have sufficient understanding of evolutionary theory and of basic biology and biochemistry to tell whether or not a particular biological argument is in the right *form* to be at least a *possible* answer to my question.

    The *form* of answer I need to hear goes like this: “In order for a camera eye to evolve out of a sheet of light-sensitive tissue, 50 previously nonexistent parts, 400 new neuronal pathways, 18 new hormones and 450 brain modifications must be created or established. In the following 500 pages I will discuss the majority of these items, one by one, showing which sectors of the genome govern the necessary alterations (directly or in conjunction with epigenetic processes which I will specify). I will give an estimate of the number of mutations needed to produce each morphological change, specify the mutations needed, provide hard numbers for the probability of each mutation and a calculation for the probability of the entire set of mutations, and I will show how the dozens of hypothetical intermediate forms, where the new organ is as yet incapable of its ultimate function, provide survival advantages, or at least are not deleterious. All the hypothetical intermediate forms will be illustrated with diagrams, so that the physiologists and ecologists may inspect them for functionality and selection plausibility. I will also estimate how many million years each step should be expected to take, and verify that there is enough time in the fossil record for the total process.”
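    The joint-probability arithmetic being demanded here is at least easy to state. A minimal sketch in Python of that kind of calculation, where every figure is an invented placeholder for illustration only, not a real biological estimate:

    ```python
    # Illustrative arithmetic only: the shape of the "hard numbers" calculation
    # the comment above asks for. All values below are hypothetical placeholders.

    per_mutation_prob = 1e-9   # hypothetical chance of one specific mutation per replication
    mutations_needed = 4       # hypothetical number of specific mutations required together
    population = 1e6           # hypothetical breeding population per generation

    # If the required mutations must arise independently in a single lineage,
    # the joint probability is the product of the individual probabilities.
    joint_prob = per_mutation_prob ** mutations_needed

    # Rough expected number of generations before the combination first appears
    # somewhere in the population (approximately 1 / (population * joint_prob)
    # when that product is much smaller than 1).
    expected_generations = 1 / (population * joint_prob)

    print(f"joint probability per individual: {joint_prob:.1e}")
    print(f"expected generations to first occurrence: {expected_generations:.1e}")
    ```

    Plugging in different placeholder values shows why the argument turns entirely on the estimates chosen: the product shrinks geometrically with each additional required mutation, which is exactly why both sides fight over how many coordinated mutations a given transition actually needs.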

    If the explanation is set up along these lines, and carried out with the precision indicated, and seems plausible, I would be inclined to trust the specialist regarding most of the details.

    Do you know of any explanation set up in this form for any major organ, organelle, system, or organism? If you do, tell me where it is, and let me worry about my qualifications to understand it. I’ll let you know if I need your help. But I don’t expect it will get to that stage. I’ve been asking Ph.D.s in biology for about three years now to produce such an account, and they always either make excuses for why they can’t, or they change the subject.

    T.

  96. 97

    Yes, of course.

    Now that you have that off your chest, please post the relevant research links. I’ll be back later in the day to purchase the research.

  97. T

    And I have sufficient understanding of evolutionary theory and of basic biology and biochemistry to tell whether or not a particular biological argument is in the right *form* to be at least a *possible* answer to my question.

    100 years ago you may have had a point. However, to obtain even a partial understanding of current theory requires considerably more than a “basic” understanding.

    I take it then you have no such relevant qualification. A pity, as the data really speaks for itself with no need for a narrative such as you demand.

    There may be such an example as you demand, however, for the bacterial flagellum.

    A passive, nonspecific pore evolves into a more specific passive pore by addition of gating protein(s). Passive transport converts to active transport by addition of an ATPase that couples ATP hydrolysis to improved export capability. This complex forms a primitive type-III export system.

    The type-III export system is converted to a type-III secretion system (T3SS) by addition of outer membrane pore proteins (secretin and secretin chaperone) from the type-II secretion system. These eventually form the P- and L-rings, respectively, of modern flagella. The modern type-III secretory system forms a structure strikingly similar to the rod and ring structure of the flagellum (Hueck 1998; Blocker et al. 2003).

    The T3SS secretes several proteins, one of which is an adhesin (a protein that sticks the cell to other cells or to a substrate). Polymerization of this adhesin forms a primitive pilus, an extension that gives the cell improved adhesive capability. After the evolution of the T3SS pilus, the pilus diversifies for various more specialized tasks by duplication and subfunctionalization of the pilus proteins (pilins).

    An ion pump complex with another function in the cell fortuitously becomes associated with the base of the secretion system structure, converting the pilus into a primitive protoflagellum. The initial function of the protoflagellum is improved dispersal. Homologs of the motor proteins MotA and MotB are known to function in diverse prokaryotes independent of the flagellum.

    The binding of a signal transduction protein to the base of the secretion system regulates the speed of rotation depending on the metabolic health of the cell. This imposes a drift toward favorable regions and away from nutrient-poor regions, such as those found in overcrowded habitats. This is the beginning of chemotactic motility.

    Numerous improvements follow the origin of the crudely functioning flagellum. Notably, many of the different axial proteins (rod, hook, linkers, filament, caps) originate by duplication and subfunctionalization of pilins or the primitive flagellar axial structure. These proteins end up forming the axial protein family.

    More detail required than that, presumably?

    But I don’t expect it will get to that stage. I’ve been asking Ph.D.s in biology for about three years now to produce such an account, and they always either make excuses for why they can’t, or they change the subject.

    Please provide links to examples, or I will suspect that you are simply bluffing. Or did it all happen “off-line”, which would be terribly convenient?

    However, I would ask one question. Given that no such answer to your question is available, why do you suppose the majority of people working in the fields that could provide such an answer don’t have the same disbelief that you have regarding evolution? What do you know that somebody with a degree or two does not? Can you share that?

  98. T,
    A question:

    A sociology professor has hypothesized that wearing baseball hats follows the dynamics of a single locus with two alleles. Out of 5000 undergraduates sampled, the professor observed the following phenotypic classes: 2 students not wearing a baseball hat, 196 students wearing baseball hats with the visor pointing forward, and 4802 students wearing baseball hats with the visor pointing backward. Using your expertise in population genetics, clearly show why you agree or disagree with the hypothesis.

    Basic stuff. What’s your answer?
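
    The Hardy–Weinberg arithmetic behind this question can be checked directly. A minimal sketch, assuming (my assumption, not the question’s) that “no hat” is the rarer homozygous class:

    ```python
    import math

    # Observed phenotype counts from the question.
    observed = {"no_hat": 2, "forward": 196, "backward": 4802}
    n = sum(observed.values())  # 5000 students

    # Estimate the rarer allele's frequency from the homozygous class.
    p = math.sqrt(observed["no_hat"] / n)   # ~0.02
    q = 1 - p                               # ~0.98

    # Hardy-Weinberg expected counts: p^2, 2pq, q^2 of the sample.
    expected = {
        "no_hat": p**2 * n,
        "forward": 2 * p * q * n,
        "backward": q**2 * n,
    }

    chi_sq = sum((observed[k] - expected[k]) ** 2 / expected[k] for k in observed)
    print(round(chi_sq, 6))  # prints 0.0: the counts fit the expectations exactly
    ```

    The chi-square statistic comes out at essentially zero, i.e. the 2 : 196 : 4802 counts match the p², 2pq, q² proportions perfectly, which is what makes the question answerable at all.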

    Too simple? Here’s another:

    1. A researcher is interested in the inheritance of coat color in guinea pigs. A cross was conducted between a red female and white male to produce an F1 population all with intermediate coloration (medium pink). These F1s were then intercrossed to produce an F2 population. The F2s included white, red, and intermediate individuals in a ratio of approximately 1:2:1. These results would support which of the following:

    a) Mendelian Inheritance

    b) Inheritance of Acquired Characteristics

    c) Evolution by Random Mutation

    d) Blending Inheritance

    e) Catastrophism
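
    The expected F2 outcome under single-locus incomplete dominance can be enumerated directly; a minimal sketch (the allele labels R and W are my own, not part of the question):

    ```python
    from collections import Counter
    from itertools import product

    # One locus, incomplete dominance: R = red allele, W = white allele.
    # The F1 animals are all RW heterozygotes (pink), so cross RW x RW.
    phenotype = {("R", "R"): "red", ("R", "W"): "pink", ("W", "W"): "white"}

    # Each parent contributes R or W with equal chance; enumerate the four
    # equally likely gamete combinations.
    f2 = Counter(phenotype[tuple(sorted(pair))] for pair in product("RW", repeat=2))
    print(dict(f2))  # {'red': 1, 'pink': 2, 'white': 1} -- the 1:2:1 ratio
    ```

    Recovering the parental phenotypes in a 1:2:1 ratio is the signature of particulate Mendelian segregation; blending inheritance would leave every later generation uniformly intermediate.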

    One more

    Regarding the phenomenon of ‘Multiple Hits’, which of the following is true in molecular phylogenetics:

    a) Uncorrected distance measures are likely to underestimate true distance

    b) Mutations will accumulate until taxa are 100% different

    c) Sequences of distantly related species will approach a maximum of 0.4

    d) Uncorrected distance measures likely overestimate actual values

    e) The more distantly related two taxa are, the less likely it is that uncorrected genetic distance will be in error
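
    For reference, the “multiple hits” effect can be illustrated numerically. Under the standard Jukes–Cantor (JC69) model, sites struck more than once make the raw proportion of differing sites understate the true number of substitutions; a minimal sketch:

    ```python
    import math

    def jc69_distance(p):
        """Jukes-Cantor correction: convert an observed proportion of
        differing sites (p-distance) into an estimated number of
        substitutions per site. Valid for p < 0.75, the JC69 saturation level."""
        return -0.75 * math.log(1 - 4.0 * p / 3.0)

    for p in (0.05, 0.25, 0.50, 0.70):
        print(f"observed {p:.2f} -> corrected {jc69_distance(p):.3f}")
    ```

    The corrected value always exceeds the raw p-distance, which is why uncorrected measures underestimate true distance, and under this model sequences saturate near 75% difference rather than diverging to 100%.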

    If you can answer these questions, then I’ll be happy to continue this conversation on the level you have set, where you have “sufficient understanding of evolutionary theory and of basic biology and biochemistry to tell whether or not a particular biological argument is in the right *form*”.

  99. Upright

    Now that you have that off your chest, please post the relevant research links. I’ll be back later in the day to purchase the research.

    Quid pro quo. I’ll be happy to. Once you address my questions in post 95.

  100. Upright,
    Presuming you address my questions, in order for your purchase to be worthwhile could you tell me exactly what “question” you want answering, in your own words.

    Many questions have been asked on this thread.

    Which one is it you want an answer to?

  101. “Much of the evidence is very technical and not suitable for a lay audience.”

    Nonsense. If it exists then it could be put in a form accessible to the lay public. Darwin’s book was meant for the lay public, and any assessment of current microbiology or genetics could also be made clear to this public. Your series of questions on genetics is irrelevant, since we have already had evolutionary biologists tell us the answer is not in genetics (which ID has no problem with) but in the origin of variation. This is a bogus explanation.

    As to enforcement, examine what happened to McWhorter when he strayed off the reservation, and to Wright and his associate for allowing it. And to Behe, Minnich, Kenyon, Dembski and others. And again as to enforcement, examine what the climate gurus did to their colleagues who did not agree.

    So maybe you want to return to your community and tell everyone how dumb we are and continue on with your irrelevant explanations. Adios.

  102. Jerry,
    What are your answers to these questions then?
    http://www.uncommondescent.com.....ent-345566

  103. Jerry

    Nonsense. If it exists then it could be put in a form accessible for the lay public. Darwin’s book was meant for the lay public and any assessment of current micro biology or genetics could also be made clear to this public.

    Indeed, and it has been. But the “evidence” is of course the details that are far too tedious to include in any book. Lenski left a lot of work out of his seminal paper on bacterial evolution, but the details are there if you care to look.

    I take it you’ve read Dawkins’s latest book then? What did you make of it? On what page was the first factual error?

  104. h. pesoj @99:

    Your reply shows a lack of discipline. You do not stay on the point that is being argued. Instead of answering my utterly clear and precisely focused question, you change the subject to “which of us knows more about population genetics?”. But population genetics is irrelevant to my question. My question was not about how alleles spread through populations. My question was about how radically new cellular machinery and radically new organs and systems are built. Either you have not understood what I am asking, or you are deliberately misdirecting the readers here, hoping to convince them that you must be right and I must be wrong if you know more about population genetics than I do. Such a tactic is intellectually shameless — but that is nothing new for Darwinists.

    Misdirection doesn’t work with me. I’ve been studying evolution for about 45 years now, am a former Darwinist myself, and know all the dodges and all the subterfuges.

    Can you produce an explanation of the sort I have asked for, or not? Do you have the slightest clue what parts of the genome would have to be altered to create an iris, a cornea, the various muscles and fluids, the biochemistry of the retina, etc.? Can you tell me, for example, which base pairs would have to be dropped, and which rearranged, to make it possible for a non-dilatable pupil to become dilatable? Can you tell me how the embryonic process that creates the octopus’s eye differs from the embryonic process that creates the human eye, and pinpoint the sectors on the corresponding genomes that account for this difference? If you cannot do these things, how can you say with any certainty that Darwinian processes could have made the necessary alterations to produce the camera eye?

    It’s of course antecedently very unlikely that you personally can provide what Dawkins, Gould, Orr, etc. cannot provide. If you could provide it, you wouldn’t be wasting your time in an amateur forum like this; you’d be up on a stand accepting your Nobel Prize, along with a tenured position at the world-class university of your choice. But I’m willing to be proved wrong. Maybe you are a scientific genius ranking up there with Darwin and Newton. Or maybe you are aware of a book or article that has somehow escaped the notice of Dawkins, Coyne, Orr, Eugenie Scott, etc., which has the detailed explanation I have asked for. If so, name me the book or article that shows a full pathway. Your choice of system or organ. You have the floor, in an ID forum, and the opportunity to demolish ID once and for all. You can’t ask for a more generous offer than that.

    T.

  105. H.pesoj, what did you think of Neil Shubin’s book Your Inner Fish? I found it to be a good read, despite its being simplified down a lot for a lay audience.

    “Indeed, and it has been. But the “evidence” is of course the details that are far too tedious to include in any book. Lenski left a lot of work out of his seminal paper on bacterial evolution, but the details are there if you care to look.”

    We have asked biologists here, and we have had evolutionary biologists here, and none have been able to provide an explanation for the origin of complex novel capabilities. So what are we to think?

    Then along comes someone who says it all exists. Well, show us poor rubes the truth. My guess is you will slink off like the rest of them when your bluff is exposed.

    “I take it you’ve read Dawkins’s latest book then? What did you make of it? On what page was the first factual error?”

    Parts of it. It is what he fails to do that is damning. We recently had a discussion on the eye, and Dawkins was a no-show on that topic. He does not discuss anything of relevance regarding the building of information in a genome. There are two big holes he danced around. So far I have not read anything relevant, but I will continue on. Maybe you could suggest a chapter that would be worthy of discussion.

    You have to understand that ID accepts evolution but questions the mechanism for parts of it. So a recitation of the obvious is not going to cut it. You can pile fact on fact and we will just nod our heads in agreement, but what will be carefully avoided is the origin of complex capabilities. I just pointed out two areas where Dawkins disappeared when he was needed.

  107. Lenski’s work is considered good ID science. When Behe was asked what science should do to test his thesis in The Edge of Evolution, he replied: research like what Lenski is doing at Michigan State.

    So here we have a virulently anti-ID scientist doing ID work and supporting basic ID propositions. Life is good.

  108. “Can you tell me a single fact about how those pupils, either of them, came to be via your purported intelligent designer?

    I thought not.”

    This is becoming silly. Haven’t you heard about synthetic biology? My guess is that in a couple thousand years they will catch up to the designer.

  109. “I mean, did you never hear of “designed to evolve”?

    ID cannot be disproven in that way as it seems to retreat each time. Last resort is of course that the universe was designed”

    One of the hypotheses that I like to consider is that the current micro-evolutionary system is excellent design. The replication system and error-correction process in the genome, and the inheritance mechanism, are such that they allow minor changes to genomes over time, and this allows organisms to adapt to new environments. Thus, one thing that makes sense is the idea “designed to evolve.” However, this process is limited, so it cannot explain everything.

    I have written a few long comments about ID and how it is distorted by those who argue against it. It is not the best prose in the world but it lays out the issues and tries to dispel any misconceptions. Since they are long and I understand time constraints, you may not read them but some of your objections are covered in these comments. Here they are if you or anyone else wants to comment on them. The links are to the specific comments and one of them is a series of three comments.

    http://www.uncommondescent.com.....ent-296129

    http://www.uncommondescent.com.....ent-299358

    http://www.uncommondescent.com.....ent-326046

    http://www.uncommondescent.com.....ent-304029

  110. h. pesoj @ 108:

    As I expected, more misdirection. I asked for a Darwinian recipe for a complex organ or system. Instead of admitting: “I don’t have an adequate Darwinian explanation for any complex organ or system, and I don’t believe anyone else does, either”, you turn around and ask me how ID explains such systems. That’s a fair question — but not as a means of dodging the question posed.

    Your other remarks show a confusion between Darwinian evolution and front-loaded evolution. Front-loaded evolution is compatible with ID; Darwinian evolution, strictly understood (i.e., as Darwin understood it, and as major disciples like Gaylord Simpson and Dawkins and Gould understood it) is not compatible with ID.

    A full Darwinian explanation for several major organs or systems would not “disprove” ID in the strict sense, but the greater the number of successful explanations, the more ID would become explanatorily redundant. Indeed, Darwin’s purpose in writing was to make design explanatorily redundant. That is the purpose of Dawkins and Coyne today. At any rate, Darwinian theory is nowhere near success in that endeavour, as your silence regarding even one organ or system indicates. (By the way, the sketchy storytelling you gave about the flagellum — without a list of hypothetical intermediate stages (there would have to be many, not just the TTSS), and without both the genomic transformations and the utility of those intermediate stages nailed down — is worthless. Look at the model I provided. That’s the gold standard. Your example doesn’t even make the lead standard.)

    T.

  111. h.pesoj is a troll.

    h.pesoj is just my name spelled backwards. :cool:

    Could be a sock-puppet for Zachriel.

    But I will answer the backwards me:

    Providing a detailed pathway such as you demand would not disprove ID in any meaningful way.

    That isn’t about disproving ID, my backwards me.

    That is about figuring out what is required.

    Once you determine what is required, then you try to reduce that to cause and effect: what could cause X number of components to come together to produce this system (the effect)?

    But anyway- thanks for the laugh…

  112. That isn’t about disproving ID, my backwards me.

    It is about finding which explanation best fits the body of evidence. If ID cannot be falsified in any way, shape, or form, it is indeed a poor candidate for a valid explanation, no matter how ramshackle the opposing explanation is.

  113. Joseph,

    Good catch. But while it lasted, it was good show prep.

    They are desperate. Even Nick Matzke was trying to belittle biological information the other day like it is a nebulous concept.

  114. Even Nick Matzke was trying to belittle biological information the other day like it is a nebulous concept.

    Can you give me a value for the biological information in a Stoat?

    Jerry, can you provide a list of biological artifacts in order of their “biological complexity”?

  115. backwards me:

    Why would you expect somebody to come and provide what you want on a blog dedicated to supporting ID? Why would anybody bother?

    I don’t care where they do it.

    But until they do it they cannot rule out the design inference.

    And they should bother if they want to support their position.

    But anyways-

    c-ya

  116. Jerry,

    Biological information could very well be a nebulous concept. Just sayin’.

    But that has yet to be demonstrated. Bold proclamations are only good for propaganda, i.e. the ToE. :cool:

  117. Joseph,

    I was only pointing out the desperation when someone supposedly at the top of the food chain of naturalistic biological evolution has to be obstructive with transparently trivial objections. The biological community does not think it is nebulous; just look at all the bioinformatics courses and majors. But the clowns here think they are on to something when they question something as simple and direct as information in biology. It is almost the first thing that Crick wrote about after discovering the double helix.

  118. Jerry

    But the clowns here think they are on to something when they question something as simple and direct as information in biology.

    If it’s so simple can you give me a few examples of the CSI/FSCI/Biological information in

    A) A banana.
    B) A Tree.
    C) A single cell.
    D) The bac flag.

    Or can you give me a list of organisms ordered by the amount of biological information in each?

    Or could you say what role the designer had in introducing this “biological information” and when?

    Or could you tell me how “biological information” is generated? When it increases? When it decreases?

    Can you give me an example of an increase or decrease in the amount of “biological information” in something, and tell me what it is?

    Etc etc.

  119. Yes, biological information does of course exist. You can find out all sorts of things about it if you search.

    But the clowns here think they are on to something when they question something as simple and direct as information in biology.

    People who know something about it appear to disagree:

    A minority tradition has argued that the enthusiasm for information in biology has been a serious theoretical wrong turn, and that it fosters naive genetic determinism, other distortions of our understanding of the roles of interacting causes, or an implicitly dualist ontology. Others take this critique seriously but try to distinguish legitimate appeals to information from misleading or erroneous ones. In recent years, the peculiarities of biological appeals to information have also been used by critics of evolutionary theory within the “intelligent design” movement.

    http://plato.stanford.edu/entr.....iological/
    Nowhere there, or anywhere at a reputable website, does it claim that biological information has any relationship with intelligent design.

    If you think it does, you know where to submit your papers. Peer review.

  120. “Jerry, can you provide a list of biological artifacts in order of their “biological complexity”?”

    I love the way you avoid answering a question by asking another question. I am sure that if we posed this as a thread here at UD, we could get an interesting discussion. I am also sure that if it were posted on any biologically oriented thread in the world there would be a huge response, until they were told to be careful that it may provide fodder for ID. It would include all the ways various species can be complex and would be very informative. But as soon as someone is told to be careful, that you might be helping the ID people, they will be tongue-tied.

    Let’s put out an arbitrary order in alphabetical order. We can add or subtract as someone sees fit.

    alligator
    amoeba
    ant
    ant eater
    archaea
    bacteria – Lenski latest
    bacteria – Lenski start
    bat
    beetle
    butterfly
    chimpanzee
    cichlid
    eagle
    eukaryote
    giraffe
    horse
    human
    hummingbird
    koala
    lamprey
    lobster
    malaria
    Mustela nivalis
    octopus
    prokaryote
    puffin
    rat
    shark
    shrew
    shrimp
    snake
    spider
    sponge
    starfish
    tasmanian devil

  121. Will that list have a different order if ordered by FSCI, CSI and biological information? Or will they all come out the same way?

    So, shall we begin then?

  122. Perhaps a good place to start is the UPB?

    If we say that the chance of all the components in each of those items in the list coming together randomly is greater than the UPB, then that means all of those objects have the same “complexity” in some way. Is that “complexity” best defined by CSI, FSCI, biological information or “other”?
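
    For concreteness, a UPB comparison of this kind is just a log-scale inequality. A minimal sketch with hypothetical numbers (a 200-unit chain assembled from 20 part types; none of these figures come from the thread):

    ```python
    import math

    UPB_LOG10 = -150  # Dembski's universal probability bound, on a log10 scale

    # Hypothetical example: probability of assembling one specific 200-unit
    # chain by independently picking one of 20 part types per position.
    positions, choices = 200, 20
    log10_p = positions * math.log10(1.0 / choices)  # ~ -260.2

    print(log10_p < UPB_LOG10)  # True: this naive probability falls below the bound
    ```

    Working in logs avoids floating-point underflow; whether such a “pure chance” model describes anything real in biology is, of course, exactly what is in dispute here.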

    Or is that just wrong? It does of course assume that “everything” is designed, and as we know (because I asked earlier) ID cannot answer the question “name two biological entities, one designed, one not designed”.

    Therefore the inference is that ID says that everything is designed, right?

    Or can you name a biological entity that is in fact not designed, as decided by some method you can describe?

    I make an error, no?

    From that list, what is designed and what is not designed? Can you tell? Is it all designed?

  123. “If it’s so simple can you give me a few examples of the CSI/FSCI/Biological information in

    A) A banana.
    B) A Tree.
    C) A single cell.
    D) The bac flag.

    Or can you give me a list of organisms ordered by the amount of biological information in each?

    Or could you say what role the designer had in introducing this “biological information” and when?

    Or could you tell me how “biological information” is generated? When it increases? When it decreases?

    Can you give me an example of an increase or decrease in the amount of “biological information” in something, and tell me what it is?

    Etc etc.”

    This should not be too hard to do, but it will take time, except for the list of organisms ordered by the amount of biological information, which is an interesting question. I suspect you think I will make the C-value mistake or judge complexity based on the number of proteins coded, because I know you are probably ready to pounce.

    But once it is done, you will then have some other irrelevant objection. We have already seen this game before. I have posted a starting list for complexity. Maybe it should contain a banana or a Venus flytrap. I did discriminate against the plant kingdom. Is kingdom in vogue these days?

    This is getting a little silly here. I am the one who constantly references the Stanford philosophy site and its discussion of information and biology. I believe the person mainly responsible for the entry is Godfrey-Smith. If the person who cites this site were honest, he would acknowledge that the site says there is one type of information concept that has value for biology, and it is the one that biology and ID use.

  124. Joseph and I do not agree on everything. In fact a lot of people here defending ID do not agree on everything. But you are just amusement on a Saturday afternoon and it is now time to get some things done before the football games start.

    In the end all you will succeed in doing is helping us fine tune the arguments. For which we are always grateful. When you stop asking us question and start providing that esoteric stuff we are incapable of understanding then maybe we will take you seriously. Till then you are just a diversion and one that probably has been here before under another banner.

  125. But once it is done, you will then have some other irrelevant objection.

    Not at all. I want to see where this leads. I am genuinely interested to see the methodology used and the results.

    I suspect you think I will make the C-value mistake or judge complexity based on the number of proteins coded, because I know you are probably ready to pounce.

    Look at it another way.

    If FSCI/CSI or whatever is a measure of “useful” information, then consider this. The Bible is held by some to be full of rich, useful information. If its “information content” is to be determined by counting the number of words and multiplying that against some probability of their being selected at random, then fine. But if you look at another book and work out the same number, the longer the book, the higher the “information content”.

    So the most boring book in the world, if longer than the Bible, would appear to have more “information content” by that simple measure.

    So no, please don’t take the number of proteins and then work out the probability of them coming together at random. That’s the same as what I just described and is pointless.

    I’m looking forward to this!

  126. There are a couple more questions from us ignorant slobs here.

    Many have implied that people are banned here for asking embarrassing questions about ID. This is nonsense, of course. But it should not stop anyone from asking the same embarrassing questions again, to see if you too will be banned. So go ahead and tell us all the embarrassing questions that got people banned. Then we can see just how unreasonable the moderators here are. Please do not go back to when DaveScot banned people. I generally had some sympathy for many of his bannings, but not all. He banned me once or twice, and banned Timaeus, Ted Davis and some others who are sympathetic to ID or would defend many of our ideas here.

    So have at it and post all the embarrassing questions. I know many here would love to see what embarrasses us.

    Second, people invoked Dawkins’ book, and I was asked point-blank this morning if I had read it. When I said I had read parts of it and wanted to know which parts were relevant to the debate, there was radio silence. So if anyone is up to pointing out a part of Dawkins’ book that is especially relevant, and which some have held up as the gold standard, let me know. He abandoned the concept of cumulative evolution in the book, and the term gradualism is hardly mentioned, so I want to know what I should focus on as I read on.

  127. Is it possible to come up with some kind of metric regarding “effort”?

    For example, a thing with 1 unit of biological complexity takes N units of “effort” from designer type A to create?

    Can the designer create these units of biological complexity without increasing entropy? Or does that apply anyway, do you suppose?

  128. h.pesoj @ 119:

    You wrote:

    “It’s hard to see how new understanding could come any faster than it currently is, given that DNA itself was only discovered a few decades ago.”

    Let me get this straight. You are saying that we only discovered the nuts-and-bolts biological basics (DNA etc.), on the basis of which evolutionary theory could offer rigorous stepwise accounts, a few decades ago. And you are saying that it is unreasonable to expect complete understanding in so short a time.

    Let’s grant this, for the sake of argument, as reasonable. What follows? Well, the “modern synthesis” was achieved before we understood the DNA-protein mechanism. Yet Fisher, Mayr, Dobzhansky, Gaylord Simpson, etc. were very cocksure, very arrogant about neo-Darwinian theory. They pooh-poohed opposition as unscientific, ignorant, etc.

    Let’s go back further. Darwin and Huxley didn’t even understand Mendelian genetics, let alone DNA. They were much greater ignoramuses than those who put together the modern synthesis. Yet they were sure, oh so sure, that macroevolution required no design whatsoever, that accidental variations and natural selection could create whole new orders, classes and phyla without any problem at all.

    What justified the certainty, the apodictic tone, of Darwin, Huxley, Fisher, Mayr, Simpson, in your view, given that the earlier group didn’t have the slightest clue how normal reproduction occurred (Darwin had never even seen a cell, except possibly as an obscure dot in a primitive microscope), and the later group didn’t understand the DNA-protein mechanisms which are basic to the processes of life?

    Would it be reasonable to say that these early Darwinians asserted far, far more than they could prove about the powers of chance and natural selection? And that their “science” was largely based on hunch and faith that evidence would some day be forthcoming? Is that a model for serious science? To assert far more than you can prove, ridicule your opponents as religious dolts, etc.? Funny, I thought science was supposed to be modest, cautious, tentative.

    T.

  129. Something biological that wasn’t designed?

    Sickle-celled anemia.

    It is what happens when random effects creep into the design.

    In this case a single nucleotide switch- ie a point mutation.

  130. Timaeus

    Rather than a world-view, what links ID supporters together is a skepticism about the creative powers attributed to chance by most evolutionary biologists, and a sort of engineer’s or architect’s instinct about the significance of what we see in nature.

    Engineering is not based on a premise that nature was engineered. That idea has not been found useful.
    So what makes you think that IDists’ instincts are like those of engineers and architects?

  131. Something biological that wasn’t designed?

    Sickle-celled anemia.

    It is what happens when random effects creep into the design.

    In this case a single nucleotide switch- ie a point mutation.

    Most (or all?) mutations are supposed to be harmful. What process/method, if any (how could there not be?), is responsible for the fact that the human population is increasing instead of going extinct?

  132. Joseph

    Something biological that wasn’t designed?

    Sickle-celled anemia.

    Um, you do know about the relationship between that and malaria, don’t you? You should, given that this site uses the example of malaria a lot.

    http://en.wikipedia.org/wiki/Sickle_Cell_Disease

    Sickle-cell disease, usually presenting in childhood, occurs more commonly in people (or their descendants) from parts of tropical and sub-tropical regions where malaria is or was common. One-third of all indigenous inhabitants of Sub-Saharan Africa carry the gene[2], because in areas where malaria is common, there is a survival value in carrying only a single sickle-cell gene (sickle cell trait).[3] Those with only one of the two alleles of the sickle-cell disease are more resistant to malaria, since the infestation of the malaria plasmodium is halted by the sickling of the cells which it infests.

    No doubt it’s, as you say, just a random effect.

    I wonder what the chances of that happening purely by chance are?

  133. Cabal:

    Most (or all?) mutations are supposed to be harmful.

    Nope.

    What process/method, if any, (how could there not be?) is responsible for the fact that the human population is increasing instead of going extinct?

    Design.

  134. backwards me:

    Um you do know about the relationship between that and malaria don’t you?

    Umm absolutely- do you have a point?

    Dr Spetner covers this in “Not By Chance”: point mutations caused by copying errors are perhaps the only mutations that can be called “random”, or a genetic accident.

  135. jerry @ 133

    Many have implied that people are banned here for asking embarrassing questions about ID. This is nonsense of course.

    Would it not be better to take this discussion to a neutral or unmoderated forum where both sides could present their respective cases without fear or favor?

  136. Freelurker @137:

    I didn’t say that engineers work on the premise that nature was engineered. Whether or not nature is engineered is not a question that engineers need to address in order to do their work. So they won’t necessarily have a consciously formed opinion on that question.

    However, all engineers do have an instinct about design, for their work consists in adapting means to ends in subtle and complicated ways. That instinct whispers to them, if only at the unconscious level, that complex integrated systems don’t arise by chance, but by design. A software engineer knows that his word processor didn’t evolve by accident from a series of random changes in his Solitaire program. A hardware engineer knows that computers don’t arise by the accidental melting of blobs of copper and silicon onto a flat surface, blobs that just happen to fall and harden into structures that allow information to be transmitted. A chemical engineer knows that pharmaceuticals do not arise by accident, as random chemicals slosh around in tanks until something medically useful comes up. All engineers know in their bones, without having to make a formal argument for it, that complex integrated systems, and products that require such systems, are designed. Thus, the premise that complex integrated systems don’t arise by chance is a premise in line with an engineer’s instinct, even when it’s held by a biologist, a mathematician, a lawyer, a chemist, etc., and even when it’s applied to living things rather than factories and machines.

    This does not mean that all engineers agree that living systems are designed. However, I would wager a small sum that of those engineers who have taken the time to study the interior of the cell, or the protein-DNA machinery, or the physiology of the cardio-vascular system, a higher percentage finds design credible than is the case among evolutionary biologists. That is, the people who actually know something about the technical constraints involved in constructing complex systems are more likely to be open to the design inference than the pure theoretical biologists, who are imaginative speculators in the world of “might have”, “maybe”, and “hypothetically” and have no training in system design.

    Another group which appears much more favourable to the design inference than the evolutionary biologists is the medical profession. A large number of practicing doctors, dentists, chiropractors, surgeons, professors of physiology, professors of medicine, professors of pharmacology, and veterinarians are Darwin skeptics, and many of them incline to ID. Again, unlike Chicago or Yale biology professors who are blackboard jockeys, playing with equations of population genetics, those in the medical profession deal with the complex workings of the human body (or in the case of veterinarians, the animal body) every day. They see the integrated complexity, the dependence of everything on everything else, and are unimpressed by the notion that such systems could come into existence one step at a time. The accidental integration of scattershot unguided changes into eyes and lungs is incredible to them.

    I’m not saying that it’s a majority of medical doctors yet who are anti-Darwin or ID supporters. After all, en route to becoming a doctor, most students are brainwashed with years of biology curriculum (from ninth grade up, courtesy of your friendly neighbourhood ACLU and your Pennsylvania judges and your lobby groups) which tells them that Darwinism is unquestionable fact, and it takes some time for the experience of actual medical situations to slowly erode confidence in an empirically implausible theory. At first the doctors will of course try to interpret the discrepant data in terms of Darwinian notions.

    Similarly, engineers at most university campuses are predominantly male, and are taught a “macho culture” where it is cool to be insensitive to art, literature, philosophy, and religion, especially religion, which often in our culture connotes a sort of “feminine” dependence, anathema to the swaggering image of the wholly self-dependent engineer who can build his own power plant, computer, hang glider, hybrid car, etc. So many engineers are not going to easily admit, as they chug down those beers, that there may be some evidence in nature for the existence of God. If such thoughts cross their minds, they will suppress them in order to maintain their masculine pose. But not all engineers buy into the puerile “drunken stud” culture of undergraduate engineering programs; some of them are quite thoughtful individuals, and many of those, as time goes on, will have cause to interact with biologists, and will look at what they see and say “Wow! That’s design more superlative than anything humans have produced!”

    The more important the various branches of engineering become in our culture, and the more that engineering interacts with the life sciences, the more the opinion of engineers will swing away from Darwinian belief and toward belief in design. That’s my prediction. I think some sociologist should take a poll of engineers’ beliefs in Darwinian evolution and ID today, and then again 10, 15 and 20 years from now. I think that the shift away from Darwinian theory and toward ID will be decisive.

    T.

  137. 138

    I returned to the thread with my research money burning a hole in my pocket, but alas, no links to the requested research were posted. (What a surprise). In place of the links, we find the standard fare of evasion and dismissal. In order to see the pattern repeat itself yet again, I didn’t need to look any further than the very first sentence offered as a response to my post.

    Above in comment #93, h.pesoj responded to Timaeus’ request for a detailed account of the evolution of an organ, organelle, etc. by Darwinian (purely unguided) mechanisms.

    h.pesoj responded by saying that such “research is ongoing”. I then offered a simple observation:

    “Since the [unguided] mechanism is [already] enforced as orthodoxy, wouldn’t the relevant research already be in the bag?
    If the answers to these questions are forthcoming, then it would follow that the hardened conclusions which have been unconditionally fostered on the public for the past number of generations could not have logically been based upon the evidence which has yet to appear.”

    Without missing a step, h.pesoj leaned headlong into the first line of evasion. He completely ignored the obvious point that the results of ongoing research could not logically have been used as the basis of conclusions already made.

    I mean…to the greater population of English speaking people, the results to come from “ongoing” research are not yet available to form a conclusion. It doesn’t matter if that research has been ongoing for a year or a hundred years. You cannot make a conclusion unless you actually have the evidence to support it.

    I’m not trying to be overqualified here, but as a person with almost 30 years experience as a Research Director in private enterprise, this is just one of those little structural disciplines that one picks up on when one wants to do good research; a premise is not supported by the conclusion until the conclusion is actually concluded.

    One would think that someone like h.pesoj, who demands to voir dire the qualifications of his intellectual opponents prior to answering their questions, might already know this little tidbit of practical knowledge.

    In any case, what was h.pesoj’s response to this structural deficiency (making conclusions prior to having evidence that they can indeed achieve the results claimed for them)?

    He evaded:

    “Enforced by who?” …as if there were no conclusion that a national science association has dedicated itself to enforcing. As if scientists have not been abusing their status in society to promote unsupported certitude to the public for years. As if science associations have proven to be indifferent to legal disputes where unguided processes are examined.

    “Can you be more specific?” …as if he somehow now does not understand that the conclusion of purely unguided mechanisms is what is at issue. As Jacques Monod once famously concluded for the public: “chance alone is at the source of every innovation, of all creation in the biosphere” and that “man knows at last that…he has emerged by chance”.

    “Has it?” …as if the public has not been told for generations on end (without disclaimer of any kind) that life is the result of pure blind unguided processes.

    “Tolerated?” …as if the scientific community has shown any discipline whatsoever in the ability to tolerate a proposition that unguided processes are perhaps insufficient to explain life and bio-diversity.

    - - -

    In other words, these evasions are not made by someone interested in either truth or empirical evidence. These are the comments of a common ideologue, hell bent on preserving the echo chamber.

    The entirety of his remaining comments signal the same.

  138. jerry,

    Many have implied that people are banned here for asking embarrassing questions they ask on ID. This is nonsense of course.

    Is asking for the reason why Voice Coil was banned and his last post never appeared here an embarrassing question?

  139. Upright Biped:

    I’m not trying to be overqualified here, but as a person with almost 30 years experience as a Research Director in private enterprise

    Cool. What type of research do you do?

  140. backwards me:

    And yet you have come to the conclusion that an intelligent designer is required for life.

    That is what all the evidence supports.

    So that is what we infer.

    Now to refute that inference all you have to do is to actually support your position.

    Yet you can’t so you have to come here and whine.

    Go figure…

  141. backwards me:

    The only reason I can see for banning Voice Coil was that he was making people look foolish.

    How did he(?) do that?

    All I saw was a bunch of references to Zachriel and Zachriel is FoS.

    There still isn’t any genetic data which supports the alleged evolution of the mammalian middle-ear from a reptilian jaw.

    There still isn’t any way to test the premise.

    So what do you have?

  142. “The only reason I can see for banning Voice Coil was that he was making people look foolish.”

    You have made my point. The only person that Voice Coil made look foolish was himself. Is Voice Coil the same person as Reciprocating Bill, who kept repeating the same nonsense argument last year? The constant harping on the word “entail” is similar. The essence of the comment, which was delivered with disdain and contempt, was nonsense. The comment was probably not banned because of the content but because of how it was delivered. There is nothing threatening to ID in the actual questions in the comment. It was a stupid comment and has been answered many times before. So is delivering a stupid comment with obvious disdain a cause for banning?

    Most of the comments banned and persons banned are for reasons of treating others with contempt. It is interesting because I have no respect for the anti ID people here. They hardly ever present anything relevant and dodge and deflect. So I get sarcastic often because they have never provided anything that is appropriate to their point. And they often go negative from the start with their own sarcasm.

    I would not have banned Voice Coil but deleted the comment and asked him to ask it differently but that takes a lot of time to police everything. I have a lot of sympathy for the moderators who have to deal with all sorts of crap that comes in.

    But thank you for making my point.

  143. #154 Jerry

    Most of the comments banned and persons banned are for reasons of treating others with contempt. It is interesting because I have no respect for the anti ID people here.

    I think the conclusion is obvious.

  144. Let’s put out an arbitrary order in alphabetical order. We can add or subtract as someone sees fit.

    I’ve been checking back in on this thread periodically. So far, no one has gotten started on ordering this list by complexity/FCSI. Is there someone in particular we are waiting on?

  145. 146

    efren ts at 155,

    I’ve been checking back in on this thread periodically. So far, no one has gotten started on ordering this list by complexity/FCSI. Is there someone in particular we are waiting on?

    I’ve also been monitoring this thread since I’m very interested in learning how to calculate CSI for real biological systems. Thus far, I have been unable to find any worked examples.

    From the list, I think these two:

    bacteria – Lenski latest
    bacteria – Lenski start

    would be particularly interesting to measure. I would also be interested in seeing the calculation for a small virus like AAV.

    The calculation for a snowflake or The Giant’s Causeway would be good future additions to the list.

    Just a worked example of calculating CSI, as described in No Free Lunch, for a real-world biological system or component, taking into account known evolutionary mechanisms, would be fantastic, though.

  146. backwards me:

    Can you tell me a single thing about how you think the mechanisms were “guided”

    Dr. Spetner discussing transposons:

    “The motion of these genetic elements to produce the above mutations has been found to be a complex process and we probably haven’t yet discovered all the complexity. But because no one knows why they occur, many geneticists have assumed they occur only by chance. I find it hard to believe that a process as precise and well controlled as the transposition of genetic elements happens only by chance. Some scientists tend to call a mechanism random before we learn what it really does. If the source of the variation for evolution were point mutations, we could say the variation is random. But if the source of the variation is the complex process of transposition, then there is no justification for saying that evolution is based on random events.”

    It appears transposons carry the sequence(s) for some of the enzymes required for them to jump around: cutting and splicing.

    Think targeted search- an excellent design mechanism demonstrated to get the results expected.

  147. “I’ve been checking back in on this thread periodically. So far, no one has gotten started on ordering this list by complexity/FCSI. Is there someone in particular we are waiting on?”

    We are waiting on you. Why don’t you start by saying what you think might be the dimensions of complexity and which would be most appropriate to analyze.

  148. Jerry:

    We are waiting on you.

    Say what? I am new to ID and here trying to figure it out.

    Since you have stated elsewhere that you have been interested and involved in ID for over 10 years, I figured you were experienced in calculating FCSI for biological things. If you aren’t the person who is capable of calculating FCSI, then perhaps you could let us know who is and we can wait for them to start the discussion.

  149. “If you aren’t the person who is capable of calculating FCSI, then perhaps you could let us know who is and we can wait for them to start the discussion.”

    Oh, I can make a calculation but I am not an expert. I suggest you google Hazen, Abel, and Kalinsky.

    ” I am new to ID and here trying to figure it out.”

    If you are trying to figure it out, go by this rule: those who are anti ID have only one purpose, to try to catch a pro ID person in something they said that is wrong or cannot be backed up, and never to admit that someone from the other side has a point. The other side keeps answering their questions till they get fed up.

    You say you are new but you already know about FCSI. That is a start. FCSI is nothing more than the transcription/translation process.

  150. Jerry:

    Oh, I can make a calculation but I am not an expert.

    So, you won’t?

    I suggest you google Hazen, Abel, and Kalinsky.

    Well, there were a total of 4 links in 10 pages of results. I found one link to an actual article, but it didn’t have any calculation of CSI (or FCSI) for any real biological entity. It might be easier to point me to the calculation you are thinking of.

    If you are trying to figure it out, go by this rule. Those who are anti ID have only one purpose, try to catch an pro ID person in something they said that is wrong or they can not back up.

    I guess then there is no point in me trying to cajole you into calculating FCSI for any of the items on the list in comment 121 above, huh?

  151. Can someone, anyone, tell me the relevance of “calculating the CSI of the listed organisms”?

    The reason I ask is that it isn’t even correct terminology.

    The calculation was to see how many bits of specified information met the UPB.

    Now all you have to do is count the bits of specified information in the object under investigation- if there is any.
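    The “count the bits and compare to the UPB” procedure can be sketched under the simplest possible assumption: every residue independent and equiprobable, so log2(20) bits per amino acid, with 500 bits standing in for Dembski’s ~10^150 bound. This is a sketch of that bookkeeping only, not a verdict on whether the assumption is apt:

    ```python
    from math import log2

    UPB_BITS = 500    # ~10^150, Dembski's universal probability bound

    def specified_bits(length, alphabet=20):
        """Bits required to specify one exact sequence over the alphabet."""
        return length * log2(alphabet)

    for n in (50, 100, 150):
        bits = specified_bits(n)
        print(n, round(bits), bits > UPB_BITS)   # 150 residues already exceed 500 bits
    ```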

  152. As for worked examples of CSI- may I inquire as to the worked examples of blind, undirected processes?

    I mean it’s only been 150 years since Darwin’s publication and all you can show us is disease and oddities.

    Timaeus @137,

    Whether or not nature is engineered is not a question that engineers need to address in order to do their work.

    This is true, and it’s refreshing to hear an IDist say such a thing.

    I’m used to hearing specious claims that engineers do what IDists do or that IDists do what engineers do. I think that’s because there’s a tactical problem for IDists if they acknowledge that engineering disregards ID and that engineering does so for practical reasons (not as a matter of atheism). The problem is that it raises questions as to why scientists doing science should be any different. If methodological materialism is good enough for engineering then why isn’t it good enough for science? (Scientists don’t expect to discover Ultimate Truth any more than engineers expect to build the Ultimate Airplane or the Ultimate Computer.) This would undermine IDists’ claim that their work has been rejected due to ideology rather than due to a lack of useful results.

    Engineers build and maintain systems without investing any time investigating whether or not their system has been, or will be, affected by non-material intelligences. But then IDist engineers get on their blogs at night and berate scientists for behaving the same way when doing science.

    There are indeed engineers working with biologists, today, in the field of Systems Biology. They describe their work as model-building. This is what engineers and scientists do — build models — and ID offers no help. Are these engineers personally inspired to believe that some unspecified non-material intelligence must have done some unknown thing at some unknown time for some unknown reason to bring about biological systems? I would guess that some are and some aren’t. But I’m very confident that you won’t see an intelligent agent in their models of nature, regardless of their religious beliefs.

    Thus, the premise that complex integrated systems don’t arise by chance is a premise in line with an engineer’s instinct, …

    You don’t appear to understand what it means when some aspect of nature is modeled as a random variable. It doesn’t mean that the modelers claim that that aspect happens accidentally, or that there is some Cosmic Random Event Generator (i.e., that it “arises by chance”). It isn’t even a denial that there could possibly be some Cosmic Event Chooser operating beneath the model’s level of detail. It means that the modeler can do no better than to model that aspect as a random variable, either due to a lack of information, or as a simplifying assumption. I think that the general population of engineers understands this. [But I don't claim, as you do, to be able to see what's in their bones or to be able to hear the whispers of their instincts.] Furthermore, I expect that engineers, like scientists, would seek, as much as possible, to model nature in terms of regularities (which you don’t mention.) Leave it to an IDist to describe the world in terms of chance versus design.

    Comparing ID with an obviously practical field like engineering illuminates ID as an exercise in apologetics. But perhaps that’s not an issue for you. Your primary hope for the future is not that we will have some better model of the history of life, but that there will be more widespread belief in design.

  154. “I guess then there is no point in me trying to cajole you into calculating FCSI for any of the items on the list in comment 121 above, huh?”

    That would be absurd. There are several thousand genes in eukaryotic multi-celled organisms, and in humans about 20,000-25,000. The improbability of the average one exhausts the physical resources of the universe through random self assembly. So the anti ID people have to either show that the particular gene, while improbable, is just the lucky winner of a lottery of assembling amino acids or nucleotides by accident, and for that to have meaning there must be untold functional proteins. However, some have estimated that functional proteins are about 1 in 10^80, or in other words very rare.

    Then there is the attempt to say that there is a strong affinity between certain nucleotide sequences in an RNA polymer and a specific amino acid. That is, once tRNA assembles it will have a natural affinity to a specific amino acid. This still does not explain what then assembles the appropriate amino acid, and it assumes the RNA world was somehow functionally operative and this chemical affinity then drove amino acids to have a similar effect. Unbelievably speculative, but that is what they are left with at the moment to claim ID is mumbo jumbo.

    Obviously, once a gene arises, it is not the same issue, but how did these 20,000+ genes arise in the first place when most of them defy the probability resources of the universe, let alone the planet? So to get an estimate of the complexity, take the average gene and calculate the population of genes from which it could have come, and then factor in such things as there often being more than one codon for each amino acid, and also that some amino acids are fungible for one or two others in certain situations, so that the same function could arise via different genes. The number would still be incredibly improbable given all these considerations. And then it has to be repeated again and again for the other genes.

    Now some of the other genes could have arisen from mutations of an already existing gene and the subsequent protein may be functional in some sense. However, some believe that finding a unique gene this way would again be like starting all over again and be beyond the probabilistic resources of the universe. To see what is meant by estimating these probabilities, here is an article that just appeared in the last month.

    http://www.tbiomed.com/content/6/1/27

    And here is someone else’s set of links on the calculating the improbability of a gene – functional protein relationship (V. J. Turley from this site).

    (1) “The Capabilities of Chaos and Complexity” by Dr. David Abel, in International Journal of Molecular Sciences, 2009, 10, pp. 247-291, at http://mdpi.com/1422-0067/10/1/247/pdf

    (2) “Measuring the functional sequence complexity of proteins” by Kirk Durston, David Chiu, David Abel and Jack Trevors, in Theoretical Biology and Medical Modelling, 2007, 4:47, at http://www.tbiomed.com/content.....2-4-47.pdf

    (3) “Intelligent Design: Required by Biological Life?” by Dr. K. D. Kalinsky, at http://www.newscholars.com/pap.....rticle.pdf

    And don’t forget Hazen who is anti ID and also working with these concepts.

    Sorry for the delay but had to go to a concert and then watch the Jet’s game. How about those Jets.
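    Purely to make the arithmetic in the comment above explicit, here is a back-of-envelope sketch. The per-site tolerance factor is an illustrative assumption standing in for the codon-degeneracy and “fungible amino acid” adjustments mentioned; whether a single-target probability is the right model at all is exactly what is disputed later in the thread:

    ```python
    from math import log10

    residues = 300            # a modest protein
    alphabet = 20             # amino acids
    tolerated_per_site = 3    # assumed average tolerance per site (illustrative)

    naive = residues * log10(alphabet)                          # one exact sequence
    adjusted = residues * log10(alphabet / tolerated_per_site)  # with tolerance

    print(f"1 in 10^{naive:.0f} (exact sequence)")
    print(f"1 in 10^{adjusted:.0f} (with assumed per-site tolerance)")
    ```

    Even the adjusted exponent dwarfs any plausible number of trials, which is the point being made; the counter-argument elsewhere in the thread is that no one models the process as a single draw.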

  155. Jerry:

    That would be absurd.

    Well, I guess that would explain why everyone talks about CSI and FCSI, but I can’t find any examples of anyone actually calculating it. Unfortunate, but maybe I don’t need anyone to actually calculate anything to dig into the details here.

    So the anti ID people have to either show that the particular gene, while improbable, is just the lucky winner of a lottery

    Having lurked for a while before registering here, I have seen a theme of anti-ID people saying ID is only a negative argument against evolution and the ID people saying it is a positive argument. But I can only interpret the rest of your comment as “evolution can’t do this.”

    So, since you are unwilling to calculate CSI/FCSI (and assuming no one else is going to step up to the task), let me see if there is another way. How would an ID proponent determine the complexity of the things listed in comment 121 if not by CSI/FCSI?

  156. 157

    Jerry at 155,

    “I guess then there is no point in me trying to cajole you into calculating FCSI for any of the items on the list in comment 121 above, huh?”

    That would be absurd. There are in eukaryote multi-celled organisms several thousand genes and in humans about 20,000-25,000. The improbability of the average one exhausts the physical resources of the universes through random self assembly.

    Once again, no biologist suggests that these genomes came together de novo or randomly.

    More importantly, CSI is supposed to be “a reliable, empirically observable sign of intelligence” and there supposedly exists “a probability and information theory based explicitly formal model for quantifying CSI” (both quotes from the glossary). CSI is supposed to be the positive evidence supporting ID, but despite repeated requests and searching through the available literature, I have not yet found an example of how to calculate CSI, as described in No Free Lunch, for a real biological system, taking into account known physics, chemistry, and evolutionary mechanisms.

    efren ts is correct in 156 that your argument in 155 is primarily an attempt to refute modern evolutionary theory rather than to positively support ID. Modern evolutionary theory is certainly an interesting topic, but this site is supposed to be where people can learn about the positive evidence for design.

    Could you provide a CSI calculation for just two of the smaller organisms in the list, namely Lenski’s E. coli? I believe the genome changes are well-documented.

  157. Mustela:

    Could you provide a CSI calculation for just two of the smaller organisms in the list, namely Lenski’s e. coli? I believe the genome changes are well-documented.

    This is an intriguing idea, but I still would like to understand how best to rank order the list according to complexity.

  158. “Well, I guess that would explain why everyone talks about CSI and FCSI, but I can’t find any examples of anyone actually calculating it”

    Well, I did tell you how to calculate it, and all that is needed is one good-sized gene-protein coding region to get a number that is so improbable that naturalistic evolution becomes unreasonable. Didn’t you see that? Calculating it for a whole genome is what would be absurd, and not necessary when one gene would be enough. All would be estimates, but it is possible to put lower bounds on it. I also gave you some references of scientists who are using essentially the same idea to calculate probabilities for functional sequences.

    “Having lurked for a while before registering here, I have seen a theme of anti-ID people saying ID is only a negative argument against evolution and the ID people saying it is a positive argument. But, I am can only interpret the rest of your comment as “evolution can’t do this.””

    As to what ID is about, I have written four long comments about that and offered it up again on this thread. Go to comment # 110. After reading this ask any questions you want. ID is both a positive and a negative approach but because of the nature of intelligent intervention, it is mostly an examination of the laws of nature and where they did not or could not have produced certain effects. But that is not all it is. If you want the time and place and method of the designer in action then you will have to go here.

    http://www.uncommondescent.com.....ent-342686

    If you had been here for a long time, you would know that some of us want to abandon the term CSI for evolution and use FCSI or FSCI instead, which is a subset of CSI and easily measured. The reason is that CSI is too general a concept to be operationalized for every instance of intelligence. For example, how does one operationalize CSI for Mount Rushmore? But it is possible to get an estimate for this paragraph or for a DNA sequence. Both are FCSI.

    Some DNA sequences would have very low complexity since they are just repeating elements while other sequences would have extremely high complexity since each nucleotide seems to be independent of the previous and next one in the sequence. Now some of these highly complex sequences specify a function, namely they specify an amino acid sequence that has function using an intermediary system of about a thousand parts to do so. So the sequence is complex, is information, specifies a completely independent entity that has function.

    Because FCSI or FSCI is so simple and clear and easy to calculate, many of us recommend it as a substitute for the much more general concept of CSI when discussing evolution or origin of life. The designation of FCSI is nothing more than pointing to the transcription/translation system in biology so if you do not like our name for it, then just use that concept. And as I said, Hazen, Abel and others are working on this area and giving probabilities to various sequences.

    If you want to do a whole genome, then have at it. A bacterial genome would probably take a couple of months to set up, but once set up a computer could make an estimate in a short time. But just one or two elements of the genome would exhaust the resources of the universe since the beginning of time, so estimating the entire genome would be sort of pointless. Just try ATP synthase for a start.

    And as far as

    “Once again, no biologist suggests that these genomes came together de novo or randomly.”

    Well then, just how did these systems come together? As I said, just one coding region is so improbable that it exhausts the resources of the universe since the beginning of time. Saying it happened stepwise does not solve the problem. The probabilities are essentially the same.

  160. 161

    Hi Jerry,

    You posted a peer-reviewed paper by Durston, Chiu, Abel and Trevors entitled “Measuring the functional sequence complexity of proteins”. If I remember right, that paper is where they measure the functional sequence complexity of proteins. (Although I don’t think our friends here intend to read it).

    If I remember correctly, Durston, Chiu, Abel and Trevors measure some 30 or 40 proteins as examples. Also if I remember right, the E. coli proteome is something like a couple thousand proteins and the pan-proteome is on the order of near 20,000 proteins. Do you think you could have that done by noon so this conversation can move along?

    I’m ready for these guys to get back to telling us how all those proteins formed by unguided processes and then coordinated themselves into a functioning whole. Since their idea is the ruling paradigm (and must be accepted as fact) I am certain they are just waiting for a chance to show their work (where the results confirm their conclusion).

    Besides that, the conversation grows stale at their question. It’s a little like doubting the veracity of a light year because we haven’t measured the distance to all the stars.
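    For readers wondering what such a measurement looks like: Durston et al. estimate functional sequence complexity site by site, as log2(20) minus the observed entropy at each position of an alignment of working sequences. A toy version (the four five-residue sequences below are invented for illustration; the paper uses large alignments of real protein families):

    ```python
    from collections import Counter
    from math import log2

    # Invented toy alignment; real FSC measurements use hundreds of sequences.
    alignment = [
        "MKVLA",
        "MKVIA",
        "MRVLA",
        "MKVLG",
    ]

    def fsc_bits(seqs):
        """Sum over sites of log2(20) - H(site), per Durston et al.'s scheme."""
        total = 0.0
        for site in zip(*seqs):
            n = len(site)
            h = -sum((c / n) * log2(c / n) for c in Counter(site).values())
            total += log2(20) - h
        return total

    print(round(fsc_bits(alignment), 2))
    ```

    Variable sites contribute fewer bits than perfectly conserved ones, which is what distinguishes this measure from simply multiplying 20 by itself for every position.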

  161. jerry

    Because FCSI or FSCI is so simple and clear and easy to calculate, many of us recommend it as a substitute for the much more general concept of CSI when discussing evolution or origin of life.

    Many of us may recommend FSCI/FCSI, but what do qualified ID researchers like Dr. Dembski think of it? On another occasion he described your contributions in the following way:

    Your approach, by contrast, from what I can make of it, strikes me as hamfisted.

  162. 163

    jerry at 160,

    “Once again, no biologist suggests that these genomes came together de novo or randomly.”

    Well then just how did these systems come together?

    Incrementally, building on previous versions that worked well enough to reproduce in their environments.

    Remember that modern evolutionary theory deals with how allele frequencies in populations change over time. It presumes the existence of imperfect replicators and differential reproductive success. It does not postulate that complex molecules like genomes arise de novo.

    Saying it happened stepwise does not solve the problem. The probabilities are essentially the same.

    Mathematically speaking, that’s incorrect.

    Warning! Warning! Analogy Alert! Warning! Warning!

    The contents of this analogy are to be taken as demonstrative only, not to suggest that dice games are identical to biology in every respect.

    Have you ever played Yahtzee? The goal of the game is to get a set of dice with particular patterns (four of a kind, five of a kind, full house, run, etc.). The odds of getting one of these patterns by chance on a single roll of five dice are low. In Yahtzee, though, you get three rolls and, most importantly, you can choose to re-roll only a subset of the dice. This greatly increases the odds of getting one of the patterns.

    Evolution operates on existing, working components and proceeds in small, incremental steps. The odds of getting something that works by making a small change to something that already works are much better than getting something that works by randomly combining a large number of components.

    Therefore, any calculation of CSI for real biological constructs must take into consideration the incremental processes identified by modern evolutionary theory. Calculations based on de novo creation of large molecules are simply not applicable to the real world.
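    For what it’s worth, the gap in odds described in the analogy can be checked with a quick Monte Carlo sketch (the five-of-a-kind target, three rolls, and trial count here are arbitrary illustration choices mirroring the Yahtzee setup; like the analogy itself, this demonstrates only the search difference, not biology):

```python
import random

TRIALS = 100_000

def one_shot():
    # roll five dice once; success only if all five show a six
    return all(random.randint(1, 6) == 6 for _ in range(5))

def keep_and_reroll(rolls=3):
    # Yahtzee-style: keep any sixes, re-roll only the rest, up to three rolls
    kept = 0
    for _ in range(rolls):
        kept += sum(1 for _ in range(5 - kept) if random.randint(1, 6) == 6)
        if kept == 5:
            return True
    return False

p_single = sum(one_shot() for _ in range(TRIALS)) / TRIALS
p_keep = sum(keep_and_reroll() for _ in range(TRIALS)) / TRIALS
print(p_single, p_keep)
```

    A single throw of five dice yields five sixes about once in 7,776 attempts; three throws with matching dice held raise that to roughly 1.3%, about a hundredfold improvement.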

  163. jerry,

    If you want the time and place and method of the designer in action then you will have to go here.

    http://www.uncommondescent.com…..ent-342686

    Out of curiosity I took a look at it, but it made no sense to me. One thing all arguments for ID forget is the difference between human design and hypothetical designers designing from scratch, without any previous knowledge of what they want to design.

    Just look at us mere mortals. Can anybody point to an example of complex, human design where no previous knowledge was required? Before biology existed, how could any designer know where to begin, where would he have got the idea that biology was a possibility?

    Would we ever have invented automobiles unless we had seen carriages? A wheelbarrow before we had seen a wheel? A knife before we had seen (and used) a flake of flint?

    How could the complexity of life enter anyone’s mind? Until a satisfactory explanation is presented (a theory about the stages of practical experience and intellectual effort required), I can only assume it all is magic beyond comprehension.

    Besides, if my understanding of the implications of the theory of FCSI is right, an impossible amount of human-style work would be required for the design of even the most primitive creature.

    Just-so stories about design and designers fail to impress me.

  164. 165

    Cabal,

    How could the complexity of life enter anyone’s mind? Until a satisfactory explanation is presented (a theory about the stages of practical experience and intellectual effort required), I can only assume it all is magic beyond comprehension.

    Besides, if my understanding of the implications of the theory of FCSI is right, an impossible amount of human-style work would be required for the design of even the most primitive creature.

    Just-so stories about design and designers fail to impress me.

    Of course we know that the carriage, car, and knife were invented by intelligent people. The other option is that the flint turned into you and me by itself. Just-so stories about flint turning into the Flintstones fail to impress me, much less flint turning into you and me.

  165. 166

    Warning! Warning! Analogy Alert! Warning! Warning!

    The contents of this analogy are to be taken as demonstrative only, not to suggest that dice games are identical to biology in every respect.

    translation: “Just because I am going to completely ignore the issue and present a designed search as an example of random processes, doesn’t mean random processes can’t do the same thing”

  166. 167

    Cabal, where did we get the idea for a wheel, or an alphabet?

  167. 168

    LoL

    Sorry Cabal, I am still amused.

    You make a case that it would require an incredible intelligence to create life, so vast and incredible in fact, that none was necessary!

    That’s just rich.

  168. I’d like to add to what Mustela is saying.

    When Jerry at 160 wrote, “Saying it happened stepwise does not solve the problem. The probabilities are essentially the same,”

    Mustela replied,

    “Evolution operates on existing, working components and proceeds in small, incremental steps. The odds of getting something that works by making a small change to something that already works are much better than getting something that works by randomly combining a large number of components.”

    Mustela is right, and Jerry is wrong, with regard to how the real world works. If there are changes along the way that lead via steps from a starting state to an ending state, then the probabilities of something happening stepwise are most definitely not the same as having the result happen all at once.

    Over on the ID and Common Descent thread, I gave the following example, which is like the Yahtzee example, but a little more straightforward:

    [start repost]

    Here’s an example to provide some more detail to explain what I mean.

    Throw 10 dice. What is the probability that they will be all sixes?

    Easy problem: (1/6) ^ 10 = 1 out of 60,000,000, approximately

    Harder problem: The ten dice are in a box which periodically jiggles hard enough to toss all the dice. However, the side with a 1 on it is sticky, so when a die comes up six (its 1 face is down), it sticks in place.

    Now the box jiggles five times. What is the probability that after five jiggles you have all sixes?

    This is more complicated. First, for any one throw you need to calculate the probability of getting no sixes, one six, two sixes, etc., so you have to use the binomial distribution. Then, for each subsequent throw you have a different number of dice being jiggled (ten if no sixes, nine if one six, etc.), so you have both continued use of the binomial distribution and a complex probability tree that branches eleven ways (zero through ten sixes) on the first throw and a varying number of ways on each subsequent throw.

    This second situation is more like the real world: it has a sequence of events (it models the passage of time in a very simple way) and it has laws (the one side sticks) that add an element of direction and selection.

    At a vastly more complicated level, this is what ID advocates need to be trying to do if they want to meaningfully provide a probability calculation that might imply design. Such calculations need to take into account a sequence of steps over time, and they need to take into account that various laws of physics and chemistry create changes along that series of steps which then change the types of changes that can further happen. Only by trying to do this will ID advocates be working towards an accurate mathematical model of the world.

    This is why Mustela and others are asking for a method – one that can be reliably used by any interested party, to measure the CSI of a biological entity. Until such a method is developed, shared, and tested by multiple sources, the idea of CSI will be unusable.

    [end repost]
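    As a side note, the jiggling-box problem can be checked directly, and the branching probability tree has a closed-form shortcut worth noting (the dice are independent, so each die just needs at least one six somewhere in its five tosses); a quick sketch, using the example’s ten dice and five jiggles:

```python
import random

TRIALS = 200_000

def jiggle_box(n_dice=10, jiggles=5):
    # sticky-six box: a die showing six is frozen; the rest get re-tossed
    stuck = 0
    for _ in range(jiggles):
        stuck += sum(1 for _ in range(n_dice - stuck) if random.randint(1, 6) == 6)
    return stuck == n_dice

p_box = sum(jiggle_box() for _ in range(TRIALS)) / TRIALS

# Because the dice are independent, the probability tree collapses:
# each die needs at least one six across its five tosses.
p_exact = (1 - (5 / 6) ** 5) ** 10   # ~0.0059
p_one_throw = (1 / 6) ** 10          # ~1.65e-8

print(p_box, p_exact, p_one_throw)
```

    The sticky box succeeds about once in 170 runs, versus about once in 60 million for a single throw of all ten dice: several orders of magnitude apart, which is the whole point of the example.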

    Note also that, in reply to Mustela’s comment that “no biologist suggests that these genomes came together de novo or randomly”, Jerry asked, “Well then just how did these systems come together?”

    That is a legitimate question, and the big question of interest, but it has no bearing on the issue at hand, which is that calculations based on a pure chance hypothesis are irrelevant and certainly are not an argument for ID.

    That is, ruling out chance (which no one believes happens anyway) can leave us in a state of not knowing how something happens, but it is not evidence that therefore ID happened.

  169. Edit – last sentence was supposed to say “ruling out pure chance” and not just “ruling out chance”. My apologies.

  170. 171

    Upright Biped at 166,

    translation: “Just because I am going to completely ignore the issue and present a designed search as an example of random processes, doesn’t mean random processes can’t do the same thing”

    You really should read the entire post before attempting to flame.

    Jerry said, “Saying it happened stepwise does not solve the problem. The probabilities are essentially the same.”

    I noted that his claim was incorrect and provided an analogy explaining why.

    If you’d like to defend Jerry’s claim, I’d be interested to hear your argument.

  171. “Warning! Warning! Analogy Alert! Warning! Warning!”

    I am sorry but your analogy does not work because it is not parallel. It assumes two things: that the subparts are all functional, and that each is added to a functional part, so every part on the way up is functional. This is a new form of irreducible complexity, in the sense that a complicated part, a protein or protein-RNA polymer combination, has countless functional subparts. That could be true, but what are the odds of finding the right combination at each step along the way? And for proteins, how big is each new addition and how big was the first step? It is the same probability unless you suppose, as in Yahtzee, that there are a myriad of possible combinations that are appropriate, or you know the end result and you have multiple chances at each step. Are there alternative worlds in biology? We have no evidence for that, so to assume it is so is just another form of speculation. What kind of resources would it take to build something like that if multiple worlds were feasible?

    No one has identified another DNA world, let alone another life/chemical world, so the best we can assume is that there might be a couple. Thus, to get to the one we have, you can roll the dice once or roll them umpteen bazillion times and the probabilities would be essentially the same; or, lo and behold, you might reach a countless bazillion dead ends instead of the one or few viable end states.

    Take a number, 49728506937167167054, which I got by hitting keys in a frantic fashion. How long would it take to get to this number, rolling one digit at a time? If you were told each time a digit was correct, only a couple of hundred rolls on average, but that is exactly the foreknowledge Dawkins builds into Weasel: rolling several digits each time and keeping only the appropriate ones because the target is known ahead of time. But we do not know what the number is, so even with multiple rolls we do not know if we are on the correct path to the right number. So the probability of getting this number, all at once or through multiple rolls without that feedback, is still 1 in 10^20. And we do not know, if we choose a certain digit, that it will not lead to a dead end, because unlike Weasel there may be no viable path for a correct number once you are down a certain path. If at 4972 you choose 3 instead of 8, all the subsequent rolls are useless if 49723 leads to no viable end state.

    No, for your scenario to have any semblance of meaning it must assume a bazillion-to-the-umpteenth-power viable states. We know we have one. Are there any others? I doubt there are many others, if any.

    Good try though. I know it is the standard answer but it does not hold up under examination.

    Since you claim you are a computer programmer, try a sort of real-life example. Generate combinations of letters of the alphabet and use a dictionary to determine when a viable word is formed. See how long it takes to get a 175-word, ten-sentence paragraph, and then see how it reads. I do not know how you could write a program that would recognize a coherent sentence, let alone a coherent paragraph in which each sentence relates to the one before it. But I bet those monkeys would still be typing when the next millennium came around. We know there is more than one viable paragraph. We do not know if there is more than one viable life system.
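    The two feedback regimes in the digit example can be sketched numerically (a rough illustration only, assuming digits drawn uniformly with replacement, so each digit takes about ten draws on average):

```python
import random

TARGET = "49728506937167167054"  # the 20-digit string from the comment above

def rolls_with_digit_feedback(target):
    # regime 1: you are told the moment each individual digit is right,
    # so correct digits are locked in as they are found
    rolls = 0
    for d in target:
        while True:
            rolls += 1
            if str(random.randint(0, 9)) == d:
                break
    return rolls

# regime 2: feedback only when the entire string matches; the expected
# number of attempts is on the order of 10**20, far too many to simulate
expected_blind = 10 ** len(TARGET)

avg = sum(rolls_with_digit_feedback(TARGET) for _ in range(2000)) / 2000
print(avg, expected_blind)
```

    With per-digit feedback the whole string takes on the order of a couple of hundred draws; without it, the expected number of whole-string attempts is on the order of 10^20. Which regime, if either, is the right model for biology is exactly what is being disputed here.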

  172. Clive,

    The fallacy of Cabal’s inane objection is that he is claiming that what could happen by random events could not happen with intelligent intervention, which, by the way, could observe some random processes and maybe get some ideas. Sort of shows how desperate they have to get sometimes.

  173. 174

    Mustela,

    I read your entire post. You presented an intelligently designed search as an example of a random search. A player achieves function (a full house) by intelligently picking up the dice that don’t match the intended and necessary pattern.

    If pointing this out was inappropriate, I do apologize.

  174. 175

    jerry at 172,

    I am sorry but your analogy does not work because it is not parallel.

    It works to show that your claim that the two probabilities are the same is incorrect.

    It assumes two things: that the subparts are all functional, and that each is added to a functional part, so every part on the way up is functional.

    That is core to modern evolutionary theory. Only those organisms that survive to reproduce are represented in subsequent generations. Every generation consists of viable individuals, by definition.

    This is a new form of irreducible complexity, in the sense that a complicated part, a protein or protein-RNA polymer combination, has countless functional subparts. That could be true, but what are the odds of finding the right combination at each step along the way?

    Only those viable combinations will be represented in subsequent populations. Bad combinations result in the death of the host. Nature is quite profligate in that way.

    And for proteins, how big is each new addition and how big was the first step? It is the same probability unless you suppose, as in Yahtzee, that there are a myriad of possible combinations that are appropriate

    The great diversity of life we observe suggests strongly that this is the case.

    However, you’ve gotten to an important point. If you want to posit CSI as a characteristic that is unique to designed objects, you must calculate it in such a way that non-designed mechanisms are taken into account. We have observed significant amounts of mutation, and even speciation, both in the lab and in the wild. Any calculation of CSI that ignores those mechanisms does not reflect reality and hence can’t be used to identify design in the real world.

    If you want to retain the naive calculation of two to the length of the genome for CSI, you need to provide solid evidence that those mechanisms we observe cannot affect the probability of the artifact under consideration forming.

    That is going to be difficult because we have observed mutations that change the length of genomes. Using the naive calculation, this means that natural processes can create CSI.

  175. 176

    Upright Biped at 174,

    You presented an intelligently designed search as example of a random search.

    No, I contrasted the efficacy of an incremental search versus a random search. That was the only point of the analogy and the reason why I clearly labeled it as limited.

    “It assumes two things: that the subparts are all functional, and that each is added to a functional part, so every part on the way up is functional.

    That is core to modern evolutionary theory. Only ”

    Hello, there. We are not talking about evolution. We are talking about the origin of a protein. You wanted some FCSI and the place to start is with the parts.

    To keep you busy for the rest of your life, consider the ribosome or ATP synthase. Just stick to these two and report back when you have something. I will leave a note for my great-great-great-grandchildren to be on the lookout for it.

  177. My example purposely removed the decision making that goes on in Yahtzee and replaced it with analogies to natural processes, FWIW.

  178. Just in case anyone is interested, there is no evidence for evolution by accumulation of small changes. That is why this site remains a thorn in the side of the know-nothings. We keep repeating the obvious.

    So to resort to such a cumulative process is just an assertion that has no basis in the data. A much more likely cumulative process is an improvement in the design every now and then, or an adjustment in the design to meet ecological requirements. Sometimes this adjustment can be done naturally, but it appears that sometimes it cannot. Explaining the latter, and how new functional complexity arose, is what the whole debate is about. Gradual cumulative processes strike out every time.

    So to assert it is just a waste of time. No one can demonstrate it so please let’s not hear it again unless you have evidence to back it up.

  179. 180

    Mustela,

    Your analogy compared an incremental search powered by design to a random search powered by chance.

    Why not posit an incremental search powered by chance, since that is the basis of your position?

  180. Aleta,

    If I understand you correctly, and I haven’t any more time today to devote to this, you are saying that there may be properties of the chemistry involved, inherent in the four laws of physics, that would drive the building not just of random combinations but of specific, life-viable combinations.

    If so, a whole lot of people would be overjoyed, but the atheists and Darwinists would say “I told you so” while denying there was any design in the four laws that led to that. The theistic evolutionists would be ecstatic, and I don’t blame them.

    There is one problem, and that is that all this would leave a trail, and we see no trail. If there were a trail, all would be on board and it would be easy to shout down the Darwinists as we pointed to the incredible design that not only is friendly to life but actually dictated that it must happen. Maybe there will be such a day.

  181. Mr Jerry,

    Just in case anyone is interested there is no evidence for evolution by accumulation of small changes.

    A request for clarification: do you mean only macro-evolution here?

  182. Jerry at 159:

    Well, I did tell you how to calculate it, and all that is needed is one good-sized gene-protein coding region to get a number so improbable that naturalistic evolution becomes unreasonable. Didn’t you see that? Calculating it for a whole genome is what would be absurd, and not necessary, when one gene would be enough. All would be estimates, but it is possible to put lower bounds on it. I also gave you some references to scientists who are using essentially the same idea to calculate probabilities for functional sequences.

    Well, yes, but if I take what you say at face value, then you are basically saying that the changes in Lenski’s bacteria could not have happened. Obviously they did, so it would seem that your explanation needs to be revisited, or at least spelled out in more detail. Which brings me back to Mustela’s comment at 157. Can you calculate the FCSI changes between the before and after E. coli in Lenski’s experiment?
    While I appreciate you sharing the very interesting links with me, I think it is a little odd that you would suggest to me, a newcomer, that I should do the calculation when you have been involved in intelligent design for 10 years and have stated it is an easy calculation. I guess I am coming back around to asking you, the ID scientist, to show me how it is done. Could you, please?

    Unless of course, this is some secret initiation ritual for newcomers. Like sending me on a snipe hunt or asking me to go to the tool room and get a left-handed screwdriver. Surely, that isn’t the case?

  183. 184

    jerry at 177,

    “It assumes two things: that the subparts are all functional, and that each is added to a functional part, so every part on the way up is functional.

    That is core to modern evolutionary theory.”

    Hello, there. We are not talking about evolution. We are talking about the origin of a protein.

    A protein is generated by transcription and translation of a genome. Any calculation of CSI is going to have to take into consideration known physics, chemistry, and, yes, evolutionary mechanisms (including incremental change over generations of populations) in order to be applicable to real-world biological artifacts.

    You wanted some FCSI and the place to start is with the parts.

    What I am ideally looking for is a calculation of CSI, as described in No Free Lunch, for a real biological artifact, taking into account known physics, chemistry, and evolutionary mechanisms. Given how often it is asserted here that CSI is a clear indication of design, I am amazed that no one can simply point me to a dozen worked examples.

    I’m not sure what “FCSI” is, and the glossary is no help. If it is simply the naive calculation of two to the power of the length of the genome, it’s not applicable to the real world, for the reasons described above.

  184. 185

    jerry at 179,

    Just in case anyone is interested, there is no evidence for evolution by accumulation of small changes.

    Lenski, the discoverers of nylon-eating bacteria, the developers of flu vaccines, and researchers into observed speciation, among many others, would disagree with you.

    And they have evidence.

  185. 186

    Upright Biped at 180,

    Your analogy compared an incremental search powered by design to a random search powered by chance.

    Hence my warning that the analogy should not be taken as an exact match. The point was to address Jerry’s mathematical error, which it did.

    Why not posit an incremental search powered by chance, since that is the basis of your position?

    Thomas Schneider’s ev is a good example of that.

  186. “Well, yes, but if I take what you say at face value, then you are basically saying that the changes in Lenski’s bacteria could not have happened.”

    Not in anything I ever wrote or anyone in ID ever wrote. Lenski’s work is excellent ID research. Behe himself said this is the type of thing that should be done, and my example of mapping genomes would be another.

    “Can you calculate the FCSI changes between the before and after E. coli in Lenski’s experiment?”

    It shouldn’t be too hard if one had all the information (I am sorry, I am not supposed to use that word because it doesn’t exist), and my guess is that there should not be much difference between before and after. If there were, they would have made a big deal of it. That is, if a new protein with remarkable characteristics had developed.

    “While I appreciate you sharing the very interesting links with me, I think it is a little odd that you would suggest to me, a newcomer, that I should do the calculation when you have been involved in intelligent design for 10 years and have stated it is an easy calculation.”

    Depends what you mean by easy. It is just 4^n, where n is the number of nucleotides. This is a crude measure and it has to be modified by the fact that multiple codons translate to the same amino acid, and that for some functions similar amino acids are interchangeable. So to get a ballpark estimate is not hard. But to get a refined estimate will take a lot more effort and requires a detailed knowledge of the chemistry of amino acids.

    The number 4^n gets incredibly large very quickly, so reducing it by a few orders of magnitude for multiple codons and interchangeable amino acids has little effect on the implication that each coding area is extremely, extremely rare. It is not rare merely because it is complex, but because it is complex and specifies another entity which is functional.

    You said you wanted to understand ID, so I sent you to the links. They were not going to answer everything. But now that you have read them, you can eliminate a lot of the BS that comes up. For example, the fact that Mustela Nivalis resorted to microevolution when he knows that has nothing to do with anything. ID has no problems there. So now you are ahead of Mustela Nivalis and can see through his irrelevant comments. Remember what was said about Tier 5.
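    That ballpark arithmetic can be sketched in a few lines (the 300-codon gene length is picked purely for illustration, and collapsing codon degeneracy is only the first of the corrections mentioned above):

```python
import math

# The crude estimate described above: 4^n sequences for a coding region
# of n nucleotides, reported here in orders of magnitude (log10).
def naive_log10_space(n_nucleotides):
    return n_nucleotides * math.log10(4)

# One refinement mentioned above: synonymous codons give the same amino
# acid, so collapse each codon to one of 20 amino-acid outcomes.
def collapsed_log10_space(n_nucleotides):
    return (n_nucleotides // 3) * math.log10(20)

n = 900  # a modest 300-codon gene, chosen purely for illustration
print(round(naive_log10_space(n)))      # ~542
print(round(collapsed_log10_space(n)))  # ~390
```

    Even after collapsing synonymous codons the space is around 10^390 sequences, which is the sense in which reductions of a few orders of magnitude barely dent the scale; whether functional sequences are rare or common within that space is the point actually in dispute in this thread.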

  188. 189

    Mustela 185,

    Once again, you’ve posited an incremental search powered by design as an analog to a random search powered by chance.

  189. Jerry @ 186

    Not in anything I ever wrote or anyone in ID ever wrote.

    Okay, my misunderstanding. Sorry about that. So it would be possible to calculate the CSI/FCSI changes, then? Cool.

    Lenski’s work is excellent ID research.

    I was led to understand that he wasn’t trying to generate the ability to process citrate, so it would probably be more accurate to say that it is excellent “pure dumb luck” research, no? ;)

    Jerry @ 187

    It shouldn’t be too hard if one had all the information (I am sorry, I am not supposed to use that word because it doesn’t exist), and my guess is that there should not be much difference between before and after. If there were, they would have made a big deal of it.

    Umm, the announcement came right around when I started lurking around evolution and ID sites, and it seems that a big deal was made of it. Search tells me that there were at least seven posts here on UD that addressed Lenski directly. There are a few dozen posts referencing him at Evolution News and Views. Apparently, Behe commented on it. Even Conservapedia went after Lenski. I am not sure how much bigger a deal could have been made of it.

    You’ve seemed helpful so far, so I don’t want to draw the conclusion that saying there “should not be too much differences” is just a way to get out of actually doing any CSI calculations. So, come on, dust off your calculator. You have a captive audience.

  190. 191

    Upright Biped at 188,

    Once again, you’ve posited an incremental search powered by design as an analog to a random search powered by chance.

    Where’d I do that?

  191. Freelurker @ 154:

    Of course engineers and scientists do not factor intelligent agents into their “models of natural systems”, if by that you mean models of how natural systems work. Scientists and engineers look for efficient causes of things, and they don’t expect those efficient causes to be the actions of personal beings. I have never heard an ID proponent argue that the planets are pushed around the sun by angels, and I have never heard an ID proponent argue that any natural process that we observe today is managed locally by intelligent agency.

    Let’s take a biological example. The efficient cause of embryonic development isn’t personal, in my view. I don’t suppose that St. Anthony or Zeus or Krishna pushed molecules around in my developing embryo to make me the way that I am. I assume that sound engineering principles govern the way that matter is assembled and organized, to be turned into organisms, with the result that species regularly produce offspring like themselves.

    However, while I see no need to imagine the operation of an active intelligent agent in the embryonic development of any particular individual, I do infer that the embryonic *system* which produced me (and billions of other creatures) was intelligently designed. If someone tells me that the embryonic system arose by a series of evolutionary accidents, I ask them to provide a plausible pathway by which that could have happened. I never get an answer.

    When someone with an engineer’s mind-set looks at the reproductive system of, say, a human being, with all the complex interacting parts whose workings must be precisely timed, with genetics, developmental processes, and the physiology of the mother all playing well-defined roles, that person is going to see design. No sense denying it. The system is clearly goal-directed, with an exquisite adjustment of means to ends. There are, as far as I know, only two ways of explaining this. First, the embryological system arose by sheer chance, but, because it was useful, it became a fixed part of living organisms. Second, the embryological system was put in place by someone or something which operates in a mode analogous to the mode of human intelligence. No one on earth can either prove or disprove either of these alternatives. But from an engineering perspective, intelligent design is the instinctive “default” explanation. It makes the most sense of what we can see plainly with our eyes. And since the greatest evolutionary biologists living cannot even come close to proving that such a system could have come about by chance, I go with the engineering mind-set. Computers and cars and running shoes and symphonies and five-course meals don’t come about by chance, not even chance aided by “natural selection”. Without much stronger evidence than we have, I see no reason to believe that living systems did, either.

    As for your remarks about random variables being treated as random only for the purposes of a model, and your suggestion that evolutionary biologists are not really making any claim that the evolutionary process is genuinely accidental, if you buy that, well, I have a nice bridge in Brooklyn to sell you. In fact, in the specifically Darwinian understanding of evolution (and please note that I have never criticized evolution *except* in its Darwinian form, or in analogous forms which are heavily chance-dependent), the understanding is that the randomness is real, not merely a scientific convenience. Darwin wrote with the explicit intention of getting design out of the picture, as I have ascertained from a careful study of his writings (a study very few modern biologists have undertaken). And when Russell wrote *A Free Man’s Worship*, he was not speaking about only *apparent* chance or accident. Read the essay if you doubt it. Thus also, Gaylord Simpson, Isaac Asimov, Carl Sagan, Dawkins, Dennett, Singer, Weinberg, Provine, etc. It is not only ID people who discuss the alternative of chance vs. design. For anyone who knows anything about the history of ideas, the discussion about chance vs. design is 2500 years old, and is a fundamental and inescapable debate. Either you believe that mutations that are random (random with respect to selective fitness, as Ken Miller puts it) can build up complex new systems and organisms, or you don’t. Darwinism says they can. ID says they can’t, or at least that Darwinians have come nowhere close to proving that they can.

    By the way, design is not incompatible with evolution. Read Michael Denton, who advocates a wholly naturalistic model of evolution, based on pre-set parameters of nature which have been designed with life and man in mind. One can have design without miracles, and one can have design directing evolution. But one can’t have design with Darwinism. One must choose one or the other. I think that would be obvious to anyone who thinks in the mode of an engineer.

    T.

    Lenski, the discoverers of nylon-eating bacteria, the developers of flu vaccines, and researchers into observed speciation, among many others, would disagree with you.

    Lenski- no new complex protein machinery

    Nylonase- loss of specification from an existing protein

    Observed speciation- no new body plans

    IOW you can’t offer anything except that which supports baraminology.

    Go figure…

  193. ev- a targeted search.

    That is covered in “Signature in the Cell”.

  194. Mr Timaeus,

    Ah yes, the engineer’s perspective. The appeal to our sense of wonder. Our reproductive system is so marvelously balanced, it is a wonder we can reproduce at all.

    But what of birds? I’m sure you know your Latin, Mr Timaeus, and that cloaca means sewer. Should we have the same sense of marvel for the engineering design that uses the same tube for reproduction and defecation? What would the basis for that sense of wonder be? Is this good design or bad design? Is the answer self evident? Is the answer so self evident that it cannot be articulated, it just is?

    It seems to me that this argument of the “mode of the engineer” is either making an assumption of an incredibly anthropomorphic designer (the opposite of “His ways are not our ways”) or assuming that good design is universally obvious. But we can see from arguments about giraffe nerves that this is not true.

    I’m left with the conclusion that you have designed the designer in your own image.

  195. Some responses to Jerry:

    at 179, you write, “Just in case anyone is interested there is no evidence for evolution by accumulation of small changes.

    So to resort to such a cumulative process is just an assertion that has no basis in the data. A much more likely cumulative process is an improvement in the design every now and then or an adjustment in the design to meet ecological requirements. Some times this adjustment can be done naturally but it appears that sometimes it cannot. Explaining the latter and how did new functional complexity arose is what the whole debate is about. Gradual cumulative processes strike out every time.”

    I’m not sure what you are asserting here. First you say that there is no evidence for evolution by accumulation of small changes, but later you say that sometimes an “adjustment” in the design “can be done naturally.” Would not this be called evolution? For instance, when bacteria evolve resistance to antibiotics, as is a serious problem in hospitals these days, does this or does it not happen by the accumulation of small changes, naturally and not by design?

    Can you clarify, please?

    Also, at 181 you say some things that are well beyond the scope of the particular conversation we are having, but which interest me and may illuminate some differences in our perspectives.

    You write, “If I understand you correctly … you are saying that there may be properties of the chemistry involved which are inherent in the four laws of physics that would drive the building not just of random combinations but specific viable to life combinations.”

    First of all, I am not sure what “four laws of physics” you are talking about, as I can think of quite a few more laws of physics than four. But leaving that aside, I’m curious about what you mean by “inherent.”

    Let’s take a non-biological example: gases in space sometimes coalesce under the influence of gravity to form an approximately spherical body. If enough of them coalesce, then under the influence of the pressure and heat caused by the increase in gravitational attraction, nuclear fusion occurs. At that point the body, now a star, emits heat and light, produces heavier elements which are released into space much later in the life of the star, etc.

    Would you say that the star and its by-products were “inherent” in the atoms of the original gases?

    Also, would you agree, or not, that the original gases turned into something quite different than just individual gas molecules by a succession of small steps?

    And in 188, when someone asked, “Can you calculate the FCSI changes between the before and after e.coli in Lenski’s experiment”, you answered, “my guess is that there should not be too much differences between before and after. If there was, they would have made a big deal of it. That is, if a new protein with remarkable characteristics developed.”

    Well first of all, Lenski and other biologists don’t even think about calculating FCSI because, for one thing, as we are discussing, there is no methodology for doing so for situations where things change over time, which surely happened in Lenski’s research.

    But what about this: if CSI is merely a number based on the probability that all the base pairs happened by chance to be as they are, then if the length of the genome in the starting state was the same as at the ending state, irrespective of either the particular genetic changes or the resulting change in functioning, then the CSI wouldn’t have changed: the end state would be just as improbable as the starting state.
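    The arithmetic behind this point can be made explicit. Under a pure-chance model in which each of the four bases is equally likely, the improbability of a sequence depends only on its length, so a measure built solely on that probability cannot separate two equal-length genomes, whatever either one does. A minimal sketch (the function name, the example sequences, and the 2-bits-per-base convention are illustrative, not any published CSI formula):

```python
import math

def chance_bits(seq):
    """Bits of improbability under a uniform pure-chance model:
    each of the 4 bases is equally likely, so every base
    contributes log2(4) = 2 bits, regardless of function."""
    return len(seq) * math.log2(4)

before = "ATGGCCATTGTAATGGGCCGCT"  # hypothetical starting sequence
after  = "ATGGCCATTGTAATGGGCCGAT"  # same length, one substitution

# Same length => identical score, whatever the functional difference.
print(chance_bits(before), chance_bits(after))  # 44.0 44.0
```

    On this accounting, any functional gain or loss is invisible as long as the length is unchanged, which is exactly the objection being raised.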

    So when you say, “My guess is that there should not be too much differences between before and after,” do you simply mean that the length of the starting and ending genomes would be almost the same?

    And does this mean that if the bacteria in Lenski’s research had not developed any new functionality at all but had retained the same genome length, they would still have the same CSI?

    If so, in this case CSI does not distinguish between significant change in functioning and no change in function at all, because it is based merely on the number of base pairs. Is that a very useful number? If so, how is it useful?

    And last, back in the first paragraph, you wrote, “Explaining the latter (adjustment by design) and how did new functional complexity arose is what the whole debate is about.”

    That has not been the focus of our discussion. Our discussion has been much more limited: do calculations of some quantity called CSI which merely look at the configuration of something without regard to its history as it arose from previous states adequately model the real world? The question is merely whether the kinds of “blind chance” calculations for CSI that ID advocates offer allow us to infer anything about the world.

  197. “I was led to understand that he wasn’t trying to generate the ability to process citrate, so it would probably be more accurate to say that it is excellent “pure dumb luck” research, no?”

    Yes, but you see the change is no big deal. After that many generations and mutations you would expect some changes. And one small change enabled citrate to pass over the boundary of the cell, and then it could be metabolized. ID does not say that micro evolution does not work, nor that there will be no changes. I personally think it is great design. It is just that the changes that happen have always been quite simple, and even simple changes can sometimes have a major effect. Again, nothing ID denies.

    But what has not happened with Lenski’s cultures is anything really novel developing. If it did then ID would acknowledge it and find it interesting. If you read my links, you will know that there is no science on the planet that ID objects to, only a few of the conclusions which it says are not warranted.

    Behe said, after the publication of his book, The Edge of Evolution, that what researchers should do to follow up on his thesis is research like Lenski’s. That is why I say Lenski’s research is good ID research carried out by an anti ID group of researchers. In that vein I wrote the example of what a secret ID supporter might do in terms of genome research. All as a way of supporting or disputing Behe’s thesis.

    I already told you how to do it. Suppose there is a sequence (I am making this up)

    GGAGCTTAGCAAGCTTGAACTGGACGTAACTAG of 33 nucleotides.

    The number of possible strings of length 33 is 4^33 or 7.4 x 10^19, or over a billion x a billion combinations. It is possible to divide the DNA up into codons, or groups of three, and there are about 22 different possibilities including start and stop possibilities, but we will only consider the coding ones, or 30 combinations. So we could look at this same sequence as 20^11 or 2 x 10^14, or roughly 200 trillion possibilities.

    Now a small gene codes for 100 amino acids, or 300 nucleotides, and the calculations get really large:

    20^100, or 1.3 x 10^130, and for a 200 amino acid protein (still shorter than the average gene), 20^200, or 1.6 x 10^260 total combinations. So whatever process leads to the appearance of a protein of length 200 must find it out of this incredibly large number.

    Now these calculations are very rough and should be reduced by a few magnitudes for the fact that some amino acids can replace each other in certain situations.

    You notice I keep asking Mustela Nivalis about ATP synthase. This enzyme has about 2000 amino acids. I won’t attempt the calculation, but believe me, it exhausts nearly all the probabilistic resources in all the multiverse scenarios they can dream up. So just imagine what the initial cell must have consumed in terms of probabilistic resources to arise naturally.

    As I said, it is rough, but it eats up all the multiverses no matter how you calculate it.
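    For what it is worth, the raw counts in the comment above can be checked directly. A sketch of the same back-of-the-envelope arithmetic (which, as the comment concedes, ignores synonymous substitutions and functional redundancy):

```python
def search_space(alphabet_size, length):
    # Number of distinct strings of the given length over the alphabet.
    return alphabet_size ** length

print(format(search_space(4, 33), ".1e"))    # 33 nucleotides: ~7.4e+19
print(format(search_space(20, 11), ".1e"))   # 11 codons of 20 amino acids: ~2.0e+14
print(format(search_space(20, 100), ".1e"))  # 100-residue protein: ~1.3e+130
print(format(search_space(20, 200), ".1e"))  # 200-residue protein: ~1.6e+260
```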

  198. aleta,

    I suggest you go to the links in #110 and see if any of your questions are answered. When you are finished I will be glad to answer any other questions you have.

    The four forces of nature are gravity, electromagnetic, the weak force and the strong force. All other physical things flow from these.

  199. Just a couple quick comments before leaving for the night. They made a big deal out of Lenski’s trivial change when it was announced. That should tell you something. Why so much hoopla over such minor stuff? The reason is that they have nothing else.

    The second part of my comment was based on the comment by efren ts which I meant to include.

    “You’ve seemed helpful so far, so I don’t want to draw the conclusion that saying there “should not be too much differences” is just a way to get out of actually doing any CSI calculations. So, come on, dust off your calculator. You have a captive audience.”

    Also I said 30 when it should have been 20 coding amino acid combinations.

  200. Also I said 30 when it should have been 20 coding amino acid combinations.

    You surely don’t have to sweat that pathetic level of detail, Jerry.

  201. jerry,

    The four forces of nature are gravity, electromagnetic, the weak force and the strong force. All other physical things flow from these.

    In other words, strict reductionism, is that what you advocate?

    What about organizational phenomena like the phases of matter (liquid, vapor or solid)? For example, water ice has at least eleven distinct crystalline phases. These are emergent properties, where microscopic rules can be true and yet quite irrelevant to macroscopic phenomena.

    How is it possible to be dead certain about what nature can or cannot do as long as we can say with certainty that we still have a lot to learn, maybe even things that we won’t ever be able to learn?

  202. Something biological that wasn’t designed?

    Sickle-cell anemia.

    It is what happens when random effects creep into the design.

    In this case a single nucleotide switch- ie a point mutation.

    But what process/mechanism is responsible for the preservation of this mutation in some populations while it is disappearing from other populations?

  203. Timaeus,

    “It makes the most sense of what we can see plainly with our eyes.”

    How do you rate that as a relevant argument? I can see plainly with my own eyes that the Sun revolves around the Earth.

    My experience, however, is that we need to go beyond our senses if we want to understand nature. Knowledge of nature is not intuitive; it often is counter-intuitive.

    What does that mean for many of the things that people believe without really knowing if they are true?

  204. Quick comments to Jerry, and then I’ll retire.

    1. You had said four laws of physics, not four forces. That’s why I was confused.

    2. You didn’t answer my question directly about some confusion in your position: you said there is no evidence of evolution by small changes, but you’ve also said that there can be. Which is it?

    3. Your example of the codon calculation is still just a calculation based on configuration – on pure chance combination: you have not addressed my questions about things changing from one state to another, such as with the Lenski research. Dismissing it as not a big deal glosses over the fact that it is a counter-example that proves that mere pure chance calculations are inadequate.

    But no one at this site seems willing to address these issues, so I won’t keep repeating myself.

  205. Mr Jerry,

    Now these calculations are very rough and should be reduced by a few magnitudes for the fact that some amino acids can replace each other in certain situations.

    I think they have to be reduced by many orders of magnitude, not a few.

    The part of the CSI calculation that you are glossing over here is how many molecules have the same function in a given environment. As you point out, the entire sequence is not important.

    Further, you assume the whole sequence had to come together at once. But we know that there are motifs to protein structure, such as the alpha helix and the beta sheet, that are used over and over again. If you expand the alphabet from 20 amino acids to 20+alpha+beta, then suddenly the sequences shorten dramatically, with an important reduction in the probabilistic resources required.
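    The effect being described can be illustrated with toy numbers. If a 100-residue protein is specified not residue-by-residue but as a handful of segments drawn from a motif library, the nominal search space collapses by many orders of magnitude. A sketch under made-up assumptions (the 10-segment count and the 22-symbol motif alphabet are illustrative, not real motif statistics):

```python
import math

residues = 100
residue_alphabet = 20     # the 20 amino acids
motif_segments = 10       # assumption: the protein described as 10 reusable segments
motif_alphabet = 22       # assumption: 20 amino acids + alpha-helix + beta-sheet symbols

space_residue = residue_alphabet ** residues    # specify every residue: 20^100
space_motif = motif_alphabet ** motif_segments  # specify segments only: 22^10

print(f"10^{math.log10(space_residue):.0f} vs 10^{math.log10(space_motif):.0f}")
```

    With these toy figures the space drops from roughly 10^130 to roughly 10^13, which is the kind of reduction the comment is pointing at.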

  206. Jerry @ 196:

    I already told you how to do it.

    So, is it some sort of hazing ritual around here that someone who has been an ID scientist for 10 years makes the newcomer do the work?

    Suppose there is a sequence (I am making this up)

    Sigh. Do you or do you not agree that the ability to process citrate is a change in function and, therefore involves a change in CSI/FCSI? Can you or can you not calculate it? Because it seems to me you are trying to avoid doing so.

  207. Let’s take the first question first. Forget the calculation part for a bit:

    Do you or do you not agree that the ability to process citrate is a change in function and, therefore involves a change in CSI/FCSI?

  208. “Do you or do you not agree that the ability to process citrate is a change in function and, therefore involves a change in CSI/FCSI?”

    It is a trivial change. From what I know, and this might have changed, they do not yet know the change that caused the ability to process citrate. It is not something I have followed. Why don’t you investigate it? And before you comment further on this, you should also read The Edge of Evolution.

    There are certainly many small changes in genomes that can cause large morphological changes in an organism or can cause the processing of a nutrient or some other simple function. No one is disputing that.

    That is not the debate. Whatever caused the change in Lenski’s organisms is most likely relatively simple. Compare that to the eye, which takes the coordination of about 2000 proteins, or the transcription/translation process, which requires the presence of about 1000 protein/RNA polymer combinations, one of which is the ribosome, which rivals ATP synthase in complexity.

    As far as the change in the FSCI of the Lenski organism that now processed citrate, it may not involve any change in FSCI. Take a simple English paragraph of 180 characters. Mutate it by a couple of letters at a time. Most mutations will do nothing more than indicate that some proofreading is necessary and some typos have to be corrected. It is, however, possible to change parts of its meaning so that it is still a valid paragraph. Maybe change “not” to “net” and “or” to “on” and the meaning might change a little or significantly. But no random changes will ever turn a paragraph on economics into a love sonnet.

    You can calculate the FSCI of the paragraph the same way I did the DNA sequence, and two 180-letter paragraphs will have the same FSCI. So it is possible there will be no change. All you have done is substitute one of the other possible paragraphs for another, and that is no big deal. But if you took all the possible 180-character combinations, the meaningful paragraphs would represent an incredibly tiny percentage of all possible combinations. I would bet that if you designed a computer program to generate 180-character combinations (using the 26 letters, the space and the period) you would be lucky to find one meaningful paragraph after a thousand years of trying. It would probably take a hundred years or so to get a meaningful 10-word sentence. But intelligence created the works of Shakespeare, the novels of Jane Austen, and the Philosophiæ Naturalis Principia Mathematica in a relatively short time.
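    The waiting-time intuition in that bet can be put in rough numbers. The sketch below assumes a 28-symbol alphabet (26 letters, space, period), a uniform random generator, a 60-character target length for a 10-word sentence, a generous 10^20 “meaningful” strings of that length, and a billion trials per second (all of these figures are illustrative assumptions, not measurements):

```python
import math

ALPHABET = 28        # 26 letters + space + period (per the comment)
LENGTH = 60          # assumed length of a 10-word sentence
ACCEPTABLE = 10**20  # generous guess at the number of meaningful strings
RATE = 10**9         # assumed random strings tried per second

total = ALPHABET ** LENGTH          # size of the whole search space
p_hit = ACCEPTABLE / total          # chance a single trial is meaningful
expected_trials = 1 / p_hit         # mean trials until the first hit
expected_years = expected_trials / RATE / (3600 * 24 * 365)

print(f"search space ~10^{math.log10(total):.0f} strings")
print(f"expected wait ~10^{math.log10(expected_years):.0f} years")
```

    Even with these deliberately generous assumptions the expected wait comes out around 10^50 years, so the qualitative point survives large changes in the guesses.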

  209. “So, is it some sort of hazing ritual around here that someone who has been an ID scientist for 10 years makes the newcomer do the work?”

    Most who come here to comment, come here to criticize. I have little patience for them. No one and I mean no one in the 4 1/2 years that I have been commenting here and who has been anti ID has been honest in their discussions except for one person. People keep asking the same inane questions over and over again and they never present anything of relevance. They always criticize.

    The Lenski stuff is good ID work and he doesn’t know it. The changes are so trivial that it beggars belief that they would tout it as a change in something meaningful. By doing so they are admitting they have absolutely nothing. It is like in a football game when a player from the losing team makes a tackle and then starts beating himself on the chest saying look at what I just did.

    If you are American, you will know what I am talking about and that is a good analogy for the anti ID people here and in the science world. They beat their chests over trivia.

  210. “If you expand the alphabet from 20 amino acids to 20+alpha+beta, then suddenly the sequences shorten dramatically, with important reduction in probabilistic resources.”

    We have just cut out a few of the multiverses, and that is all, and you know it.

  211. Jerry, at 179, you wrote, “Just in case anyone is interested there is no evidence for evolution by accumulation of small changes.”

    Now when asked,”Do you or do you not agree that the ability to process citrate is a change in function and, therefore involves a change in CSI/FCSI?”,

    you write, “It is a trivial change”, and you go on to write, “There are certainly many small changes in genomes that can cause large morphological changes in an organism or can cause the processing of a nutrient or some other simple function. No one is disputing that.”

    But you were disputing it at 179 when you wrote that there was no evidence for “evolution by accumulation of small changes.”

    So will you agree that what you wrote at 179 was wrong, and that there is evidence of evolution by the accumulation of small changes? It would be helpful if we got clear about this.

    Note that whether this is a “trivial” or “relatively simple” change is not important to the point I am addressing, which is about the exclusive use of the pure chance hypothesis in calculating CSI. The fact that you acknowledge that some change can happen through the accumulation of small changes is the important result of this exchange.

    Also, thanks for directly answering the question about FCSI. Your answer is that if two genomes have the same length they have the same FCSI, irrespective of function.

    Therefore if genome A changed into genome B through an accumulation of small changes, which you now agree can happen, and in the process acquired some different functioning, as in the Lenski research, but had no net change in the length of the genome irrespective of whatever genetic changes happened, a calculation of FCSI would not tell us anything about that because the FCSI would be the same number for both cases. True?

  212. Aleta,

    All your questions are trivial and obvious.

    “2. YOu didn’t answer my question directly about some confusion in your position: you said there is no evidence of evolution by small changes, but you’ve also said that there can be. Which is it?”

    No one is denying small changes to genomes and no one is denying that these small changes may have huge effects in medicine, genetics, food production etc but they are trivial changes in the evolution debate. That debate is over the origin of complex novel characteristics. I asked that you read the links I pointed to because it short circuits having to answer obvious questions.

    “Your example of the codon calculation is still just a calculation based on configuration – on pure chance combination: you have not addressed my questions about thngs changing from one state to another such as with the Lenski research. Dismissing it as not a big deal glosses over the fact that it is a counter-example the proves that mere pure chance calculations are inadequate.”

    They have to originate some way, and chance, natural laws or intelligence are the options. I just eliminated chance as a source, so where are the natural laws, and any evidence that they can do it? We know intelligence can do it. If there were any evidence that natural laws could do it, then the whole discussion would be much different.

    I have said the Lenski result is trivial and is no counter-example to anything. It is not the stuff that builds eyes or the avian oxygen system, and it has been seen millions of times before with microbes, and those changes never led anywhere either.

    Your questions were answered but you did not recognize it. Maybe now you will see the reasoning behind it.

  213. Jerry at 209:

    It is a trivial change.

    How do you know that, since you have yet to perform the complexity calculations?

    Whatever caused the change in Lenski’s organisms is most likely relatively simple.

    How do you know that, since you have yet to perform the complexity calculations?

    As far as the change in the FSCI of the Lenski organism that now processed citrate, it may not involve any change in FSCI.

    How do you know that, since you have yet to perform the complexity calculations?

    Take a simple English paragraph of 180 characters.

    I get your point, so let’s put aside analogies and deal with the list of biological entities in your comment 121. How would you, the ID scientist, rank order that list by complexity? Then take just two items from the list and do the calculations to put them in order relative to each other. If the changes in the Lenski bacteria are so trivial, then do two different things like, say, giraffes and horses.

    Seriously, Jerry, you aren’t helping me learn anything here. This is incredibly frustrating.

  214. “But you were disputing it at 179 when you wrote that there was no evidence for “evolution by accumulation of small changes.””

    This is getting tiring and you are revealing your stripes. When we use the word evolution here, it often takes on several meanings. The official meaning of evolution in genetics and evolutionary biology is a change in the percentage of alleles in a population gene pool. That is the hard and fast definition, and it is a trivial distinction when an allele frequency changes. But when we discuss it here we often mean it in a different sense, namely the origin of complex novel characteristics. And in no way does what happened in Lenski’s population of bacteria reach that level.

    After all, Darwin wrote his book and called it the Origin of Species, but we know that is not what is at stake. The question is where the obviously large changes came from, and they do not come from changes in allele frequencies. So often the use of evolution here is in that context.

    Your incessant attempt to make a non-point into something is indicative of an attitude that I just mentioned is typical of anti ID behavior. A constant attempt to criticize rather than to understand. You have just placed yourself into a box. If you were really trying to understand anything you would have proceeded very differently.

  215. “Seriously, Jerry, you aren’t helping me learn anything here. This is incredibly frustrating.”

    That is so obviously nonsense. Someone else has just revealed his stripes.

    I suggest you read the Edge of Evolution as a starter and then come back.

  216. Jerry @ 216:

    “Seriously, Jerry, you aren’t helping me learn anything here. This is incredibly frustrating.”

    That is so obviously nonsense. Someone else has just revealed his stripes.

    I am not sure what stripes I am supposed to have revealed. But, I will be glad to add zebra to the list in 121.

    We could probably get off high center if you would just give a yes or no answer to the following question: Would an ID scientist use CSI and/or FCSI calculations as the basis for rank ordering the list of biological things in comment 121 above?

  217. Aleta:

    Do you or do you not agree that the ability to process citrate is a change in function and, therefore involves a change in CSI/FCSI?

    It very well could.

    IOW the total number of bits could have stayed the same while the way those bits are arranged could have changed.

  218. jerry at 215,

    “But you were disputing it at 179 when you wrote that there was no evidence for “evolution by accumulation of small changes.””

    This is getting tiring and you are revealing your stripes. When we use the word evolution here, it often takes on several meanings.

    The same is true for a number of words, including “information.” That suggests that one should be very careful to specify the definition of the terms one is using. Your original claim that there is no evidence for evolution by accumulation of small changes is incorrect on its face. Evolution is the accumulation of small changes in populations over time.

    But when we discuss it here we often mean it in a different sense, namely the origin of complex novel characteristics. And in no way does what happened in Lenski’s population of bacteria reach that level.

    Why not? What is your definition of “complex novel characteristics” that excludes the ability to utilize a new food source?

  219. jerry,

    To avoid further confusion over definitions, I have a couple of questions about FCSI. Please take them in the tone they are asked; I simply want to understand FSCI better.

    1) How is FSCI related to CSI, if at all?

    2) CSI is claimed to be a unique characteristic of designed systems. Is FCSI also supposed to only be present in designed systems?

    Thanks.

  220. to Jerry:

    You write, “All your questions are trivial and obvious.”

    This is not true, but I will honor your implicit request and quit discussing them with you.

    You write, “This is getting tiring and you are revealing your stripes,” and you write, “Your incessant attempt to make a non point into something is indicative of an attitude that I just mentioned is typical of anti ID behavior. A constant attempt to criticize rather than to understand. You have just placed your self into a box. If you were really trying to understand anything you would have proceeded very differently.”

    First of all, if this is getting tiring, then you can quit responding. I’m certainly not forcing you to keep responding to my posts.

    As for “my stripes”, in general if one has a novel idea in science, one can expect, and in fact welcome, critical analysis of one’s ideas. Only if one can successfully respond to the criticism can one expect one’s ideas to grow in acceptance.

    Also I note, although you seem to not have registered this, that my criticism and resulting questions have been about the narrow issue of using a pure chance hypothesis to calculate the probability that something in the real world could have happened. I have not talked about, or expressed an opinion about, many of the things you have brought up, such as the “origin of complex novel characteristics” or whether design is true about any part of the world, or about the positions of atheism or theistic evolution, or any of those things.

    This is another characteristic of genuine scientific dialog: it looks closely at the details. I have questioned specific issues concerning calculating probabilities of events. If you find this tiring and my questions trivial and obvious, then perhaps it’s because you haven’t looked, or won’t look, deeply enough at the details of the theory you are proposing and trying to support.

    I first came to this site after reading about it at First Thoughts in a thread by Stephen Barr, a theistic critic of ID. I was interested (still am, but discouraged) that people here would be open to critical discussion of various ideas associated with ID. Having my attempts to do so derided, trivialized, and dismissed is not encouraging.

    But I will quit discussing with you.

  221. “Would an ID scientist use CSI and/or FCSI calculations as the basis for rank ordering the list of biological things in comment 121 above?”

    Probably not, but it could help in some places. There are lots of issues here, and I proposed the list to bait a clown who came here to disrupt the site. He never took the bait because he knew if he did he would have to present something first. That is the modus operandi here. All the critics ask questions but never answer questions. That way they can always criticize. By answering questions they put themselves on record and they commit to something, and they cannot have that. Because they know they cannot defend anything they hold that is contrary to ID. We will cream them if they do. So criticism is the way of life for an anti ID person. Feigning ignorance is another tactic. It is so ironic that this is the reality here when the perception engendered elsewhere is just the opposite. That should be a lesson to learn here.

    You could not calculate the total FSCI for those genomes except for the prokaryotes, since I believe every bit of their DNA is coding. I could be wrong on that, but I believe most of it is coding, and as such a rough estimate of the FSCI could be calculated. However, some of the sequences are redundant in some respects, so the estimate would have to be reduced by this somehow. That would be part of the debate. To calculate it for the whole genome of a eukaryote would be incredibly difficult, but that does not mean a low estimate could not be made, and this low estimate is beyond any natural process to create by chance, even as I used the hyperbole that it would take all the multiverses to generate the necessary options.

    Some issues for classifying complexity would be size of genome (recognizing C value paradox), coding region size, number of proteins coded, number of cell types, number of behaviors/systems within and external behaviors of the organism. There are probably plenty of others and these might come up in a discussion. But I did not seriously mean to start a discussion but was only humoring the idiot who came here. It would make an interesting discussion though.

  222. Jerry at 222:

    Would an ID scientist use CSI and/or FCSI calculations as the basis for rank ordering the list of biological things in comment 121 above?”

    Probably not but it could help in some places. There are lots of issues here and I proposed the list to bait a clown who came here to disrupt the site. He never took the bait because he knew if he did he would have to present something first.

    Well, okay. But it does leave unanswered the question as to how an ID scientist would rank order the list by complexity if CSI and FCSI can only partially help.

    That is the modus operandi here. All the critics ask questions but never answer questions. That way they can always criticize. By answering questions they put themselves on record and they commit to something and they cannot have that.

    So, are you willing to put yourself on the record as to how the list should be rank ordered by complexity? I also think it would be an interesting discussion.

    Feigning ignorance is also another tactic.

    Hey now, my ignorance is genuine! I would like an insight into the day of an ID scientist, such as yourself, with regard to how to determine the relative complexity of various biological entities.

  223. Timaeus @192

    But from an engineering perspective, intelligent design is the instinctive “default” explanation. It makes the most sense of what we can see plainly with our eyes.

    That is your opinion. But you presume to speak for the general population of engineers. The argument by analogy works for you, so you insist that it should work for everyone else. As you well know, it doesn’t. It is the height of arrogance for you to set yourself up as the judge of true engineering instincts. It’s a fraud to be telling scientists and other non-engineers that belief in ID is instinctive for engineers.

    Engineering is materialistic and mechanistic. ID is neither. Engineers want to know “what happened” and “how does it work.” ID doesn’t provide that. The theory of evolution is an extrapolation backwards in time of processes we see in action today. You don’t agree with it, but that’s not the point; it is perfectly natural for engineers to accept that theory, and they largely do.

    You and I both look forward to getting more engineers in on the discussion. The problem is that it’s been too easy for engineers to ignore the ID movement. IDists need to be much more public about the way they are describing engineering to non-engineers. You need to get the DI to put out some press releases disseminating such IDist notions as “ID is an engineering science,” “ID is reverse engineering,” and “Engineers instinctively believe in ID.” You will get plenty of negative attention from engineers, including those of faith, when they see you misrepresenting their profession to advance your cause.

  224. aleta,

    I made several comments at First Things on the Stephen Barr site. So I am well aware of what went on there. There and here I suggested people read my comments about ID that I posted in #110. They remove a lot of the misconceptions about ID. Barr perpetuated many of the misconceptions about ID, and for that he should be held accountable.

    We can debate whether the changes in Lenski's bacteria population are of any consequence or not. No one here supporting ID would think it was much of a change, and I would bet that most evolutionary biologists would not think so either. You can hold that it is, but the issue is what has built systems like the eye, neural systems, digestive systems, and flight, not whether a mutation has occurred and changed the properties of a protein. These complex systems require massive amounts of information to build. The English paragraph example was meant to show you how difficult it is to get to something functional and, once something functional is present, to get to something else.
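    The "English paragraph" point is at bottom a probability claim, so here is the bare arithmetic for a toy version of it (the 27-letter alphabet and 20-character target are illustrative choices, not figures from the thread):

```python
import math

# Chance that a uniformly random string of letters matches one fixed
# meaningful phrase. Toy numbers: 26 letters plus space, 20 characters.
alphabet_size = 27
target_length = 20

log10_odds = target_length * math.log10(alphabet_size)
print(f"odds of a random hit: about 1 in 10^{log10_odds:.0f}")
```

    The standard evolutionary reply, of course, is that selection does not search this way: cumulative selection that retains partial matches finds targets quickly, which is exactly the point of contention in this thread.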

    I don't know what your basic understanding of biology is, but you must understand the transcription/translation process at a minimum to follow what is going on.

    As I suggest to all comers here, read Behe's book, The Edge of Evolution. A lot of my reasoning is based on the simple logic presented there. If you want, you can purchase it using the Barnes and Noble Nook reader for $10 and then use your computer to read it. You can get the reader for nothing from Barnes and Noble. The Barnes and Noble reader is far superior to the Kindle. My wife has a Kindle that she got for Christmas, and it is very useful also, and most books can be downloaded for $10, but the computer version is crap. You can also avoid any of the physical readers and use an iTouch, if you have one, to download the readers and the books. The books will usually cost $10, but Signature in the Cell cost $15 and is only available for Kindle now, so I do not recommend their reader.

    If you get the Barnes and Noble reader, you can also get Sean Carroll's book on evolution, and he is anti-ID. You will see how limited his examples are, and this is what makes our point.

  225. “So, are you willing to put yourself on the record as to how the list should be rank ordered by complexity? I also think it would be an interesting discussion.”

    I never put myself on record. Otherwise I might have to defend something. But I will say that a hummingbird is more complex than any one of Lenski's bacteria.

    You should get the Edge of Evolution. It might help with your confusion. But it will not solve the complexity issue completely.

    We had a discussion some time ago about whether it should be stripes or spots. Take your pick.

  226. Jerry:

    I never put myself on record. Otherwise I might have to defend something.

    So, I guess your version of the Golden Rule is “Do unto others before they do unto you.” Nicely played, sir, nicely played.

  227. Jerry: “I never put myself on record. Otherwise I might have to defend something.”

    I noticed that. :-)

  228. “So, I guess your version of the Golden Rule is “Do unto others before they do unto you.” Nicely played, sir, nicely played.”

    Meanwhile if you read all my comments you will notice I never say anything of substance or try to explain anything and continually try to evade answers.

    You will do well here with one half of the crowd as it seems you are learning quickly. Good luck on your education experience.

  229. jerry at 228,

    Meanwhile if you read all my comments you will notice I never say anything of substance or try to explain anything and continually try to evade answers.

    And here I thought you just missed my questions at 220 due to the high volume on this topic.

  230. jerry,

    Meanwhile if you read all my comments you will notice I never say anything of substance or try to explain anything and continually try to evade answers.

    Isn’t that just the way it should be?

  231. Mr Jerry,

    We have just cut out a few of the multi verses and that is all and you know it.

    I don't think so. If the average alpha helix or beta sheet uses 10 amino acids (sorry for saying alpha sheet and beta helix earlier), and a protein uses several of these motifs, then the probability calculation is losing a factor of 20^10 each time, while at the same time making it more likely to find similar levels of function in other molecules also formed of these motifs, but with slightly different sequences of AAs.

    As an example, myoglobin is 153 AA in length and contains 8 alpha helices (according to the fount of all wisdom). 20^153 is 90 orders of magnitude bigger than 22^80, so this is not about a ‘few multiverses’. These motifs are another step in the chain of building complexity upwards from AAs binding directly to RNA oligomers.
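    Checking the arithmetic in the comment above (the numbers come straight from the comment; only this quick script is an addition):

```python
import math

# Orders-of-magnitude comparison from the comment above:
# full sequence space of a 153-AA protein (20^153) versus 22^80.
log10_myoglobin_space = 153 * math.log10(20)  # ~199
log10_other = 80 * math.log10(22)             # ~107
gap = log10_myoglobin_space - log10_other
print(f"gap: about {gap:.0f} orders of magnitude")

# The motif point: each ~10-AA helix or sheet contributes a factor of 20^10
log10_per_motif = 10 * math.log10(20)
print(f"one 10-AA motif spans ~10^{log10_per_motif:.0f} sequences")
```

    So the "90 orders of magnitude" figure is right to within rounding (the exact gap is about 92), and each 10-residue motif accounts for roughly 13 of those orders.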

  232. “1) How is FSCI related to CSI, if at all?”

    It is a type of CSI.

    “2) CSI is claimed to be a unique characteristic of designed systems. Is FCSI also supposed to only be present in designed systems?”

    Yes

    In the spirit of not explaining anything and never responding to questions, here is a brief assessment of CSI. CSI is a general concept that is supposed to work with any designed entity. Life forms are just a small part of the world, and non-living designed entities are also a small part of the world. But CSI is meant to apply to all designed processes, not just life forms, so by its nature it must be very broad, and therein lies the problem. Because of this generality it can be very vague in how it is applied. It can be quite daunting to look at certain poker and bridge hands, coin tosses, voting outcomes, sculptures, arson evidence, etc. and try to find some common way to describe them in terms of their intelligent input, but it actually turns out to be much less of a problem to take some aspects of life and describe them in terms of intelligent input.

    What do the words complex, specified and information mean in a layman's language? And how can the concept be generalized so everyone can understand it? I am not the one to provide a complete answer for this, and I am on record as abandoning the very generalized form of this concept for life because it is too broad. We had a few very long exchanges on CSI about three years ago (several hundred comments), and I argued there was no clear definition for it, and no one could step up here to find the commonality between bridge hands, voting outcomes, coin tosses, DNA, computer programming and language. Many people tried, but no one could solve the problem. Which is why the confusion over the appropriateness of CSI manifests itself. When its proponents can't define it so it fits everything, how can you expect their adversaries to do so?

    No one really cared about bridge hands, coin tosses or Mt. Rushmore, but people did care about DNA and biology. Then two people pointed something out: that DNA specifies something, just as computer programs and language do. The first was bfast. Then kairosfocus appeared for the first time and provided his thoughts, and soon the term FSCI, or functional specified complex information, was being used. It was complex information that specified a function in something else. It was so obvious a child could understand it, and the vagueness problem disappeared. So we have been using it ever since. But we continually get those who want to ram the term CSI down our throats because they know the problem it has, because it is too general.

    So this subset of CSI, which we can call life CSI for the time being, has some unique properties. It is complex information, and it specifies a function in some other entity in a very direct relationship. Function is usually assumed in CSI because a pattern is often specified because it has a purpose, but sometimes it is hard to define the purpose and distinguish it from random events. But for the information in DNA that is not a problem, since the function, or external pattern, is so obvious. So some called it FSCI and some called it FCSI. It is information that is complex and specifies function in another entity.

    Now the original intent of the term "specified" in CSI is that the information itself was specified by some intelligence. In other words, the "specified" in CSI refers to some intelligence specifying the DNA, not to DNA specifying something else. But any information besides DNA in this world that specifies something else is also itself specified. So is DNA specified? That is the issue. Why isn't DNA specified? Good question, and ID says it most likely is.

    Now what other things in this world fit this pattern? If we exclude life, nothing in nature fits it, but in the intelligent world there are two very prominent examples, language and computer programming: two processes very heavily identified with intelligence. There are probably others, such as control processes in industry, but these two examples are easy to understand and obvious. In nature, other than life, no such process exists. So could the DNA in a cell be specified, and thus life be designed by an intelligence?

    For atheists, the problem is obvious. If FSCI is in reality CSI, then their whole world falls apart. Thus the intense fight to discredit the obvious and to insist that it had to arise naturally. Now ID says FSCI could arise naturally, but that this is highly unlikely for obvious reasons, and there has never been an instance of it arising naturally. If such a thing were common in nature then ID's claim would be suspect, but there are no examples, let alone frequent ones. So we get the usual nonsense here trying to discredit what is very obvious. The whole thing comes down to the anti-ID people saying there was no intelligence who could have done it, so it must have happened naturally, and ID saying there must have been an intelligence, because nature is unlikely to have done it.

    So we throw out ATP synthase, and there is really no answer for it, or even for the average-size protein, or especially for the transcription/translation process itself with its thousand parts. ATP synthase had to appear early, and it is incredibly complicated and efficient. ATP synthase is beyond the probabilistic resources of all the universes predicted by string theory, let alone our universe. So is the transcription/translation process. But a very, very intelligent person might have been able to figure it out.

    Sorry not to put myself on record again and to continue avoiding questions that are embarrassing.

  233. Jerry #225:

    But I will say that a hummingbird is more complex than any one of Lenski's bacteria.

    Obviously, the hummingbird is more complex. A child could see that. But does it make any difference if the bird is living or deceased?

  234. “But does it make any difference if the bird is living or deceased?”

    The following is relevant

    http://www.youtube.com/watch?v=npjOSLCR2hE

  235. Now that Jerry has pointed out one of the elephants in the room, I’d like to offer some thoughts.

    Jerry writes,

    “For atheists, the problem is obvious. If FSCI is in reality CSI, then their whole world falls apart. Thus the intense fight to discredit the obvious and to insist that it had to arise naturally. Now ID says FSCI could arise naturally, but that this is highly unlikely for obvious reasons, and there has never been an instance of it arising naturally. If such a thing were common in nature then ID's claim would be suspect, but there are no examples, let alone frequent ones. So we get the usual nonsense here trying to discredit what is very obvious. The whole thing comes down to the anti-ID people saying there was no intelligence who could have done it, so it must have happened naturally, and ID saying there must have been an intelligence, because nature is unlikely to have done it.”

    Jerry brings up what I consider one of the key metaphysical flaws in ID theory: the belief that if everything happens naturally there is no room for God. Once you accept this premise, then the theist must find some way to claim that at least some things take some special input, outside of what natural processes can accomplish, in order for God to have a role, or at least for some aspect of his role to be scientifically visible. Thus advocating for ID becomes a battle against atheism, and of course on that front no ground can be given.

    Of course, many theists don’t accept the premise, and have no conflict with the idea that God acts within natural processes, and that the rainbow is no more or no less designed than DNA.

    I was led to register here, as I have said before, from a thread over at First Thoughts by Stephen Barr, a theist who believes in design from a theological perspective but has strong disagreements with ID.

    I believe one of the points made in that thread, with which I agree, is this: that if ID advocates make claims about design in the interest of supporting their belief in God, and those claims are in fact seen as flawed by many religious and non-religious people alike (as I think is the case), then such ID advocacy actually sets back the cause of those who would like to advocate for theism.

    This is in part, as I said earlier in the thread, why I focused on what I think is a false proposition – that the blind chance hypothesis is an accurate and adequate model for calculating the probability that something came to be. If one wants to argue for design, tying one's case to a faulty proposition and then defending it at all costs, because failing to do so supposedly cedes ground to atheists, prevents a broader set of design arguments from being advanced and prevents a larger group of people from joining the cause of design advocacy.

    The flaw is that ID is trying to justify itself scientifically. The irony here is that in doing so the ID movement is in fact ceding the epistemological framework to the very people whom they perceive as their adversaries – the people of science. The attitude is that science (and those materialistic atheists who embrace it) is wrong, and by golly we're going to show scientifically that we're right!

    In my opinion, those people who are straightforwardly acknowledging that their belief in design is metaphysical, whether it be Christian theism or Buddhism or some type of New Age spiritualism, have a much better chance of influencing the culture than IDists who are abusing science and logic because they think they have to beat the scientists at their own game.

    All for the record.

  236. Aleta:

    In my opinion, those people who are straightforwardly acknowledging that their belief in design is metaphysical, whether it be Christian theism or Buddhism or some type of New Age spiritualism, have a much better chance of influencing the culture than IDists who are abusing science and logic because they think they have to beat the scientists at their own game.

    I think that is a little unfair to Jerry. Just because he doesn’t have enough confidence in his ability to calculate CSI for the record doesn’t mean he is a science abuser. Sure, ID science will advance very little until ID scientists are willing to stand behind their convictions and publish their CSI calculations. But, that is hardly sufficient grounds to assume that they are willfully obfuscating. Maybe their mathematical reach just exceeds their grasp at this time.

  237. Jerry, nice post at 231.

    Aleta, you are reading something into this that is not there. It is not that the Christian supports ID to prove God. It is the atheist that opposes ID for metaphysical reasons.

    ID is science. It can be falsified. It does not involve the supernatural in the least.

  238. Hi efren.

    I agree that abuse is too strong a word – I apologize for that: perhaps “misuse” is better.

    But I don’t think it is the case that Jerry et al (the et al being the many other people who use the pure chance hypothesis as part of the same argument) merely lack the mathematical reach to calculate a meaningful measure of improbability. It may not be willful obfuscation, but it certainly strikes me as dogmatic and stubborn to resist acknowledging that how things change over time must be taken into account in their theory.

    Also tribune writes, “Aleta, you are reading something into this that is not there. It is not that the Christian supports ID to prove God. It is the atheist that opposes ID for metaphysical reasons.”

    It is true that some atheists oppose ID on both scientific and metaphysical grounds (although why would an atheist oppose it on metaphysical grounds if it weren't about God?), but one of the main points of my post was that many non-atheists (Christians as well as members of other religions) oppose ID because they think it is wrong scientifically, and because they don't like hitching a belief in design, which they support in the broad sense, to a flawed argument. (Many oppose ID on metaphysical grounds as well, but that is another topic.)

    Also tribune writes, “ID is science. It can be falsified. It does not involve the supernatural in the least.”

    But if ID doesn’t involve the supernatural “in the least”, then why would an atheist oppose it? Your two claims contradict each other.

    My answer to that question is that it is a subterfuge to claim that ID is not about the supernatural. Over at First Thoughts, Stephen Barr listed some of the possible non-supernatural intelligent agents, and then concluded, “I think that list is pure smoke. One is talking about a supernatural intelligent being.”

    I agree with Stephen on this.

  239. tribune7

    ID is science. It can be falsified.

    1. Unfortunately, nobody here realized when it happened.
    2. Still, it doesn't follow that ID is in any way science.
    3. Maybe it is sciency, but then it doesn't need to be falsified.

  240. Aleta,

    but one of the main points of my post was that many non-atheists (Christians as well as members of other religions) oppose ID because they think it is wrong scientifically,

    Then they would not be using non-scientific means to stop it, such as refusing to allow ID advocates to publish in journals even as a letter-to-the-editor response to an article criticizing their ideas, or cutting off resources solely because someone is perceived as sympathetic to ID, etc.

    But if ID doesn’t involve the supernatural “in the least”, then why would an atheist oppose it?

    I guess you’d have to ask an atheist. I’ll certainly agree that it isn’t rational for them to do so.

  241. Aleta,

    Some of your comments are a little arrogant. You attribute claims to ID that it does not make. How can you do that unless you think you know better, since you said you are here to find out about it? Yet you pontificate.

    “Jerry brings up what I consider one of the key metaphysical flaws in ID theory: the belief that if everything happens naturally there is no room for God”

    I never said that, and I never thought that. How could you get that from what I said? I said that if FSCI is real, the atheist worldview falls apart, and that is true. Atheists posit no intelligence before humans and would regard a past intelligence as most likely God. ID does not make that claim, though many who support ID will; it is beyond what ID can show. I did not say the theist world falls apart, though for many it apparently would. I have said elsewhere, on the Steve Barr thread, that science for many of the theistic evolutionists is ideological, just as it is for the atheist and the young earth creationist. Not all TEs fit this pattern, but some definitely do, as Darwinian evolution is part of their theology: God must have done it naturally, not merely could have done it that way. It is an absolute.

    ID does not deny that God could have done it naturally, just that the evidence does not indicate this. The theistic evolutionist denies the evidence for theological reasons, not because science supports it.

    ID is based on science. My theology says God could do it any way He wants, and He gave us the ability to see His works. To deny them would be contrary to the abilities that God gave us. The TEs deny this and say it must have happened naturally. If you want to see my full answers, go back to the First Things thread and read them.

    “I believe one of the points made in that thread, with which I agree, is this: that if ID advocates make claims about design in the interest of supporting their belief in God, and those claims are in fact seen as flawed by many religious and non-religious people alike (as I think is the case), then such ID advocacy actually sets back the cause of those who would like to advocate for theism.”

    If ID claims are flawed, then I suggest you argue them out. I find them religiously neutral and compelling based on science and logic. I believed in Darwinian evolution till about 10 years ago. I read an ad by a Catholic organization in New York City that a friend showed me, and we went to the presentation. That was my introduction to ID, and since then my religious beliefs have not changed one iota, but my beliefs about evolution and science changed dramatically. Why should religion and science be at odds? I believe in one truth and that God would not fool us.

    My background in college and grad school was physics and mathematics, and my current job requires knowledge of some biological processes. I am blown away by the science, not the ideology, and there are several more here like me. Barr has to answer to his Maker, because he distorted what ID is about, and you apparently have a false impression of it too, as you have been here one day and have made a lot of nonsensical claims. So Barr is misrepresenting things, and maybe he is responsible for leading you astray: not in your religious beliefs, but in just what ID is. If ID is saying someone had a hand in life, I find it quite remarkable that theistic evolutionists call this bad theology, since there is supposedly God's intervention at the conception of each human, and then there is prayer. Prayer is nothing more than an expectation that God will intervene as He sees fit in human affairs. Is prayer bad theology these days? Not where I attend church, where prayers are offered every Sunday for various things.

    To say that ID is setting back the cause against atheism is a joke coming from people who blindly accept the same bad science that atheists say is the basis of their fulfillment. Have you checked how many youth attend church after high school, when they are not forced to by their parents? They have bought into the materialist philosophy that is taught in the schools, public and private. ID has nothing to do with this.

    “what I think is a false proposition – that the blind chance hypothesis is an accurate and adequate model for calculating the probability that something came to be. If one wants to argue for design, tying one’s case to a faulty proposition and then defending it at all costs, because failing to do so supposedly cedes ground to atheists, prevents a broader set of design arguments from being advanced and prevents a larger group of people from joining the cause of design advocacy.”

    I haven't a clue what is meant here. ID is a peripheral idea, not mainstream. So how can it prevent broader sets of design arguments from being advanced? How can ID prevent a larger group of people from joining the cause of design advocacy, when most are now laughing at it because they are told false things about it? We do not defend anything at all costs. We defend what is reasonable from a science and logic point of view. If the evidence is bad or lacking, ID will not support it.

    “In my opinion, those people who are straightforwardly acknowledging that their belief in design is metaphysical, whether it be Christian theism or Buddhism or some type of New Age spiritualism, have a much better chance of influencing the culture than IDists who are abusing science and logic because they think they have to beat the scientists at their own game.”

    We never lose at science, because we respect it and demand evidence. That is the irony of this. Stephen Barr will not debate ID on science, and as soon as science came up on his thread, he bailed out. There has never been a person who came here in 4 1/2 years who has been able to defend the science of evolution as it is taught in the schools of this country. Not one, and we have had biologists and evolutionary biologists fail to do so. None have been able to do it. Does that tell you anything? Probably not, because you will think we are deluded. But I can make a safe bet: you will not be able to find anyone who can do it. The way to make yourself feel comfortable is to make up a lot of nonsense about ID. There are a lot of engineers here, software programmers, business owners, etc. who spend a lot of time in the world and have to deal with reality. Darwinian evolution of complex novel capabilities is not reality.

    “It may not be willful obfuscation, but it certainly strikes me as dogmatic and stubborn to resist acknowledging that how things change over time must be taken into account in their theory.”

    This is pure nonsense. On what basis do you make this claim? This is another case of pontification without understanding. Is that how you were brought up? When I was young, making false claims about people without any knowledge was called bigotry.

    “Christians as well as members of other religions) oppose ID because they think it is wrong scientifically, and because they don’t like hitching a belief in design, which they support in the broad sense, to a flawed argument.”

    More nonsense. How do you know ID is wrong scientifically? I just said that we have found no one who can make a coherent argument for the naturalistic evolution of complex novel characteristics. Did you read the links I told you to read? Did you read the Behe book? If you didn't, then you should refrain from acting like you know it all. I have not found one theistic evolutionist who will risk an argument on evolution. We have asked many, and none have attempted to take the challenge. I take that back: a couple did, but had to admit they couldn't defend their beliefs. They just appealed to experts after they realized their arguments were wrong. The experts who supposedly have the knowledge never write anything to back up their beliefs.

    The thread on First Things started when a theistic evolutionist praised the latest book by Dawkins. I have asked many here what is in Dawkins' book that makes the case for naturalistic evolution, and no one could point anything out. So before you question our judgment on science, find someone who can back up Darwinian evolution. Because we would love to see it.

    “But if ID doesn’t involve the supernatural “in the least”, then why would an atheist oppose it? Your two claims contradict each other.”

    Because atheists think it does, not because of any claim that ID makes. ID just wants to get Darwinian evolution out of the curriculum because it is bad science and is acknowledged to lead people to atheism. ID cannot make any claims about the designer being God, because the science does not support it. If one wants to personally think it is God, that does not flow from ID.

    So I think you know little of what you are talking about. You have no idea about the science involved, and till you do, you should refrain from making claims about it. The people here who support ID are Protestants, Catholics, Jews, Muslims, some agnostics and some who have little use for religion.

    I am sorry this is so long, but one usually does not get so much nonsense in one day.

    I am not certain it was in this thread, but I was challenged about my claim that natural selection is not the only mechanism in evolution. Well, here is what Darwin wrote:

    “As my conclusions have lately been much misrepresented, and it has been stated that I attribute the modification of species exclusively to natural selection, I may be permitted to remark that in the first edition of this work, and subsequently, I placed in a most conspicuous position—namely at the close of the Introduction—the following words: “I am convinced that natural selection has been the main but not the exclusive means of modification.” This has been of no avail. Great is the power of steady misrepresentation.”

    Further details are not hard to find for anyone interested.

  243. Aleta said,

    “Jerry brings up what I consider one of the key metaphysical flaws in ID theory: the belief that if everything happens naturally there is no room for God”

    I already commented on this above but had some further thoughts. Actually, this is the atheist point of view, and unfortunately it has become the prevailing one in most of society: why does one need God, when all can happen naturally? ID does not make any claim like that at all. The following comment from a couple of years ago is relevant.

    http://www.uncommondescent.com.....ent-190514

    ID has no problem with any naturalistic approach to anything. If God, as the Great Clockmaker, decided to use a certain naturalistic process to effect evolution, ID would have zero problem with it. But ID says that there is no known mechanism of naturalistic evolution that can explain the history of life on the planet. Not that one won't be found, but that all of the currently proffered explanations are lacking. To offer them up as explanations is at best speculative; to assert them as supported by the empirical data is advocating bad science; and to accept them for philosophical reasons is beyond the pale, especially when this acceptance is the engine that drives atheism.

    In 4 1/2 years here I have seen no one able to dispute that assessment, certainly not those who support what we call Darwinian evolution. Darwinian processes explain small changes, but large changes are not just the accumulation of small changes. We say that for two reasons. There is no evidence of the accumulation of small changes leading to large changes, which we call the origin of novel complex capabilities. And there are very good physical and logical reasons why it cannot happen.

    The latter are best discussed under the topic of information in biology. Complex capabilities require large-scale systems of controls of unique parts, so not only must the information for all the parts be developed, but also the control processes for their expression and placement. So when a new capability appears, it is not just a few little additions here and there but the sudden appearance of thousands of intricately designed subsystems and parts.

    As of the present there is no process that is capable of making these large-scale changes. The micro-evolutionary process is quite capable of making small changes, but as Behe has shown in The Edge of Evolution, it is not capable of doing much beyond the simple. Now these simple and relatively trivial changes can have dramatic effects on many organisms, but they do not produce the massive changes seen in the very complicated biological systems in nature.

    So if one is going to challenge ID, then one has to have a mechanism that will explain these changes. To just offer up Darwinian processes is ludicrous when there has never been an example where they could even come close to making the massive changes necessary for these complex novel capabilities. One of our favorite anti-ID people here graciously said the eye is the result of a cascade of 2000 different proteins in precise order. Not a trivial mechanism, nor one that was built up one change at a time or even a few fortunate changes at a time. It appeared out of nowhere 520 million years ago and no new eyes have developed since. Similarly, ATP synthase and the transcription/translation process are so complicated and so essential for life that it boggles the mind how such intricate processes could have arisen gradually or in any naturalistic way.

    But to say that ID does not accept naturalistic explanations is absurd. ID is perfectly willing to accept naturalistic explanations when they are reasonable and supported by the data, and will point to all the micro-evolutionary processes that produce changes in microbes, that are used by species to adapt to a new environment, or that unfortunately explain too many diseases and human medical problems.

    It is ironic that theistic evolutionists have chosen a path proclaimed by atheists as the basis for their beliefs when there is no evidence for this path. If the evidence were there, then any logical person would accept it, but the reality is that it is not there. And yet we have people here telling us how dumb we are and how we accept flawed ideas when they cannot support their own. Ironic at best. But in reality it is insulting to have someone come here and call us flawed like we are somehow lacking either in intellect or good intentions when they cannot defend their ad hominems.

    When I grew up that was very bad behavior.

  244. Aleta:

    My answer to that question is that it is a subterfuge to claim that ID is not about the supernatural.

    ID is about the DESIGN, not the designer.

    The design exists in this physical world.

    It can be observed and tested.

    As for the designer we just do not know.

    However we do know that even the materialistic position regresses to the SAME POINT.

    Ya see, natural processes only exist in nature and therefore cannot account for its origins.

    The best we can say about any designer of the universe is that it was PRE-natural (pre- meaning before).

    It could also be other-dimensional.

    That said what would happen if we allowed the design inference, followed the data/ evidence and it led to the supernatural?

    It would be too late to disallow the inference after that.

  245. efren ts:

    But it does leave unanswered the question of how an ID scientist would rank-order the list by complexity if CSI and FCSI can only partially help.

    What is the relevance of the list?

  246. Jerry writes, “…But in reality it is insulting to have someone come here and call us flawed like we are somehow lacking either in intellect or good intentions when they cannot defend their ad hominems.”

    I said the argument was flawed, not the people here. And I have made no ad hominem arguments. I have addressed the issues, not attacked people.

  247. Cabal – I am not certain it was in this thread, but I was challenged about my claim that Natural Selection is not the only mechanism in evolution.

    I don’t know who’d agree with that. I think everyone on this board would concede that random genomic changes are an integral factor.

  248. “I said the argument was flawed”

    But the argument is not flawed, so where is the apology? You made a claim about us that does not exist.

  249. jerry at 231,

    Thanks for the detailed answer. I’ll take the first question first, just to be different.

    “1) How is FSCI related to CSI, if at all?”

    It is a type of CSI

    CSI is defined in No Free Lunch as a specific characteristic that indicates design. While I wasn’t able to get enough detail from that description to implement a CSI calculator in software, Dembski seems to have a particular calculation in mind. Given that, how can there be different “types” of CSI?

    CSI is a general concept that is supposed to work with any designed entity. Life forms are just a small part of the world, and non-living designed entities are also a small part of the world. But CSI is meant to apply to all designed processes, not just life forms, so by its nature it must be very broad, and therein lies the problem. Because of this generality it can be very vague in how it is applied.

    If CSI is “vague” then it isn’t useful for uniquely identifying design. If we can’t use it to make a quantitative measurement, as Dembski and others suggest is possible, how can it reliably distinguish between designed and non-designed objects?

  250. jerry at 231,

    “2) CSI is claimed to be a unique characteristic of designed systems. Is FCSI also supposed to only be present in designed systems?”

    Yes

    In that case, the naive calculation of two to the number of bits required to describe the artifact is not a good definition of FCSI. There are several known and observed types of mutations that can increase the size of a genome, for example, including fully replicating it. Those would increase FCSI by that measure, with no intelligent intervention required.
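    This objection can be made concrete with a toy sketch. This is my own illustration, not a formula from No Free Lunch or any ID text; it simply assumes a genome is spelled out at 2 bits per nucleotide, so a whole-genome duplication doubles the naive score with no intelligent intervention.

```python
# Toy illustration (my own, not an official FCSI formula): describe a
# genome over {A, C, G, T} at a naive 2 bits per base.

def naive_bits(genome: str) -> int:
    """Bits needed to spell out the genome at 2 bits per nucleotide."""
    return 2 * len(genome)

genome = "ACGTACGTAC"          # toy 10-base genome
duplicated = genome * 2        # whole-genome duplication, a known mutation type

print(naive_bits(genome))      # 20
print(naive_bits(duplicated))  # 40 -- the measure doubles without design
```

    Under any such description-length measure, known mutational events that enlarge the genome enlarge the score, which is the problem being pointed out.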

  251. Jerry writes, “But the argument is not flawed, so where is the apology? You made a claim about us that does not exist.”

    But we disagree about whether the argument is flawed. One doesn’t apologize for disagreeing with someone. We both made our points, and agreed, I think, to bring the discussion to a close. Others who were participating in the discussion, or who were just reading along, can come to their own conclusions about which of us is more correct, and I’m sure there will be disagreement there also. There’s nothing wrong with disagreement – it’s the fuel for further refinement of understanding.

  252. Mustela, While I wasn’t able to get enough detail from that description to implement a CSI calculator in software,

    Do you believe complexity can be calculated?

  253. tribune7 at 236,

    ID is science. It can be falsified.

    In order to be considered science, there must be a scientific theory of ID from which falsifiable predictions can be made. Paul Nelson, one of the leaders of the ID movement, stated in 2004 that no such theory exists:

    We don’t have such a theory right now, and that’s a problem. Without a theory, it’s very hard to know where to direct your research focus. Right now, we’ve got a bag of powerful intuitions, and a handful of notions such as ‘irreducible complexity’ and ‘specified complexity’ — but, as yet, no general theory of biological design.

    Has a scientific theory been created since then? If so, what is it? What predictions does it make? How would a test of those predictions put ID at risk of disconfirmation?

  254. tribune7 at 250,

    Do you believe complexity can be calculated?

    I know that ID proponents, including Dembski and you, in another thread ( http://www.uncommondescent.com.....ent-345510 ), claim that CSI can be calculated. I would like to see an example of that.

  255. tribune7,

    ID is science. It can be falsified. It does not involve the supernatural in the least.

    Fine, no more God or magic, is that it?

    I don’t know who’d agree with that. I think everyone on this board would concede that random genomic changes are an integral factor.

    I wrote ‘mechanism’ and did not mention any integral factor. Coal is required to make trains run, but I’d hesitate to call it a mechanism.

  256. Cabal, Fine, no more God or magic, is that it?
    You’re an atheist, aren’t you? Why would you put God and magic in the same sentence? They are irreconcilable.

  257. Mustela –

    I know that ID proponents, including Dembski and you, in another thread ( http://www.uncommondescent.com…..ent-345510 ), claim that CSI can be calculated.

    Complexity is not CSI. Do you believe complexity can be calculated?

    What predictions does (ID) make?

    That nothing that is irreducibly complex will be found not to have been designed.

    That no pattern of a particular complexity showing a specificity — and functionality is a fine marker for it — whose probability of occurring is very, very low, will be found not to have been designed.

    How would a test of those predictions put ID at risk of disconfirmation?

    It would show that the things that ID says show design, don’t.

  258. “In that case, the naive calculation of two to the number of bits required to describe the artifact is not a good definition of FCSI. There are several known and observed types of mutations that can increase the size of a genome, for example, including fully replicating it. Those would increase FCSI by that measure, with no intelligent intervention required.”

    What drivel! The FSCI applies to the specific coding areas, not the whole genome. You and efren ts got hung up on the joke I was having with Backwards Joseph. The complexity of a total organism is not at issue with FSCI. Each coding region would have its own FSCI and from that the conclusion of design can be made, not the whole genome. It is obvious you haven’t a clue what this is about, since your function here is to nitpick. But you should try to understand what you should concentrate your nitpicking on.

    Duplication in all its forms is not an issue. How much new functionality comes from duplication is the issue. ID recognizes that the new section can then mutate, since it is not under selection pressure, and theoretically a new coding region can come into existence with new function. I said “theoretically” because I believe the examples are few, and there are tens of thousands of genes that have to be explained, including unique genes for nearly every species.

    So you are essentially making the ID case and don’t know it. Good try though. Maybe you should read a basic biology book to get up to speed.

  259. The FSCI applies to the specific coding areas, not the whole genome.

    Only coding regions?

    Those of us on the other side of the ID debate still want to see how FCSI can be calculated for a realistic biological problem. Can you show us a calculation?

  260. Mustela, we may be talking past each other.

    You seem to see CSI as being designed to create a graduated scale of items according to design content.

    I see it as being meant to be more of a Boolean logic gate — if it has CSI it is designed, if it doesn’t, it may/may not be designed.

    I don’t think anybody has tried to scale it, or felt it necessary to do so.

    “Those of us on the other side of the ID debate still want to see how FCSI can be calculated for a realistic biological problem. Can you show us a calculation?”

    It has already been done, but I tell you what. When anyone who opposes ID can give me a coherent defense of naturalistic macro-evolution (our definition), I will do it again. In the meantime, pick your coding region and have at it. Just follow the instructions. They are written in English.

  262. Mr Jerry,

    Each coding region would have its own FSCI and from that the conclusion of design can be made, not the whole genome.

    Are you saying non-coding regions have no function? I thought most ID supporters were fairly certain that function would eventually be found for the whole genome or most of it – that there is no such thing as junk DNA. Or are you just saying that, operationally, we can’t calculate FCSI unless we’ve already determined function? Once we determine the function of non-coding regions, can we expand the FCSI calculation to those areas as well? Thanks in advance for a clarification!

  263. Nakashima

    I thought most ID supporters were fairly certain that function would eventually be found for the whole genome or most of it – that there is no such thing as junk DNA.

    Jerry shouldn’t be mistaken for a real ID supporter. IMO he has always just presented his personal point of view, which may occasionally have been compatible with ID. However, in most cases I would judge his contributions as more of a burden than a help for ID.

  264. Heinrich (#260)

    Thank you for your post. You wrote:

    Those of us on the other side of the ID debate still want to see how FCSI can be calculated for a realistic biological problem. Can you show us a calculation?

    Sure. Jerry (#155) included a link to a paper by K. D. Kalinsky, entitled, “Intelligent Design: Required by Biological Life?” which can also be accessed online at http://www.arn.org/blogs/index.....cal_life_r . It’s well worth reading; you’ll find the biological applications in section V.

    Proteins are one of the central illustrations used by Kalinsky of structures requiring intelligent design. After rigorously defining “functional information,” he examines two particular proteins, SecY and RecA, which are found in all living things, and would therefore be required in a minimal genome. Kalinsky cites calculations estimating that the functional information in the two proteins is 832 bits and 688 bits for RecA and SecY respectively, and concludes that the average 300-amino acid protein has around 700 bits of functional information. He calculates that “ID is 10^155 times more probable than mindless natural processes to produce the average protein,” and concludes that “if natural selection is invoked to explain the origin of proteins, a fitness function will be necessary that requires intelligent design.” He goes on to estimate that the simplest life form would have had 267,000 bits of functional information.

    Earlier on in this thread, someone asked about the amount of FCSI in the following:

    A. a banana,
    B. a tree,
    C. a single cell and
    D. the bacterial flagellum.

    Fair enough. I’ve just quoted Kalinsky’s estimate for the simplest viable cell: 267,000 bits of functional information, so that answers C.

    Regarding the bacterial flagellum, I did some hunting around, and found an article in PNAS (April 24, 2007, vol. 104, no. 17, pp. 7116-7121) by R. Liu and H. Ochman, entitled Stepwise Formation of the Bacterial Flagellar System. (For Dr. Michael Behe’s response to this paper, please see here: http://www.evolutionnews.org/2.....er_se.html )

    Anyway, according to the article by Liu and Ochman, the ancestral bacterial flagellum would have had 24 genes. So that’s about 24 X 700 = 16,800 bits of functional information. But that’s probably an underestimate, if Dr. Behe is correct. (Nicholas Matzke is dubious of the article’s claims, too.)

    Regarding A: it appears that scientists are still sequencing the banana genome, according to this press release:

    http://www.cns.fr/spip/Septemb.....enome.html

    Luckily, I found another article saying that the common ancestor of all angiosperms would have had 12,000 to 14,000 genes. (I understand there’s been some duplication since then.) See here:

    http://www.sciencedirect.com/s.....19c45c8fc2

    So that’s 8,400,000 to 9,800,000 bits of functional information in each cell, for a typical flowering plant.

    A tree contains no more information than a single cell; apparently different genetic switches are turned on in different cells.
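    The back-of-envelope arithmetic above can be collected in a few lines. The gene counts and the roughly-700-bits-per-protein figure are the ones quoted in this comment from Kalinsky's paper; the helper function is just a sketch of that multiplication, not anything from the paper itself.

```python
# Sketch of the arithmetic quoted above: functional information is
# estimated as (number of genes) x (~700 bits per average protein).

BITS_PER_PROTEIN = 700  # Kalinsky's estimate for an average 300-aa protein

def functional_bits(n_genes: int) -> int:
    """Rough functional-information estimate for a genome of n_genes."""
    return n_genes * BITS_PER_PROTEIN

print(functional_bits(24))      # ancestral flagellum: 16800 bits
print(functional_bits(12_000))  # low-end angiosperm ancestor: 8400000 bits
print(functional_bits(14_000))  # high-end angiosperm ancestor: 9800000 bits
```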

    By the way, everyone, this link on genome sizes might be of interest:

    http://users.rcn.com/jkimball......Sizes.html

    Hope that helps, Heinrich.

  265. “Or are you just saying that operationally, we can’t calculate FCSI unless we’ve already determined function. Once we determine the function of non-coding regions we can expand the FCSI calculation to those areas as well? Thanks in advance for a clarification!”

    You got it right.

  266. “I thought most ID supporters were fairly certain that function would eventually be found for the whole genome or most of it – that there is no such thing as junk DNA. ”

    I am not one to say that all non-coding DNA will have function, even though a high percentage of it is transcribed. Maybe much will, but that is to be determined. It is probably not 0% and it is probably not 100%. They keep learning every day about different genomes.

  267. VJ at 264.

    As usual, you’ve been succinct. In fact, you’ve been too succinct. There will be immediate objections. Most probably those objections will be that you haven’t been succinct at all, or that, at a minimum, you’ve flagrantly missed the point altogether. Whatever the objection, it should be understood that this is a position the opposition must defend at all costs.

    There is, of course, a reason for this. As anyone with even a furball of human intuition can attest, the position being defended (that ID folks can’t do the math, and therefore ID isn’t real science) is hallowed ground, and cannot under any circumstances be relinquished.

    At the same time, all the tough questions posed to the opposition on this thread go unnoticed, ignored, and unanswered.

    This all takes the form of the defenders’ skillful parsing of words – reaching the level of performance art – all self-servingly disguised as a legitimate search for clarity and understanding.

    Obfuscation rules the day…as well as a demonstrable refusal to address the pertinent questions asked in return.

    It’s a strategist’s field day.

  268. Sure. Jerry (#155) included a link to a paper by K. D. Kalinsky, entitled, “Intelligent Design: Required by Biological Life?” which can also be accessed online at http://www.arn.org/blogs/index…..cal_life_r . It’s well worth reading; you’ll find the biological applications in section V.

    That’s for functional information. How does this differ from FCSI? Indeed, how does it differ from CSI?

    And what biological relevance does this have? I know what I’m about to write has been pointed out numerous times, but I’ve never seen an answer that is convincing. But here goes (again)…

    These calculations of FI/CSI/FCSI are based on the tornado-in-a-junkyard scenario: if we randomly pick one configuration, what is the probability of picking one from our target? But evolution doesn’t work like that; it’s a process of improving fitness. So how do these calculations relate to the likelihood of an evolved structure?

    As far as I can tell (and I’m willing to be corrected), Dr. Dembski accepts this criticism, which is why he has pursued his active information ideas to quantify how much better natural selection does than random search, and then move the argument on to whether this improvement can be achieved through natural means.

  269. tribune7 at 218,

    Complexity is not CSI. Do you believe complexity can be calculated?

    I don’t care. I’m just here for the CSI.

    What predictions does (ID) make?

    That nothing that is irreducibly complex will be found not to have been designed.

    You left out the first question: What is the scientific theory of ID?

    You’ll also need to define exactly what you mean by “irreducibly complex” and identify one or more objects that meet that criterion.

    That no pattern of a particular complexity showing a specificity — and functionality is a fine marker for it — whose probability of occurring is very, very low, will be found not to have been designed.

    This sounds like CSI, and there is still no extant example of CSI, as described in No Free Lunch, being calculated for a real biological artifact, taking into account known physics, chemistry, and evolutionary mechanisms.

    How would a test of those predictions put ID at risk of disconfirmation?

    It would show that the things that ID says show design, don’t.

    Unless you can identify a specific biological construct that definitely exhibits irreducible complexity according to your (as yet unelucidated) theory, this isn’t a prediction that would serve to falsify it. If a particular biological artifact is explained through non-intelligent mechanisms, you can simply say “Okay, that one wasn’t really irreducibly complex, but this one over here is.”

    More rigor is needed.

  270. Sorry, that should have been “tribune7 at 258″ not 218.

  271. jerry at 259,

    “In that case, the naive calculation of two to the number of bits required to describe the artifact is not a good definition of FCSI. There are several known and observed types of mutations that can increase the size of a genome, for example, including fully replicating it. Those would increase FCSI by that measure, with no intelligent intervention required.”

    What drivel!

    There is no need for that kind of response in what should be a civil discussion.

    The FSCI applies to the specific coding areas, not the whole genome.

    That was not part of your original definition of FSCI (is it FSCI or FCSI?).

    Are you now saying that FCSI is defined as the number of bits required to describe the coding regions in a genome and that this measurement uniquely identifies intelligent intervention?

    Let’s avoid any more unpleasant misunderstandings by getting a rigorous mathematical definition of how to measure your metric and what it means.

  272. tribune7 at 261,

    Mustela, we may be talking past each other.

    You seem to see CSI as being designed to create a graduated scale of items according to design content.

    I see it as being meant to be more of a Boolean logic gate — if it has CSI it is designed, if it doesn’t, it may/may not be designed.

    I don’t think anybody has tried to scale it, or felt it necessary to do so.

    Fair enough. Since it appears to be measured in bits, from what I’ve read, it seems that it should be possible to rank objects in CSI order.

    A boolean measurement works as well, though. Do you have a worked example of such?

  273. jerry at 262,

    “Those of us on the other side of the ID debate still want to see how FCSI can be calculated for a realistic biological problem. Can you show us a calculation?”

    It has already been done

    I dispute that. Thus far, despite reading all of the relevant material and asking repeatedly for further assistance on this blog, I have never seen a calculation of CSI, as described in No Free Lunch, for a real biological artifact that takes into account known physics, chemistry, and evolutionary mechanisms.

    If you have such a calculation, cite or reproduce it, don’t just assert it.

  274. A boolean measurement works as well, though.

    Mustela, you really don’t understand this, do you?

  275. tribune7 at 274,

    Mustela, you really don’t understand this, do you?

    Here’s what I do understand:

    1) Dembski and other ID proponents (including you, as cited earlier in this thread) assert that CSI is a qualitative measurement that uniquely identifies design.

    2) I have yet to find an example of CSI, as described in No Free Lunch, calculated for a real biological artifact, taking into account known physics, chemistry, and evolutionary mechanisms.

    Here’s what I don’t understand:

    Why will you not either produce such a calculation or simply admit that you do not have one?

  276. Mustela — Why will you not either produce such a calculation

    You mean like a boolean one?

  277. I’m confused again. In 264, vjtorley pointed me to a calculation of “functional information”, when I asked for a calculation of FCSI. I can only assume that they are the same. But then tribune7 is suggesting that FCSI is Boolean, but that’s something different.

    Can someone explain? What have I missed?

  278. tribune7 at 276,

    You mean like a boolean one?

    I was very clear about what I’m asking for, just as it is very clear that you continually ask questions in a transparent attempt to evade supporting your claims. If you’ve got an example calculation of CSI that meets the criteria I’ve specified, let’s see it. If you have a measurement of a different qualitative characteristic that you believe uniquely identifies design, let’s see that.

    If all you have is more evasions, we’ve already seen those.

  279. Mustela, I think you mean “quantitative measurement”in 1).

  280. I don’t have time at the moment to defend my two cents … but here they are anyway …

    CSI will most definitely give us a value when all the variables are input and the equation is worked out; however, it can also be treated in a boolean fashion, as someone has stated above. If the variables are treated so as to give the ID critic the maximum benefit of the doubt, the calculation will produce a lower limit on the CSI for that event. Further acquisition of data can help us refine the measurement and make it more accurate and precise. However, we will have a “lowest possible value” of CSI for that event, and if that value > 1, then we know that the event does indeed exhibit CSI. In this way, we can treat measuring for CSI not as a strict comparative measurement (although this is possible and, I believe, useful), but as a yes/no answer to the question, “Does this event exhibit CSI?” or “Is this event intelligently designed?”
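    A minimal sketch of this boolean treatment, assuming CSI is expressed in bits as -log2 of the event's probability. The function names and the structure are my own illustration, not Dembski's calculation; the "> 1" cutoff is the one stated in the comment above.

```python
import math

def csi_bits(probability: float) -> float:
    """Conservative CSI estimate in bits: -log2 of the event's probability."""
    return -math.log2(probability)

def exhibits_csi(lower_bound_bits: float, threshold: float = 1.0) -> bool:
    """Boolean gate: does the lower-bound estimate exceed the cutoff?"""
    return lower_bound_bits > threshold

print(csi_bits(0.25))                # 2.0 bits
print(exhibits_csi(csi_bits(0.25)))  # True
print(exhibits_csi(csi_bits(0.5)))   # False (exactly 1.0 is not > 1)
```

    The yes/no answer comes from comparing the conservative estimate against the threshold rather than from ranking events against each other.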

  281. “Let’s avoid any more unpleasant misunderstandings by getting a rigorous mathematically definition of how to measure your metric and what it means.”

    You were given a definition and an example. It is easy to take this example and expand it if anyone wanted to, but I made the point that this would be ridiculous, just as many of the subsequent comments were ridiculous. The whole discussion of complete genomes was a joke, started by a clown, and I humored him by providing a list to let the absurdity play out. Do you understand it was a way of showing up this clown as a clown? But some people seemed to think it was serious. And I did make a serious attempt by listing some criteria that would be the basis of a discussion. But the criteria were not FSCI specifically, and I pointed out reasons for it not being appropriate. Any serious attempt at this would never use FSCI. It is not necessary.

    What you call unpleasant misunderstandings is the constant nitpicking on irrelevant points. You were given an example, a calculation, and told that the example exhausts all the resources of the universe and all the multiverses they can logically dream of. Just one operating protein, actually a series of proteins acting in concert. Now if you want to continue and detail the information for every coding region, be my guest, but I said it was absurd and it is.

    And a series of events has the same probability as all the events happening at once. I gave you the rationale for that. Yes, the odds of winning the lottery twice are different if you have already won the lottery once, but that begs the question of how you won the first lottery. For someone who says that each step along the way is easy, I suggest the construction of sentences by random processes: here is a domain which we know contains untold numbers of functional elements, but we have no such assurance for DNA and life.

    When I say what you write is drivel, it is because you conveniently ignore anything that contradicts your proposition. To ignore criticism that is relevant means that subsequent comments are most likely drivel. So it was an accurate characterization: not an unpleasant misunderstanding but a consistent pattern.

  282. Heinrich – But then tribune7 is suggesting that FCSI is Boolean,

    What is it with anti-IDists and their inability to comprehend what they read?

    Hey Heinrich go back and try Post 261 again and tell us where I mention FCSI.

  283. “What is it with anti-IDists and their inability to comprehend what they read?”

    Oh, they comprehend, but they know that they have no belief of their own that they can hold up to dispute what is said, so they nitpick and criticize in any way they think they can. Most of the time it is drivel, as pointed out to Mustela Nivalis.

    It is an interesting phenomenon that all their desires could be fulfilled if they only had something of their own they could defend. But since they don’t, what we see is the continual attempt to belittle even the littlest of statements made by the pro-ID people.

    It is the instant criterion for determining whether someone is honest and legitimate here. Some think they can disguise it under the pretext of saying they are just trying to learn and understand, but the lack of affirmation on anything is the dead giveaway.

    So for you future anti-IDers who are lurking, know that this is how we can identify you very quickly. Agree that we have some good points and you will confuse us for a while, but then inconsistency will eventually reign and we will know.

    For me personally, I often ask questions or give answers that are designed to ferret out what one believes. It is not hard to bait the anti-IDer; they oblige very quickly. Neutral or pro-ID people behave much differently.

  284. It is interesting that Jerry has upped his level of being rude and insulting. I don’t see his opponents in these discussions behaving in this way.

    Just an observation.

  285. Jerry says, “And a series of events have the same probability as all the events happening at once.”

    This is not true if there is selection involved at each step. My dice example at 169 shows clearly that if each step has a law which selects for some state over another, then the probability of the final state being reached through a set of steps is most definitely not the same as if they happened all at once.
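    This point can be checked with a quick simulation. The construction is my own, since the dice example at 169 is not quoted in this thread: we compare re-rolling all five dice until a single throw shows all sixes against re-rolling only the dice that are not yet sixes, a crude stand-in for a law that selects for some state at each step.

```python
import random

random.seed(1)
N_DICE, TRIALS = 5, 300

def rolls_all_at_once():
    """Count throws until one throw of all five dice shows all sixes."""
    rolls = 0
    while True:
        rolls += 1
        if all(random.randint(1, 6) == 6 for _ in range(N_DICE)):
            return rolls

def rolls_with_selection():
    """Count rounds when sixes are kept and only the rest are re-rolled."""
    dice = [random.randint(1, 6) for _ in range(N_DICE)]
    rolls = 1
    while not all(d == 6 for d in dice):
        dice = [d if d == 6 else random.randint(1, 6) for d in dice]
        rolls += 1
    return rolls

avg_blind = sum(rolls_all_at_once() for _ in range(TRIALS)) / TRIALS
avg_select = sum(rolls_with_selection() for _ in range(TRIALS)) / TRIALS
print(avg_blind)   # on the order of 6**5 = 7776 throws
print(avg_select)  # on the order of a dozen rounds
```

    The gap of several hundredfold between the two averages is the difference between an all-at-once probability of (1/6)^5 and a stepwise process that retains each partial success.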

  286. jerry at 282,

    “Let’s avoid any more unpleasant misunderstandings by getting a rigorous mathematically definition of how to measure your metric and what it means.”

    You were given a definition and an example.

    But when I used that definition and example, you objected and changed it. Hence my request for a detailed definition.

    I’m doing you the courtesy of taking your claims seriously enough to investigate them more thoroughly. The minimal courtesy I would expect in response is for you to provide a more detailed explanation where necessary.

    Are you genuinely interested in discussing how to identify design objectively or not?

  287. Some think they can disguise it under the pretext of saying they are just trying to learn and understand but it is the lack of affirmation on anything that is the dead giveaway.

    Jerry, that is very true.

  288. “But when I used that definition and example,”

    What did I change, and what example did I change? If you are referring to the whole genome, that was a joke, and I explained why no one in their right mind would ever do it or even be interested in it from an FSCI point of view. I did not change anything. Let me know what I said, so I can retract it if I was wrong or explain it better to you.

  289. Mr Vjtorley,

    @265 you reference the paper by Kalinsky. I am reminded of the joke about physicists where the punchline is “Assume a spherical cow of unit radius…” As much as Kalinsky is on the right track in following up on Hazen’s suggestion of how to measure functional information, it is the simplifying assumptions in his work that eventually make it useless for the purposes to which you wish to put it.

    The chief simplifying assumption goes into estimating I_nat, the amount of functional information that can be created by a natural process. Kalinsky assumes this can be estimated by a number of repeated blind trials. This is the classic tornado-in-the-junkyard assumption, and it is not “generous” of him to make it.

    A more realistic assumption would be of a fitness function that included the laws of physics and chemistry, but no intelligent process. Unfortunately, such an assumption is not simple. The inputs to year 2 are the outputs of year 1 (rather than starting from scratch) and so on for 500 million years in Kalinsky’s model.

    Kalinsky could of course justify his first assumption by stating that whatever products are created in year 1 will degrade before they can be used in year 2. But this gets to questions of specific rates of build-up and degradation of molecules in some combination of atmosphere, solar flux, water temperature and pressure, etc. If Kalinsky truly wanted to be ‘generous’ in his assumptions, he would assume that molecules do accumulate over time at rates related to their chemistry in specific conditions, and to the presence of other molecules with different rates of reaction with the same feedstocks.

    Given a target of a 30,000 bit genome and a 500 million year time period, the process of net accumulation of functional information in the biosphere would have to proceed at the glacial pace of 1 bit every 20,000 years or so. Can the laws of physics and chemistry, using the resources of the entire planet, accumulate 1 bit every 20,000 years? The key word there is accumulate, and that is the idea that Kalinsky needs to incorporate into his assumptions to get more realistic estimates.
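
The rate arithmetic in the paragraph above can be checked directly. A quick sketch in Python, using the figures given there (the result is the same order of magnitude as the quoted "every 20,000 years or so"):

```python
# The accumulation-rate arithmetic from the paragraph above, using its own
# illustrative figures (a 30,000-bit target and a 500-million-year window):

target_bits = 30_000
window_years = 500_000_000

years_per_bit = window_years / target_bits
print(round(years_per_bit))  # about 16,667 years per bit ("every 20,000 years or so")
```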

  290. tribune7 @ 283 –

    Hey Heinrich go back and try Post 261 again and tell us where I mention FCSI.

    Ah, my apologies – no F. It does raise the question of what is the difference between CSI and FCSI. And FI (Functional Information) too.

    CJYman @ 281 –

    CSI will most definitely give us a value when all the variables are inputted and the equation is worked out, however it can also be treated in a boolean fashion as someone has stated above. … However, we will have a “lowest possible value” of CSI for that event and if that value > 1, then we know that the event does indeed exhibit CSI. In this way, we can treat measuring for CSI, not as a strict comparative measurement (although this is possible and I believe useful), but as a yes/no answer to the question, “Does this event exhibit CSI?” or “Is this event intelligently designed?”

    This looks like a recipe for confusion: you’re using CSI to mean 2 different things. If CSI>1, then CSI=1!

    Informally, people will say things like this, but I think it would help if the two concepts were kept separate.

  291. “This is not true if there is selection involved at each step”

    What selection? Where could there have been selection in the first life form or before it? It was supposed to take place afterwards. Nakashima will help you out: some non-living chemical processes require more resources than others and are more efficient at using them, so they will be the ones to survive and use all the resources, and they are our true ancestors. It is not a single-celled ancestor that started it all but a chemical reaction. We have to learn to respect our proper ancestor more.

    Of course I am being facetious. Where did ATP synthase come from? It had to be there at the start. There is no step-wise assembly for these 2,000 amino acids that makes sense. How many steps were involved? And in order to use the step-wise argument, one needs to point out the near-infinite number of possibilities it could choose from and how we are just the lucky accident. But no one can point to one alternative life system, let alone many.

    If this hypothetical stepwise process really had only a few obvious options to choose from at each step, then that means that somehow life was built into the system, or the dice are fixed, and the theistic evolutionists would be ecstatic because they would have found out how God did it, and they would then join the ID ranks because they could see the actual design. But people like Nakashima would be dismayed. Their whole inner being is to show we are just the luck of the draw, whether it be the universe or the chemical reaction. Their step-wise procedure requires a myriad of possibilities, and so far there is only one. A universe so finely tuned as to specify ATP synthase would be too much for them to take.

    And getting to ATP synthase is beyond the resources of the multiverse. ATP synthase is like a Shakespeare play, and you and others are saying the equivalent of a Shakespeare play can be built one word at a time, because as each word combination is assembled it will be selected by some unknown process for some unknown advantage. But in the end we will have Henry V and the famous speech at Agincourt. Just “We few, we happy few, we band of brothers” would be amazing, but the whole play? Oh, I realize this is a little hyperbole. We would only have Act IV, Scene 3.

    In case you think I am discriminating by picking on ATP synthase, there are lots of others that could take its place. It is just one of the largest elephants in the room, but there is a whole herd in there.

  292. Heinrich — It does raise the question of what is the difference between CSI and FCSI.

    The glossary actually attempts to address that with FSCI being a subset of CSI and, to me, using functionality as a marker for specificity.

    Forgive the earlier snippiness, but, again to me, CSI was always meant to be Boolean as used by Dembski, i.e. IF CSI THEN Design = True

    FSCI, and Jerry has a better handle on it than me, always struck me as being more descriptive and scalable.

    Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems. – Wm. Dembski, page 148 of NFL

    In the preceding and proceeding paragraphs William Dembski makes it clear that biological specification is CSI- complex specified information.

    In the paper “The origin of biological information and the higher taxonomic categories”, Stephen C. Meyer wrote:

    Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information–that is, specified complexity from mere complexity. This review will use this term as well.

  294.

    jerry at 289,

    “But when I used that definition and example,”

    What did I change and what example did I change.

    Well, let’s see. At 209 and before you claim that FSCI is the number of bits required to describe the genome (actually you used 4 to the power of the length of the genome, but what’s a binary order of magnitude between friends?). At 222 you said “You could not calculate the total FSCI for those genomes except for the prokaryotes since I believe every bit of DNA is coding.”

    But then, at 259, you said “The FSCI applies to the specific coding areas, not the whole genome.”

    To clear up the confusion, please provide a clear, mathematically rigorous definition of FSCI.

    If you are referring to the whole genome, that was a joke and explained why no one in their right mind would ever do it or even be interested in it from a FSCI point of view.

    The thing about jokes is they’re, um, you know, supposed to be funny. At the least, they should be distinguishable in some way from apparently serious claims that are made here.

    I did not change anything.

    On the contrary, I provided references to where you did.

    Let me know what I said, so I can retract if I was wrong or explain it better to you.

    I would appreciate that, thank you. A clear, mathematically rigorous definition of FSCI, preferably with a worked example for a real biological artifact would be great.

  295. A short recap; (all post numbers subject to change, it seems, due to moderated posts being inserted later.)

    Jerry at 282 : “A series of events have the same probability as all the events happening at once.”

    Me at 286: “This is not true if there is selection involved at each step. My dice example at 169 shows clearly that if each step has a law which selects for some state over another, then the probability of the final state being reached through a set of steps is most definitely not the same as if they happened all at once.” I don’t want to repost it, so I ask you to go look at it again if you don’t remember it.

    Jerry at 292: “What selection? Where could there have been selection in the first life form or before it?”

    Jerry, you have broadened the question way beyond the scope of my statement. I would like to explain a bit, and then re-state my point for your consideration.

    When we use math and logic to describe the world, there are two components to our work:

    1. We build a theoretical model and analyze it logically to see how it works and what logical conclusions we can reach from it, and

    2. We test the model against the real world to see if the model is accurate.

    I am talking about the first step – I’m talking about some mathematical considerations. Let us leave aside for the moment whether the model can be applied to such a difficult problem as the origin life. Let’s just talk theory.

    My dice example at 169 shows clearly, I think, that there is a mathematical difference between an event which happens all at once and a series of events in which some type of selection takes place at each step.

    So, if you don’t think about how the real world works, and just think about logical and mathematical models we make, can you agree that if the model includes selection of some sort after each step, the probability of the final state being reached through a set of steps is not the same as if they happened all at once?
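
A minimal sketch of this mathematical point in Python, assuming a simple "keep the sixes" selection rule (the dice example at 169 is not reproduced in this thread, so this exact rule is my assumption, not necessarily the original example):

```python
# Roll n dice. All-at-once: one simultaneous roll must show all sixes.
# With selection: any die showing a six is locked in place, and the
# remaining dice are rerolled for a fixed number of rounds.

def p_all_at_once(n):
    # one simultaneous roll: every die must show a six at the same time
    return (1 / 6) ** n

def p_with_selection(n, rounds):
    # each die independently gets `rounds` chances to show a six, and a
    # six, once rolled, is preserved by the selection rule
    return (1 - (5 / 6) ** rounds) ** n

print(p_all_at_once(10))         # about 1.7e-8
print(p_with_selection(10, 25))  # about 0.90
```

With ten dice, the all-at-once probability is about one in sixty million, while the selective version reaches the same final state within 25 rounds about 90% of the time, which is the gap the comment above is describing.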

  296. I read all of 209 and 222 again and haven’t a clue what you are talking about. I don’t see where I changed course at all or changed definitions.

    “The thing about jokes is they’re, um, you know, supposed to be funny.”

    Well, apparently you fell for the absurdity even after I pointed it out a couple of times. You continue to pursue the nonsense I said it was. I find the anti-ID position absurd, so I was, to use the phrase of a contemporary commentator, using an absurdity to illustrate absurdity.

    Why bother pursuing it when I said it was absurd to do so? I said it might be done for a prokaryote, but it would take a lot of time, and I asked why anyone would want to do it, because all it takes is to estimate it for one gene and the whole naturalistic argument falls apart. Why pile on and waste time by trying to calculate it for a whole organism, even a simple one?

    You have a way to calculate it for an individual gene, so have at it and do it for every gene in a prokaryote. I believe a few have been completely determined. But what one would get is an incredibly high number that defies all the logic of naturalistic approaches.

    If you want to calculate it for more than one gene, just multiply the probabilities together. I believe that in information theory they take the logs and then just add them, to make it easier.

    Now I am well aware that two genes may not be independent of each other and so multiplying the two probabilities may not be entirely accurate but it gets to the essence of the problem. It is possible to reduce the probabilities somewhat by considering redundant segments, or the interchangeability of some amino acids.

    Theoretically this could be done for all the genes in a genome if someone has a large number of monkeys working for them but why bother. All that is needed is just one to make the point.

    I was using 4^n because of the four bases and, in order to make it simpler, switched over to amino acids, which is 20^n. It’s been a while since I practiced my mathematics, but they are close enough for this type of evolutionary work.
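
The bookkeeping being described in this exchange (probabilities multiply; their logs, measured in bits, add) can be sketched briefly. The gene lengths below are illustrative numbers of my own, not figures from the thread:

```python
import math

# If each position in a sequence is one of `alphabet` equally likely symbols,
# a specific sequence of length n has probability (1/alphabet)**n, and its
# information content is n * log2(alphabet) bits. Multiplying probabilities
# of independent genes corresponds to adding their bit values.

def sequence_bits(length, alphabet):
    return length * math.log2(alphabet)

gene1 = sequence_bits(300, 20)  # a hypothetical 300-residue protein (20 amino acids)
gene2 = sequence_bits(150, 20)  # a second, independent 150-residue protein
combined = gene1 + gene2        # the log turns a product of probabilities into a sum

print(round(gene1, 1))     # about 1296.6 bits
print(round(combined, 1))  # about 1944.9 bits
```

Note that log2(4) = 2 bits per base and log2(20) ≈ 4.32 bits per amino acid, which is the "4^ versus 20^" conversion mentioned above.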

  297. Mr Jerry,

    But people like Nakashima would be dismayed.

    Please do me the courtesy of not pretending to read my mind, and I will continue to extend the same courtesy to you. I would be overjoyed if we learned so much that we could say confidently that life was inevitable on this planet.

  298. “So, if you don’t think about how the real world works, and just think about logical and mathematical models we make, can you agree that if the model includes selection of some sort after each step, the probability of the final state being reached through a set of steps is not the same as if they happened all at once?”

    You are new here and are maybe not picking up on the differences between discussions. The term FSCI or FCSI refers to structures that arise either through natural processes or through intelligent intervention. ID will say that it is very unlikely for these structures to arise through naturalistic processes (maybe a few did, but that is all). The use of FSCI is mainly not about evolution per se but more about the origin of life and the origin of certain proteins and RNA polymers in a cell. What is their origin? When it is used for evolution, it is more about the origin of the novel information needed to control new complex capabilities in organisms.

    As such, when applied to the presence of certain genes, the concept of selection is less appropriate for the origin of proteins in the first cell and may in fact not be appropriate at all. So what I have been discussing is very real-world, and in certain cases one has to apply science and logic and know when to distinguish one argument from another.

    There are two places where the idea of gradualism seems appropriate and has been used to justify naturalistic processes. One is in evolution and we can get back to that in a couple paragraphs but the other is in the origin of these incredibly complicated molecules that were necessary at the get go for life to function. So to say they happened in steps is a completely different process and discussion from saying that evolution happened in steps.

    I am not confusing the two, but it seems many here are confused, which is why I pointed out that the term selection was not appropriate to use when discussing FSCI, and that the probabilities of the so-called steps essentially amount to the probability of the whole. To dispute this, one would have to show two things: first, that the sub-parts were indeed functional (and remember, we are talking about the very beginning, probably before life existed) and that the individual steps were somehow favored by non-biological processes; and second, that there were other viable paths to get to something resembling life. Otherwise this so-called fantastic process just happened to find the one viable path to what Dawkins calls the Greatest Show on Earth at every step. The probability of that is the same as if it poofed into existence all at once. So by steps or in one fell swoop, the probabilities are the same. The person who questions the probability assessment has to show, first, that there is function all the way down, part by part, and second, that there are zillions of alternatives, so that at each step along the way there was no big deal finding a viable next step. If there was only one viable next step, or only a few, then the probabilities are the same or essentially the same.
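
The "one viable path" claim in the paragraph above can be stated in miniature. This sketch only illustrates the multiplication rule under that paragraph's own assumption (one acceptable outcome per step and no preservation of partial progress); it does not settle whether that assumption holds:

```python
import math

# If each of n steps has exactly one acceptable outcome out of k, and
# nothing preserves partial progress, the stepwise probability multiplies
# out to the all-at-once probability of hitting the single viable sequence
# among k**n possibilities.

n, k = 10, 6
stepwise = (1 / k) ** n      # one acceptable outcome per step, multiplied out
all_at_once = 1 / (k ** n)   # one viable sequence among k**n possibilities

assert math.isclose(stepwise, all_at_once, rel_tol=1e-12)
print(stepwise)
```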

    Now for gradualism in evolution and FSCI. The amount of information difference between the cell of a mammal and a prokaryote is immense. How did all this new information arise? Again, the individual coding regions can be assessed for FSCI, and this concept could be used to compare the two, but it probably isn’t necessary to go that far. It is overkill. There is no evidence that many of these differences arose naturally. They just appeared, as in the Cambrian Explosion. There is zero evidence that gradualist processes over time led to the complexity seen in the microbe-to-man progression. Gradualist changes do happen, but they essentially reshuffle what is already there. They may make for some interesting changes but not the building of complex novel capabilities. For that one has to go elsewhere. I am currently reading Dawkins’ new book, and it is very interesting reading, but so far it is completely compatible with ID. So here is a challenge to any anti-ID person here: point out one thing in Dawkins’ book that undermines ID. So far I have not found any. I might change my mind as I move on.

    We mention every now and then the discussions of an evolutionary biologist who comes here named Allen MacNeill. By the way, Allen has declared Darwin, or Darwinian processes, dead, even though he believes in naturalistic evolution. He has identified about 50+ ways that genomes can be modified. Many just add or delete base pairs in a genome or change a SNP. The biggest addition is when the whole genome gets duplicated. The change most frequently mentioned by people challenging ID is gene duplication, as if it were something magic, and it certainly does happen. Many of the various ways that a genome is modified will usually not cause the organism to perform any differently in nature, so they do not affect its survivability immediately. But these extra pieces of DNA are free to mutate because they are extra and don’t affect survival. And then the theory says they will eventually mutate into something new and viable that will affect survival in a positive way, possibly very dramatically, so that a major phenotype change has taken place. Hence, while a gradual process, it is the antithesis of Darwinian gradualism, which says the gradual changes are in the coding regions. So here is a way that FSCI could arise naturally, and it probably did a few times. The already-in-place transcription/translation system will then produce a new protein from this mutated segment of DNA, and it will affect the phenotype positively. Over time, enough of these changes will cause all we have seen in biology. Or so the theory goes.

    The only problem with this scenario is that there is no proof it ever happened, except for an occasional change or two. There is no indication that, even if it could produce the 2,000 genes necessary for vision, it could somehow coordinate the massive amount of information necessary for these complicated processes to take place. Evolutionary biology would go a long way toward legitimizing itself by showing how these paths arose and by showing the process working even today to increase information in genomes. But we have radio silence, and we have asked here for years for any evidence of this, or for anything published on it. No one steps up, including Richard Dawkins in his book. So if Dawkins cannot do it, what are we to think?


  299. I take it your long response was not actually a response to my question even though you quoted me at the start.

    Notice the bolded and capitalized parts:

    So, if you don’t think about how the real world works, and just think about logical and mathematical models we make, can you agree that IF the model includes selection of some sort after each step, THEN the probability of the final state being reached through a set of steps is not the same as if they happened all at once?

    Notice that I am not asking you at all to accept the truth of the premise as a fact about the real world. I’m asking you merely to think about an abstract model irrespective of any connection to the real world: IF the model includes selection of some sort after each step, THEN the probability of the final state being reached through a set of steps is not the same as if they happened all at once.

    Does this statement seem true or false to you?

    Thanks

  300. “I would be overjoyed if we learned so much that we could say confidently that life was inevitable on this planet.”

    Would you be overjoyed to find out that the process which led inevitably to life was obviously part of the basic design of the universe and that extremely small perturbations from that would have destroyed that inevitability both here and everywhere?

    I am not implying that is true, only that it could be a finding some day. The current fine-tuning argument is about the receptivity of this universe to life as we know it. I was implying something incredibly more fine-tuned than that.

    By the way such a finding if true would upset some theistic evolutionists who insist that God left no trail. But that would be a trail.

  301. “Notice that I am not asking you at all to accept the truth of the premise as a fact about the real world. I’m asking you merely to think about an abstract model irrespective of any connection to the real world: IF the model includes selection of some sort after each step, THEN the probability of the final state being reached through a set of steps is not the same as if they happened all at once.”

    I answered this. Yes, I would agree with that, but I indicated what it implies and what one has to show in order to make the cumulative argument. It is much easier to see the reasonableness of a cumulative argument when we know there are lots of different life forms possible in the world. It is quite a different thing when we do not know whether any step along the way is feasible and when there are no other possibilities.

    That is what I tried to explain to you and to others who make this specious argument for the origin of life: that it happened in steps and that this changes the probabilities. No one knows if there is an alternative to ATP synthase or the ribosome. And if there are no alternatives, then the step-wise argument falls apart on that particular problem, though not on every other problem in the universe. So the step-wise argument that Mustela Nivalis said was the answer to the highly unlikely probability of something like ATP synthase falls apart as well.

    By the way there are varieties of ATP synthase and the ribosome and a lot of other proteins but generally they are very conserved over many different species and kingdoms.

    Since you are in the question-asking mode, which is the modus operandi of the anti-ID people here, maybe you should start answering questions. Why don’t you provide evidence for the naturalistic evolution of complex novel capabilities, say something like the eye? And if you cannot, then would you admit that ID has a point in questioning naturalistic evolution for such capabilities? Remember, ID does not say it never happened naturalistically, only that there is no evidence that it did.

  302. Jerry writes, “I answered this. Yes I would agree with that …”

    Good, and thanks. I am glad to have this cleared up.

  303. Jerry writes, “Since you are in the question asking mode which is the modus operandi of the anti ID people here, maybe you should start answering question. Why don’t you provide evidence for the naturalistic evolution of complex novel capabilities, say something like the eye. And if you cannot, then would you admit that ID has a point by questioning naturalistic evolution for such capabilities.”

    I have had a very limited goal in this discussion, even though you have painted me with a large, and stereotyped, brush.

    Answers to your questions:

    Question #1: “Why don’t you provide evidence for the naturalistic evolution of complex novel capabilities, say something like the eye.”

    I’m not a biologist and I can’t answer that question. In fact, that topic has not been one I’ve addressed, or had any interest in addressing.

    Question #2: “And if you cannot, then would you admit that ID has a point by questioning naturalistic evolution for such capabilities.”

    As I have made clear, it is not design per se that I am arguing against. From the very beginning I have stated that what I am claiming is that the “pure chance hypothesis” that is used as an argument for ID is a flawed argument. I have in fact pointed out that I think it is detrimental to the larger goal, for those who have this goal, of advocating for design to hitch their wagon to the faulty “tornado in a junkyard” argument.

  304. “As I have made clear, it is not design per se that I am arguing against. From the very beginning I have stated that what I am claiming is that the “pure chance hypothesis” that is used as an argument for ID is a flawed argument. I have in fact pointed out that I think it is detrimental to the larger goal, for those who have this goal, of advocating for design to hitch their wagon to the faulty “tornado in a junkyard” argument.”

    ID makes no such argument.

  305. ““Why don’t you provide evidence for the naturalistic evolution of complex novel capabilities, say something like the eye.”

    I’m not a biologist and I can’t answer that question.”

    I am not a biologist either but can understand the arguments made by biologists. None have ever made an argument to support the evolution of complex novel characteristics. Doesn’t that tell you something?

    “In fact, that topic has not been one I’ve addressed, or had any interest in addressing.”

    It should be, because that is the essence of the debate. Essentially you have no idea whether one of ID’s core beliefs is based on good evidence or not. Someone should not criticize people when they do not understand why they hold their positions. It is like saying, “I do not care what is true; it is better done another way.” That is a hard position to sell to anyone. It is an extremely arrogant one to suggest that deception is better than truth.

  306. Mr Jerry,

    It is hard to predict what my reaction to that situation might be, but I find the combination of “inevitable” and “incredibly fine tuned” to be an unlikely one.

  307. tribune7,
    What is true, this:

    ID is science. It can be falsified. It does not involve the supernatural in the least.

    or that:

    The theory of intelligent design (ID) holds that certain features of the universe and of living things are best explained by an intelligent cause rather than an undirected process such as natural selection.
    Other evidence challenges the adequacy of natural or material causes to explain both the origin and diversity of life.

  308. Mustela and Heinrich and others who may be interested:

    You’ve been asking for a clear definition of FCSI, and you’ve commented that the various definitions given in the ID literature employ different terminology, leading to confusion. You’ve also commented that some of the articles make too much use of examples drawn from card games. So I’ve decided to bring all the definitions together on this thread, in a simplified form, making use of extracts from the ID literature – the “meat” of the argument, as it were. After that, I’ll give some concrete examples of how FCSI can be computed in a biological context. Finally, I’ll argue that ID proponents have a very strong argument to show that life could not have originated by undirected processes and was therefore designed by an intelligent agent.

    Where should you start reading? If you want to read something online that attempts to quantify CSI, I suggest you consult the following articles:

    (1) Abel, D. “The Capabilities of Chaos and Complexity,” in International Journal of Molecular Sciences, 2009, 10, pp. 247-291, at http://mdpi.com/1422-0067/10/1/247/pdf .

    (2) Durston K., Chiu D., Abel D. and Trevors J., “Measuring the functional sequence complexity of proteins,” in Theoretical Biology and Medical Modelling, 2007, 4:47 at http://www.tbiomed.com/content.....2-4-47.pdf .

    (3) “Intelligent Design: Required by Biological Life?” by Kalinsky, K. D. at http://www.newscholars.com/pap.....rticle.pdf .

    (4) Hazen, R.M., Griffin, P.L., Carothers, J.M. & Szostak, J.W. 2007. “Functional information and the emergence of biocomplexity,” in PNAS 104, 8574-8581, at http://www.pnas.org/content/104/suppl.1/8574.full .

    (5) Abel, D. and Trevors, J. “Three subsets of sequence complexity and their relevance to biopolymeric information,” in Theoretical Biology and Medical Modelling, 2005, 2:29, doi:10.1186/1742-4682-2-29 at http://www.tbiomed.com/content/2/1/29 .

    (6) Dembski, W. A. “Specification: The Pattern that Signifies Intelligence.” August 15 2005. Version 1.22. Available at http://www.designinference.com.....cation.pdf .

    In addition, I’d recommend that you read the following books:

    (7) Dembski, W. A. and Wells, J. “The Design of Life.” 2008. Foundation for Thought and Ethics, Dallas.

    (8) Meyer, S. C. “Signature in the Cell.” 2009. Harper One, New York.

    Summary of my conclusions

    (a) There are five overlapping definitions floating around in the ID literature, but they are mutually compatible, and the differences between them are relatively unimportant. Meyer’s (2009) definition of specified complexity is the clearest. Meyer also defines functional complex specified information (FCSI).

    (b) Functional complex specified information (FCSI) is a subset of complex specified information (CSI). More precisely, FCSI is just CSI that meets a set of independent functional requirements. Some CSI is non-functional; it simply matches a pre-specified pattern.

    (c) Complex specified information (CSI) is information that is both complex (i.e. highly improbable) and specified. An event is “specified” if it exhibits a pattern that matches another pattern that we know independently – either because we have seen such a pattern before, or because it satisfies a functional requirement that we can readily understand from investigating it. Because we can readily make sense of a specified pattern, it follows that a specified pattern will be easily describable in our language.

    (d) Information is just a mathematical measure of improbability or complexity.

    (e) Looking at these articles, I have so far identified two methods for identifying patterns that require intelligent design: Dembski’s probability bound and Kalinsky’s approach. (There may be others; I’m still reading through Dembski’s and Meyer’s books.) Both of these methods make use of estimates regarding what undirected processes can do. Interestingly, Kalinsky’s method explicitly considers the possibility that natural selection may be rigged in favor of producing life and complex organisms, by an intelligent process. Kalinsky expressly says that we should not assume natural selection is mindless.
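
Points (c) and (d) above can be given a tiny numeric illustration. The 64,000-setting combination lock below is a made-up figure of my own for illustration, not taken from Meyer or Dembski:

```python
import math

# "Information" in sense (d) above is just -log2(probability): the less
# probable an event, the more bits of information its occurrence carries.

def info_bits(p):
    return -math.log2(p)

p_lock = 1 / 64000                   # chance of opening the lock on one blind guess
print(round(info_bits(p_lock), 2))   # about 15.97 bits
```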

    Now let’s have a look at the various definitions of FCSI in the ID literature.

  309. DEFINITION ONE (Meyer, 2009)

    Meyer’s definition of CSI appears on pages 106-107 of Meyer’s book, and again on pages 352-353.

    Pages 106-107:
    Complex sequences exhibit an irregular, non-repeating arrangement that defies expression by a general law or computer algorithm… Information theorists say that repetitive sequences are compressible, whereas complex sequences are not. To be compressible means a sequence can be expressed in a shorter form or generated by a shorter number of characters… Information scientists typically equate “complexity” with “improbability,” whereas they regard repetitive or redundant sequences as highly probable
    In our parable, … Smith’s sequence [the ten digits comprising Jones's telephone number] was specifically arranged to perform a function, whereas Jones’s [a random sequence of ten digits] was not. For this reason, Smith’s sequence exhibits what has been called specified complexity, while Jones’s exhibits mere complexity. The term specified complexity is, therefore, a synonym for specified information or information content.

    Page 352:
    Dembski notes that we invariably attribute events, systems, or sequences that have the joint properties of “complexity” (or small probability) and “specification” to intelligent causes – to design – not to chance or physical-chemical necessity. Complex events or sequences of events are extremely improbable and exhibit an irregular arrangement that defies description by a simple rule, law or algorithm. A specification is a match or correspondence between an observed event and a pattern or set of functional requirements that we know independently of the event in question. Events or objects are “specified” if they exhibit a pattern that matches another pattern that we know independently.

    Pages 352-353 (referring to students at a lecture who inferred intelligent design – i.e. a set-up – when they saw a “randomly” selected student open a combination lock on the first try):

    When John (my plant) turned the dial in three ways to pop the lock open, the other students realized that the event matched a set of independent functional requirements – the requirements for opening the lock that were set when its tumblers were configured… My students perceived an improbable event that matched an independent pattern and met a set of independent functional requirements. Thus for two reasons, the event manifested a specification as defined above.

    Pages 359-360:
    Since specifications come in two closely related forms, we detect design in two closely related ways. First, we can detect design when we recognize that a complex pattern of events matches or conforms to a pattern that we know from something else we have witnessed… Second, we can detect design when we recognize that a complex pattern of events has a functional significance because of some operational knowledge that we possess about, for example, the functional requirements or conventions of a system. If I observe someone opening a combination lock on the first try, I correctly infer an intelligent cause rather than a chance event. Why? I know that the odds of guessing the combination are extremely low, relative to the probabilistic resources, the single trial available.

    Pages 364-365, 367:
    Does the sequence of bases in DNA match a pattern that we know independently from some other realm of experience? If so, where does that pattern reside? …
    While certainly we do not see any pattern in DNA molecule that we recognize from having seen such a pattern elsewhere, we – or at least molecular biologists – do recognize a functional significance in the sequences of bases in DNA based upon something else we know. As discussed in chapter 4, since Francis Crick articulated the sequence hypothesis in 1957, molecular biologists have recognized that the sequence of bases in DNA produce a functionally significant outcome – the synthesis of proteins. Yet as noted above, events that produce such outcomes are specified, provided they actualize or exemplify independent functional requirements (or “hit” independent functional targets). Because the base sequences in the coding region of DNA do exemplify such independent functional requirements (and produce outcomes that hit independent functional targets in combinatorial space), they are specified in the sense required by Dembski’s theory…
    The nucleotide base sequences in the coding regions of DNA are highly specific relative to the independent requirements of protein function, protein synthesis, and cellular life. To maintain viability, the cell must regulate its metabolism, pass materials back and forth across its membranes, destroy waste materials, and do many other specific tasks. Each of these functional requirements, in turn, necessitates specific molecular constituents, machines, or systems (usually made of proteins) to accomplish these tasks. As discussed in chapters 4 and 5, building these proteins with their specific three-dimensional shapes depends upon the existence of specific arrangements of nucleotide bases in the DNA molecule.
    For this reason, any nucleotide base sequence that directs the production of proteins hits a functional target within an abstract space of possibilities…. The chemical properties of DNA allow a vast ensemble of possible arrangements of nucleotide bases. Yet within that set of combinatorial possibilities relatively few will – given the way the molecular machinery of the gene-expression system works – actually produce functional proteins. This smaller set of functional sequences, therefore, delimits a domain (or target or pattern) within a larger set of possibilities. Moreover, this smaller domain constitutes an independent pattern or target, since it distinguishes functional from non-functional sequences, and the functionality of nucleotide base sequences depends on the independent requirements of protein function.
    Therefore, any actual nucleotide sequence that falls within this domain or matches one of the possible functional sequences corresponding to it “hits a functional target” and exhibits a specification. Accordingly, the nucleotide sequences in the coding regions of DNA are not only complex, but also specified. Therefore, according to Dembski, the specific arrangements of bases in DNA point to prior intelligent activity

    My comments:
    I think Meyer’s definition of “specified” marks an advance on Dembski’s definition (below), which defines “specified” as being easily describable. However, the difference between the two definitions is relatively insignificant: as I argued above, if we know a pattern independently, it will either be because we have seen such a pattern before, or because it satisfies a functional requirement that we can understand from investigating it. Because we can readily make sense of a specified pattern, it follows that a specified pattern will be easily describable in our language.

  310. DEFINITION TWO (Dembski, 2008; Dembski, 2005)

    The following extracts are taken from Dembski, W. A. and Wells, J. “The Design of Life.” 2008. Foundation for Thought and Ethics, Dallas. The definitions are similar to those in Dembski’s 2005 paper, “Specification” (see below).

    Information (p. 314)
    Literally the act of giving form or shape to something. Because giving form to a thing rules out other forms it might take, information theory characterizes information as the reduction of possibilities or uncertainty. In classical information theory, the amount of information in a string of characters is inversely related to the probability of the occurrence of that string. Hence, the more improbable the string, the more uncertainty is reduced by identifying it and therefore the more information it conveys. Information defined in this way provides only a mathematical measure of improbability or complexity. It does not establish whether a string of characters conveys meaning, performs a function, or is otherwise significant.
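    The "amount of information" in this glossary entry is the standard self-information (surprisal) of classical information theory, I(p) = –log2(p). A minimal sketch:

```python
import math

def surprisal_bits(p: float) -> float:
    """Classical self-information: the more improbable the string, the more bits."""
    return -math.log2(p)

print(surprisal_bits(0.5))    # 1.0: a fair coin flip carries one bit
print(surprisal_bits(1e-10))  # ~33.2 bits for one specific ten-digit string
```

    Note that the measure says nothing about meaning or function, exactly as the entry’s last sentence states.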

    Descriptive Complexity (p. 311)
    A measure of the difficulty needed to describe a pattern.

    Probabilistic complexity (p. 318)
    A measure of the difficulty for chance-based processes to reproduce an event.

    Specified Complexity (p. 320)
    An event or object exhibits specified complexity provided that (1) the pattern to which it conforms identifies a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). Specified complexity is a type of INFORMATION.

    Complex specified information (p. 311)
    Information that is both complex and specified. Synonymous with SPECIFIED COMPLEXITY.

    Functional information (p. 313)
    (1) Information in the base sequence of a species’ DNA that codes for structures capable of performing biological functions; much of this functional information exhibits specified complexity. (2) More generally, patterns embodied in material structures that enable them to perform functions.

    (To be continued)

  311. Upright biped,

    Cabal, where did we get the idea for a wheel?

    Just by looking at a dung beetle in action?

  312. Let’s take the alphabet too:

    (God gave Moses clay tablets, even twice. Now when we have the Internet, why can’t he have a web page?)

    Be that as it may, WRT invention of the alphabet: The ancients wrote on clay tablets too. To begin with, the sound A was represented by a pictogram of the head of an ox, Apis.
    Looking at the A, you may see the two horns at the bottom with the snout at the top. That is actually the ox head rotated 90 degrees, because the orientation of the tablet during writing was later changed from vertical to horizontal.

    I suppose it all means that the ancients realized that the sound ‘A’, the initial vowel of Apis, could be represented by the image of Apis.

    I learned this from John Allegro’s “The Sacred Mushroom and the Cross” that I read forty years ago. A fascinating book, although I don’t think I agree with all of Allegro’s inferences.
    But it gives a fascinating insight into how a scientist works: to recreate, to read and to speak the Mesopotamian language!

    Inventing from scratch things we don’t have a clue about, don’t have a name for, that is not easy!

  313. DEFINITION TWO (continued)

    Dembski (2005) contains a very detailed discussion of what “easily describable” means with regard to specified complexity, as well as a context-independent procedure for ruling out chance. I’ve taken out the heavy math and the card examples, and tried to keep the focus on biology, as you requested:

    [Note: In the quotes below, PHI denotes “Phi with the subscript s,” and log denotes “log to base 2” – VJT.]

    Specificity
    [W]hat makes [a] pattern … a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance. It’s this combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes [a] pattern … a specification.

    This intuitive characterization of specification needs now to be formalized. We begin with an agent S trying to determine whether an event E that has occurred did so by chance according to some chance hypothesis H …S notices that E exhibits a pattern T…As a pattern, T is typically describable within some communication system. In that case, we may also refer to T, described in this communication system, as a description.

    S, to identify a pattern T exhibited by an event E, formulates a description of that pattern. To formulate such a description, S employs a communication system, that is, a system of signs. S is therefore not merely an agent but a semiotic agent.

    Each S [i.e. each agent – VJT] can therefore rank order these patterns in an ascending order of descriptive complexity, the simpler coming before the more complex, and those of identical complexity being ordered arbitrarily. Given such a rank ordering, it is then convenient to define PHI as follows:

    PHI(T) = the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T.

    Thus, if PHI(T) = n, there are at most n patterns whose descriptive complexity for S does not exceed that of T.

    [Now] imagine a dictionary of 100,000 (= 10^5) basic concepts. There are then 10^5 1-level concepts, 10^10 2-level concepts, 10^15 3-level concepts, and so on. If “bidirectional,” “rotary,” “motor-driven,” and “propeller” are basic concepts, then the molecular machine known as the bacterial flagellum can be characterized as a 4-level concept of the form “bidirectional rotary motor-driven propeller.” Now, there are approximately N = 10^20 concepts of level 4 or less, which therefore constitute the specificational resources relevant to characterizing the bacterial flagellum. Next, define p = P(T|H) as the probability for the chance [i.e. undirected – VJT] formation for the bacterial flagellum. T, here, is conceived not as a pattern but as the evolutionary event/pathway that brings about that pattern (i.e., the bacterial flagellar structure). Moreover, H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms. We may therefore think of the specificational resources as allowing as many as N = 10^20 possible targets for the chance formation of the bacterial flagellum, where the probability of hitting each target is not more than p. Factoring in these N specificational resources then amounts to checking whether the probability of hitting any of these targets by chance is small, which in turn amounts to showing that the product Np is small.

    The negative logarithm to the base 2 of this last number, –log(N*p), we now define as the specificity of the pattern in question. Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom PHI measures specificational resources, the specificity sigma is given as follows:

    sigma = –log[ PHI(T) * P(T|H) ].

    Note that T in PHI(T) is treated as a pattern and that T in P(T|H) is treated as an event (i.e., the event identified by the pattern).

    What is the meaning of this number, the specificity, sigma? To unpack sigma, consider first that the product PHI(T) * P(T|H) provides an upper bound on the probability (with respect to the chance hypothesis H) for the chance occurrence of an event that matches any pattern whose descriptive complexity is no more than T and whose probability is no more than P(T|H). The intuition here is this: think of S as trying to determine whether an archer, who has just shot an arrow at a large wall, happened to hit a tiny target on that wall by chance. The arrow, let us say, is indeed sticking squarely in this tiny target. The problem, however, is that there are lots of other tiny targets on the wall. Once all those other targets are factored in, is it still unlikely that the archer could have hit any of them by chance? That’s what PHI(T) * P(T|H) computes

    Note that putting the logarithm to the base 2 in front of the product (PHI(T) * P(T|H)) has the effect of changing scale and directionality, turning probabilities into number of bits and thereby making the specificity a measure of information. This logarithmic transformation therefore ensures that the simpler the patterns and the smaller the probability of the targets they constrain, the larger the specificity.
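    The quantities above fit in a few lines of code. The sketch below first reproduces the dictionary arithmetic from the flagellum example (10^5 basic concepts, levels 1 through 4), then computes sigma for a purely hypothetical chance probability; the 10^-150 figure is my placeholder, not a measured value.

```python
import math

# Specificational resources: with a dictionary of D = 10^5 basic concepts,
# the number of concepts of level 4 or less is D + D^2 + D^3 + D^4.
D = 10**5
PHI = sum(D**k for k in range(1, 5))   # = 100001000010000100000, i.e. ~1.00001 * 10^20

def specificity(phi: float, p: float) -> float:
    """Dembski's specificity: sigma = -log2(PHI(T) * P(T|H)).
    Summing logs avoids floating-point underflow when p is astronomically small."""
    return -(math.log2(phi) + math.log2(p))

# Hypothetical chance probability, purely for illustration:
print(specificity(PHI, 1e-150))  # roughly 432 bits
```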

    Specified Complexity
    Let us now return to our point of departure, namely, an agent S trying to show that an event E that has occurred is not properly attributed to a chance hypothesis H. Suppose that E conforms to the pattern T and that T has high specificity, that is, – log [ PHI(T) * P(T|H) ] is large or, correspondingly, PHI(T) * P(T|H) is positive and close to zero. Is this enough to show that E did not happen by chance? No. What specificity tells us is that a single archer with a single arrow is less likely than not (i.e., with probability less than 1/2) to hit the totality of targets whose probability is less than or equal to P(T|H) and whose corresponding patterns have descriptive complexity less than or equal to that of T. But what if there are multiple archers shooting multiple arrows? …It depends on how many archers and how many arrows are available.

    More formally, if a pattern T is going to be adequate for eliminating the chance occurrence of E, it is not enough just to factor in the probability of T and the specificational resources associated with T. In addition, we need to factor in what I call the replicational resources associated with T, that is, all the opportunities to bring about an event of T’s descriptive complexity and improbability by multiple agents witnessing multiple events. If you will, the specificity PHI(T) * P(T|H) (sans negative logarithm) needs to be supplemented by factors M and N where M is the number of semiotic agents (cf. archers) that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen (cf. arrows). Just because a single archer shooting a single arrow may be unlikely to hit one of several tiny targets, once the number of archers M and the number of arrows N are factored in, it may nonetheless be quite likely that some archer shooting some arrow will hit one of those targets. As it turns out, the probability of some archer shooting some arrow hitting some target is bounded above by M * N * PHI(T) * P(T|H). If, therefore, this number is small (certainly less than 1/2 and preferably close to zero), it follows that it is less likely than not for an event E that conforms to the pattern T to have happened according to the chance hypothesis H

    Moreover, we define the logarithm to the base 2 of M *N * PHI(T) * P(T|H) as the context-dependent specified complexity of T given H, the context being S’s context of inquiry:

    CHI-tilde = –log[ M * N * PHI(T) * P(T|H) ].

    Note that the tilde above the Greek letter chi indicates CHI-tilde’s dependence on the replicational resources within S’s context of inquiry. As defined, CHI-tilde is context sensitive, tied to the background knowledge of a semiotic agent S and to the context of inquiry within which S operates. Even so, it is possible to define specified complexity so that it is not context sensitive in this way. Theoretical computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history. This number sets an upper limit on the number of agents that can be embodied in the universe and the number of events that, in principle, they can observe. Accordingly, for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M*N will be bounded above by 10^120. We thus define the specified complexity of T given H (minus the tilde and context sensitivity) as

    CHI = –log[ 10^120 * PHI(T) * P(T|H) ].

    [T]here is never any need to consider replicational resources M·N that exceed 10^120 (say, by invoking inflationary cosmologies or quantum many-worlds) because to do so leads to a wholesale breakdown in statistical reasoning, and that’s something no one in his saner moments is prepared to do (for the details about the fallacy of inflating one’s replicational resources beyond the limits of the known, observable universe, see my article The Chance of the Gaps).

    It follows that if (10^120 * PHI(T) * P(T|H)) < 1/2 or, equivalently, that if CHI = –log[ 10^120 * PHI(T) * P(T|H)] > 1, then it is less likely than not on the scale of the whole [observable] universe, with all replicational and specificational resources factored in, that E should have occurred according to the chance hypothesis H. Consequently, we should think that E occurred by some process other than one characterized by H. Since specifications are those patterns that are supposed to underwrite a design inference, they need, minimally, to entitle us to eliminate chance. Since to do so, it must be the case that CHI = –log[ 10^120 * PHI(T) * P(T|H)] > 1, we therefore define specifications as any patterns T that satisfy this inequality. In other words, specifications are those patterns whose specified complexity is strictly greater than 1.

    As an example of specification and specified complexity in their context-independent form, let us return to the bacterial flagellum. Recall the following description of the bacterial flagellum given in section 6: “bidirectional rotary motor-driven propeller.” This description corresponds to a pattern T. Moreover, given a natural language (English) lexicon with 100,000 (= 10^5) basic concepts (which is supremely generous given that no English speaker is known to have so extensive a basic vocabulary), we estimated the complexity of this pattern at approximately PHI(T) = 10^20 (for definiteness, let’s say S here is me; any native English speaker with some knowledge of biology and the flagellum would do). It follows that –log[ 10^120 * PHI(T) * P(T|H)] > 1 if and only if P(T|H) < 0.5 * 10^-140, where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event, is the evolutionary pathway that brings about the flagellar structure (for definiteness, let’s say the flagellar structure in E. coli). Is P(T|H) in fact less than 0.5 * 10^-140, thus making T a specification? The precise calculation of P(T|H) has yet to be done.
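    The algebra behind the 0.5 * 10^-140 threshold can be checked numerically. A sketch (all constants are taken from the quoted passage; the probe values around the threshold are mine):

```python
import math

RESOURCES = 1e120  # Seth Lloyd's bound on bit operations in the observable universe
PHI = 1e20         # specificational resources for "bidirectional rotary motor-driven propeller"

def chi(p: float) -> float:
    """Dembski's context-independent CHI = -log2(10^120 * PHI(T) * P(T|H))."""
    return -math.log2(RESOURCES * PHI * p)

threshold = 0.5e-140
print(chi(threshold))  # ≈ 1.0: CHI > 1 exactly when P(T|H) < 0.5 * 10^-140
print(chi(threshold / 1000) > 1, chi(threshold * 1000) > 1)  # True False
```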

    My comments:
    Now can you all see where the Boolean comes in? Either the pattern in question will satisfy the inequality or it won’t. Only if it satisfies the inequality can we infer intelligent design.

    It may also have occurred to some of you that the simplicity of description is dependent on the language in which it is formulated. Dembski is aware of this, and defines PHI(T) in terms of the simplest description in any of the languages used by the agent S.

    Dembski is also offering scientists a challenge: all you have to do is show me an undirected mechanism whereby the bacterial flagellum can arise with a probability greater than 0.5 * 10^-140, and I’ll stop claiming that it was designed.

  314. DEFINITION THREE (Hazen et al., 2007 and Kalinsky, 2008)

    What Hazen and Kalinsky call functional information is one kind of functional complex specified information (FCSI).
    [Note: In the quotes below, E denotes “E with the subscript x,” and log denotes “log to base 2” – VJT.]
    Hazen’s definition:
    Quote from 2007 paper by Hazen et al.:

    Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define “functional information,” I(E), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, E (e.g., the RNA–GTP binding energy), I(E) = –log[F(E)], where F(E) is the fraction of all possible configurations of the system that possess a degree of function >= E. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree.
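    Hazen’s I(E) = –log[F(E)] can be computed exactly for a small toy system. The sketch below is my own illustration (the motif, alphabet, and length are arbitrary choices, not from the paper): a length-5 RNA string counts as “functional” if it contains a fixed motif, and we count the qualifying fraction by brute force.

```python
import math
from itertools import product

def functional_information(m_e: int, n: int) -> float:
    """Hazen et al.: I(E) = -log2(M(E)/N), where F(E) = M(E)/N."""
    return -math.log2(m_e / n)

# Toy "function" (mine): a length-5 RNA string is functional if it contains "GAU".
N = 4 ** 5
M = sum(1 for s in product("ACGU", repeat=5) if "GAU" in "".join(s))
print(M, N)                          # 48 of 1024 configurations qualify
print(functional_information(M, N))  # ~4.4 bits
```

    The rarer the function among all configurations, the higher the functional information, which is the whole point of the measure.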

    Kalinsky, 2008:
    [Note: In the quotes below, E denotes “E with the subscript x,” I denotes “I with the subscript nat,” and log denotes “log to base 2” – VJT.]

    Measuring functional information
    A method to measure functional information has recently been published by Hazen et al. whereby functional information is defined as
    I(E) = – log[M(E) / N]
    where E is the degree of function x, M(E) is the number of different configurations that achieves or exceeds the specified degree of function x, >= E, and N is the total number of possible configurations….

    (Kalinsky cites Hazen in footnote 2.)

    [T]he highest level of functional information that natural processes could reasonably be expected to produce for a given function would be the case where only one functional configuration would reasonably be found in R trials, or

    I = –log[1 – (1 – 0.5)^(1/R)]. (3)

    Kalinsky also includes a method of inferring intelligent design for life on Earth:

    Given that there is no known upper limit for the amount of functional information a mind can produce, for any effect requiring or producing functional information, intelligent design is the more likely explanation if

    I(E) > I. (4)

    The greater the difference between I(E) and I, the more likely it is that intelligent design was required. It will be assumed, for simplicity, that the probability that mindless natural processes can achieve I is 1 and decreases probabilistically for I(E) > I. The probability that intelligent design can achieve I(E) will be assumed to be 1 for any finite amount of functional information. This is a reasonable assumption, given our observations of what intelligence can do and the apparent absence of any upper limit.

    It is usually assumed that the origin and diversification of life is not a blind search. Actual mutations, insertions, deletions, and genetic drift may be chance events, but natural selection essentially guides the search and, hence, the search is not blind… Of course, this raises the question, does natural selection, itself, require intelligent design? The fatal mistake made by many who appeal to natural selection is the assumption that natural selection, itself, does not require intelligent design.

    Although natural selection is credited with somehow discovering the right combination of nucleotides to code for, say, proteins like SecY or RecA, there is a great deal of vagueness about how it actually is supposed to do this, and not just for two proteins, but for thousands

    Natural selection requires a fitness function. If a given protein is a product of natural selection operating within a fitness landscape, then sufficient functional information required to find that protein in an evolutionary search must be encoded within the fitness function…To summarize; if natural selection or a fitness function are credited with producing a given amount of functional information, then if that functional information exceeds I, by the method proposed in this article, ID is required to properly configure the fitness function.

    Regardless of whether one prefers a genetic approach or a metabolic approach, we do know that at some point, proteins must be produced, or at least the information coding for stable, folded proteins must be achieved. We can, therefore, take all origin of life scenarios and put them into a ‘black box,’ which performs an evolutionary search and outputs the stable folded proteins that are permitted by physics. It is not necessary to know what the processes within this black box do; all we need to know is the output. The output can be evaluated two ways, one way is to assume that the black box is performing a blind search which, of course, requires no intelligent design, and the other way is to assume that some sort of fitness function is operating within the black box which may or may not require intelligent design, depending upon how much functional information is required for the output. To estimate I for a prebiotic, origin of life search, we must estimate the number of trials available for a blind search. We will then be in a position to estimate I and compare it with the functional information required to produce a minimal genome to see if a fitness function would be necessary that would require intelligent design. Since we do not know what processes could perform the search, let us be extremely generous.

    Taylor et al. have estimated that the mass of the earth would equal about 10^47 proteins, of 100 amino acids each. If we suppose that the entire set of 10^47 proteins reorganized once per year over a 500 million year interval (about the estimated time period for pre-biotic evolution), then that search permits about 10^55 options to be tried. Using Eqn. (3), I = approx. 185 bits of functional information. Of course, this scenario is much more generous than any scenario under consideration, but at least we will not be underestimating I. If I(E) requires more than I, then we can assume that either a fitness function requiring intelligent design must be included in the black box, or intelligent design is operating in some other fashion to properly encode the functional information.
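    Kalinsky’s 185-bit figure can be reproduced from Eqn. (3) with R = 10^47 proteins times 5 * 10^8 yearly reorganizations. A sketch (the expm1 call is a numerical-precision detail of mine, not Kalinsky’s: for R this large, computing 0.5^(1/R) directly would round to exactly 1.0 in double precision):

```python
import math

def i_nat(R: float) -> float:
    """Kalinsky's Eqn (3): I_nat = -log2(1 - (1 - 0.5)^(1/R)).
    -expm1(ln(0.5)/R) computes 1 - 0.5**(1/R) without catastrophic rounding."""
    return -math.log2(-math.expm1(math.log(0.5) / R))

# 10^47 proteins reorganizing once per year over 5 * 10^8 years:
R = 1e47 * 5e8
print(i_nat(R))  # ~185.6 bits, matching the "approx. 185 bits" in the quote
```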

    My comments:
    It should be clear from the foregoing that when Hazen (and by extension, Kalinsky) write about functional information, they mean information that is complex, specified and functional – i.e. FCSI. Although they don’t define “specified” as such, it is of course true that FCSI is a subset of CSI.
    It should also be clear that the authors’ definition of functional information pertains to a specific function x.
    It is also interesting that Kalinsky doesn’t see natural selection as something opposed to intelligent design, but as something which may require intelligent design, if it is “rigged” enough.
    Kalinsky’s design detection approach is different from Dembski’s, insofar as it makes use of estimates regarding the abundance of amino acids on the early Earth. (It also omits the possibility of life arising in space.) It would be interesting to compare the two methods and see if they come up with similar results for which patterns are products of intelligent design.

  315. DEFINITION FOUR (Abel and Trevors, 2005.)
    Quotes from 2005 paper by Abel and Trevors:

    “Sequence complexity falls into three qualitative categories:
    1. Random Sequence Complexity (RSC),

    2. Ordered Sequence Complexity (OSC), and
    3. Functional Sequence Complexity (FSC).

    Random Sequence Complexity (RSC)
    A linear string of stochastically linked units, the sequencing of which is dynamically inert, statistically unweighted, and is unchosen by agents; a random sequence of independent and equiprobable unit occurrence.

    Ordered Sequence Complexity (OSC)
    A linear string of linked units, the sequencing of which is patterned either by the natural regularities described by physical laws (necessity) or by statistically weighted means (e.g., unequal availability of units), but which is not patterned by deliberate choice contingency (agency).
    Ordered Sequence Complexity is exampled by a dotted line and by polymers such as polysaccharides. OSC in nature is so ruled by redundant cause-and-effect “necessity” that it affords the least complexity of the three types of sequences. The mantra-like matrix of OSC has little capacity to retain information.

    Functional Sequence Complexity (FSC)
    A linear, digital, cybernetic string of symbols representing syntactic, semantic and pragmatic prescription; each successive sign in the string is a representation of a decision-node configurable switch-setting – a specific selection for function.
    FSC is a succession of algorithmic selections leading to function. Selection, specification, or signification of certain “choices” in FSC sequences results only from non-random selection. These selections at successive decision nodes cannot be forced by deterministic cause-and-effect necessity. If they were, nearly all decision-node selections would be the same. They would be highly ordered (OSC). And the selections cannot be random (RSC). No sophisticated program has ever been observed to be written by successive coin flips where heads is “1” and tails is “0.”…

    Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC)….

    In summary, sequence complexity can be 1) random (RSC), 2) ordered (OSC), or 3) functional (FSC). OSC is on the opposite end of the bi-directional vectorial spectrum of complexity from RSC. FSC is usually paradoxically closer to the random end of the complexity scale than the ordered end. FSC is the product of non-random selection. FSC results from the equivalent of a succession of integrated algorithmic decision node “switch settings.” FSC alone instructs sophisticated metabolic function. Self-ordering processes preclude both complexity and sophisticated functions. Self-ordering phenomena are observed daily in accord with chaos theory. But under no known circumstances can self-ordering phenomena like hurricanes, sand piles, crystallization, or fractals produce algorithmic organization.

    Algorithmic “self-organization” has never been observed despite numerous publications that have misused the term. Bona fide organization always arises from choice contingency, not chance contingency or necessity.

    My comments:
    Neither RSC nor OSC constitutes an example of specified information. Only FSC does that. FSC is simply a biological version of FCSI. The authors provide a detailed account of bio-functionality, but do not attempt to provide a definition of the term “specified.” (See Definitions One and Two above for this.)

  316. DEFINITION FIVE (Durston et al., 2007; Abel, 2009):

    [Note: In the quotes below, X denotes “X with the subscript f,” and SUM denotes “sigma” – VJT.]

    Abel, 2009, cites the definition in Durston et al., 2007:

    1) Prescriptive Information (PI) … PI refers not just to intuitive or semantic information, but specifically to linear digital instructions using a symbol system (e.g., 0’s and 1’s, letter selections from an alphabet, A, G, T, or C from a phase space of four nucleotides). PI can also consist of purposefully programmed configurable switch settings that provide cybernetic controls.

    2) Bona fide Formal Organization … By “formal” we mean function-oriented, computationally halting, integrated-circuit producing, algorithmically optimized, and choice-contingent at true decision nodes (not just combinatorial bifurcation points).

    Note that statistical order and pattern have no more to do with function and formal utility than does maximum complexity (randomness). Neither order nor complexity can program, compute, optimize algorithms, or organize.
    A law of physics also contains very little information because the data it compresses is so highly ordered. The best way to view a parsimonious physical law is as a compression algorithm for reams of data. This is an aspect of valuing Occam’s razor so highly in science. Phenomena should be explained with as few assumptions as possible. The more parsimonious a statement that reduces all of the data, the better [188, 189]. A sequence can contain much order with frequently recurring patterns, yet manifest no utility. Neither order nor recurring pattern is synonymous with meaning or function.
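    The compression point in the passage above can be illustrated with a general-purpose compressor. zlib is my choice here, not anything from Abel's paper: a highly ordered sequence compresses to almost nothing, while a random sequence over the same four-letter alphabet cannot go much below its entropy of about 2 bits per symbol.

```python
import random
import zlib

random.seed(0)

# OSC-like: one short pattern repeated (highly ordered, low information).
ordered = ("AG" * 5000).encode()

# RSC-like: 10,000 symbols drawn at random from a four-letter alphabet.
rand = "".join(random.choice("ACGT") for _ in range(10000)).encode()

# zlib plays the role of a "parsimonious law" compressing reams of data:
# the ordered sequence shrinks dramatically, the random one hardly at all.
print(len(ordered), "->", len(zlib.compress(ordered)))
print(len(rand), "->", len(zlib.compress(rand)))
```

    Note that on this test a functional sequence (FSC) would score much like the random one, which is Abel's own point that compressibility alone cannot detect function.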

    Prescriptive Information (PI) cannot be reduced to human epistemology. To attempt to define information solely in terms of human observation and knowledge is grossly inadequate. Such anthropocentrism blinds us to the reality of life’s objective genetic programming, regulatory mechanisms, and biosemiosis using symbol systems ….

    Well, what about a combination of order and complexity? Doesn’t that explain how prescriptive information comes into being?

    Three subsets of linear complexity have been defined in an abiogenesis environment. These subsets are very helpful in understanding potential sources of Functional Sequence Complexity (FSC) as opposed to mere Random Sequence Complexity (RSC) and Ordered Sequence Complexity (OSC). FSC requires a third dimension not only to detect, but to produce formal utility. Neither chance nor necessity (nor any combination of the two) has ever been observed to produce non-trivial FSC. Durston and Chiu at the University of Guelph developed a method of measuring what they call functional uncertainty (H). They extended Shannon uncertainty to measure a joint variable (X, F), where X represents the variability of data, and F its functionality. This explicitly incorporated the empirical knowledge of embedded function into the measure of sequence complexity:

    H(X(t)) = -SUM[P(X(t))* log P(X(t))] (2)

    where X denotes the conditional variable of the given sequence data (X) on the described biological function f which is an outcome of the variable (F). The state variable t, representing time or a sequence of ordered events, can be fixed, discrete, or continuous. Discrete changes may be represented as discrete time states. Mathematically, the above measure is defined precisely as an outcome of a discrete-valued variable, denoted as F={f}. The set of outcomes can be thought of as specified biological states.
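    The functional uncertainty above is ordinary Shannon uncertainty computed over a set of sequences known to share the function f. A minimal sketch of the per-site calculation, using an invented toy alignment (Durston et al. work from real protein family alignments, not data like this):

```python
import math
from collections import Counter

def shannon_uncertainty(column):
    """H = -sum p(x) * log2 p(x), estimated from the symbols
    observed at one aligned site (one outcome of the variable X)."""
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Toy alignment of four sequences assumed to share the same function.
alignment = ["ACGA", "ACGT", "ACGC", "ACGA"]
columns = list(zip(*alignment))

# Sites constrained by function show low uncertainty; free sites show high.
for i, col in enumerate(columns):
    print(i, round(shannon_uncertainty(col), 3))

# Durston et al. estimate functional information per site as the drop
# from the null-state uncertainty (log2(4) = 2 bits for nucleotides)
# to the functional-state uncertainty, summed over sites ("fits").
null_H = math.log2(4)
fits = sum(null_H - shannon_uncertainty(col) for col in columns)
print(round(fits, 3))  # 6.5 for this toy alignment
```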

    My comments:
    As for Definition Four, except that the authors’ discussion of functionality is much more technically advanced in this 2009 paper. FSC is simply a biological version of FCSI. The authors provide a very detailed account of bio-functionality, but do not attempt to provide a definition of the term “specified.” (See Definitions One and Two above for this.)

    That’s all for the time being. It’s taken many hours to put all this together. I’ll be back later.

  317. When I wrote that it is a mistake for ID advocates “to hitch their wagon to the faulty “tornado in a junkyard” argument,” Jerry wrote, “ID makes no such argument.”

    Yes, the “pure chance hypothesis” of just computing the odds of all the components happening at once rather than taking possible step-wise creation into account (configuration rather than history) is the “tornado in the junkyard” argument. That’s precisely what I have been discussing.

    But you have agreed that in theory step-wise creation will yield a different probability than pure chance, and I’m satisfied with that at this time.

  318. Aleta:

    Yes, the “pure chance hypothesis” of just computing the odds of all the components happening at once rather than taking possible step-wise creation into account (configuration rather than history) is the “tornado in the junkyard” argument.

    Yet ID does not make that argument.

    Regardless of what you say or think, the design inference considers both chance and necessity acting together.

    Now it seems that you refuse to accept that fact.

    And that is why discussing this with you is a waste of time.

    Also if someone could demonstrate there is a possible step-wise creation via blind and undirected processes then the design inference would be in trouble.

    Mutation and selection- step-wise.

    Well selection only “works” when function is present.

    And without selection all you have is some just-so story.

  319. In order to be a candidate for natural selection a system must have minimal function: the ability to accomplish a task in physically realistic circumstances.- M. Behe page 45 of “Darwin’s Black Box”

    He goes on to say:

    Irreducibly complex systems are nasty roadblocks for Darwinian evolution; the need for minimal function greatly exacerbates the dilemma. – page 46

  320. As to Aleta’s scenario in comment 169 what a load of tripe.

    How did the side with a 1 get sticky Aleta?

    So yes, if there is agency involvement in the initial conditions then all bets are off, and chance and necessity in this case would be due to the agency involvement.

    Jerry and VJ you guys are wasting your time.

    That is fine if you don’t care, but trying to educate people who just don’t care to learn is a fruitless enterprise.

  321. Cabal

    The theory of intelligent design (ID) holds that certain features of the universe and of living things are best explained by an intelligent cause rather than an undirected process such as natural selection.

    That does not involve the supernatural in the least. Why would you hold intelligence to be synonymous with the supernatural?

    Other evidence challenges the adequacy of natural or material causes to explain both the origin and diversity of life.

    I’d say the other evidence clause refers to evidence other than that presented by ID.

  322. Well done, vjtorley.

  323. Aleta, check out vjtorley’s posts starting at 310 and especially 315.

    Your concerns regarding CSI being a pure chance hypothesis are specifically addressed.

    Also here is a link to a pdf of a Dembski paper addressing them.

  324. vjtorley at 318,

    As for Definition Four, except that the authors’ discussion of functionality is much more technically advanced in this 2009 paper. FSC is simply a biological version of FCSI. The authors provide a very detailed account of bio-functionality, but do not attempt to provide a definition of the term “specified.” (See Definitions One and Two above for this.)

    That’s all for the time being. It’s taken many hours to put all this together. I’ll be back later.

    Thank you very much for taking the time to pull all this information together. It’s very convenient to have it all in one place.

    As it turns out, I actually have read all of your references in my search for a definition of CSI that ID proponents agree uniquely identifies design and that takes into account known physics, chemistry, and evolutionary mechanisms. Unfortunately, none of your referenced materials do that. Most suffer from the assumption of a uniform probability distribution (the tornado in a junkyard fallacy described by Aleta). The few that don’t are never shown to be applied to real biological artifacts, primarily because “specification” is such a vague concept.

    I look forward to you addressing those issues, when you have the time.

    Thanks again for the research.

  325. “Jerry and VJ you guys are wasting your time.”

    I often write these things to make clear my own thoughts. I am not a good writer, but if I write these things enough times, and know where to look for them in the future, I can eventually make them clearer. People like Aleta and Mustela Nivalis and Joseph Backwards are not really the target. They cannot be convinced of anything, but others reading might be.

    The whole tone of the debate here and in other places indicates that we and others, by answering these questions, are having an effect. I see the answers by Dembski, Behe and those here mirrored on other sites more than I did 3-4 years ago. For example, people are using the term “information” in other places to describe the evolution problem. So I do not consider it wasted and will read VJTorley’s comments to see what can be distilled out of them for the future.

    The argument that it was done piece by piece via a cumulative process is the stock argument for the origin of the cell and the complicated proteins in it. It is a specious argument, and we have to spend more time on it because it keeps coming up and people like Mustela Nivalis think it is a given. However, it assumes certain things that are never enunciated. And when these are expressed it becomes clear that the basic step-wise scenario has lots of problems and cannot hold up. So this gradualist argument for the building of proteins will come up again, and maybe next time I can answer it even more clearly.

    Well Backward Joseph took a dead thread and turned it into 320+ comments.

  326. Jerry,

    Understood- practice makes perfect and all of that.

    Good luck with that…

  327. Mr Vjtorley,

    Thank you for your efforts to bring all of this material into one place. I really appreciate it.

    I responded to your citation of Kalinsky above @290. With the conversation moving quickly, I hope you got a chance to see it.

    Abel never proves that his categories of OSC, RSC, and FSC do not overlap or are complete, nor does he provide an effective procedure for deciding into which category something will fall. This is the sad pattern of his papers: they consist mainly of definitions and assertions about the categories defined, without proofs or evidence.

    If we examined digits 1000-1100 from the numbers 1/7, 22/7, 2^(1/2), phi, pi, and e, how would Abel sort them into those three categories and/or the additional category “none of the above”?

    He also clearly would like to believe that the field of genetic programming does not exist.

  328. Mr Jerry,

    And when these are expressed it becomes clear that the basic step-wise scenario has lots of problems and cannot hold up.

    I think you are right to focus on the actual steps assumed to happen, and forgo the negative log 2 of something to the power of bignum probability calculations. As I see it, some of those steps are:

    Amino acids form abiotically somewhere, and accumulate somewhere.

    RNA nucleotides form abiotically somewhere, and accumulate (and form chains) somewhere.

    Accumulated AAs and RNA chains are brought together somewhere, kept in proximity to each other and other feedstock molecules created abiotically, at appropriate temperatures and pressures.

    AAs and RNAs associate with each other in a non-random way that leads to the accumulation of functional chains of longer and longer length.

    Phospholipid bilayers become the common method of enclosing AAs, RNAs and other chemicals.

    These are some of the basic steps which need to be critiqued strongly, in my opinion.

  329. jerry at 327,

    People like Aleta and Mustela Nivalis and Joseph Backwards are not really the target. They cannot be convinced of anything but others reading might be.

    This is insulting and unsupportable. If I weren’t genuinely interested in understanding the putative positive evidence for ID, I wouldn’t spend so much of my limited free time participating here. If you have real evidence, I will consider it fairly.

    You need to understand that new claims get questioned and challenged. That’s a good thing. It helps to clarify the thinking behind them and provides an opportunity for the proponents of those claims to support them with objective, empirical evidence. The biggest insult that those of us unconvinced by current ID arguments could make would be to ignore you. Is that what you want?

    You also need to understand that pointing the finger of blame at your opponents ignores the very real possibility that you just haven’t made your case well enough. I still haven’t seen a calculation for CSI, as described in No Free Lunch, for a real biological artifact, taking into consideration known physics, chemistry, and evolutionary mechanisms. This despite repeated, polite requests. I’ve heard claims that it could be done and claims that it has been done, but no one has filled in that middle bit of actually doing it. Look to the beam in your own eye before laying your failures at the feet of your opponents.

    To be fair, there are ID proponents who understand this; vjtorley and CJYman come immediately to mind. If you want to really promote ID, you need to learn it too.

  330. Mustela Nivalis,

    The theory of evolution doesn’t have any such calculations.

    There isn’t any measurement for Common Descent via an accumulation of genetic accidents.

    EVERY test of the premise (Common Descent) is extremely subjective.

    That said, there aren’t any known physical, chemical, or evolutionary mechanisms* that can bring forth living organisms from non-living matter.

    There aren’t any known processes that can “evolve” a (bacterial) flagellum in a population of bacteria that never had one.

    Also you don’t “calculate CSI for a real biological artifact”.

    You measure the information to see if CSI is present.

    You do that by counting the bits- 2 bits per nucleotide- of a functioning biological system.

    Then to refute the design inference- if one is made- all someone has to do is demonstrate that some blind, undirected process can account for it.
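    Joseph's counting rule above is simple enough to write down. A sketch, with a hypothetical example sequence (the 500-bit threshold is Dembski's universal probability bound from No Free Lunch, mentioned here only for scale):

```python
import math

def capacity_bits(sequence, alphabet_size=4):
    """Counting rule from the comment above: log2(4) = 2 bits per
    nucleotide, applied to a functional sequence. This measures
    storage capacity, not how the sequence arose."""
    return len(sequence) * math.log2(alphabet_size)

gene = "ATGGCGTTTAAACCC"        # hypothetical 15-nt coding fragment
print(capacity_bits(gene))      # 30.0 bits

# Under Dembski's 500-bit universal probability bound, this rule would
# flag any functional sequence longer than 500 / 2 = 250 nucleotides.
print(500 / math.log2(4))       # 250.0
```

    The refutation route Joseph describes is then empirical, not arithmetical: show a blind process producing such a sequence.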

    That said what Jerry said is true.

    You don’t have any interest in learning.

    That is evident with your “evolutionary mechanism” tripe.

    BTW Meyer takes into account exactly what you say IDists do not- read “Signature in the Cell”.

  331. “You also need to understand that pointing the finger of blame at your opponents ignores the very real possibility that you just haven’t made your case well enough. I still haven’t seen a calculation for CSI, as described in No Free Lunch, for a real biological artifact, taking into consideration known physics, chemistry, and evolutionary mechanisms.”

    You are addressing someone who is critical of using the CSI concept for evolution for the reasons I spelled out in #233. And I can show you other places where I criticized the attempt to use the concept for evolution for the exact same reasons.

    I base my assessment on the overall behavior of people here, and you fit the stereotype pretty well. Each one of your replies could be analyzed based on what you say and do not say. More often than not it is what you do not say, or how you distort what others say. As I said, you are not the target for any of my comments, though I will use what people like you say to form my comments.

    I only read a little of what you say because it is often irrelevant as far as I am concerned. I haven’t seen a valid objection you have made yet that couldn’t be answered. If the others can get CSI defined well enough so it can be applied to evolution, then fine. If they cannot, then FSCI, which is a subset of CSI, will do; it is easy to understand and less difficult to operationalize.

  332. “AAs and RNAs associate with each other in a non-random way that leads to the accumulation of functional chains of longer and longer length.”

    This simple statement is great speculation and tries to hide how immense the problem is, but it also begs the question of whether any functions they have will also be functions useful for life. Otherwise the process will have to start all over again. There is a lot of question-begging here. I wouldn’t mind this so much if the researchers doing this would admit the highly speculative chain they are proposing and let that admission get into textbooks. Instead students and the public get the impression that it is just around the corner.

    Whether expressed as something to the power of a huge number, or as its negative log 2, it is a good way to show the enormity of the task. I can see why you want to avoid it. Yes, all the king’s multiverses and all the king’s monkeys couldn’t put the cell together again.

  333. jerry at 333,

    If the others can get CSI defined well enough so it can be applied to evolution, then fine. If they cannot, then FSCI, which is a subset of CSI, will do; it is easy to understand and less difficult to operationalize.

    You have yet to define FSCI in a mathematically rigorous fashion, nor have you shown it to take into consideration known physics, chemistry, or evolutionary mechanisms, nor have you demonstrated that it is a clear indication of intelligent intervention.

    Let’s see a calculation for a real biological artifact, that is applicable given what we know about the real world.

  334. Mr Jerry,

    This simple statement is great speculation and tries to hide how immense the problem is, but it also begs the question of whether any functions they have will also be functions useful for life.

    I lost a message referencing Yarus’ recent paper, but I think that is the place to start to understand that this is not just speculation. It is the place to focus attention.

  335. jerry,

    I fully concur with Mr. Nivalis. His point about valid CSI calculations — or an obvious lack of such calculations — still stands. You can scour ID works until doomsday and not find any CSI (or FSCI, or FCSI) analysis that allows for a possibility of Darwinian factors. Assuming tractability of such probability calculations, taking into account physical laws in addition to random mutations could significantly shrink CSI totals. As it is, uniform distributions form a singular basis for CSI claims, and applicability of such quantifications to biological organisms is doubtful. Why IDists don’t work to fill this lacuna is a puzzling conundrum.

    I trust that my communication is passably lucid.

    Cordially yours,
    R0b

  336. R0b–You can scour ID works until doomsday and not find any CSI (or FSCI, or FCSI) analysis that allows for a possibility of Darwinian factors.

    R0b, CSI as has been discussed here involves the structure of proteins and DNA. What Darwinian factors would be involved in the formation of proteins and DNA?

  337. “You have yet to define FSCI in a mathematically rigorous fashion, nor have you shown it to take into consideration known physics, chemistry, or evolutionary mechanisms, nor have you demonstrated that it is a clear indication of intelligent intervention.”

    Mustela Nivalis wonders why no one takes him seriously. FSCI for evolution is precisely defined and is the transcription/translation process used in every biology book on the planet. The relationship is accurately defined by the codon table mapping DNA triplets to amino acids.

    The conclusion is that it is most likely intelligently designed because of the immense improbability of it (1000 interrelated parts) and the fact that no natural process has ever been shown to provide anything similar.

    As I said you are just a foil for explaining things. There is no objective to get you to assent to anything. But you keep on saying silly things and distort replies so it is possible to clear them up using your inappropriate responses. So keep up the good work, you are helping the ID cause enormously.

  338. jerry at 339,

    FSCI for evolution is precisely defined and is the transcription/translation process used in every biology book on the planet

    First you defined it as 4 to the power of the length of the genome, then 4 to the power of the length of a coding region, now as transcription/translation. Clearly those evil opponents of ID are completely unreasonable when they say you haven’t provided a rigorous definition.

    The conclusion is that it is most likely intelligently design because of the immense improbability of it (1000 interrelated parts)

    At least you are consistent with your assumption of de novo creation. That means your claims are still not applicable to the real world, and that you continue to ignore everyone who has pointed this out to you in great detail, but you are consistent.

  339. Nakashima,

    The Yarus paper looks very interesting, but my biochemistry needs some brushing up. Now how to figure out where all those RNAs came from and how to build the information superhighway. ATP synthase, here we come, ready or not.

    Couldn’t find the structure of a riboswitch.

  340. Nakashima-san,

    Yarus et al. is interesting, but it opens up other issues.

    For one, the RNA, once latched on to the amino acid via stereochemistry, needs a mechanism for releasing it; otherwise there will be an issue with folding once the chain is complete.

    Never mind the problem of getting the proper RNAs and amino acids in the first place.

  341. “First you defined it as 4 to the power of the length of the genome, then 4 to the power of the length of a coding region, now as transcription/translation. ”

    It has always been about the transcription/translation process. You should read a biology book to understand that the discussion of DNA and amino acids means this.

    I have been consistent all along. From nearly a year ago

    http://www.uncommondescent.com.....ent-305465

    http://www.uncommondescent.com.....ent-308475

    From six months ago

    http://www.uncommondescent.com.....ent-326981

    From last month

    http://www.uncommondescent.com.....ent-343760

    “At least you are consistent with your assumption of de novo creation. That means your claims are still not applicable to the real world, and that you continue to ignore everyone who has pointed this out to you in great detail, but you are consistent.”

    This is one of the more inept comments I have seen here in years. On an ID site, someone makes an ID conclusion, and those claims are not applicable to the real world. By what criteria?

    Who pointed what out?

  342. Hi, question from a newbie; any naivete on my part is correctable. I’m intellectually curious about ID.

    For now, though, I’m talking about information and probability.

    For the moment, let’s leave aside alleged macroevolution over geologic timeframes and look at one person.

    What is the FSCI of any living, breathing person, like myself? How likely is it that a new human would come into existence this year? Or, to simplify it, how likely is a new viable human genotype arising and manifesting its phenotype?

    If we are reasoning from a random reassortment of base pairs, it’s astoundingly unlikely that any person – able to live and breathe and move – would ever exist in the universe, much less happen to come into existence in that timeframe. That new functioning humans could continue to be born in our own lifetimes is mathematically implausible, from that argument.
    Four to what power? And of all those base pair arrangements, how many would create viable human organisms?

    However, if we take into account the set of parents available and their genes, and the known mechanisms for conception and growth from embryo to adult organism, it’s not that strange.

    We don’t usually posit that the continued birth of new functioning organisms every year is so unlikely as to prove intelligent design, because in our probability calculations we take into account the biological mechanisms that can produce astoundingly unlikely offspring from existing organisms.

    I am not trying herein to argue either way regarding “where the original complexity came from” – eg: the parents. That’s a different question; I’m only talking about calculating how likely or unlikely it is that a new living child could come into being this year.

    I’m saying that calculating the likelihood of a functioning genotype with and without taking into account the known biological mechanisms comes out radically different. Raising 4 to the number of base pairs is not the relevant calculation, because the new organism’s genotype is not arising from Brownian motion or random reassortment of base pairs, and no biologist would assert that it is.
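    Zeph's contrast can be made concrete in a few lines. The figures below are rough, round numbers chosen for illustration only (a ~3 Gbp genome and ~2 million heterozygous sites per person are ballpark values, not measurements from this thread):

```python
import math

genome_length = 3_000_000_000   # ~3 Gbp, rough human genome size
variable_sites = 2_000_000      # rough count of heterozygous sites

# "Tornado" model: every base drawn uniformly at random.
bits_uniform = genome_length * math.log2(4)

# Inheritance model: every base is copied from a parent; the only
# uncertainty is which allele is transmitted at each variable site
# (~1 bit each), ignoring mutation and recombination for simplicity.
bits_inherited = variable_sites * 1.0

print(f"uniform model:   {bits_uniform:.2e} bits")
print(f"inherited model: {bits_inherited:.2e} bits")
print(f"ratio: {bits_uniform / bits_inherited:.0f}x")
```

    The conclusion (a design inference for every newborn) does not follow under the second model, which is exactly Zeph's point about conditioning on known mechanisms.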

    This still leaves open the question of creating the parents and the reproductive mechanisms that greatly increased the likelihood of producing the new organism. Some believe that Darwinian evolution could do that, some believe that it cannot. That’s another argument. My focus is smaller: questioning any calculation which is implicitly based on the assumption that an organism’s genotype is randomly created from noise, without regard to known mechanisms and constraints.

    Moving beyond that simple question, I would be very interested in analysis which convincingly quantifies the amount of information which can be gained through biological mutation, variation and selection, and compares that to the information inherent in a working ecosystem (the organism focus is actually too narrow). Because that’s the relevant gap (or lack of gap) in information (what Darwinian evolution could have done vs what is needed), not the distance from random noise to a functioning organism.

    I’m new to learning about serious ID arguments, but not entirely new to information gain, in particular from so-called “genetic algorithms” in software. While these could hardly in themselves prove or disprove whether Darwinian evolution is “sufficient”, they do convince me that true information gain via generations of variation and selection is possible. Yes, I know these operate within an intelligently designed system (assuming folks like me might be called intelligent) – that’s not the point, if you read me carefully; I didn’t say the GA software’s generated information arose from randomness. I said there was information GAIN – that after running generations of GA simulations, there is new “knowledge” in the software (e.g. facial recognition patterns which nobody programmed in) which I as the designer did not have, and in fact still do not have unless I am able to ‘decode’ the meaning of the coefficients so artificially evolved. This is in fact information which did not exist prior to running the software, so there was an easily demonstrable net gain of total information or complexity.

    If Darwinian evolution can likewise increase information content over time (if software emulations thereof can do so, this seems plausible), then the questions are “can such mechanisms create *enough* new information over the given time period?” and “is there a way to bootstrap the initial learning system?”.
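    The kind of GA result Zeph describes can be reproduced in miniature. In this sketch (my own toy example, not any published system) the population never sees the hidden target; it only receives a fitness score, yet selection plus mutation drives it to encode the target anyway:

```python
import random

random.seed(1)
TARGET = [random.randint(0, 1) for _ in range(64)]   # hidden "environment"

def fitness(ind):
    # The GA sees only this score, never TARGET itself.
    return sum(a == b for a, b in zip(ind, TARGET))

def mutate(ind, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in ind]

pop = [[random.randint(0, 1) for _ in range(64)] for _ in range(50)]
start = max(fitness(ind) for ind in pop)   # ~32/64 by chance

for generation in range(300):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                                         # selection
    pop = [mutate(random.choice(survivors)) for _ in range(50)]  # variation

end = max(fitness(ind) for ind in pop)
print(start, "->", end)   # the population now largely encodes TARGET
```

    Whether such gains scale to biology is exactly the open question Zeph raises; the sketch only shows that variation plus selection can accumulate information the programmer never typed in.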

    I am very open to solid arguments that Darwinian evolution as we understand it is insufficient, so there must be some additional factors – including Intelligent Design (though that needs more positive evidence than “Darwinian evolution as we know it today is insufficient”).

    But merely calculating how unlikely it is that an organism or its genotype would arise in one pass from randomness appears to lack cognizance of the known mechanisms of biology OR computer science. The calculations will need to be far more nuanced than that.

    The problem here is that ID theorists would need to be flexible mentally to fairly compute “how much of the current complexity Darwinian evolution could have produced” to the same standards of scientific scrutiny they believe others must follow. There may be ID theorists up to this challenge (I said I’m new). Or that calculation may be currently beyond the grasp of Darwinian OR ID theorists alike, leaving the size of the gap unknown.

    From my experience, those who believe more information or order can never be created are naive about the state of current software; they are reasoning from “common sense” that has been made obsolete. Evolvable systems which increase information content are a practical reality today. That doesn’t show anything about Darwinian evolution except that it probably CAN create some information incrementally.

  343. I have another question as a newbie. I gather that there are different variations of ID, like ID with Common Descent and ID with Special Creation.

    Is the ID scientific discipline currently sufficiently robust that some ID schools are capable of falsifying and overcoming the arguments of other ID schools – leaving the Darwinians out of it?

    Certainly there are many varieties of Darwinian evolution and vigorous debate based on evidence, and change over time. I suspect that if (or when) ID is true science, it will have similar internal debate, and that consensus about the “accurate” version of ID will emerge and then change with new evidence. However, if it should be or later become mainly based on giving credence to “anything but Darwinian,” then it won’t grow into a science but will be or become mostly political. I don’t know the scientific ID community; is it evolving as a scientific discipline in itself, capable of sorting out its own house, or is it more a cultural phenomenon?

    Where would I find the most purely scientific ID community on the web? This site seems to mix ID science with other motivations and politics (global warming?), and I’m not disputing the value of that mixture – only asking if there is a more specifically scientific ID website?

    Thanks!

  344. vjtorley- I too salute you for all the work you’ve done to bring together this “information information,” as it were.

    I have a couple of questions regarding your introductory matter, specifically the paragraphs which say:

    [c) Complex specified information (CSI) is information that is both complex (i.e. highly improbable) and specified. An event is “specified” if it exhibits a pattern that matches another pattern that we know independently – either because we have seen such a pattern before, or because it satisfies a functional requirement that we can readily understand from investigating it. Because we can readily make sense of a specified pattern, it follows that a specified pattern will be easily describable in our language.

    (d) Information is just a mathematical measure of improbability or complexity.

    The first sentence of paragraph [c) says (among other things) that CSI is information that is specified. The second sentence explains what it means for an event to be specified. I don’t usually think of information as an event. Might it be better to say a thing is specified?

    Secondly, if as stated in paragraph (d), information is just a measure of complexity, what does it mean to say that information (itself a measurement of complexity) can be specified and complex? It seems like you are using the word information in two ways. It might be clearer to come up with another term for one or the other.

  345. vjtorley, thanks for the effort to summarize how terms central to ID are currently defined and used. According to your comment at 310

    Meyer also defines functional complex specified information (FCSI).

    Do you have a reference for this? Unfortunately, I haven’t read “Signature in the Cell” yet and cannot check it myself, but within your later definitions this statement is not repeated or corroborated. To my best knowledge this would be the first time that the term FCSI/FSCI is mentioned in printed ID literature. As Jerry said in his comment at 233, the term was coined here at UD:

    Then kairosfocus appeared for the first time and provided his thoughts and soon the term FSCI or functional specified complex information was being used.

    I guess KF, Jerry and all the others who contributed to the elaboration of the definition of FCSI/FSCI deserve some appreciation. If Meyer defines
    the term he should give credit to the guys mentioned above.

  346. vjtorley – thanks for all that work! I wonder if it could be made a post (or series of posts) in its own right?

  347. “If Meyer defines the term he should give credit to the guys mentioned above.”

    Kairosfocus pointed out that someone used the term “specified information” or “specified functional information” in 1978 in talking about OOL. I believe it was Orgel. It’s on the web someplace.

  348. Zeph,

    You ask a lot of questions and it could take about 400 comments to get answers to all of them. If you want to understand ID, there are several books. My favorites are the two by Behe, Darwin’s Black Box and especially The Edge of Evolution. Others may have their own favorites, and you seem to be interested in GAs, which might mean you would be interested in what Dembski and Marks are currently doing.

    At various times I have tried to summarize what ID is about in various comments. Some of these are linked on this thread way up at #110. It is a personal attempt and others might not agree. There are also a couple of long comments by myself and others on this thread that may be of use.

    There are a couple other places where ID is discussed on the net. See the links near the top of any page. Telic Thoughts is like this site with various threads on topics of interest. ARN is a more technical site and I am not sure how active it is.

    Information is key to the discussion, but I think you have a naive understanding of what it is about. If you take a population gene pool, it will have a lot more information than what is in any individual organism in the population. If the population reproduces using sexual reproduction, then the new organism will have a unique combination of information but still not have anything that is not in the gene pool. Theoretically the gene pool information can be increased through the mutation of one organism’s genome in the population, and the subsequent gene pool is then larger. It is also possible that environmental conditions may cause only certain combinations to survive while others disappear, in which case the population gene pool becomes restricted and loses information.
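    This bookkeeping can be made concrete with a toy sketch (all parameters are hypothetical and made up for illustration; “allele” here just means a letter at a position): mutation adds variants to the gene pool, while a selective bottleneck removes them.

```python
import random

random.seed(1)
GENOME_LEN = 20

def gene_pool_size(population):
    """Count distinct alleles (position, letter) present anywhere in the pool."""
    return len({(i, g[i]) for g in population for i in range(GENOME_LEN)})

# Start with a population of identical genomes: one allele per position.
population = ["A" * GENOME_LEN for _ in range(100)]
assert gene_pool_size(population) == GENOME_LEN

# Mutation: randomly change one letter in a few individuals -> the pool grows.
for _ in range(30):
    idx = random.randrange(len(population))
    pos = random.randrange(GENOME_LEN)
    g = population[idx]
    population[idx] = g[:pos] + random.choice("CGT") + g[pos + 1:]
after_mutation = gene_pool_size(population)

# Selection bottleneck: only genomes meeting a condition survive -> the pool shrinks
# (or at best stays the same), since survivors are a subset of the population.
survivors = [g for g in population if g[0] == "A"] or population[:10]
after_selection = gene_pool_size(survivors)

print(after_mutation, after_selection)
```

    Nothing here bears on whether the added variants are useful; it only illustrates the two directions the pool can move – mutation enlarging it, selection restricting it.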

    The key issue in the debate is whether any mutation or set of mutations to organisms in the population will increase information enough eventually to form what we call complex new capabilities through sexual reproduction. That is why The Edge of Evolution is such an important book. It argues (and I agree with its arguments) that it may be impossible to do so. It doesn’t argue that information cannot be added; the amount of information is not the issue, but the specific combination of it is. The best analogy I know of is language. Adding to a paragraph is not hard, but adding to it so it is coherent and more functional in what it conveys is almost impossible with random additions of letters, or even words if they are the unit of addition. People like to resort to large numbers or deep time to say that all is possible, but the actual numbers and the time allowed, even hundreds of millions of years, are usually not enough to find the possible solutions for what is necessary to drive the complex novel capabilities that appeared in the microbe-to-man scenario.

    I also recommend Dawkins’s new book, The Greatest Show on Earth, which is obviously anti-ID but which lays out a lot of the issues. Dawkins does a good job of showing how nature can tease out changes to a population by exploiting information already in the gene pool. He does not do anything to give credence to how information that could control major new functions could arise in the gene pool. This is the failing of Darwinian processes. Another good book to read is Denton’s Evolution: A Theory in Crisis. That is the book that started Behe thinking, and it is very good at laying out the issues, especially of micro vs. macro evolution and why deep time is not the answer.

    I also recommend trying to bring up these points on a new smaller thread, as this one takes forever to load on my wireless laptop, so I am sure others have the same problem. It has got to the point of being unwieldy.

  349.

    Just looking at a dung beetle in action?

    So, there is no thing that man has created by unique thought? The man that invented the wheel did so by observing a dung beetle.

    And, you know this from Shinola?

    Inventing from scratch things we don’t have a clue about, don’t have a name for, that is not easy!

    Not only is it not easy, you’ve indicated that it is not possible. Indeed, this is a comment that cannot be equivocated.

    Where did the idea of a transcendental being come from?

  350. Zeph, great questions.

    ID does not reject the claims upon which you base some of your more interesting observations.

    For instance, can information increase in evolution as per your software? Sure, if the proper algorithm is front-loaded into the evolutionary system.

    Here are some things to consider.

    What are the limits as to what your facial recognition software can become? Without any new input from the designer can it evolve into a web browser or spell-checker?

    Can a search algorithm happen by chance?

    The problem here is that ID theorists would need to be flexible mentally to fairly compute “how much of the current complexity Darwinian evolution could have produced” to the same standards of scientific scrutiny they believe others must follow.

    OK :-)

    You might also be interested in Behe’s blog responses to some of his critics.

    ID is a new science and there is no demand to accept its claims as definitive. One of the issues, however, is that the powers that be seem unwilling to consider the claims it raises, as can be seen by some of Behe’s other posts on his Amazon blog.

    Anyway, welcome to Uncommon Descent.

  351. Mr BiPed,

    Where did the idea of a transcendental being come from?

    Eating the wrong mushrooms. Schizophrenia.

    I don’t buy into the whole basis of this conversation, but the idea of a transcendental being isn’t a showstopper in it. How about “prime numbers”, “subordinated debentures”, or “souffle”? :)

  352. Jerry,

    Thanks for taking the time on your wireless laptop to reply to this unwieldy thread in some depth!

    The key issue in the debate is whether any mutation or set of mutations to organisms in the population will increase information enough eventually to form what we call complex new capabilities through sexual reproduction.

    Yes, as somebody newly intrigued by ID, that seems to be the key question.

    However, perhaps because of my naivete about information as you suggest, I have a somewhat different perspective about where information – or at least meaningful complexity – is added. I don’t see mutation as the key element; adding noise to a system is pretty easy in our entropic universe (and scrambling the genetic analogues in GA is easy). Where “the magic happens” is in selecting “more useful” noise from “less useful” noise!

    We can observe results when there is a conscious being doing the selection in plant and animal breeding, or when a mechanical algorithm selects survivors in GA – there can be a directed course of evolution. In the Darwinian approach, fitness to reproduce in the face of the environment and competition from other organisms is the “selector”, and is obviously much slower.

    Yet Darwinians believe that with enough trillions of organisms over billions of years, this enormous analog computer of life on earth can produce startling meaningful complexity. It’s easy to see why this could seem intuitively astounding and thus dubious. But frankly, I don’t trust these “reasoning from common experience” intuitions – they prove wrong way too often in other branches of science.

    I find both Darwinian and at least some ID hypotheses sufficiently credible to not discard out of hand because of “my gut feeling” nor any need to reduce cognitive dissonance with my faith. I need to see more objective analysis; and I accept that it may take a long time for a clear winner to emerge (to my satisfaction that is).

    Thanks for the book suggestions; I’ll look for them.

    By the way, I responded here because I found a lot of useful information in this thread (though it takes patience), and could ask my questions in that context. Any suggestions of a shorter post where this could best continue without being a non sequitur? (Or a better website for my kind of curious and non-dogmatic examination?)

    Thanks again. I would also be glad to hear from non ID folks here, who in my estimation make some very good points as well.

  353. tribune7

    Thanks for responding.

    What are the limits as to what your facial recognition software can become? Without any new input from the designer can it evolve into a web browser or spell-checker?

    Can a search algorithm happen by chance?

    You are correct that the face recognition GA is not going to evolve into a web browser. In particular, the “genetic units” it uses are more limited in scope – various coefficients that can be tweaked. (I was using this only to suggest that some useful complexity/information is gained by these mechanisms, not as a model for the whole system.)

    Evolution in a system is essentially a search algorithm for the most “useful” combinations of building blocks, but it cannot easily transcend the capabilities of those blocks. Luckily, in the physical world, DNA/RNA plus proteins is the ultimate tinkertoy set, and that set of building blocks is demonstrably capable of supporting incredible complexity – because it does. At question is whether these amazing building blocks can also evolve the current complexity using only natural selection pressure (and mutation/noise).

    Anyway, your latter question is the more interesting one – algorithms are probably closer to the analog of genotypes than are web browsers, when transposed to the digital realm. (Web browsers have co-evolved – in the whimsical sense – with human culture as interface elements between the biological complexity at issue here and the digital realm).

    I would say that in some meaningful sense, face recognition IS a search algorithm within a very complex space. But let’s take a simpler case, because that “search space” is very highly manufactured by us programmers for a specific purpose.

    More interesting would be “evolving” a search algorithm using building blocks of virtual machine code. The key here would be the selection criteria used as metrics. There needs to be some incremental payoff. I’m thinking that a sorting algorithm might make a better project, where being “more sorted” is rewarded. Can one evolve a sorting algorithm which creates “mostly sorted” lists from random lists, using only randomization and selection for “more sorted”, but no computer science theory of sorting?
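    For what it’s worth, the experiment could be roughed out like this (my own toy sketch, with every parameter invented for illustration): genomes are random sequences of compare-swap operations, and the only selection pressure is “produces more-sorted output” on a fixed set of random test lists – no sorting theory built in anywhere.

```python
import random

random.seed(0)
N = 8            # list length
GENES = 24       # compare-swap operations per genome
POP, GENS = 40, 120

def apply_network(genome, lst):
    """Run a genome (a list of (i, j) compare-swap pairs) on a copy of lst."""
    out = lst[:]
    for i, j in genome:
        a, b = min(i, j), max(i, j)
        if out[a] > out[b]:
            out[a], out[b] = out[b], out[a]
    return out

def sortedness(lst):
    """Fraction of adjacent pairs already in order (1.0 == fully sorted)."""
    return sum(lst[k] <= lst[k + 1] for k in range(len(lst) - 1)) / (len(lst) - 1)

def fitness(genome, tests):
    return sum(sortedness(apply_network(genome, t)) for t in tests) / len(tests)

def random_genome():
    return [(random.randrange(N), random.randrange(N)) for _ in range(GENES)]

def mutate(genome):
    """Copy the genome and randomize one compare-swap operation."""
    g = genome[:]
    g[random.randrange(GENES)] = (random.randrange(N), random.randrange(N))
    return g

tests = [random.sample(range(100), N) for _ in range(20)]
pop = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    # Keep the fitter half, refill with mutated copies of survivors.
    pop.sort(key=lambda g: fitness(g, tests), reverse=True)
    half = pop[:POP // 2]
    pop = half + [mutate(random.choice(half)) for _ in range(POP - POP // 2)]

best = max(pop, key=lambda g: fitness(g, tests))
baseline = sum(sortedness(t) for t in tests) / len(tests)
print(round(baseline, 2), round(fitness(best, tests), 2))
```

    With a fixed random seed the run is repeatable; the interesting measurement is how far above the random-list baseline the best evolved genome gets, using only randomization and selection for “more sorted.”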

    Any such experiment would be, obviously, vastly simplified compared to trillions of DNA/RNA/protein based organisms over billions of years. That is, it would have to be too artificial to “prove” anything fundamental; it would only be a suggestive datapoint for somewhat illuminating one corner of the problem.

    Has anyone done this yet?

  354. Zeph — Can one evolve a sorting algorithm which creates “mostly sorted” lists from random lists, using only randomization and selection for “more sorted”, but no computer science theory of sorting?

    You ask great questions. The selection criteria and the actions based on it would have had to have occurred by random events for this to be a rebuttal of ID. And we would even have to go deeper in that it would not be lists but items that randomly formed lists.

    But assuming this came about, how long would it take for this algorithm to produce something useful?

    Something else to consider: given infinity would it be possible for the face-recognition software to evolve into a browser?

    There is a commonly held view that given infinity — which of course we don’t have with evolution — anything is possible, but is it?

    Or consider this — what if the code in the face-recognition software were randomly changed akin to genetic mutations? Would it be more likely to evolve into a web-browser or become useless?

    Since you’re new, one thing to keep in mind is that ID is not anti-evolution. There is a common misconception that it is.

    Also, keep on the lookout for posts by Gil Dodgen. You and he seem to have similar interests.

  355. Jerry,

    Thanks for posting those links in #110; very interesting. You are too modest about your prose ability, you do explain your viewpoint well. I’m more intrigued than ever from your description.

    Alas, I haven’t yet found a better blog post than this (sufficiently recent, sufficiently on topic, and with fewer comments) to which to attach these discussions; suggestions welcome.

    re your referenced notes, I’m a little puzzled why ID advocates would assume MicroEvolution as a given, if the evidence for it is indeed so scant:

    http://www.uncommondescent.com.....ent-299358

    Where I am finding the most difficulty with accepting ID as science is what comes across to at least naive newcomers as a double standard of evidence.

    Perusing here, I find many examples where Darwinists are challenged to come up with a detailed non-speculative mechanism deemed plausible by the challenger, for some attribute or change; absent that mechanism, the similarity of the results to what human design produces is considered per se evidence of ID.

    But it appears that ID proponents are free of any need to explain in any detail whatsoever, or provide even a hint of a plausible mechanism for, the infusion of intelligence into the system. For example, is ID research compatible with an omnipotent and omniscient intelligence, or just human-type finite intelligence extrapolated a few centuries into the future? Or one could posit advanced alien biologists dropping in on Earth every few million years to infuse new genotypes into the ecosystem – the Cambrian field trip was a doozy – in which case, can we discern which pieces of DNA came in externally and when? Or perhaps there is some diffuse etheric force which subtly and non-materially biases supposedly random events towards directed goals over time – just shifting probabilities a bit over many centuries, without any new molecules being introduced (eg: unlike the alien spacecraft carrying interventionary biologists). Can ID shed any light on which of these radically different mechanisms is best supported by evidence?

    It seems clear that until biology has a complete picture (which could be many centuries if ever), the ID folks are going to win every debate in which they can just write “a miracle of intelligent origin happens here” atop any small or enormous gaps in their hypothesis, but the Darwinists are required to fully explicate their hypothesis in near indisputable detail without major extrapolation.

    I don’t yet see how one can have a scientific debate in the face of that much disparity in expectations.

    It is a given that humans have large gaps in their knowledge about how life has come to be. Yes, as humans we tend to downplay that, but it’s true.

    Take a gap in the fossil record. Darwinists are extrapolating from the pieces for which they do have decent mechanisms and the evidence they do have (eg: before and after fossils), to fill in the gaps with “something similar but even larger in scope happened here”. That is certainly not “proof” or incontrovertible evidence, and it’s fair to search for more solid explanations, because there is sometimes a big extrapolation without enough evidence.

    But just saying “some unknown and undefined agent did something of whose mechanism we have no clue or evidence in order to create the later life forms during the time of the gap” does not seem to meet the criteria of better explaining it scientifically. What is this alternative mechanism whose plausibility we can weigh against the Darwinist proposal, whose mathematical odds can be calculated to be better than those of mutation and selection?

    I cannot so far see how ID can compete as science (not as faith or philosophy) until it competes on level ground with other proposed scenarios. Plate tectonics (“continental drift”) would never have won the geological world over if it just said “some unexplained and mysterious force of which we have not the slightest understanding moved the continents apart”; nor would it have yielded any insights. Once there were plausible mechanisms which could be weighed against evidence, it became taken seriously.

    ID claims only to be a new science without all the answers yet, but I’m looking for even a broad theoretical framework of HOW earth biology interventions happened, how many interventions there have been (or is it continuous?), the scope of the intervention (what was it capable of, and what was it not?), and such. Is there evidence of a single designer, or of multiple designers with different styles (human intelligent design is strongly marked by discernible “design styles”)? Does ID have evidence that design has continued to occur in the last hundred years (“micro ID”), or does ID require deep time to operate (“macro ID”), or has it entirely stopped currently, perhaps to show up again in a million years? There are dozens more of these in my mind; is ID even beginning to form fuzzy shapes from the murkiness of an extremely vague “designer” and “design implementation mechanism”? I haven’t found those here yet.

    Mendel was able to infer a lot about the structure of genetics before the detailed mechanisms were discovered (even if not entirely accurate). What has ID learned about the structure and nature of intelligent intervention in earthly life?

    I’ll give another example. It was factual that bats were able to somehow navigate in profound darkness. Imagine a group that hypothesized that their navigation was not unlike the guidance of spirit, and gathered evidence that the darkness of some caves was so deep that even “extremely light sensitive eyes” were insufficient to explain it. The spiritual guidance theory would not have gained much scientific credence until that group provided a plausible mechanism, or at least was able to measure and classify where bats were able to navigate and where they were not, and provide a “spiritual guidance” framework which explained those observations better than eyes. They might, say, predict and later measure that bats were unable to navigate in the presence of brimstone or sulfur-bearing rocks, because these rocks had been shown elsewhere to interfere with spiritual guidance. Until then, all their evidence that “sensitive eyesight is not enough” was only evidence that conventional biology was still incomplete, not that bats got spiritual guidance from an unfathomable source. Of course, echolocation was discovered, and this gap in conventional biology was filled, albeit with some overturning of the previous consensus – because it had a testable mechanism to FILL the knowledge gap scientifically, not just point it out.

    In addition to my interest in “truth” (whether or not it pleases me), I admit that I would truly love to see something solid from ID as a science. Why? Because it would make the universe more interesting to me. (I’d also like to see SETI find a signal). It would open up some fascinating vistas. However, my mild preference that “ID be validated” is smaller than my desire to avoid illusions, even comforting ones.

    So far, what I as newcomer am finding is a scientific critique of the completeness of Darwinism (which seems very valuable, by the way – whether it eventually pushes Darwinists to expand and refine, or it overturns Darwinism), married to an apparent philosophical one-upsmanship where ID can win every debate because it doesn’t have to come even close to the same standards of explication, mechanism and evidence that it imposes on its opponents.

    Imagine two kids fighting a battle in a virtual computer world, where one has to follow known physics and the other can invoke magic. One says “I don the titanium powersuit whose 440 kg mass can be rapidly moved using power from twin 3mj batteries for 20 minutes”. The magic bearers can just say “an ineffable entity just teleported the sword of Glyndor into my hand, and this sword can slice through your reinforced titanium armor like butter”. The magic user always wins, because they don’t need to posit any plausible mechanism. However comfortable a related modality may be to philosophical debate, ID needs to transcend this “advantage” in the quest to become serious science.

    AND – I’m new to this. Every school of thought can have brilliant advocates who expand potential human knowledge and, um, true believers who say less than wise or helpful things to support it (including Darwinism!). I’m still sifting the wheat from the chaff in regard to ID. I see some “pro-ID” advocates whose arguments appear fuzzy minded, but those unfortunate camp followers should not deter our exploration of the more serious thinkers who are also evident, nor be held against them. (Again, this is true of both sides of many debates).

    It’s possible that the science of ID is in the process as we speak of drumming the “magic users” from its ranks, proving that it’s a science with serious internal quality control and not a political coalition of convenience.

  356. tribune7:

    I can’t scientifically project to infinite time, because it’s like dividing by zero and beyond science.

    But to get the gist – would a very, very, very long time suffice?

    In some cases, no. I believe that a given toolkit has inherent limitations – there is a finite number of ways the parts can be arranged, and only those arrangements are possible outcomes. More time doesn’t change that.

    The “toolkit” for the face recognition GA is mostly various coefficients and constants that go into other software. This toolkit is by design limited mainly to “image recognition”. It doesn’t have keyboard input or an internet link; those are outside its universe. (Remember, only the facial recognition aspect is evolving, not the software which does the GA evolution itself.) I used this example for only the limited point that useful complexity can be created by non-sentient processes, not as tackling the Big Questions directly.

    The building blocks or toolkit of life are DNA/RNA/proteins (and other chemicals). Anyone who has studied these is aware that these are astoundingly flexible building blocks – far, far more sophisticated than the face recognition building blocks. That is, the range of patterns they can build is huge – take for example every organism that ever existed on Earth, in their full complexity.

    Those DNA/RNA/protein/etc building blocks can even support intelligent life which can design things like Genetic Algorithms or have these discussions! Life is by far the most complex system of which we are aware – and it’s built atop the most flexible toolkit we know of.

    No GA experiment is going to rebut ID per se; at most it might weaken some of the arguments for it. Arguments for and against ID or Darwinism or string theory get weakened and strengthened all the time.

    Again, you speak as if change (variation) were the key element, but it’s not. Random variation of a web browser’s machine instructions will just mess it up. But the toolkit underlying it – eg: the Intel Pentium instruction set – is not very suitable for evolution. Literally one bit changing in a 10 megabyte program is fairly likely to break something, maybe major. By contrast the genotype has a lot of redundancy and self-repair mechanisms that Intel didn’t need to include in their design, because it didn’t need to self-replicate or to evolve.

    To get evolvable systems, you have to create a virtual world of sorts within the software (or you could design hardware, but that’s much more expensive!). I don’t mean a 3d analog of this world, I mean something like a simulator for a much simpler computer than an Intel Pentium, whose instruction set is more adaptable. Yes, this is a design product of humans – but the point here is not to prove that no intelligence is needed to “set up the system”, but a smaller one of seeing how much “new intelligence” can be created by mechanistic processes of variation and selection. Results would be suggestive, not definitive.

    The key is not the randomness, it’s the non-randomness – the selection process. Focusing on words like “random chance” is to greatly miss the point of the Darwinian evolution approach, which must be understood well before it can be countered. The question is not whether random variation could create a web browser, it’s whether there are incremental selection forces (favoring becoming more browserlike) which can pull a useful “signal” out of the noise of limited random changes.
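    To make “the selection does the work” concrete, here is a toy sketch along the lines of Dawkins’s well-known weasel illustration (my own reimplementation of the idea, not his program, and the target string is of course front-loaded – it illustrates only the difference between cumulative selection and single-step chance):

```python
import random
import string

random.seed(2)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
MUTATION_RATE = 0.05

def score(s):
    """Selection criterion: number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Copy s, randomizing each character with small probability."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in s
    )

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET and generations < 5000:
    # Each generation: 100 mutant copies; keep the best one if it is no worse.
    children = [mutate(parent) for _ in range(100)]
    best = max(children, key=score)
    if score(best) >= score(parent):  # cumulative selection: never go backwards
        parent = best
    generations += 1

print(parent, generations)
```

    Pure single-step chance would need on the order of 27^28 tries to hit the 28-character target; cumulative selection on partial matches typically gets there in well under a couple hundred generations – which is the whole point being missed when one focuses only on “random chance.”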

    In the case of a web browser, what would the criteria be for a “more usable” browser? Billions of people using billions of variant browsers and copying the best ones comes to mind – but the source codes for browsers are not built on a very evolution friendly toolkit, like DNA/RNA/protein. Not likely to work in the real world.

    Alas, web browsers are too far out along a non-evolutionary flavor of complexity to be very relevant to this discussion.

    Except that if we found a web browser in nature, not created by humans, I’d be an instant convert to ID! [smile]

    Zeph

  357. Mr Zeph,

    Welcome to the conversation.

    Literally one bit changing in a 10 megabyte program is fairly likely to break something, maybe major.

    I would take issue with this. If you do some code analysis you’ll see that a lot of bits are in data or in parts of the code that are rarely (if ever) executed. Both DNA and computer code have places where one bit change will be deadly and other places where wide variation and continued function are still possible.

  358. Mr Zeph,

    Can one evolve a sorting algorithm which creates “mostly sorted” lists from random lists, using only randomization and selection for “more sorted”, but no computer science theory of sorting?

    Google “Hillis sorting networks” and you will see that this is an area where simulated evolution (co-evolution, actually) was used very fruitfully.

  359. Zeph, as I said you ask good questions and make interesting points and I think I will enjoy your posts :-)

  360.

    jerry wrote at 349 (01/23/2010, 6:37 am):

    “If Meyer defines the term he should give credit to the guys mentioned above.”

    Kairosfocus pointed out that someone used the term “specified information” or “specified functional information” in 1978 in talking about OOL. I believe it was Orgel. It’s on the web someplace.

    My question was rather whether Dr. Meyer defined FCSI/FSCI in his recent “Signature in the Cell”, because to my best knowledge even here at UD only a minority of commenters (KF, Jerry) are using it, while leading ID theorists like Drs. Dembski and Behe never mentioned it.

  361. zeph,

    I haven’t got time to answer your questions now. I will try to get some time later today or on Monday.

    Just a quick observation. ID is not a science such as physics, thermodynamics, evolutionary biology, plate tectonics but rather a supplementary way of analyzing the same data from these various disciplines that have been analyzed by other scientists. Is there a science of Intelligent Design? There might be in the future similar to statistics. Statistics is not content specific but is used in nearly every scientific discipline. ID uses statistics and other probability concepts to analyze data from various disciplines and as such is science as much as statistics is when applied to various sciences.

    As for micro evolution: it definitely does happen. Just how much is a question. Dawkins’s book gives some examples, but Dawkins claims a lot of things, and it is not clear if all he claims did happen, though they might have. So it is not an issue to fight. Fighting it just makes ID look like a bunch of malcontents ready to fight anything, and makes it less believable on the issues that matter.

    One way to fight them is to show how trivial they are. On another site a couple weeks ago, one of the anti-ID people who has commented here in the past brought up teosinte and corn. Corn is a variety of teosinte that was artificially selected by the natives of the Americas from this wild plant that is useless as a food stuff. Nature had tens of millions of years to develop corn and it didn’t, while the local inhabitants of the Americas were able to do so in a short time. This person went to the wall to say that this is an example of evolution, when all it was is an example of artificial selection, like getting a better dairy cow. This person couldn’t understand how he was undermining his cause by emphasizing such a trivial example.

    So Dawkins by emphasizing artificial selection in his book is actually admitting he has no argument for macro evolution. Otherwise he would go right to it and forget about artificial selection. That is one of the reasons why I recommend Dawkins book. The other reason is that he has some very interesting things in it but none threaten ID.

    But artificial selection only allows one to extract what is in the gene pool, such as a better dairy cow, corn, or a labrador retriever. A great book by Ray Bohlin about the limits of biological change, still in print somewhere, discusses this in a very scientific way.

    http://www.amazon.com/Natural-.....0945241062

    Here is a podcast by Bohlin from last year that I just found. He is now involved in religion but has a Ph.D. in microbiology.

    http://www.podfeed.net/episode.....in/1390108

    I have no idea what it contains since I just found it.

    Dawkins’s book is full of examples that might have happened in nature, and some actually did happen. It makes no sense to fight them, as they could have happened that way and Dawkins shows examples of those that did. He has a section on evolution in our lifetime called “before our eyes.” Dembski in one of his books gives an example or two.

    I maintain, and this is just me, that micro evolution is great design. It is a way for a population to adapt to changing environments and is what a good designer would incorporate into a design. However, it has limits.

  362. “My question was rather if Dr. Meyer defined FCSI/FSCI in his recent “Signature in the Cell” because to my best knowledge even here at UD only a minority of commenters (KF, Jerry) are using it while leading ID-theorists like Drs. Dembski and Behe never mentioned it.”

    Who gives a rat’s rear end if they use the same terminology. They are using the same ideas. The term was meant to show how CSI is used in life. If the terminology we use here is not adopted elsewhere but the same ideas are used, who really cares. It is the inane people who object to the term, showing they have nothing of substance to bring to the argument, who help make our case. They are like a whiny 6 year old who sticks his tongue out at you and then says, they are not using your words, nyah, nyah nyah nyah.

    A good comparison: anti-ID people and whiny 6-year-olds. Sometimes it is hard to tell the difference.

  363. Well-defined vocabulary makes constructive discussion possible. Poorly or ambiguously defined key concepts get in the way of productive conversation. I don’t think this is a principle limited to 6-year-olds.

  364. “Well-defined vocabulary makes constructive discussion possible. Poorly or ambiguously defined key concepts get in the way of productive conversation”

    If you had paid attention over the years, you would know that FSCI is well defined; and, by the way, FSCI is discussed on this thread in some detail.

    Spoken like a true whiny 6-year-old. As I said, people reveal themselves by their comments.

  365. Jerry:

    As I said people reveal themselves by their comments.

    So, what does it reveal about someone who never says anything of substance and continually tries to evade answering direct questions?

  366. Osteonectin (#347)

    Thank you for your post. Sorry for not getting back to you sooner. You wrote:

    According to your comment at 310

    Meyer also defines functional complex specified information (FCSI).

    Do you have a reference for this?

    My apologies; I should have been a little more precise. Dr. Meyer doesn’t use that exact term, but he does use the term “complex functional specificity” (his italics) on page 388 of his book.
    Here’s a full quote (pages 387-388). The italics are Meyer’s; the bold type is mine (VJT).

    Though information theory has a limited application in describing biological systems, it has succeeded in rendering quantitative assessments of the complexity of biomacromolecules. Further, experimental work has established the functional specificity of the base sequences in DNA and amino acids in proteins. Thus the term “information” as used in biology refers to two real and contingent properties: complexity and functional specificity.

    Since scientists began to think seriously about what would be required to explain the phenomenon of heredity, they have recognized the need for some feature or substance in living organisms possessing precisely these two properties together. Thus Erwin Schrodinger envisioned an aperiodic crystal; (19) Erwin Chargaff perceived DNA’s capacity for “complex sequencing”;(20) James Watson and Francis Crick equated complex sequences with “information,” which Crick in turn equated with “specificity”;(21) Jacques Monod equated irregular specificity in proteins with the need for a “code”;(22) and Leslie Orgel characterized life as “specified complexity.”(23) The physicist Paul Davies has more recently argued that the “specific randomness” of DNA base sequences constitutes the central mystery surrounding the origin of life.(24) Whatever the terminology, scientists have recognized the need for, and now know several locations of, complex specificity in the cell, information crucial for transmitting heredity and maintaining biological function. The incorrigibility of these descriptive concepts suggests that specified complexity constitutes a real property of biomacromolecules – indeed, a property that could be otherwise, but only to the detriment of cellular life. Indeed, [page 388] recall Orgel’s observation that “Living organisms are distinguished by their specified complexity. Crystals … fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.”(25)

    The origin of specified complexity, to which the term “information” in biology commonly refers, therefore does require explanation, even if the concept of information connotes only complexity in Shannon information theory, and even if it connotes meaning in common parlance, and even if it has no explanatory or predictive value in itself. Instead, as a descriptive (rather than explanatory or predictive) concept, the term “information” (understood as specified complexity) helps to define an essential feature of life that origin-of-life researchers must explain “the origin of.” So, only where information connotes subjective meaning does it function as a metaphor in biology. Where it refers to complex functional specificity, it defines a feature of living systems that calls for explanation every bit as much as, say, a mysterious set of inscriptions inside a cave.

    References
    (19) Schrodinger, Erwin. What is Life? Mind and Matter, 82. Cambridge: Cambridge University Press, 1967.

    (20) Alberts, Bruce D., Dennis Bray, Julian Lewis, Martin Raff, Keith Roberts, and James D. Watson. Molecular Biology of the Cell, 21. New York: Garland, 1983.

    (21) (a) Watson, James D. and Francis H. C. Crick. “A Structure for Deoxyribose Nucleic Acid.” Nature 171 (1953): 737-38. (b) Watson, James D. and Francis H. C. Crick. “Genetical Implications of the Structure of Deoxyribonucleic Acid.” Nature 171 (1953): 964-67. (c) Crick, Francis. “On Protein Synthesis.” Symposium of the Society for Experimental Biology 12 (1958): 138-163.

    (22) Judson, Horace Freeland. The Eighth Day of Creation: Makers of the Revolution in Biology, 611. Exp. ed. Plainview, NY: Cold Spring Harbor Laboratory Press, 1996.

    (23) Orgel, Leslie E. The Origins of Life, 189. New York: Wiley, 1973.

    (24) Davies, Paul. The Fifth Miracle, 120. New York: Simon & Schuster, 1999.

    (25) Orgel, Leslie E. The Origins of Life, 189. New York: Wiley, 1973.

    Here’s another quote, from page 359 of Dr. Meyer’s book, on the same theme. The bold type is mine (VJT):

    Since specifications come in two closely related forms, we detect design in two closely related ways. First, we can detect design when we recognize that a complex pattern of events matches or conforms to a pattern that we know from something else we have witnessed… Second, we can detect design when we recognize that a complex pattern of events has a functional significance because of some operational knowledge that we possess about, for example, the functional requirements or conventions of a system.

    Hope that helps.

  367. “So, what does it reveal about someone who never says anything of substance and continually tries to evade answering direct questions?”

    I don’t know, but it describes perfectly the anti-ID people here.

  368. Walter Kloover (#346)

    Thank you for your post. You wrote:

    The first sentence of paragraph (c) says (among other things) that CSI is information that is specified. The second sentence explains what it means for an event to be specified. I don’t usually think of information as an event. Might it be better to say a thing is specified?
    Secondly, if as stated in paragraph (d), information is just a measure of complexity, what does it mean to say that information (itself a measurement of complexity) can be specified and complex?

    Regarding your first query, I would agree with you that specification is best attributed to things or objects (such as strings of characters), although I suppose you could also attribute it to an event such as the manifestation of the object in question, or the relationship between its constituents. Indeed, Dembski’s definition of specified complexity (in Dembski, W. A. and Wells, J. “The Design of Life.” 2008. Foundation for Thought and Ethics, Dallas), says that information can be a property of objects (p. 320):

    An event or object exhibits specified complexity provided that (1) the pattern to which it conforms identifies a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). Specified complexity is a type of INFORMATION.

    In answer to your second query: if a thing, or object (e.g. a string of characters) can be specified, it can also be complex. Its complexity can be measured by its mathematical improbability. Using Dembski’s definition, its specificity can be measured by the brevity of its description. To the extent that it is highly improbable, it can be said to contain information.

    Of course, if the term “information” simply refers to the mathematical improbability (i.e. probabilistic complexity) of the string in question, then the term “specified information” has no meaning. You can say that a string is specified, but you can’t say that a number is. But if the term “information” is used to denote a string of characters possessing the trait of probabilistic complexity, then it makes sense to say that the same string also possesses the property of specificity. For instance, Dembski, 2008, defines complex specified information on p. 311 as “information that is both complex and specified” – in other words, the property possessed by a string which is highly improbable and easy to describe.
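    Dembski’s two measures can be illustrated with a toy computation. This is only a sketch of the idea, not Dembski’s own procedure: probabilistic complexity is taken as -log2 of a string’s probability under a uniform chance hypothesis, and compressed length (via zlib) stands in, very crudely, for descriptive complexity. All names and parameters here are my own illustrative assumptions:

```python
import math
import random
import string
import zlib

def probabilistic_complexity_bits(s, alphabet_size=26):
    """-log2 of the probability of drawing s by uniform chance from the
    given alphabet: every string of the same length scores the same."""
    return len(s) * math.log2(alphabet_size)

def descriptive_complexity_bytes(s):
    """Crude stand-in for descriptive complexity: the length of a
    compressed description of s. Patterned strings compress well."""
    return len(zlib.compress(s.encode()))

# Two strings with identical probabilistic complexity (same length, alphabet)
random.seed(42)
patterned = "AB" * 500  # easily described: "AB repeated 500 times"
scrambled = "".join(random.choice(string.ascii_uppercase) for _ in range(1000))
```

    On this toy account, a string exhibiting specified complexity is one that scores high on the first measure yet low on the second.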

  369. Mustela Nivalis (#335)

    Thank you for your post. You wrote:

    You have yet to define FSCI in a mathematically rigorous fashion, nor have you shown it to take into consideration known physics, chemistry, or evolutionary mechanisms, nor have you demonstrated that it is a clear indication of intelligent intervention.

    I refer you to:
    Abel, D. “The Capabilities of Chaos and Complexity,” in International Journal of Molecular Sciences, 2009, 10, pp. 247-291, at http://mdpi.com/1422-0067/10/1/247/pdf .
    I quote (here X_f means X with a subscript f; X_g means X with a subscript g; and t_i means t with a subscript i):

    Durston and Chiu have developed a theoretically sound method of actually quantifying Functional Sequence Complexity (FSC) [77]. This method holds great promise in being able to measure the increase or decrease of FSC through evolutionary transitions of both nucleic acid and proteins. This FSC measure, denoted as Xi, is defined as the change in functional uncertainty from the ground state H(X_g(t_i)) to the functional state H(X_f(t_i)), or

    Xi = delta H (X_g(t_i), X_f(t_j)) (3)

    The ground state g of a system is the state of presumed highest uncertainty permitted by the constraints of the physical system, when no specified biological function is required or present. Durston and Chiu wisely differentiate the ground state g from the null state H_0. The null state represents the absence of any physicodynamic constraints on sequencing. The null state produces bona fide stochastic ensembles, the sequencing of which was dynamically inert (physicodynamically decoupled or incoherent [196, 197]).

    The FSC variation in various protein families, measured in Fits (Functional bits), is shown in Table 1 graciously provided here by Durston and Chiu. In addition to the results shown in Table 1, they performed a more detailed analysis of ubiquitin, plotting the FSC values out along its sequence. They showed that 6 of the 7 highest value sites correlate with the primary binding domain [77].

    Table 1. FSC of Selected proteins. Supporting data from the lab of Kirk Durston and David Chiu at the University of Guelph showing the analysis of 35 protein families.
    [Meaning of each column - VJT]
    [Name of Protein]
    [1.] Length (aa)
    [2.] Number of Sequences
    [3.] Null State (Bits)
    [4.] FSC (Fits)
    [5.] Average Fits/Site
    Ankyrin 33 1,171 143 46 1.4
    HTH 8 41 1,610 177 76 1.9
    HTH 7 45 503 194 83 1.8
    HTH 5 47 1,317 203 80 1.7
    HTH 11 53 663 229 80 1.5
    HTH 3 55 3,319 238 80 1.5
    Insulin 65 419 281 156 2.4
    Ubiquitin 65 2,442 281 174 2.7
    Kringle domain 75 601 324 173 2.3
    Phage Integr N-dom 80 785 346 123 1.5
    VPR 82 2,372 359 308 3.7
    RVP 95 51 411 172 1.8
    Acyl-Coa dh N-dom 103 1,684 445 174 1.7
    MMR HSR1 119 792 514 179 1.5
    Ribosomal S12 121 603 523 359 3.0
    FtsH 133 456 575 216 1.6
    Ribosomal S7 149 535 644 359 2.4
    P53 DNA domain 157 156 679 525 3.3
    Vif 190 1,982 821 675 3.6
    SRP54 196 835 847 445 2.3
    Ribosomal S2 197 605 851 462 2.4
    Viral helicase1 229 904 990 335 1.5
    Beta-lactamase 239 1,785 1,033 336 1.4
    RecA 240 1,553 1,037 832 3.5
    tRNA-synt 1b 280 865 1,210 438 1.6
    SecY 342 469 1,478 688 2.0
    EPSP Synthase 372 1,001 1,608 688 1.9
    FTHFS 390 658 1,686 1,144 2.9
    DctM 407 682 1,759 724 1.8
    Corona S2 445 836 1,923 1,285 2.9
    Flu PB2 608 1,692 2,628 2,416 4.0
    Usher 724 316 3,129 1,296 1.8
    Paramyx RNA Pol 887 389 3,834 1,886 2.1
    ACR Tran 949 1,141 4,102 1,650 1.7
    Random sequences 1000 500 4,321 0 0
    50-mer polyadenosine 50 1 0 0 0

    Shown are sequence lengths (column 1), the number of sequences analyzed for each family (column 2), the Shannon uncertainty of the Null State H_0 (the absence of any physicodynamic constraints on sequencing: dynamically inert stochastic ensembles) for each protein (column 3), the FSC value Xi in Fits for each protein (column 4), and the average Fit value/site (FSC/length, column 5). For comparison, the results for a set of uniformly random amino acid sequences (RSC) are shown in the second from last row, and a highly ordered, 50-mer polyadenosine sequence (OSC) in the last row. The Fit values obtained can be discussed as the measure of the change in functional uncertainty required to specify any functional sequence that falls into the given family being analyzed. (Used with permission from Durston, K.K.; Chiu, D.K.; Abel, D.L.; Trevors, J.T. Measuring the functional sequence complexity of proteins. Theor Biol Med Model 2007, 4, Free on-line access at http://www.tbiomed.com/content/4/1/47).

    I have to say that this looks pretty “mathematically rigorous” to me.
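    For what it’s worth, the Durston-Chiu measure quoted above can be sketched in a few lines. This toy version assumes a uniform ground state over the 20 amino acids (so the ground-state uncertainty is log2(20), about 4.32 bits per site, matching the Null State column above), whereas the published method treats the ground state more carefully; the numbers are illustrative only:

```python
import math
from collections import Counter

def site_entropy(column):
    """Shannon uncertainty H (in bits) of one column of an alignment."""
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in Counter(column).values())

def fsc_fits(aligned_seqs, alphabet_size=20):
    """Toy FSC in Fits: the drop in uncertainty from a uniform ground state
    to the functional state observed in the aligned family, summed over sites."""
    h_ground = math.log2(alphabet_size)
    return sum(h_ground - site_entropy(col) for col in zip(*aligned_seqs))
```

    A perfectly conserved site contributes the full log2(20) Fits; a site that varies freely across the family contributes close to zero, which is why the random-sequence row of Table 1 scores 0 Fits.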

    As for your assertion that the paper does not “take into consideration known physics, chemistry, or evolutionary mechanisms,” I looked through the paper, and verified that it discusses the following models for the origin of life, and examines their deficiencies: the RNA World and pre-RNA World models [refs. 208, 209]; clay life [210-213]; early three-dimensional “genomes” [214, 215]; “Metabolism/Peptide First” [216-219]; “Co-evolution” [220-223]; “Simultaneous nucleic acid and protein” [224-226]; “Two-Step” models of life-origin [227-229]; autopoiesis [230-232]; complex adaptive systems (CAS) [137, 237, 238]; genetic algorithms [140, 194, 298, 314, 315]; hypercycles [42-49]; and “the Edge of Chaos” [7, 8, 21, 22, 50-57, 198, 316-328].

    That sounds pretty comprehensive to me. Or would you like to propose another model?

    What about “a clear indication of intelligent intervention”? Simple enough. I suggest you look at the introduction (pp. 248-250) and the conclusion (pp. 275-276), from which I quote the following excerpts (bold type mine – VJT):

    If Pasteur and Virchow’s First Law of Biology (“All life must come from previously existing life”) is to be empirically falsified, direct observation of spontaneous generation is needed. In the absence of such empirical falsification, a plausible model of mechanism at the very least for both Strong and Type IV emergence (formal self-organization) is needed…

    Attempts to relate complexity to self-organization are too numerous to cite [4, 21, 169-171]. Under careful scrutiny, however, these papers seem to universally incorporate investigator agency into their experimental designs. To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory [172, 173]. Evolution has no goal [174, 175]. Evolution provides no steering toward potential computational and cybernetic function [4, 6-11]. The theme of this paper is the active pursuit of falsification of the following null hypothesis:

    “Physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.”…

    Let the reader provide the supposedly easy falsification of the null hypothesis. Inability to do so should cause pangs of conscience in any scientist who equates metaphysical materialism with science. On the other hand, providing the requested falsification of this null hypothesis would once-and-for-all end a lot of unwanted intrusions into science from philosophies competing with metaphysical materialism.

    The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated. Scientific mechanism must be provided for how purely physicodynamic phenomena can program decision nodes, optimize algorithms, set configurable switches so as to achieve integrated circuits, achieve computational halting, and organize otherwise unrelated chemical reactions into a protometabolism. To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it:

    “Physicodynamics cannot spontaneously traverse The Cybernetic Cut [9]: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.”

    A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.

    Abel is issuing a challenge that methodological materialists have thus far failed to meet. Pasteur and Virchow’s First Law of Biology (“All life must come from previously existing life”) has yet to be empirically falsified. By default, ID is the only hypothesis still standing.

  370. Nakashima (#329)

    Thank you for your post. You write:

    Abel never proves that his categories of OSC, RSC, and FSC do not overlap or are complete, nor provides an effective procedure for deciding into which category something will fall….

    I refer you to Figure 4 in Abel and Trevors Theoretical Biology and Medical Modelling 2005 2:29 doi:10.1186/1742-4682-2-29. See: http://www.tbiomed.com/content/2/1/29/figure/F4 . Note the caption:

    Superimposition of Functional Sequence Complexity onto Figure 2. The Y_1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from order towards randomness. The Y_2 (Z) axis plane shows where along the same complexity gradient (X-axis) that highly instructional sequences are generally found. The Functional Sequence Complexity (FSC) curve includes all algorithmic sequences that work at all (W). The peak of this curve (w*) represents “what works best.” The FSC curve is usually quite narrow and is located closer to the random end than to the ordered end of the complexity scale. Compression of an instructive sequence slides the FSC curve towards the right (away from order, towards maximum complexity, maximum Shannon uncertainty, and seeming randomness) with no loss of function.

    The reason why Abel never proves that his categories of OSC, RSC, and FSC do not overlap is that by his own admission, they do overlap. Functional Sequence Complexity (FSC) applies to sequences that are generally high in Random Sequence Complexity (RSC). However, they have an extra dimension of complexity on top of this, as the graph clearly shows. That’s why FSC is shown on the Z-axis.

    You also wrote:

    If we examined digits 1000-1100 from the numbers 1/7, 22/7, 2^(1/2), phi, pi, and e, how would Abel sort them into those three categories and/or the additional category “none of the above”?

    I have to say I am mystified by this comment of yours. First, the terms RSC, OSC and FSC apply only to sequences, not digits.

    The digits 1000-1100 clearly exhibit OSC: they’re ordered. The remaining numbers, 1/7, 22/7, 2^(1/2), phi, pi, and e, are for the most part of mathematical significance. Although it would be safe to bet that they were picked by an intelligent agent, they perform no function as a sequence. Thus they do not exhibit FSC. I suppose Abel would just have to say they exhibit RSC, unless you could nominate a reason why you picked those numbers in that particular sequence. Did they just randomly pop into your head?

    Hope that helps.

  371. I will respond later to the oft-repeated claims on this thread that ID doesn’t take Darwinian mechanisms into account, and that ID arguments make use of the “tornado-in-a-junkyard” fallacy. These statements, as it turns out, are canards, as readers of Dembski and Wells’ “The Design of Life” and Meyer’s “Signature in the Cell” should be aware.

  372. Why would ID take Darwinian mechanisms into account, when evolution is the best example of intelligence, chance, and law working together?

    It is the non-IDer who needs to explain the presence of replicating, information processing systems, and the fortuitous matching between non-uniform search space and search algorithm necessary for an evolutionary algorithm.

    It is actually the ID critic who needs to account for the above “Darwinian mechanisms” beginning from a randomly chosen set of laws, absent intelligence (Random.org could be useful in this effort).

    The materialist also needs to provide an account of where events neither best defined by the physical/material/measurable properties of matter/energy nor best defined by chance come from, without resorting to the “magical emergence” cop-out. These events would include arrangements of letters (i.e., an essay), arrangements of parts (i.e., machines), and arrangements of nucleotides (genetic information). If those events aren’t defined by physical properties of matter/energy (laws) or chance, and if they are routinely seen to result from the application of foresight, then where should we look to begin to explain such events?

  373. vjtorley #171,172:

    Excellent comments. Both of those papers are great and one of them is an excellent reiteration and further explanation of the probability bound and how to utilize it to effectively eliminate chance — basically an “expansion” of Dembski’s CSI.

  374. Mr Vjtorley,

    Thank you for the link to Abel 2005. Figure 4 reinforces my fundamental problem with much of Abel’s work. His text might say in one place

    FSC alone provides algorithmic instruction.

    which would lead you to believe that FSC is a category that does not overlap with OSC and RSC. But this diagram informs us that OSC and RSC shade into each other, and that FSC is just some example of complexity that exhibits function. He hasn’t said, for any given measurement of complexity, that it can’t be functional – just that for some it is improbable that they are functional.

    I’m sorry my question about digit sequences was unclear. I was giving examples of several numbers that have decimal representations which are infinite sequences. For the purposes of my question, I was asking for a consideration of the 100 digits in each sequence starting at digit 1000. The point was that in each case the digit sequence contains various qualities of order, randomness and function, yet Abel’s qualitative categories really don’t distinguish any of them from another.

    In general, Abel’s papers suffer from a lack of awareness of workers such as Yarus and of the stereochemical hypothesis.

  375. Nakashima,

    Thanks for responding; I’ve admired your posts.

    Re: one bit change in software being likely to break something, maybe something major. It’s true that some bits are not critical, and the part broken might not matter (eg: it’s in an error routine that is almost never going to be called anyway). I don’t have any stats on what portion of machine code is how critical.

    I stand by the basic concept, however – Intel Pentium machine instructions are poorly designed for mutation and selection. For example, there is very little redundancy in the sense of having and using three copies of some code block in parallel, such that if any one of the copies becomes disabled, the other two can carry most of the load.

    Essentially DNA/RNA involves a lot of parallel templates; breaking one copy often affects no others, or few others. On the other hand, with machine language, which is sequentially executed, a one-bit error can (and not rarely does) mess up the functioning of an indefinitely large portion of the rest of the code (like never executing it). It’s all a matter of degree, but machine code is not well adapted to keep mostly functioning in the face of internal mutation. Nor should it be – that’s not needed; it’s a different environment.

    So I would not tend to try a genetic algorithm which scrambled and tested Pentium machine language instructions.

    One of the useful things about DNA/RNA is that you can carry around a partially “damaged” (mutated) fragment for some while, if other redundant fragments carry on any essential functions (and the mutation isn’t actively harmful). This means there’s relatively more chance of two “defective” mutations encountering each other to find some synergy, or of a second mutation of the same segment occurring – at least compared with sequentially executed code like a computer’s. I’m not saying that’s “enough” to solve the complexity issues brought up by ID, or that it isn’t, but this robustness of DNA/RNA encoding is part of the toolkit which can’t be ignored.

    Whether originally evolved from simple components or designed, DNA/RNA-based life today is pretty well adapted for evolving, i.e., for mostly error-free copying after corrections, along with error tolerance, parallel template operation rather than sequential, sexual mixing for many species, etc.

    (Intel Pentium instruction sets are well designed for different purposes.)
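    The parallel-redundancy idea above can be sketched with a toy triple-modular-redundancy code; this is my own illustrative example, not a claim about how any real genome or instruction set works:

```python
def encode_tmr(bits):
    """Store three parallel copies of each bit (triple modular redundancy)."""
    return [b for b in bits for _ in range(3)]

def decode_tmr(coded):
    """Majority vote across each triplet recovers the original bit, so any
    single bit flip within a triplet is silently corrected."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

def flip(bits, i):
    """Return a copy of bits with position i inverted (a point mutation)."""
    mutated = list(bits)
    mutated[i] ^= 1
    return mutated
```

    With this encoding, every possible single-bit mutation decodes back to the original message, at the cost of threefold storage overhead; unprotected sequential machine code has no such cushion.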

  376. Speaking of machine language, I once ran into an interesting mutation…

    Well, actually it was a bug involving one bit set wrong. This was in a galaxy long ago and far away called CP/M. An assembly language program I was developing hung the computer, except when running under the debugger, where it worked fine, so I had to trace it step by step. I had a branch wrong (jump when zero rather than jump not zero), and as a result it tried to do a premature “return”, which popped garbage off the stack and jumped to it. In normal usage this resulted in a hung computer – jumping into random locations is often like that. When I traced it down, it turned out that under the debugger the stack was just before the code, and the first two bytes of the program’s code, when interpreted as an address popped off the stack, happened to be EXACTLY the address after the buggy branch where it would have gone without the bug. So execution proceeded exactly as it was supposed to when running under the debugger (and for those who understand the implications here, the first two bytes of the code were never executed again, so overwriting them when pushing the next call onto the stack didn’t hurt anything).

    How likely was that?

    Here we have a case of a (nominally) intelligent designer’s mistake being corrected by “random chance”, at least in terms of functionality. I wonder if the designer of Life on Earth included microevolution mechanisms (to use the framework of ID) to help correct their own minor oversights or errors?

    (smile)

  377. Mr Zeph,

    I’m in violent agreement with you; it is just a question of nuance and an awareness of the counterexamples. Since DNA is read sequentially, there are mutations and indels that cause frame-shift errors, which can disable (or enable) the reading of arbitrarily large amounts of DNA code.
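    The frame-shift point is easy to see in a toy reading-frame model (the gene string here is made up purely for illustration):

```python
def codons(dna, frame=0):
    """Read a DNA string into 3-letter codons starting at the given offset,
    mimicking sequential translation."""
    return [dna[i:i + 3] for i in range(frame, len(dna) - 2, 3)]

gene = "ATGGCCGAAGTT"                  # reads as ATG GCC GAA GTT
mutated = gene[:3] + "A" + gene[3:]    # a single-base insertion after codon 1
```

    The single insertion leaves the first codon intact but shifts every codon downstream of it, which is exactly the sequential-read failure mode being described.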

    In developing Genetic Programming, John Koza was careful to work with instruction sets of ‘virtual machines’ that never faulted – all programs always executed. For example, the divide operator was protected so that division by zero was caught and forced to return a value.
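    Protected division of the kind described is conventionally written along these lines (a sketch of the standard idiom; Koza’s own systems were in Lisp):

```python
def protected_div(a, b):
    """Protected division: divide-by-zero is trapped and forced to return
    1.0, so every randomly generated program tree can be evaluated
    without faulting."""
    return a / b if b != 0 else 1.0
```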

  378. Jerry,

    Just a quick observation. ID is not a science such as physics, thermodynamics, evolutionary biology, or plate tectonics, but rather a supplementary way of analyzing the same data from these various disciplines that have been analyzed by other scientists. Is there a science of Intelligent Design? There might be in the future, similar to statistics.

    Your take on ID is certainly an interesting one – including the list of things you assert ID accepts in one of the comments you referenced for me. Your view of ID is quite different from the mainstream stereotypes, and I thank you for explaining it so clearly.

    I’m wondering how many ID advocates would agree with you that ID is not a science as most people understand the term, though in the future it might become something sort of like statistics (forgive and correct my short paraphrase).

    It sounds like Darwinian evolution is a science because it operates within that paradigm for better or worse. It can turn out to be a false or incomplete theory, which will become discredited eventually, while still remaining a *scientific* theory in nature.

    Certainly some of the challenge to Darwinian evolution appears to be scientific to me, and indeed can exist within the same framework. For example, many of the complexity arguments, correct or not in the end, are quite scientific in nature. But these do not per se define ID; they dispute the current orthodoxy. As you say, flaws in one naturalistic theory don’t prove a non-natural theory, as they may just once again show the need for a revised or new naturalistic theory.

    However, once the hegemony of Darwinian explanations has been challenged, there is breathing room for another scientific theory to better explain the phenomena and answer similar challenges in turn.

    If I’m understanding you, ID does not even propose to meet that challenge of scientifically supplanting Darwinian evolution with a better theory, as a new theory in physics might. It proposes a non-naturalistic explanation of sorts which will remain very vague (“intelligent design was involved” appears to be just about the whole corpus of results that I’ve seen so far, with all efforts aimed at sustaining that conclusion rather than elaborating or refining it). As such, ID will never be subject to the same kind of “naturalistic” scientific scrutiny that Darwinian evolution must sustain. While defeating Darwinian evolution needs scientific evidence, because this is done within the conventional scientific framework, ID doesn’t, because it’s not really a science but more of a perspective or philosophical interpretation or approach to analyzing data. (In honesty, I haven’t yet understood what ID is in your view, if it’s not a science like the others; the analogy with statistics doesn’t make sense to me yet.)

    I’m sure I’m misunderstanding your take on ID in many ways (and perhaps almost entirely), but putting this reflection & paraphrase on the table might help in clarifying.

  379. vjtorley,

    You may have missed the little exchange that took place over on the gravity thread, but it has implications for your discussion here. One of the posters there listed links that supposedly show information growing in genomes. I followed one of them and it led me to a page which had a host of references to this topic.

    Here is my post from that thread:

    “I went to the Adami article referenced by caminintx and then to the articles citing it, and it is a veritable gold mine of stuff on complexity and information, with lots of full-text articles. Here is the link I accessed

    http://scholar.google.com/scho.....2F9%2F4463

    It seems all their data for complexity arising is from computer programs and not actual genomes. You would think that genomes are where the action should be.”

    We should try to move this discussion some place else since it is getting very long.

  380. From Abel’s paper cited in #:

    Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products.

    This is an interesting question.

    It seems to me that evolution must always be seen as steering towards “survival and propagation” within some environment – whether an ecosystem or a genetic algorithm’s synthetic universe. The whole point is that feedback loop – more adaptive changes being “rewarded” with higher survival rates.

    What you have to avoid is injecting “designer foreknowledge”. For example, suppose you are training a GA-based system in facial recognition. Feedback of “more success” vs. “less success” is integral. But if the programmer were to give survival rewards to “better contrast detection” – based on intelligent knowledge from outside the system that this ability will aid the final product – that would be injecting design knowledge.

    That is – rewarding contrast enhancement would be based on predicting future payoffs, and evolution doesn’t do that.

    However, rewarding simple success is not only fair play, it’s the core concept.
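    The “rewarding simple success” idea can be sketched in a few lines of code. This is a toy illustration of my own (the bit-matching task, the names, and every parameter are made up, not anything from the thread): fitness scores only measured performance at the task itself, and nothing rewards a hand-picked intermediate trait.

```python
import random

# Toy task: match a target bit pattern (a stand-in for "success in the
# environment"; everything here is hypothetical, for illustration only).
TARGET = [1] * 20

def fitness(genome):
    # Legitimate feedback: reward only measured success at the task itself.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=100, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(len(TARGET))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # selection by success alone
        children = [[1 - g if rng.random() < mutation_rate else g for g in p]
                    for p in parents]        # blind, random variation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward len(TARGET) over the generations
```

    Rewarding, say, a particular gene pattern because the programmer predicts it will pay off later would be the “designer foreknowledge” discussed above; rewarding only the measured outcome is the fair-play version.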

    Zeph

  381. “I’m wondering how may ID advocates would agree with you that ID is not a science as most people understand the term”

    I am a thorn in the side of a lot of people, and what I say in my comments has never been challenged by the pro-ID people, though I am sure some would not agree with it. On a lot of things we do not agree, and I have learned a lot over time from the pro-ID people as they have questioned my assessment. I try not to say things that cannot be backed up, and sometimes I exaggerate to make a point. I don’t believe anything I say would be questioned by Behe, Dembski or Meyer, based on my reading of what they wrote. Dembski has twice criticized me here, but for other reasons, nothing to do with my take on ID, and I was twice sent into moderation by Dave Springer when he ran the site because he disagreed with me. I doubt Dembski reads 1% of what I write unless it is about something he posts. He has far better things to do.

    I am willing to hear from people on my take about how ID fits into science. ID is just another way of analyzing data within a particular scientific domain. As it happens, the most important one is biology, and the sub-disciplines of evolutionary biology and origin of life. There does not seem to be any room for it in such things as thermodynamics, chemistry, astronomy, plate tectonics, meteorology, etc. In such areas as anthropology, archaeology, forensics and cryptology there seems to be a use, because no one questions that intelligent intervention has an effect. It is just this intermediate area of life that generates the heat. It also has some place in cosmology and the origin of the universe.

    “It sounds like Darwinian evolution is a science because it operates within that paradigm for better or worse. It can turn out to be a false or incomplete theory, which will become discredited eventually, while still remaining a *scientific* theory in nature.”

    One of the books I suggested you read is Denton’s book, “Evolution: A Theory in Crisis.” He makes a distinction between what he calls Darwin’s general theory of evolution and Darwin’s special theory of evolution. Darwin never made this distinction; what Denton means is macroevolution and microevolution. He calls microevolution Darwin’s special theory, and there is not much controversy there. The general theory is that all change over time follows the microevolutionary path, but the evidence does not support it.

    The Achilles heel of Darwinian processes is the building of information. If the critics here had anything of consequence, they would be all over it. Instead we get computer algorithms and nothing in real life. People point to small changes in information as if they were pivotal or consequential, when in reality they are ho-hum. The Edge of Evolution is all about this, and about how small the changes are when large-scale changes are necessary to get to major new capabilities.

    This all assumes that you understand basic biology, the role of DNA, and the transcription/translation process.

    “If I’m understanding you, ID does not even propose to meet that challenge of scientifically supplanting Darwinian evolution with a better theory, as a new theory in physics might.”

    Re-read my comments where I talk about what ID is about. Intelligent intervention is not a process that you can measure with an experiment. That is what natural laws are about. Intelligent intervention could be a one-time event and would show up when the natural laws do not play out as expected. When that happens, one searches for alternative explanations. ID says that one alternative might be intelligent intervention. 99.999% of the time there is no possibility of intelligent intervention. In some rare cases it seems it could be a possibility, and that percentage is a little lower for evolutionary biology. In other words, there are definitely some places within that particular discipline where intelligent alternatives make sense, but it is still a small percentage. Here again are my thoughts on this.

    http://www.uncommondescent.com.....ent-326046

    “In honesty, I haven’t yet understood what ID is in your view, if it’s not a science like others; the analogy with statistics didn’t make sense yet”

    You are confusing a domain of inquiry with the possible causes for events within that domain. It is not like a discipline such as evolutionary biology which has a domain of inquiry. It is a potential explanation or conclusion of a finding within that domain. In other words it expands the possible causes for an event. Right now science a priori rules it out in certain domains of inquiry. One of the domains is evolutionary biology. It does not rule it out in anthropology or archaeology.

    You are trying to absorb in a few days what it has taken me a long time to come to, after reading several books, watching videos, and reading comments here and elsewhere – essentially ten years of looking at this topic.

  382.

    Nakashima at 353

    I agree. However, if someone wants to suggest that man has never created anything by unique thought, then it’s fair to ask that person to follow their suggestion to its logical end.

    By the way, I don’t think there is any evidence that a person who hallucinates invents anything they are not already aware of.

    - – - – - – -

    Hello Zeph, (at 380)

    You say: “It seems to me that evolution must always be seen as steering towards “survival and propagation” within some environment.”

    Evolution cannot be viewed this way at all, ever. The mechanism that drives evolution must be blind with regard to fitness and function. To say anything else is to add foresight and intent. For the materialist, function and fitness must arise without either. This is, of course, something they have not shown.

    (Nor have they offered a material explanation for the elemental drive toward “survival and propagation” in the first place.)

    You then say: “The whole point is that feedback loop – more adaptive changes being ‘rewarded’ with higher survival rates.”

    There is no evidence of anything in the genome tracking the survival rates of a population, which is a measure that would logically need to exist in order to drive adaptation toward increasing it. The reward you speak of is said to be the output of selection, but the mechanism that powers selection cannot be anything more than random chance – and as such, it is not the recipient of a (non-existent) feedback loop.

    “What you have to avoid is injecting ‘designer foreknowledge’.”

    Yep.

  383. Mustela Nivalis and R0b
    Thank you for your posts, which raise related points.

    Mustela Nivalis (326)

    As it turns out, I actually have read all of your references in my search for a definition of CSI that ID proponents agree uniquely identifies design and that takes into account known physics, chemistry, and evolutionary mechanisms. Unfortunately, none of your referenced materials do that. Most suffer from the assumption of a uniform probability distribution (the tornado in a junkyard fallacy described by Aleta).

    R0b (336)

    You can scour ID works until doomsday and not find any CSI (or FSCI, or FCSI) analysis that allows for a possibility of Darwinian factors. Assuming tractability of such probability calculations, taking into account physical laws in addition to random mutations could significantly shrink CSI totals. As it is, uniform distributions form a singular basis for CSI claims, and applicability of such quantifications to biological organisms is doubtful.
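    As a numerical aside (my own sketch with made-up probabilities, not a calculation from either commenter): the surprisal −log2 P assigned to the very same sequence shrinks dramatically once the generating distribution is no longer assumed uniform, which is the crux of the distribution objection.

```python
import math

# A 100-symbol outcome over a 4-letter alphabet (hypothetical example).
sequence = "A" * 100

def surprisal_bits(seq, p_per_symbol):
    # Independent symbols: -log2 of the product = sum of per-symbol surprisals.
    return -sum(math.log2(p_per_symbol[s]) for s in seq)

uniform = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
biased  = {"A": 0.85, "C": 0.05, "G": 0.05, "T": 0.05}  # process favoring "A"

print(surprisal_bits(sequence, uniform))  # 200.0 bits (100 * log2 4)
print(surprisal_bits(sequence, biased))   # roughly 23.4 bits
```

    The outcome is identical in both cases; only the assumed generating process differs, yet the bit total drops by nearly an order of magnitude.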

    Here’s a quote from William Dembski’s online 2004 paper, “Irreducible Complexity Revisited” at http://www.designinference.com.....isited.pdf (pp. 28 ff.). [Dembski uses similar language in his 2008 book, The Design of Life, The Foundation for Thought and Ethics, Dallas, which he co-authored with Jonathan Wells, pp. 182 ff.] The bold type is mine (VJT); the italics are Dembski’s.

    The details here are technical, but the general logic by which design theorists argue that irreducibly complex systems exhibit specified complexity is straightforward: for a given irreducibly complex system and any putative evolutionary precursor, show that the probability of the Darwinian mechanism evolving that precursor into the irreducibly complex system is small. In such analyses, specification is never a problem—in each instance, the irreducibly complex system, any evolutionary precursor, and any intermediate between the precursor and the final irreducibly complex system are always specified in virtue of their biological function. Also, the probabilities here need not be calculated exactly. It’s enough to establish reliable upper bounds on the probabilities and show that they are small. What’s more, if the probability of evolving a precursor into a plausible intermediate is small, then the probability of evolving that precursor through the intermediate into the irreducibly complex system will a fortiori be small.

    Darwinists object to this approach to establishing the specified complexity of irreducibly complex biochemical systems. They contend that design theorists, in taking this approach, have merely devised a “tornado-in-a-junkyard” strawman. The image of a “tornado in a junkyard” is due to astronomer Fred Hoyle. Hoyle imagined a junkyard with all the pieces for a Boeing 747 strewn in disarray and then a tornado blowing through the junkyard and producing a fully assembled 747 ready to fly. Darwinists object that this image has nothing to do with how Darwinian evolution produces biological complexity. Accordingly, in the formation of irreducibly complex systems like the bacterial flagellum, all such arguments are said to show is that these systems could not have formed by purely random assembly. But, Darwinists contend, evolution is not about randomness. Rather, it is about natural selection sifting the effects of randomness.

    To be sure, if design theorists were merely arguing that pure randomness cannot bring about irreducibly complex systems, there would be merit to the Darwinists’ tornado-in-a-junkyard objection. But that’s not what design theorists are arguing. The problem with Hoyle’s tornado-in-a-junkyard image is that, from the vantage of probability theory, it made the formation of a fully assembled Boeing 747 from its constituent parts as difficult as possible. But what if the parts were not randomly strewn about in the junkyard? What if, instead, they were arranged in the order in which they needed to be assembled to form a fully functional 747? Furthermore, what if, instead of a tornado, a robot capable of assembling airplane parts were handed the parts in the order of assembly? How much knowledge would need to be programmed into the robot for it to have a reasonable probability of assembling a fully functioning 747? Would it require more knowledge than could reasonably be ascribed to a program simulating Darwinian evolution?

    Design theorists, far from trying to make it difficult to evolve irreducibly complex systems like the bacterial flagellum, strive to give the Darwinian selection mechanism every legitimate advantage in evolving such systems. The one advantage that cannot legitimately be given to the Darwinian selection mechanism, however, is prior knowledge of the system whose evolution is in question. That would be endowing the Darwinian mechanism with teleological powers (in this case foresight and planning) that Darwin himself insisted it does not, and indeed cannot, possess if evolutionary theory is effectively to dispense with design. Yet even with the most generous allowance of legitimate advantages, the probabilities computed for the Darwinian mechanism to evolve irreducibly complex biochemical systems like the bacterial flagellum always end up being exceedingly small.

    The reason these probabilities always end up being so small is the difficulty of coordinating successive evolutionary changes apart from teleology or goal-directedness. In the Darwinian mechanism, neither selection nor variation operates with reference to future goals (like the goal of evolving a bacterial flagellum from a bacterium lacking this structure). Selection is natural selection, which is solely in the business of conferring immediate benefits on an evolving organism. Likewise, variation is random variation, which is solely in the business of perturbing an evolving organism’s heritable structure without regard for how such perturbations might benefit or harm future generations of the organism. In attempting to coordinate the successive evolutionary changes needed to bring about irreducibly complex biochemical machines, the Darwinian mechanism therefore encounters a number of daunting probabilistic hurdles. These include the following:

    (1) Availability. Are the parts needed to evolve an irreducibly complex biochemical system like the bacterial flagellum even available?

    (2) Synchronization. Are these parts available at the right time so that they can be incorporated when needed into the evolving structure?

    (3) Localization. Even with parts that are available at the right time for inclusion in an evolving system, can the parts break free of the systems in which they are currently integrated and be made available at the “construction site” of the evolving system?

    (4) Interfering Cross-Reactions. Given that the right parts can be brought together at the right time in the right place, how can the wrong parts that would otherwise gum up the works be excluded from the “construction site” of the evolving system?

    (5) Interface Compatibility. Are the parts that are being recruited for inclusion in an evolving system mutually compatible in the sense of meshing or interfacing tightly so that, once suitably positioned, the parts work together to form a functioning system?

    (6) Order of Assembly. Even with all and only the right parts reaching the right place at the right time, and even with full interface compatibility, will they be assembled in the right order to form a functioning system?

    (7) Configuration. Even with all the right parts slated to be assembled in the right order, will they be arranged in the right way to form a functioning system?

    To see what’s at stake in overcoming these hurdles, imagine you are a contractor who has been hired to build a house. If you are going to be successful at building the house, you will need to overcome each of these hurdles. First, you have to determine that all the items you need to build the house (e.g., bricks, wooden beams, electrical wires, glass panes, and pipes) exist and thus are available for your use. Second, you need to make sure that you can obtain all these items within a reasonable period of time. If, for instance, crucial items are back-ordered for years on end, then you won’t be able to fulfill your contract by completing the house within the appointed time. Thus, the availability of these items needs to be properly synchronized. Third, you need to transport all the items to the construction site. In other words, all the items needed to build the house need to be brought to the location where the house will be built.

    Fourth, you need to keep the construction site clear of items that would ruin the house or interfere with its construction. For instance, dumping radioactive waste or laying high-explosive mines on the construction site would effectively prevent a usable house from ever being built there. Less dramatically, if excessive amounts of junk found their way to the site (items that are irrelevant to the construction of the house, such as tin cans, broken toys, and discarded newspapers), it might become so difficult to sort through the clutter and thus to find the items necessary to build the house that the house itself might never get built. Items that find their way to the construction site and hinder the construction of a usable house may thus be described as producing interfering cross-reactions.

    Fifth, procuring the right sorts of materials required for houses in general is not enough. As a contractor you also need to ensure that they are properly adapted to each other. Yes, you’ll need nuts and bolts, pipes and fittings, electrical cables and conduits. But unless nuts fit properly with bolts, unless fittings are adapted to pipes, and unless electrical cables fit inside conduits, you won’t be able to construct a usable house. To be sure, each part taken by itself can make for a perfectly good building material capable of working successfully in some house or other. But your concern here is not with some house or other but with the house you are actually building. Only if the parts at the construction site are adapted to each other and interface correctly will you be able to build a usable house. In short, as a contractor you need to ensure that the parts you are bringing to the construction site not only are of the type needed to build houses in general but also share interface compatibility so that they can work together effectively.

    Sixth, even with all and only the right materials at the construction site, you need to make sure that you put the items together in the correct order. Thus in building the house, you need first to lay the foundation. If you try to erect the walls first and then lay the foundation under the walls, your efforts to build the house will fail. The right materials require the right order of assembly to produce a usable house.

    Seventh and last, even if you are assembling the right building materials in the right order, the materials need also to be arranged appropriately. That’s why, as a contractor, you hire masons, plumbers, and electricians. You hire these subcontractors not merely to assemble the right building materials in the right order but also to position them in the right way. For instance, it’s all fine and well to take bricks and assemble them in the order required to build a wall. But if the bricks are oriented at strange angles or if the wall is built at a slant so that the slightest nudge will cause it to topple over, then no usable house will result even if the order of assembly is correct. In other words, it’s not enough for the right items to be assembled in the right order; rather, as they are being assembled, they also need to be properly configured.

    Now, as a building contractor, you find none of these seven hurdles insurmountable. That’s because, as an intelligent agent, you can coordinate all the tasks needed to clear these hurdles. You have an architectural plan for the house. You know what materials are required to build the house. You know how to procure them. You know how to deliver them to the right location at the right time. You know how to secure the location from vandals, thieves, debris, weather and anything else that would spoil your construction efforts. You know how to ensure that the building materials are properly adapted to each other so that they work together effectively once put together. You know the order of assembly for putting the building materials together. And, through the skilled laborers you hire (i.e., the subcontractors), you know how to arrange these materials in the right configuration. All this know-how results from intelligence and is the reason you can build a usable house.

    But the Darwinian mechanism of random variation and natural selection has none of this know-how. All it knows is how to randomly modify things and then preserve those random modifications that happen to be useful at the moment. The Darwinian mechanism is an instant gratification mechanism. If the Darwinian mechanism were a building contractor, it might put up a wall because of its immediate benefit in keeping out intruders from the construction site even though by building the wall now, no foundation could be laid later and, in consequence, no usable house could ever be built at all. That’s how the Darwinian mechanism works, and that’s why it is so limited. It is a trial-and-error tinkerer for which each act of tinkering needs to maintain or enhance present advantage or select for a newly acquired advantage.

    I hope that quote answers most of your questions. Later, I’ll discuss how Meyer addresses the same question, and I’ll contrast it with the approach taken by Kalinsky.

  384. “You can scour ID works until doomsday and not find any CSI (or FSCI, or FCSI) analysis that allows for a possibility of Darwinian factors.”

    Why should it? We are told Darwinian processes started after the first cell appeared. The components of the cell supposedly appeared before life began, unless someone wants to make the case that a different form of life existed before ours arose. What were the pre-ATP synthase and the pre-ribosome?

    What were the steps along the way? Darwinian factors assume zillions of possibilities, with selection leading to the ones we see. In life it is easy to see the zillions of possibilities, but for the basic organization of life to begin with, what are the alternatives? None have been found, so how are Darwinian processes relevant? Did one of the few miracle solutions somehow get discovered by chance? That is not a 747-in-the-junkyard solution, but maybe a 757.

    We are not talking about a slightly different microbe here. We are talking about the building blocks and how essential they are. What alternatives are there to all these essential parts? If there are no or few alternatives, then was Darwinian selection.

  385. The last sentence of my previous post should be

    “If there are no or few alternatives, then what Darwinian selection?”

  386. Zeph and Jerry – ID is not a science such as physics, thermodynamics, evolutionary biology, plate tectonics

    Design detection is something that can be, and has been, done using a methodology that limits itself to measurable and testable observations of nature.

    ID is part of that field. I’d call it every bit as much of a science as physics etc.

  387. “ID is part of that field. I’d call it every bit as much of a science as physics etc.”

    It is science, but there is no domain of design detection. It is more like mathematics, and in fact is heavily dependent on statistics/probability. Science is generally defined by the domain of phenomena it investigates. ID spans several of these domains, just as mathematics and statistics do. It is definitely science, but the particular inferences take place within forensics, cosmology, evolutionary biology, anthropology, etc.

    That is my take on it.

  388. It is science, but there is no domain of design detection.

    OK, I can see that point.

  389. This post is on the same theme as my previous one: do the writings of ID proponents suffer from the tornado in a junkyard fallacy?
    Mustela Nivalis (326)

    As it turns out, I actually have read all of your references in my search for a definition of CSI that ID proponents agree uniquely identifies design and that takes into account known physics, chemistry, and evolutionary mechanisms. Unfortunately, none of your referenced materials do that. Most suffer from the assumption of a uniform probability distribution (the tornado in a junkyard fallacy described by Aleta).

    R0b (336)

    You can scour ID works until doomsday and not find any CSI (or FSCI, or FCSI) analysis that allows for a possibility of Darwinian factors. Assuming tractability of such probability calculations, taking into account physical laws in addition to random mutations could significantly shrink CSI totals. As it is, uniform distributions form a singular basis for CSI claims, and applicability of such quantifications to biological organisms is doubtful.

    In my previous post, I demonstrated that Dembski’s writings on intelligent design do not suffer from the tornado in a junkyard fallacy. In this post, I argue that Dr. Meyer’s latest book, Signature in the Cell, does not suffer from this fallacy either. Meyer does take the time to evaluate and reject the “pure chance” hypothesis for the origin of life, but he then goes on to carefully examine a number of hypotheses that invoke a combination of chance and necessity – including the much-vaunted RNA-world hypothesis. The following extracts convey the flavor of his work, and give the lie to claims made by some Darwinists that Meyer is attacking a straw man in his book.

    Evaluating the “pure chance” hypothesis for the origin of life

    Pages 210-213

    [B]y taking what he knew about protein folding into account, Axe estimated the ratio of (a) the number of 150-amino-acid sequences that produce any functional protein whatsoever to (b) the whole set of possible amino-acid sequences of that length. Axe’s estimated ratio of 1 in 10^74 implied that the probability of producing any properly sequenced 150-amino-acid protein at random is also about 1 in 10^74. In other words, a random process producing amino-acid chains of this length would stumble onto a functional protein only about once in every 10^74 attempts.

    In June 2007, Axe had a chance to present his findings at a symposium commemorating the publication of the proceedings from the original Wistar symposium forty years earlier. In attendance at this symposium was MIT engineering professor Murray Eden…

    Axe’s improved estimate of how rare functional proteins are within “sequence space” has now made it possible to calculate the probability that a 150-amino-acid compound assembled by a random interaction in a prebiotic soup would be a functional protein. This calculation can be made by multiplying the three independent probabilities by one another: the probability of incorporating only peptide bonds (1 in 10^45), the probability of incorporating only left-handed amino acids (1 in 10^45), and the probability of achieving correct amino-acid sequencing (using Axe’s 1 in 10^74 estimate). Making that calculation (multiplying the separate probabilities by adding their exponents: 10^(45+45+74)) gives a dramatic answer. The odds of getting even one functional protein of modest length (150 amino acids) by chance from a prebiotic soup is no better than 1 chance in 10^164… [T]he probability of finding a functional protein by chance alone is a trillion, trillion, trillion, trillion, trillion, trillion, trillion times smaller than the odds of finding a single specified particle among all the particles in the [observable – VJT] universe.

    And the probability is even worse than this for at least two reasons. First, Axe’s experiments calculated the odds of finding a relatively short protein by chance alone. More typical proteins have hundreds of amino acids, and in many cases their function requires close association with other protein chains….

    Second, as discussed, a minimally complex cell would require many more proteins than just one. Taking this into account only causes the improbability of generating the necessary proteins by chance – or the genetic information needed to produce them – to balloon beyond comprehension. In 1983 distinguished British cosmologist Sir Fred Hoyle calculated the odds of producing the proteins necessary to service a simple one-celled organism by chance at 1 in 10^40,000. At that time scientists could have questioned his figure…

    Axe’s experimental findings suggest that Hoyle’s guesses were pretty good. If we assume that a minimally complex cell needs at least 250 proteins of, on average, 150 amino acids, and that the probability of producing just one such protein is 1 in 10^164 as calculated above, then the probability of producing all the necessary proteins needed to service a minimally complex cell is 1 in 10^164 multiplied by itself 250 times, or 1 in 10^41,000. This kind of number allows a great deal of quibbling about the accuracy of various estimates without altering the conclusion.

    [The last sentence should suffice to refute claims made on the Internet that the 1 in 10^164 estimate for the likelihood of a single functional protein arising by chance is far too pessimistic. Even if we revise the estimate by dozens of orders of magnitude, we are still left with an astronomically improbable event, for a cell which requires 250 functional proteins - VJT.]
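    The exponent arithmetic in the passage above is easy to check mechanically (the input figures are Meyer’s and Axe’s; only the arithmetic is verified here):

```python
# Independent probabilities multiply, so their base-10 exponents add.
p_peptide_bonds = 45   # 1 in 10^45
p_left_handed   = 45   # 1 in 10^45
p_sequencing    = 74   # 1 in 10^74 (Axe's estimate)

one_protein = p_peptide_bonds + p_left_handed + p_sequencing
print(one_protein)     # 164: 1 in 10^164 per protein

# Raising 1 in 10^164 to the 250th power multiplies the exponent by 250.
all_proteins = one_protein * 250
print(all_proteins)    # 41000: 1 in 10^41,000
```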

    Pages 256-257

    Energy flowing through an open system will readily produce order. But it does not produce much specified complexity or information.

    The astrophysicist Fred Hoyle had a similar way of making the same point. He famously compared the problem of getting life to arise spontaneously from its constituent parts to the problem of getting a 747 airplane to come together from a tornado swirling through a junk yard… Energy might scatter parts around randomly. Energy might sweep parts into an orderly structure such as a vortex or funnel cloud. But energy alone will not assemble a group of parts into a highly differentiated or functionally specified system such as an airplane or cell (or into the informational sequences necessary to build one).

    [NOTE: Here the point Meyer makes is a purely qualitative one - that energy alone cannot generate sequence specificity - VJT.]

    Evaluating the self-organization hypothesis for the origin of life

    Pages 267-268

    As I examined Kauffman’s model, it occurred to me that I was beginning to see a pattern. Self-organizational models for the origin of biological organization were becoming increasingly abstract and disconnected from biological reality. [T]hese models claimed to describe processes that produced phenomena with some limited similarity to the organization found in living systems. Yet upon closer inspection these allegedly analogous phenomena actually lacked important similarities to life, in particular, the presence of specified complexity, or information.

    But beyond that, I realized that self-organizational models either failed to solve the problem of the origin of specified information, or they “solved” the problem at the expense of introducing other, unexplained sources of information.

    …In my view, these models either begged the question or invoked a logical contradiction. Proposals that merely transfer the information elsewhere necessarily fail because they assume the existence of the very entity – specified information – they are trying to explain. And new laws will never explain the origin of information, because the processes that laws describe necessarily lack the complexity that informative sequences require.

    Page 294

    As noted in Chapter 10, the probabilistic resources of the entire universe equal 10^139 trials, which, in turn, corresponds to an information measure of less than 500 bits. This represents the maximum information increase that could be reasonably expected to occur by chance from the big-bang singularity to the present… Taking these caveats into account allows a more general statement of the law [of conservation of information – VJT] as follows: “In a nonbiological context and absent intelligent input, the amount of specified information of a final system, S_f, will not exceed the specified information content of the initial system, S_i, by more than the number of bits of information the system’s probabilistic resources can generate, with 500 bits representing an upper bound for the entire observable universe.”
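    The “less than 500 bits” figure can be checked directly from the 10^139 trials (a quick computation, not something from the book):

```python
import math

# log2(10^139) = 139 * log2(10); computed this way to avoid forming the
# astronomically large intermediate value 10^139.
bits = 139 * math.log2(10)
print(round(bits, 2))  # 461.75, indeed under the 500-bit bound cited
```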

    The RNA World

    Page 305

    Problem 3: An RNA-based Translation and Coding System Is Implausible

    RNA-world advocates offer no plausible explanation for how primitive self-replicating RNA molecules might have evolved into modern cells that rely on a variety of proteins to process genetic information and regulate metabolism…

    Page 312

    Problem 4: The RNA World Doesn’t Explain the Origin of Genetic Information

    [A]s I studied the [RNA world] hypothesis more carefully, I realized that it presupposed or ignored, rather than explained, the origin of sequence specificity – information – in various RNA molecules.

    Page 315

    To make matters worse, as Gerald Joyce and Leslie Orgel note, for a single-stranded RNA catalyst to produce an RNA identical to itself (i.e., to self-replicate), it must find an appropriate RNA molecule nearby to function as a template, since a single-stranded RNA cannot function as both replicase and template. Moreover, as they observe, this RNA template would have to be the precise complement of the replicase.

    Pages 318-319

    Problem 5: Ribozyme Engineering Does Not Simulate Undirected Chemical Evolution

    Ribozyme engineers tend to overlook the role that their own intelligence has played in enhancing the functional capacities of their RNA catalysts. The way the engineers use their intelligence to assist the process of directed evolution would have had no parallel in a prebiotic setting, at least one in which only undirected processes drove chemical evolution forward. Yet this is the very setting that ribozyme experiments are supposed to simulate.

    Pages 322-323

    Theorists relying on necessity awaited the discovery of an oxymoron, namely “a law capable of producing information” – a regularity that could generate specified irregularity. Meanwhile, theories combining law and chance begged the question as to the origin of the information they ought to explain….

    But as I examined these new approaches [to the origin of life], I found them no more convincing than those they were seeking to supplement. Even apart from their limited success, the very fact that these experiments required so much intervention seemed significant. By involving “programming” and “engineering” in simulations of the origin of life, these new approaches had introduced an elephant into the room that no-one wanted to talk about…This led me back to where I had started – to the idea of intelligent design…

    Well, I hope that settles the matter, regarding the writings of Dr. Meyer.
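    As a footnote, the bit-count in the page 294 passage is easy to verify: 10^139 trials corresponds to log2(10^139), or roughly 462 bits, which is indeed under the quoted 500-bit bound. A minimal Python sketch of that arithmetic (nothing here beyond the numbers as quoted):

```python
import math

# Meyer's "probabilistic resources of the entire universe": 10^139 trials.
# The information measure of k equiprobable trials is log2(k) bits.
trials_exponent = 139
bits = trials_exponent * math.log2(10)

print(f"{bits:.1f} bits")  # about 461.7 bits, under the quoted 500-bit bound
```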

  390. Jerry & tribune7,

    ID as part of the field of Design Detection? Hmmm.

    What I’m perceiving here is that what is called “ID” feels like a marriage of convenience between (1) tearing down Darwin scientifically, and (2) describing a competitive hypothesis, not very scientifically.

    I find the former (critique of Darwinism on scientific grounds) worthwhile in its own right, whether it winds up in the end discrediting Darwinism or revolutionizing it (or even being discredited itself). It’s a good line of argument to pursue, and it fits within the framework of evolutionary biology (leaving aside whether the human proponents therein react well or not to it).

    But critiquing the Darwinian explanation is not really elaborating or developing any science of “intelligent design”; it’s just creating enough space for an alternative approach like ID to compete for explanatory value.

    I’m finding more difficulty discerning much content to the alternative that ID is proposing. What I’m hearing is extremely vague and amorphous: support for a hypothesis that intelligent design was injected into the mixture. Nothing about the shape or scope of that intervention, nothing about the structure or frequency or style of the intervention, or the nature of the presumed designer.

    Imagine that the Darwinists had just said “the diversity of life comes from accumulated gradual changes” and stopped there. No further details or structure or analysis of how or when, and no desire or interest in going beyond championing & defending that one hypothesis as the sole “product” of Darwinian evolution as science. I wouldn’t call that “a science”; it’s just a simple hypothesis (however well supported) in search of elaboration and development.

    Jerry, you do a very lucid job of describing the critique of established science alongside what is still accepted (at least by you) from that science. Thank you. But then when you start describing the alternative, for me the lucidity of your prose seems to recede, and you do things like blatantly conflating intelligence and free will as if it were obvious that these are the same thing or completely linked. (If seeding oyster beds is intelligence, I can show you some very intelligent programs that do not appear to be exhibiting free will). That kind of thinking jumps the rails of science.

    I’m a very receptive audience for anything that ID can produce in the way of telling us something about the designer or the mechanism or the structure of the interventions. Like, suppose that using evidence and scientific analysis, ID could identify two strains of intelligent design with different styles, goals, or mechanisms. Or estimate the IQ of the designer, based on analysis of mistakes. THOSE would be fascinating, and would be an original contribution of merit to science. Certainly literary forensics can discover such things, so it’s not setting the bar impossibly high.

    Is the ID hypothesis fruitful, or sterile, as an area of human research? Do you care to predict whether, in say the next ten years, ID will develop any structure beyond “some kind of intelligence intervened in some way once or many times”?

    One thing I’m wondering is whether this lack of even rudimentary elaboration of ID’s hypothesis is due to “there’s nothing to be discerned” (true intellectual sterility) or “it might offend some supporters who believe the designer is the Christian God” (adherence to a limiting but unspoken agenda).

    Suppose some ID researcher, analyzing signs of intelligence, asserted that there were two distinguishable styles of intervention. Scientifically, this might be a breakthrough of enormous impact. But for anybody who supports ID (rhetorically, financially or emotionally) with the hope that it will validate their religious faith, this might be unsettling. Is this a sign of Jesus vs Yahweh having different styles (where’s the Holy Ghost?)? Or is it the pagan God and Goddess? Yahweh and Satan? Theologically it would be a mess, and might result in that branch of ID being condemned from the pulpit as furiously as Darwin is.

    What’s your sense? Is the ID movement intellectually free to pursue the evidence wherever it might lead, or are there orthodoxies which might constrain which ID research gets support from the ID community, depending on its conclusions?

  391. Zeph — Imagine that the Darwinists had just said “the diversity of life comes from accumulated gradual changes” and stopped there.

    It would actually be correct. Diversity of life does come from accumulated gradual changes. The problem comes when it claims “all diversity of life” and “only through accumulated gradual changes”.

    No further details or structure or analysis of how or when,

    What are the details on the cause of The Big Bang? Is The Big Bang science?

    and no desire or interest in going beyond championing & defending that one hypotheses

    You see no value in new observations? It’s when the observation is accepted that expansion occurs.

  392. vjtorley at 385,

    Thanks for the detailed quote from Dembski — it really helps to have the reference material as part of the discussion.

    One of the reasons I’m interested in getting a better understanding of CSI is because, as you note, Dembski does explicitly state that known evolutionary mechanisms must be taken into account if the measurement is to be meaningful in the real world. There are, however, two problems with his further discussion of the topic.

    The first is that he never, to my knowledge, actually discusses particular mechanisms. That is, I haven’t seen an attempt by Dembski to calculate CSI for a real biological artifact with reference to how particular physical, chemical, and evolutionary mechanisms affect the calculation. Everything I’ve read is at a very high level, whereas the CSI, if it’s there, will be in the details.

    The second problem is illustrated by the section you quoted. The analogy with a housing contractor and the list of supposed requirements assume that evolutionary mechanisms are looking for a particular outcome. This is sometimes referred to as the “Lottery Winner fallacy” and is, unfortunately, almost as prevalent in ID calculations as the uniform probability distribution. While any specific outcome might be unlikely, an outcome of some sort (a lottery winner or a viable organism) is far more probable.
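    To illustrate the fallacy with a toy example (the lottery size below is purely illustrative, not a figure from Dembski's work): a pre-specified winning ticket is vanishingly improbable, yet every drawing produces some winner.

```python
import random

random.seed(0)
N_TICKETS = 1_000_000  # illustrative lottery size (a hypothetical number)
N_DRAWS = 1_000

# Simulate N_DRAWS independent lottery drawings.
draws = [random.randrange(N_TICKETS) for _ in range(N_DRAWS)]

# A pre-specified outcome (ticket #42 winning) is vanishingly rare:
# almost certainly zero hits in a thousand drawings.
hits_specified = sum(d == 42 for d in draws)

# But an outcome of *some* sort occurs in every single drawing.
print(hits_specified, "specified hits out of", len(draws), "drawings")
```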

    I do hope that Dembski or another ID researcher takes the CSI concept to the next level and calculates it for some real biological artifacts, keeping in mind Dembski’s quite appropriate insistence that known evolutionary mechanisms be incorporated.

  393. vjtorley,

    I intend to respond to your citations of Durston, Chiu, et al. as time permits. I have some notes from when I first read those papers. Real-world work is interfering with my web time, though, so it may be a day or two.

    Regards.

  394. Mr Vjtorley,

    With regard to your quotation from Dr Dembski, the matter would be settled if we could look at the work of which he says

    Yet even with the most generous allowance of legitimate advantages, the probabilities computed for the Darwinian mechanism to evolve irreducibly complex biochemical systems like the bacterial flagellum always end up being exceedingly small.

    If Dr Dembski, or whoever did the calculations that “always end up being exceedingly small”, would publish those calculations, the critics would be quieted. Instead we get an analogy about house building, and not building a series of houses, either. So while Dr Dembski does say the calculations take selection into account, without seeing them there is still an opening for the critics.

    With respect to Signature in the Cell, Dr Meyer quotes a number of calculations, some of which you have highlighted.

    Bolding Dr Fred Hoyle’s original tornado-in-a-junkyard calculation is not going to convince anyone that ID does not routinely use tornado-in-a-junkyard argumentation. All of these phrases that you have bolded:

    “random process producing amino-acid chains of this length would stumble onto”

    “probability that a 150 amino-acid compound assembled by a random interaction”

    “The odds of getting even one functional protein of modest length (150 amino acids) by chance”

    “Axe’s experiments calculated the odds of finding a relatively short protein by chance alone.”

    “odds of producing the proteins necessary to service a simple one-celled organism by chance”

    “If we assume that a minimally complex cell needs at least 250 proteins of, on average, 150 amino acids and that the probability of producing just one such protein is 1 in 10^164 as calculated above, then the probability of producing all the necessary proteins needed to service a minimally complex cell is 1 in 10^164 multiplied by itself 250 times, or 1 in 10^41,000.”

    Now, all of the above appear in the section on the chance hypothesis, so you can say that Meyer is quoting them correctly. Subsequent occurrences cannot be treated as kindly.

    “He famously compared the problem of getting life to arise spontaneously from its constituent parts to the problem of getting a 747 airplane to come together from a tornado swirling through a junk yard…” Ooops.

    “As noted in Chapter 10, the probabilistic resources of the entire universe equal 10^139 trials”

    “reasonably expected to occur by chance”

    To summarize, if you see the phrase “random process … by chance” then you are not dealing with chance and selection. If you see “number of trials” you are not dealing with selection and contingency, you are dealing with independent trials. If you see “the probability of X is A, so the probability of N*X is A^N” you are dealing with independent trials, not selection, variation, exaptation, history, or contingency on the laws of chemistry and physics. All of the above are versions of “if it happened at all, it happened all at once”, aka tornado-in-a-junkyard.

    Bringing these quotations together does not dispel the notion that ID theorists frequently use the idea and mathematical equivalent of Dr Hoyle’s memorable phrase. You could go to papers by Kalinsky, Abel and others and get more examples.

    If you want to find true counter-examples, look for reasoning and math that assumes f(t+1) = f(t) + variation + selection, in other words, an iterated function system. Look for the use of the Price equation or Holland’s Schema Theorem. But I have to warn you, I have never seen an ID theorist reason using these tools to show the inadequacy of evolution.
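    To make the contrast concrete, here is a toy Python sketch of what an iterated function system with selection looks like, in the style of Dawkins’s well-known “weasel” illustration (a deliberately simple toy, not a model of any real biology or of any published calculation). A blind independent-trials search over 28 characters from a 27-letter alphabet would face roughly 27^28, or about 10^40, possibilities; cumulative selection typically converges in a few hundred generations.

```python
import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Selection criterion: number of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Variation: each character has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# f(t+1) = f(t) + variation + selection:
# each generation, keep the fittest of the parent plus 100 mutated offspring.
current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while current != TARGET:
    current = max([current] + [mutate(current) for _ in range(100)],
                  key=fitness)
    generations += 1

print(generations)  # typically a few hundred generations, not ~10^40 trials
```

    Because the parent is kept among the candidates, fitness never decreases; that monotone accumulation is exactly what the independent-trials calculations quoted above leave out.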

  395. Tribune7:

    Zeph: Imagine that the Darwinists had just said “the diversity of life comes from accumulated gradual changes” and stopped there…
    No further details or structure or analysis of how or when…

    Tribune7: What are the details on the cause of The Big Bang? Is The Big Bang science?

    I notice a shift of “details or structure or analysis” to “details on the cause”, which is quite different. I’m not asking ID to come up with details on the cause of intelligence or who designed the designer, just more structure for how the process works or worked.

    Note that there has been a tremendous amount of detailed analysis of the likely events and mechanisms during the Big Bang, as well as attempts to validate and test alternative details based on such evidence as differences in cosmic background radiation. The proponents decidedly didn’t stop at “there was a big bang out of which everything expanded somehow” or it would not have been a branch of science.

    You see no value in new observations? It’s when the observation is accepted that expansion occurs.

    No, I do see value in new analyses, which is why I’m here. I’m intrigued by the intelligent design hypothesis and the critiques of Darwinian evolution associated with it.

    However, I’m asserting that the only people who need to “accept the observation” now are ID scientists. They are free already to produce fruitful results and begin elaborating significant structure in the designer, the mechanism, etc., so that the sum total of “results” is more than “somehow intelligence was probably introduced at some point or points”, and defend those results to the same standard that they hold Darwinists to. THEN the world will really take notice! (And I’ll be truly excited).

    It doesn’t work to say “ID scientists are unable to elaborate any details whatsoever until mainstream science totally accepts their hypothesis”. That’s putting the cart before the horse. They need to believe in their hypothesis sufficiently to produce some real theories and content.

    Suppose a decade from now Darwinian evolution was totally discredited, and there was no longer any need to attack it. An ID textbook is being taught in school. It has one introductory “historical context” chapter detailing how ID proponents discredited Darwinian evolution; sort of like a microbiology book with a brief nod to pre-germ theory notions before getting down to the real content. The remainder of the book would describe the results of ID science.

    Currently the rest of that book on ID science seems to be “somehow some kind of intelligence of unknown nature was injected at some time or times in the past”. No further details. No theories. No structure. No mechanisms. No progress on reducing any of the vagueness in that statement. No experimental validation of its own detailed elaborations (versus attacking Darwinism, which was completed in the intro chapter).

    Please understand that even somebody open minded and interested in alternate hypotheses has a hard time characterizing that textbook as describing even a new science.

    I’m hoping for more. I’m rooting for it, not against it. I would enjoy having a true science of ID with fruitful and exciting results, internal debates and schools of scientific thought, ever-expanding scientific knowledge. As a newbie, at this point I’m struck by how little scientific fertility the ID hypothesis appears to have, and hoping that will change.

  396. Zeph

    I’m not asking ID to come up with details on the cause of intelligence or who designed the designer, just more structure for how the process works or worked.

    I’d say that claiming life came about via a “big design” is a fair analogy to claiming the universe came about via a big bang. :-)

    I’m asserting that the only people who need to “accept the observation” now are ID scientists. They are free already to produce fruitful results and begin elaborating significant structure in the designer,

    And that they are. But it is certainly not inappropriate for them to defend the observation.

    Suppose a decade from now Darwinian evolution was totally discredited, and there was no longer any need to attack it.

    One of the big problems with our culture, science and educational system is the insistence on attempting — pretending might be a better word — to provide empirical certainty where none exists.

    This is evident in the way Darwinian evolution is treated as dogma even including attacks on heretics and such.

    I would not want ID to become a dogma. One of my big concerns regarding ID is that people will attempt to use it to prove God via science, which would be a very bad thing.

    Now, I think it would be very good for a nation to hold the existence of God — the loving one of Judeo-Christianity — as axiomatically true even if just for the hopefully obvious reasons that we want those who write and enforce society’s laws to realize that there will be an ultimate accounting for injustice, and that we want the citizenry to realize they have an individual responsibility for compassion and action independent of the state, but science class isn’t the place to teach this.

    The one useful thing I see ID as doing is refuting, conclusively, material accidentalism.

  397. tribune7: “The one useful thing I see ID as doing is refuting, conclusively, material accidentalism.”

    OK, that’s honest. Your interest in ID is not in developing a fertile and intellectually vigorous new science which is ever-expanding human knowledge like other sciences, but to provide more of a philosophical or sociological outcome. To do that, the only result it needs is “some kind of non-natural intelligence was injected into the system at some point”, and once that’s conceded, there’s no need to examine the issue any further.

    I have the impression that some people actually believe ID has the potential to become a science in itself. It is they who may be disappointed. Assume success. Once attacking Darwinism becomes beating a dead horse, there’s no need for graduate students to study ID and hope to make a career in it – if the sum total of all it ever intends to reveal is that simple statement and stop there. The only useful conclusion of ID will have been “proven”, so there’s nothing left for ID to investigate or explicate.

    Meanwhile, the evolutionary biologists will continue to have a rich intellectual endeavor which continues to illuminate “micro-evolution” with all its fascinating twists and turns. There will be ongoing employment for practical or small scale evolutionists, even if they can explain only part of the picture (micro-evolution), because that part keeps expanding and deepening. It’s a fertile science in its ability to explain more and more, even if it were never to explain everything.

    This reminds me of some people who dispute the validity of all macroeconomic models. If they succeeded in convincing everybody that there was no point to macroeconomics because the phenomena were too complex, the microeconomists would still be employed. (But there’s no need to pay anybody to keep reproving that macroeconomics isn’t correct, without elaborating on any alternative).

    I’m really interested in finding our best understanding of the scientific truth here, separated from religion, philosophy, and politics.

    “One of the big problems with our culture, science and educational system is the insistence on attempting — pretending might be a better word — to provide empirical certainty where none exists.”

    Hmmm. I may be misunderstanding, but isn’t that what ID is attempting as well? Its proponents in general appear to be just as firm in their belief that their hypothesis is certainly correct, as are their opponents. (The opponents have the advantage of numbers currently, but I’m talking about attitudes, and obviously many here wish to change that disparity – to have more ID believers than DE believers a decade hence).

    I’m not saying that ID is worse; just that it fits the same characterization – fulfilling a cultural need for alleged certainty rather than uncertainty.

    I appreciate your desire to avoid dogma, but I have doubts about the success with regard to ID, if it has only one simple conclusion and no internal debate or investigation of further details.

    (By the way, while I agree that science class isn’t the place – I agree that “we want the citizenry to realize they have an individual responsibility for compassion and action independent of the state”. I’d probably say independent of any particular religion or groups of religions as well. I’m not sure ID would actually lead to that, but I can sympathize with your goals).

  398. Zeph — Your interest in ID is not in developing a fertile and intellectually vigorous new science which is ever-expanding human knowledge like other sciences, but to provide more of a philosophical or sociological outcome.

    There’s a subtlety you’re missing. Overturning a dogma that has become a dead-end would expand human knowledge more than any new science could.

    “One of the big problems with our culture, science and educational system is the insistence on attempting — pretending might be a better word — to provide empirical certainty where none exists. . . but isn’t that what ID is attempting as well?”

    I can’t speak for all ID proponents but the consensus to me seems to be that ID follows a scientific methodology which means it is and must remain potentially falsifiable.

    Its proponents in general appear to be just as firm in their belief that their hypothesis is certainly correct, as are their opponents

    Obviously, a sincere advocate will believe his position to be correct but I hope nobody here wants to hold ID as a dogma – i.e. we have the truth hence if you question us you must not be allowed tenure, be kept from publishing etc.

    I talk mostly about scientific evidence, but I’ve been thinking about the “materialist” orientation of science. That orientation seems particularly to gall some ID proponents. I do not believe it arises from atheism or desire to deny spirit – indeed many of the leading proponents of science have been theists.

    I believe this material or natural orientation arises more as a heuristic of sorts, based on accumulated experience.

    Continually expecting a material explanation – even if we don’t have it yet – has turned out to be a remarkably fertile intellectual approach. That is, people following that approach achieve more useful or valuable results, which leads to others adopting similar approaches.

    The alternative is in a broad sense, magic. Perhaps there is a better word, but this will do for now. As a plot device, the characteristic of magic is that it has no limits – except the limits the author wants for plot purposes. The magic in Harry Potter’s universe follows no real structure, has little rhyme or reason. This is convenient for fiction – if you need a spell for changing a housecat into a dragon, such spells are possible. No explanation needed, “it just is”.

    The real world isn’t like that. There is an inherent structure of what can become what, restrictions like conservation of mass, or dynamics of loss vs gain of information. (These limitations are what ID advocates rely on when scientifically critiquing DE).

    We can invent similar magical explanations of the natural universe. We as humans have done so many times and in many ways. The brilliance of the scientific method is its dogged (even if at times time delayed) return to the test of the real world. I’ve heard reality defined as “that which stays around even if we stop believing in it”. And that reality which doesn’t bend to our norms and expectations and desires is the pole star to which science keeps returning, honing and burnishing (and breaking and rebuilding) its explanations. (If ID wins scientifically, it will be from using these tools).

    If the cause of leprosy is ascribed to supernatural means, then either we don’t even bother trying to understand it better, or we come up with many conflicting and arbitrary treatments or propitiations which basically don’t work because they take their cues from theories and imaginings ungrounded in empiricism. If on the other hand we assume there are natural causes to be eventually discovered, through continuing to observe, hypothesize, and test against the real world – wow, time after time we wind up finding at least part of the answer. We might learn how to avoid spreading it, even before learning about microbes.

    Science does not have all the answers and probably never will; but its approach in assuming there are materialistic or naturalistic answers has proven to be extremely fruitful compared to many other approaches. It has an ongoing and even accelerating success at explaining more and more, which is hard to argue with. The anti-DE part of ID is very much within that realm.

    So science is naturally biased against accepting something like “some non-natural intelligence intervened” because that becomes a show stopper, like a convenient magic spell to untangle a plot crisis. The gap in knowledge is filled with an all-purpose formless putty that admits no further exploration or illumination. It’s like letting a geometry student just write on their paper “this lemma will lead to the desired theorem” in green ink whenever they come to a step for which they have no solution, and considering that sufficient explanation. Once you can do that, there’s no point in working up a sweat looking for a more nuanced solution. But then, you don’t develop the mathematics to build a bridge, because you didn’t keep working to fill in the gaps of your earlier knowledge.

    So – even if ID is “true”, it’s understandable that science is going to have a hard time accepting it, so long as adopting its approach leads to intellectual dead ends.

    Suppose you are a grad student looking into some adaptation of butterflies which is mysterious; why bother continuing to puzzle it out if you can just say “this was probably a manifestation of the designer because we have no naturalistic explanation”, turn in your thesis, and get your degree? And teach your students to do the same.

    A nation whose “scientists” did that would be out-competed by a nation whose scientists doggedly assumed and sought natural explanations, even if it took generations to find them.

    Maybe there are non-natural explanations for many daunting phenomena. But assuming otherwise and continuing to look has proven over time to be extremely adaptive. Even mysteries that have baffled scientists for generations sometimes unfold later, because they keep assuming there’s a natural explanation. So scientists have developed a strong allergy to non-naturalistic answers as being “too pat” and “intellectually sterile, leading to no further insights”.

    This is why I’d like to see ID actually take on learning more about the nature of the designer, the methods and nature of the interventions, etc. Become fruitful and the memes will multiply (even if such reproductive success doesn’t explain the original memes :-)). Show that ID is not an immediate scientific dead end, with 90% of all it will ever in a thousand years deliver already known by 2010.

    A metascientific prediction would be: because it relies on non-natural interventions, ID is likely to be less fertile in the knowledge and insights it yields over time. That has historically been the experience of other conflicts between scientific naturalism and non-naturalistic hypotheses or approaches, so the natural prediction is that it will happen again with ID. This prediction is testable and falsifiable, but I don’t claim it’s directly scientific.

    I’d like to see that prediction proven wrong, by a robust ID science field which elaborates the details, debates the evidence, forms theories, challenges itself – and comes up with far more than one result.

  400. Zeph — Continually expecting a material explanation – even if we don’t have it yet – has turned out to be a remarkably fertile intellectual approach.

    If you are studying nature you are going to look for natural causes. The problem that has occurred is the claim that the study of nature can reveal all truths, or if it can’t be found in the study of nature the truth is not important.

    And some of the greatest scientific discoveries have occurred almost in direct opposition to the expectation of a material cause. Think thermodynamics, biogenesis, The Big Bang.

    When scientists assume God did it and ask “how?” they are more productive than if they assume chance did it and ask “how?”

  401. tribune7: There’s a subtlety you’re missing. Overturning a dogma that has become a dead-end would expand human knowledge more than any new science could.

    First off, I hope you have understood that I support challenging the DE orthodoxy on scientific grounds. I can see that evolutionists have circled the wagons and rigidified. (I actually think the ID debate would be far healthier if there were not battle scars from biblical creationism poisoning the trust and good will possibilities, but the world is as it is). So I say “go for it” – see if your viewpoint can win on scientific grounds. I’m intrigued as I say.

    BUT – I don’t honestly see that biology is at any dead end because of some DE dogma. There is an assertion that they have not yet sufficiently explained macroevolution – but even many ID folks concede that microevolution has explanatory value. As I’ve said in another post, science is OK with marking something “not yet fully explained” for generations, so long as they meanwhile are producing useful results, incrementally chipping away.

    On the other hand – once the dogma of DE (I’m tired of writing out Darwinian Evolution) has been defeated and ID is free to demonstrate its scientific fruits – what will they be? If DE’s explanation has lost traction by 2015, how will ID step in to continue to expand human knowledge from 2015 to 2025?

    I appear to be hearing that there’s no interest, drive or perhaps ability for ID science to continue to expand human knowledge any further than “intelligent intervention of some kind happened”.

    Now if you want a stagnant science to attack, try String Theory. The case for that being an intellectual dead end is far stronger than for DE and biology. The cosmologists need some radically new theories – but ones that can in turn be elaborated and tested.

    But that’s another subject. For now, we’re talking biology. And I’m genuinely curious what sorts of ongoing expansion of human knowledge you think will arise once the dragon of Darwin has been tamed and minds are free to expand on alternate hypotheses, including ID.
    What would be a possible example of a second finding from studying ID?

  402. tribune7:

    We are in alignment about many things.

    If you are studying nature, you are going to look for natural causes. The problem that has occurred is the claim that the study of nature can reveal all truths – or that if a truth can’t be found in the study of nature, it is not important.

    As it happens, I personally do not believe that science explains everything. I think that science excels at the aspects of the universe which are naturalistic and subject to systematic study using the scientific method. Within that sphere, I think it does a good job. But I doubt that science will ever answer questions like “is there a God?” per se, because it’s outside science’s realm.

    ID wants to be accepted within the domain of things that Science excels at, using the logic and tools of science.

    To take another theory of creation, some say that the universe was created a few thousand years ago, complete with fossils and radioactive decay residues and everything – completely and seamlessly simulating all evidence of a long pre-history. Because of the way this is exposited, science cannot weigh in on it one way or the other – it’s not a scientific hypothesis. Whereas I think most people here recognize that at least parts of ID are legitimate scientific hypotheses (whether or not they consider those hypotheses validated).

    When scientists assume God did it and ask “how?” they are more productive than if they assume chance did it and ask “how?”

    Well, I don’t know if we’re ready to assume the designer fits the full description English speakers ascribe to “God” :-), but we are in alignment that asking “How?” the designer did their work would be very interesting! Let’s look for signs of a mechanism or the structure of its abilities and limits or any other discernible “how”. If that free inquiry yields many scientific results, I will agree with your contention that breaking the dogma of DE has opened the door to expanding human knowledge.

    Maybe you are right, and asking “how” within the context of assuming ID will prove more fruitful than asking “how” within the context of assuming DE. The proof will be in the pudding.

    Alas, I don’t see much interest yet in ID addressing “How” questions now that it has the freedom to do so, but it sounds like you’d like to see that happen too.

  403. Zeph: Alas, I don’t see much interest yet in ID addressing “How” questions

    But it won’t be ID doing the addressing. It would be someone in medicine or chemistry or astrophysics asking the question with ID in the rather distant background.

    And I think you and I are in agreement on much.

  404. Nakashima: I’m in violent agreement with you…

    Wow, I read that as violent disagreement, and started to compose a clarification, thinking I must have clumsily sounded like I was making a broader assertion than I intended. (smile)

    Your point that DNA fragments are sequentially read is well taken.

    It’s a good thing that there are a lot of parallel and sometimes redundant genes, so a minor error in one location *often need not* affect full functioning elsewhere (tho of course it might).

    I continue to be amazed and delighted with the mechanisms of life, starting with DNA/RNA/proteins. Intel does a pretty good job too, but is adapted to a different purpose and environment.

    Er, designed for a different purpose, in this particular case.

  405. I have a question for Nakashima and Mustela (and anybody else).

    Yeah, me again.

    Thanks to all the kind souls who have tolerated and responded. I’ll get too busy with some other projects soon and need to drop back, promise.

    Reading around on this site, I seem to perceive a recurring propensity to briefly characterize the heart of Darwinian Evolution as “chance” or “randomness”. Not in the most sophisticated arguments from ID proponents, tho even they sometimes slip into that kind of phrasing despite knowing better.

    I find interesting and legitimate the hypothesis that mutation + natural selection + conserved incremental cumulative functionality are insufficient to account for known complexity (tho I’m not decided either way).

    But mentioning chance while forgetting selection sounds to me like skepticism that merely draining the oil from your engine periodically could allow your car to operate for over 200K miles – while neglecting to mention that the oil-change proponents also expect the car to be refilled with clean new oil, probably the more important half of the operation.

    (The car would do better – for at least a while – with no oil change than with draining without refilling. Likewise, in DE, we’d survive better in the short run with zero mutations, absent any selection process to cull the results. Somewhere not far beyond this point, the analogy breaks down faster than the oil free car).

    The question: why is chance so often mentioned alone? Is this sort of like Einstein’s famous quote about refusing to believe that God plays dice (quantum physics) – an intuitive repugnance or rejection of randomness (that entropic killer of all that is orderly and good) which puts “chance” at the top of the Most Wanted List of intellectual felons? Or do many ID folks basically not understand the core concepts of (neo) Darwinian Evolution? (Obviously *some* ID proponents understand it quite well, better than I as a non-biologist). Or is it a rhetorical ploy, building a strawman to tear down, even if they (should) know better?

    (Right now I’m not talking about full blown tornado/junkyard arguments, but just frequent casual references in the course of other arguments)

    I wish this were less prevalent. I’m trying to give ID a fair hearing (and am impressed with the best of its arguments, if not yet fully convinced). But it helps me follow along when I see that the ID proponent is not overtly distorting or demeaning DE, just trying to out-reason it fairly.

    Thoughts?

  406. Mr. Nakashima (#396)

    Thank you for your post. With the greatest respect, you seem to have missed the point of the quotations I cited from Dr. Stephen Meyer’s book, Signature in the Cell.

    You wrote:

    With respect to Signature in the Cell, Dr Meyer quotes a number of calculations, some of which you have highlighted…

    Bringing these quotations together does not dispel the notion that ID theorists frequently use the idea and mathematical equivalent of Dr Hoyle’s memorable phrase…

    Bolding Dr Fred Hoyle’s original tornado-in-a-junkyard calculation is not going to convince anyone that ID does not routinely use tornado-in-a-junkyard argumentation…

    To summarize, if you see the phrase “random process… by chance” then you are not dealing with chance and selection. If you see “number of trials” you are not dealing with selection and contingency, you are dealing with independent trials. If you see “the probability of X is A, so the probability of N*X is A^N” you are dealing with independent trials, not selection, variation, exaptation, history, or contingency on the laws of chemistry and physics. All of the above are versions of “if it happened at all, it happened all at once” aka tornado-in-a-junkyard.

    If you look back at my post on Dr. Meyer (#391) you will see that it is divided into three main parts. The first part does deal with pure chance. Indeed, I myself described it as such:

    Evaluating the “pure chance” hypothesis for the origin of life.

    That’s where the bold quotes you cited came from. The point of this section was purely to show that random processes alone don’t have a snowball’s chance in Hades of generating life. Comparisons with Hoyle were entirely apt here; the difference being that in the 27 years since he wrote, the probability calculations have improved for the “pure chance” scenario. Evidently, the probability of life originating by pure chance isn’t 10^-40,000, as Hoyle thought. It’s 10^-41,000 – give or take.

    The next section in my post (#391) was entitled,

    Evaluating the self-organization hypothesis for the origin of life.

    In this section, Meyer carefully considered non-random mechanisms that had been proposed for the origin of life, including Kauffman’s self-organization theory and several other related hypotheses. After examining them carefully, Meyer found them wanting:

    …I realized that self-organizational models either failed to solve the problem of the origin of specified information, or they “solved” the problem at the expense of introducing other, unexplained sources of information.

    …In my view, these models either begged the question or invoked a logical contradiction. Proposals that merely transfer the information elsewhere necessarily fail because they assume the existence of the very entity – specified information – they are trying to explain. And new laws will never explain the origin of information, because the processes that laws describe necessarily lack the complexity that informative sequences require.

    I think that’s clear enough. Don’t you?

    Finally, Meyer addressed the RNA world. He listed five reasons why he thought it wouldn’t work. I quoted the three most important ones:

    Problem 3: An RNA-based Translation and Coding System Is Implausible

    RNA-world advocates offer no plausible explanation for how primitive self-replicating RNA molecules might have evolved into modern cells that rely on a variety of proteins to process genetic information and regulate metabolism…

    Problem 4: The RNA World Doesn’t Explain the Origin of Genetic Information

    Page 315

    [A]s Gerald Joyce and Leslie Orgel note, for a single-stranded RNA catalyst to produce an RNA identical to itself, (i.e. to self-replicate), it must find an appropriate RNA molecule nearby to function as a template, since a single-stranded RNA cannot function as both replicase and template. Moreover, as they observe, this RNA template would have to be the precise complement of the replicase.

    Problem 5: Ribozyme Engineering Does Not Simulate Undirected Chemical Evolution

    Ribozyme engineers tend to overlook the role that their own intelligence has played in enhancing the functional capacities of their RNA catalysts. The way the engineers use their intelligence to assist the process of directed evolution would have had no parallel in a prebiotic setting, at least one in which only undirected processes drove chemical evolution forward.

    In short, Dr. Meyer looked at the most promising non-random hypotheses for the origin of life, and found them wanting: they all failed to account for the origin of specified information that is found in living things. The failure here was not isolated to this or that hypothesis: it was a pervasive failure, suggesting that scientists seeking to account for the origin of life had taken a fundamentally wrong-headed approach.

    I should like to add that Dr. Meyer has been investigating the problem of the origin of life since the mid-1980s, and his book has received high praise from several professors in related disciplines, such as chemistry and biology. You can verify this yourself at his Website: http://www.signatureinthecell.com/quotes.php . His book was also extensively reviewed by experts in the fields he discusses, before it was published. To accuse Dr. Meyer of attacking a strawman, and of having fundamentally misconstrued contemporary scientific approaches to the origin of life, strikes me as implausible to the n-th degree.

    And what of the alternative mechanisms you propose?

    If you want to find true counter-examples, look for reasoning and math that assumes f(t+1) = f(t) + variation + selection, in other words, an iterated function system. Look for the use of the Price equation or Holland’s Schema Theorem. But I have to warn you, I have never seen an ID theorist reason using these tools to show the inadequacy of evolution.

    Iteration won’t work as an account of how specified information originates. Polymerization, yes; specificity, no.

    Let’s see what Wikipedia has to say about the equation and theorem you discuss.

    Holland’s schema theorem

    Holland’s schema theorem is widely taken to be the foundation for explanations of the power of genetic algorithms…

    Using the established methods and genetic operators of genetic algorithms, the schema theorem states that short, low-order, schemata with above-average fitness increase exponentially in successive generations.

    Price equation

    The Price equation (also known as Price’s equation) is a covariance equation which is a mathematical description of evolution and natural selection. The Price equation was derived by George R. Price, working in London to rederive W.D. Hamilton’s work on kin selection…

    Suppose there is a population of n individuals over which the amount of a particular characteristic varies…

    Index each group with i so that the number of members in the group is n_i and the value of the characteristic shared among all members of the group is z_i. Now assume that having z_i of the characteristic is associated with having a fitness w_i where the product w_i * n_i represents the number of offspring in the next generation.

    Note that this is really a difference equation relating the average value of a characteristic in one generation to the average value of the characteristic in the very next generation.
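    For concreteness, the difference equation just described can be checked numerically. Here is a minimal sketch of the Price equation (with no transmission bias) on an invented three-group population – all numbers are hypothetical, chosen only to illustrate the covariance identity:

```python
import numpy as np

# Hypothetical three-group population (all numbers invented for illustration)
z = np.array([1.0, 2.0, 3.0])   # trait value z_i of each group
n = np.array([10, 20, 10])      # group sizes n_i
w = np.array([1.0, 1.5, 2.0])   # fitness w_i: group i leaves w_i * n_i offspring

# Mean trait this generation, and in the next (weighted by offspring counts)
z_bar = (n * z).sum() / n.sum()
z_bar_next = (w * n * z).sum() / (w * n).sum()

# Price equation with no transmission bias: w_bar * delta_z_bar = Cov(w, z)
w_bar = (n * w).sum() / n.sum()
cov_wz = (n * (w - w_bar) * (z - z_bar)).sum() / n.sum()

assert abs((z_bar_next - z_bar) - cov_wz / w_bar) < 1e-12
```

    The identity holds exactly; note, though, that nothing in it says where the variation in z comes from – it only tracks how differential fitness shifts the mean.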

    This is all very well and good, but it doesn’t tell us where this variation comes from.

    Moreover, “fitness” is a biological concept. Polypeptides are neither fit nor unfit, so it seems to me that Price’s equation and Holland’s schema theorem are of no help in explaining the origin of proteins, let alone the first living cell. Unless you can tell me why polypeptides that fold up nicely and do a job are more likely to have proliferated on the primordial Earth than polypeptides that don’t, it seems to me that the two mathematical results you quote won’t help explain abiogenesis.

    Finally, you claim that Dembski doesn’t give detailed probabilistic calculations to support his case:

    If Dr Dembski, or whoever had done the calculations that “always end up being exceedingly small” would publish those calculations, the critics would be quieted. Instead we get an analogy about house building, and not building a series of houses, either. So while Dr Dembski does say the calculations take selection into account, without seeing them there is still an opening for the critics.

    When Dr. Dembski wrote his article on “Specification” in 2004, detailed probabilistic calculations were not yet available. In any case, calculating the probability of a bacterial flagellum emerging by undirected processes would have been too difficult. However, since then, it has become possible to calculate the likelihood of a functional protein emerging, and to arrive at a ballpark figure for the probability of life emerging, since the simplest cell requires hundreds of proteins. Dembski refers to these results in his 2008 book, The Design of Life. As Dr. Meyer puts it in Signature in the Cell:

    If we assume that a minimally complex cell needs at least 250 proteins of, on average, 150 amino acids and that the probability of producing just one such protein is 1 in 10^164 as calculated above, then the probability of producing all the necessary proteins needed to service a minimally complex cell is 1 in 10^164 multiplied by itself 250 times, or 1 in 10^41,000. This kind of number allows a great deal of quibbling about the accuracy of various estimates without altering the conclusion.

    You may object that this is “tornado in a junkyard” reasoning again, as the calculation assumes that the proteins formed independently. So here’s my challenge: show me a mechanism that would allow an already formed functional protein to increase the probability of another functional protein arising, while swimming around in the primordial soup, thereby raising the likelihood of life forming by undirected processes.

    I’m waiting.
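    As a side note, the exponent arithmetic in the Meyer quotation above is easy to verify. The sketch below (my own, not from the book) checks only the multiplication of independent probabilities – which is, of course, precisely the independence assumption under dispute in this thread:

```python
from fractions import Fraction

# The exponent law behind the quoted figure:
# (1/10^164) multiplied by itself 250 times = 1/10^(164*250).
assert 164 * 250 == 41_000

# Exact rational arithmetic confirms it (Python integers are unbounded):
p = Fraction(1, 10**164)
assert p**250 == Fraction(1, 10**41_000)
```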

  407. Mustela Nivalis (#394)

    Thank you for your post. I’d like to respond to two points you raised regarding Dr. Dembski’s argument for intelligent design.

    First, you write:

    I haven’t seen an attempt by Dembski to calculate CSI for a real biological artifact with reference to how particular physical, chemical, and evolutionary mechanisms affect the calculations.

    I sympathize with your frustration. However, with regard to origin-of-life scenarios, none of the self-organization hypotheses proposed to date contain any detailed probabilistic calculations for the mechanisms they put forward. Without these, detailed CSI calculations of the kind you request are impossible.

    Next, you write:

    [Dembski's] analogy with a housing contractor and the list of supposed requirements assume that evolutionary mechanisms are looking for a particular outcome. This is sometimes referred to as the “Lottery Winner” fallacy…

    I can assure you that neither Dembski nor Meyer commits any such fallacy. The housing contractor was simply an illustration of the probabilistic hurdles confronting any undirected mechanism that is supposed to produce irreducibly complex biochemical machines. Dembski listed seven: availability of ingredients; synchronization; localization; interfering cross-reactions; interface compatibility; order of assembly; and configuration. No matter what kind of irreducibly complex structure you’re making, these hurdles need to be overcome.

    I should add that Dr. Dembski is a qualified mathematician. It’s hardly likely that he’d be making a high school blunder.

    The same goes for Dr. Meyer’s book, Signature in the Cell. On page 210, Meyer refers to the likelihood of producing “any functional protein whatsoever”, and later on he writes:

    The odds of getting even one functional protein of modest length (150 amino acids) by chance from a prebiotic soup is no better than 1 chance in 10^164…

    Here, he is clearly referring to the odds of getting one protein that can perform any kind of function at all, not the odds of getting a protein which performs a particular function.

    Dr. Meyer continues:

    If we assume that a minimally complex cell needs at least 250 proteins of, on average, 150 amino acids and that the probability of producing just one such protein is 1 in 10^164 as calculated above, then the probability of producing all the necessary proteins needed to service a minimally complex cell is 1 in 10^164 multiplied by itself 250 times, or 1 in 10^41,000. This kind of number allows a great deal of quibbling about the accuracy of various estimates without altering the conclusion.

    Here, once again, Dr. Meyer is not referring to any particular functions performed by the proteins in question. His argument hinges on the simple fact that the most basic viable cell requires 250 proteins. That’s all.

    Dembski and Wells’ latest book, The Design of Life, refers to the same data relating to proteins. Once again, we have no “Lottery Winner” fallacy here. Unless Darwinists can put forward a mechanism and demonstrate that it dramatically shortens the odds, the hypothesis of abiogenesis is in deep, deep trouble.

  408. Mr Zeph,

    I’m also interested in the “edge of evolution”, “what evolution can’t do”, etc. in order to understand the limits of a tool. To continue your car analogy, tires are helpful, but they have warnings: “don’t inflate to more than X pressure”. Knowing the analogous limits of evolution lets you use the tool effectively, for example by giving you sizing parameters – if your problem solution candidates are k bits long, use a population size of 1.4*k to get convergence near the global optimum in k generations.
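    To make the contrast with pure chance concrete, here is a minimal genetic-algorithm sketch on the toy OneMax problem (maximize the count of 1 bits in a k-bit string). The 1.4*k population sizing follows the heuristic quoted above; the OneMax problem, the operators, and every other parameter are my own invented illustration, not anything from this thread:

```python
import random

def onemax_ga(k=40, seed=1):
    """Tiny GA on OneMax: chance (mutation/crossover) plus selection."""
    rng = random.Random(seed)
    pop_size = int(1.4 * k)            # sizing heuristic quoted above
    fitness = sum                      # OneMax fitness = number of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(k)] for _ in range(pop_size)]

    def tournament():
        # Binary tournament: chance picks the contestants, selection the winner
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(k):                 # roughly k generations
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, k)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.5:     # occasional single bit-flip mutation
                child[rng.randrange(k)] ^= 1
            nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)

best = onemax_ga()
```

    With the tournament in the loop, the best individual climbs toward k; replace it with uniformly random parent choice (chance alone, no selection) and it stays near the random baseline of about k/2.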

    However, I’m the wrong person to ask about why chance is highlighted so often. Perhaps it is just human nature to highlight differences.

  409. Mr Vjtorley,

    Thank you for your response. Yes, I did notice the structuring of your message, parallel to Dr Meyer’s book. In fact, my message @396 also notes that the first several of those quotes appear in the chance hypothesis section, and gives them a pass. It’s the ones that are not in the chance hypothesis section that are problematic, of which I noted three.

    (This next section of your message is not directly about the tornado problem, but I will comment.)

    I’ll pass over the summary of the first two sections (chance and self organization) since I don’t have any strong objections to Meyer’s presentation or your summary.

    Of the three problems re the RNA world that you quote:

    Problem 3 – Yes, the lack of a plausible historical pathway will only be resolved through more research.

    Problem 4 – Dr Meyer is wrong here to not discuss Yarus and the stereochemical hypothesis, since it is the answer to this question. (The big question of the entire book.)

    Problem 5 – Criticising the fact that scientists work with abstractions and models is a bit silly. A lab scientist working with the chemicals in a liter of water will apply stronger mutagens and selection pressures than would be necessary for the same results to occur in an entire ocean over millions of years, simply to get any response at all.

    You write:

    In short, Dr. Meyer looked at the most promising non-random hypotheses for the origin of life, and found them wanting: they all failed to account for the origin of specified information that is found in living things.

    Sadly no, as I point out above in relation to problem 4.

    I agree that the Schema Theorem and Price’s Equation are properly applied to already living systems. In the context of abiogenesis, fitness simply means using feedstocks faster than the next molecule. But they do apply to biological situations, and we still see “tornado” reasoning applied outside of abiogenesis, for example in irreducible complexity.

    It is heartening to hear that between 2004 and 2008, some calculations were completed, however I note that you don’t quote them. That is the point. Quoting an abiogenesis calculation by Dr Meyer does not support a claim to an irreducible complexity argument about new functions arising in already living systems. Until we see those calculations to which Dr Dembski refers, we won’t know whether they use “tornado” reasoning or not.

    Your final quote of Dr Meyer is problematic. As you yourself note, assuming the 250 proteins need to form independently is “tornado” reasoning. In addition, where did that 10^-164 number come from? Looking back, we see that it came from Axe’s work quoted in …drumroll please… the chance hypothesis section.

    As a matter of fact, not just the number comes from the chance hypothesis section, the whole quote does. You are summarizing Dr Meyer’s reasoning by quoting pure tornado calculations.

    Finally, you make a challenge to point out the mechanisms by which a functional protein could encourage the creation of another protein with another function.

    Off the top of my head, I can think of two. Both rely on the idea that protein chains would form by individual amino acids sticking to RNA chains and then being connected. Further, the proteins that match the RNA strand must have a mutualistic relationship: they must somehow help each other survive.

    The first method is that the underlying RNA chains break and reform in different sequences and combinations. As a result, new protein sequences will form. As with exaptation in cells, exchanging sequences of RNA is reusing parts that are already contributing to some function, so this is a faster kind of functional exploration than single-base mutations.

    The other method relies on the idea that the early templating process of AA on RNA was not exact. Today one triplet of RNA codes for only one AA, with the help of the whole tRNA system. But earlier, I think Yarus’ research shows, it was more probabilistic. Therefore, one RNA sequence mapped to several proteins. Templating and ligation didn’t just produce one protein; they created a whole cloud of different proteins at different probabilities.

  410. Mr VJtorley,

    With respect to abiogenesis, I think the relevant issues are:

    1 – are there abiotic pathways to the creation of RNA monomers, amino acids, and phospholipid bilayers? This gets a qualified yes, but more work is necessary.

    2 – Can these chemicals accumulate in the same environment (which may involve transport from the environment where they are synthesized)? Also a qualified yes.

    3 – Is there any direct interaction of AAs and RNA chains in the absence of a translation system? Cases:
    3.a – no interactions. Still a need to find a translation system.
    3.b – complete and permanent binding. Need to find a way to keep them separated and a translation system.
    3.c – temporary binding. AAs and RNA bind due to mechanical (shape) and/or chemical (charge distribution) reasons. Goldilocks would be so happy! Cases:
    3.c.i – random bindings. RNAs show no preferences for any AA over any other AA. Still need to find a source of specificity.
    3.c.ii – preferential binding. RNA shows at least probabilistic preferences for some AAs over other AAs.

    What Yarus et al. are showing is that case 3.c.ii corresponds to the real world. The laws of physics and chemistry drive the existence and general layout of the genetic code.

    As I said earlier, mutualism between proteins and the RNAs that template them is necessary. Is this realistic? A typical evolutionary rate based argument says yes. Assume there is some distribution of mutualism among RNA/protein pairs. Some pairs are actually antagonistic – the protein attacks the RNA that is its template. Some pairs are neutral – they don’t interact at all after separation. Some are positively mutualistic – the protein helps the RNA as an enzyme, protects it from degradation in some way, or the RNA helps the protein fold into some shape that itself lasts longer.

    Across this distribution – if it exists at all – over time the mutualistic pairs will dominate.
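    The claim in the last two paragraphs – that under differential replication the positively mutualistic pairs come to dominate – can be sketched as a toy replicator model. The class counts and growth rates below are invented purely for illustration:

```python
# Toy replicator dynamics: three classes of RNA/protein pairs competing
# for a fixed feedstock budget. All numbers are invented for illustration.
counts = {"antagonistic": 1000.0, "neutral": 1000.0, "mutualistic": 1000.0}
growth = {"antagonistic": 0.9, "neutral": 1.0, "mutualistic": 1.1}

for _ in range(200):
    for kind in counts:
        counts[kind] *= growth[kind]       # differential replication
    total = sum(counts.values())
    for kind in counts:
        counts[kind] *= 3000.0 / total     # finite feedstock: renormalize
```

    After a few hundred rounds the mutualistic class holds essentially the whole population, even though it started as only a third of it – differential replication, not chance alone, is doing the work.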

  411. vjtorley at 408,

    And what of the alternative mechanisms you propose?

    “If you want to find true counter-examples, look for reasoning and math that assumes f(t+1) = f(t) + variation + selection, in other words, an iterated function system. Look for the use of the Price equation or Holland’s Schema Theorem. But I have to warn you, I have never seen an ID theorist reason using these tools to show the inadequacy of evolution.”

    Iteration won’t work as an account of how specified information originates. Polymerization, yes; specificity, no.

    Could you please explain your reasoning? It will probably require a precise, mathematical definition of “specified information” in order to prove your point.

  412. vjtorley at 409,

    Next, you write:

    “[Dembski's] analogy with a housing contractor and the list of supposed requirements assume that evolutionary mechanisms are looking for a particular outcome. This is sometimes referred to as the ‘Lottery Winner’ fallacy”

    I can assure you that neither Dembski nor Meyer commits any such fallacy.

    From your posts here you seem intelligent and likable, so I’d like to take your assurance. Unfortunately, I’ve read the material as well, and I think my assessment is reasonable based on what Dembski has written.

    The housing contractor was simply an illustration of the probabilistic hurdles confronting any undirected mechanism that is supposed to produce irreducibly complex biochemical machines. Dembski listed seven: availability of ingredients; synchronization; localization; interfering cross-reactions; interface compatibility; order of assembly; and configuration. No matter what kind of irreducibly complex structure you’re making, these hurdles need to be overcome.

    Dembski fails to consider the fact that these issues may be addressed by building on earlier versions. That is, he fails to take into account known evolutionary mechanisms. He also appears to assume a particular result, leading me to raise the Lottery Winner Fallacy.

    I should add that Dr. Dembski is a qualified mathematician. It’s hardly likely that he’d be making a high school blunder.

    This is just an argument from authority. Based on what Dembski wrote, I don’t believe my understanding is unreasonable. As Nakashima points out, Dembski could eliminate the confusion by simply publishing his calculations.

  413. vjtorley at 408,

    When Dr. Dembski wrote his article on “Specification” in 2004, detailed probabilistic calculations were not yet available. In any case, calculating the probability of a bacterial flagellum emerging by undirected processes would have been too difficult.

    How, then, was he able to claim unequivocally that those probabilities are “exceedingly small”?
