
Message Theory – A testable ID alternative to Darwinism – Part 1

Message Theory is a testable scientific explanation of life’s major patterns.

That claim should intrigue you. If I heard such a claim, I would nearly leap across the room to demand more details; otherwise I couldn’t sleep that night. That is because I highly value testability, just as all scientists do (in physics, chemistry, geology, medicine, engineering, etcetera) – and just as evolutionists do in all their court cases.

Message Theory should even intrigue evolutionists, because it offers what they repeatedly demanded from their opponents – a testable, scientific alternative to evolution. Yes, that is exactly what they demanded. In reality, the evolutionists’ response has been exceedingly superficial, falling into two categories: (1) Silence; or (2) They misrepresent Message Theory. (If you are aware of exceptions, let me know.) Therefore, my posts here will not much address the evolutionists’ response to Message Theory, since a serious response doesn’t much exist.

The creationist/ID response has been more varied, and I focus on that here. Many see Message Theory as exciting and promising. For example, Origins Magazine reviewed it, saying, “I can give no greater accolade than urging that this book should now be the starting point for all of our discussions.” Phillip E. Johnson calls it “Bold and fascinating … a comprehensive theory.” Carl Wieland calls it “Masterpiece … incredible … of immense value.” Michael Behe and many others have given glowing reviews (see this link). To which I say: Thanks! That’s a good start.

However, some creationists/ID-ists are hesitant to investigate Message Theory, and the central reason is its claim of testability – its claim to make numerous coherent, risky predictions about what we should see, and should not see. Unfortunately, many creationists/ID-ists do not value testability, and some aggressively dislike it. Before we get to any details of Message Theory, we encounter their leading objection – testability.

For example, some creationists say, “Aren’t you claiming to test God?” To which I answer: No. Message Theory is about life’s data – many observations that must be explained – and Message Theory explains those observations in a testable (falsifiable, vulnerable, empirically risky) manner. It meets all the criteria for a scientific theory. A theory is tested, not God. The thought process is no different from that concerning, say, the Piltdown fossils, which needed an explanation. These fossils were a hoax created by an intelligent designer – a testable explanation that no scientist disputes. We need not test the intelligent designer (indeed, the designer of the Piltdown Hoax remains unidentified); rather, we test the theory. In science we test explanations (i.e., theories), not God.

Also, deep down, many creationists want the ‘certainty of faith,’ and they are not yet comfortable with the inherent riskiness of science – they haven’t learned to balance the two types of thought: risk and certainty.

The classic creationist organizations (ICR, AIG, CRS) often do not value testability (and sometimes they explicitly oppose it). Instead, they use a different criterion of science; a different value system. They claim “science must be repeatable, and since origins are not repeatable, creation and evolution are equally unscientific.” They are deeply mistaken. For example, we frequently execute murderers (which is not a flimsy thing to do) based solely on scientific evidence, even though the murder is not repeatable.

Instead, repeatability is how we identify naturalistic laws (as opposed to the work of intelligent beings); therefore the creationists’ demand for ‘repeatability’ is implicitly a demand that science must be purely naturalistic and cannot include an intelligent designer. They are shooting themselves in the foot!

Thankfully the ID organizations don’t take that approach. They take a more sophisticated approach, yet they tend to undervalue testability nonetheless (sometimes by redefining it into obscurity).

In my many discussions with my fellow creationists/ID-ists, the foremost obstacle to Message Theory is their devaluing or misunderstanding of testability. So let me pause to underscore this for my readers: If you do not value testability highly, then leave now, or you will only waste your time, and mine. Let me put it stronger: Anyone (creationist, ID-ist, or evolutionist for that matter) who cheapens testability is a danger to science, and moreover, they miss many opportunities to advance creation/ID as superior science.

Let me put my claim stronger still: Message Theory is testable science, and macro-evolutionary theory (as practiced by its modern proponents) is not. I employ testability – the same tool evolutionists use in all their court cases – to turn the tables on evolutionists.

After handling some comments, I will next discuss Message Theory proper.

– Walter ReMine

The Biotic Message – the book


143 Responses to Message Theory – A testable ID alternative to Darwinism – Part 1

  1. Where would one go to find out what message theory is?

  2. Message Theory sounds like some arm of Information Theory, but I’ve been unable to find any reference to it in the math literature.

  3. Your work precedes you and is interesting to many, so I wouldn’t be surprised if the creationists were interested, though they may interpret the data differently.

    The distinction made is simply between science that is observable and repeatable in the present – operational science – and that which deals with past events – ‘forensic science’ – in which evidence must be interpreted. It is unfortunate to see such inaccuracies and the seemingly obligatory knock to creationists.

    A sampling from creationists:

    For example, if I design a trajectory that returns men from the moon using the Earth’s atmosphere as a braking mechanism, then this approach can be simulated and tested and verified regardless of my “belief” in the age of the Earth or the mechanism of creation of life on this planet.

    The ‘clinical expression’ of genes refers to the way genes work in the present world. Studying such matters, with the hope of being able to tailor drugs more precisely targeted to human malfunctions of various sorts, is valid operational science (how the world works)-whereas hypotheses about the unobserved and unobservable past involve historical, or forensic science.

    In contrast, evolution is a speculation about the unobservable and unrepeatable past. Thus it comes under origins science. Rather than observation, origins science uses the principles of causality (everything that has a beginning has a cause) and analogy (e.g. we observe that intelligence is needed to generate complex coded information in the present, so we can reasonably assume the same for the past). And because there was no material intelligent designer for life, it is legitimate to invoke a non-material designer for life. Creationists invoke the miraculous only for origins science, and as shown, this does not mean they will invoke it for operational science.

    The difference between operational and origins science is important for seeing through silly assertions such as the following by Levitt (as quoted by Lerner):

    ‘… evolution is as thoroughly established as the picture of the solar system due to Copernicus, Galileo, Kepler, and Newton.’

    However, we can observe the motion of the planets, but no-one has ever observed an information-increasing change of one type of organism to another.

    Similarly, believing that the genetic code was originally designed does not preclude us from believing that it works entirely by the laws of chemistry involving DNA, RNA, proteins, etc. Conversely, the fact that the coding machinery works according to reproducible laws of chemistry does not prove that the laws of chemistry were sufficient to build such a system from a primordial soup.

  4. Surely this isn’t another case of a “theory” doing an end run around peer review by going straight to the popular press, is it? Since “The Biotic Message” was first published in October of 1993, there must be loads of scientific papers demonstrating successful tests of message theory by now, right?

    (Man, it would be so cool if I was made to look the fool by being bombarded by citations. And no, I’m not being sarcastic here…I would be genuinely happy)

  5. Massage Therapy?

    Why are we talking about massage therapy?

    (takes a drink of coffee)

    Oh MESSAGE THEORY- Walter ReMine- now I got it.

    But anyways Walter, I have seen it claimed before- that Creationists do NOT value testability.

    You repeated it. Yet I have never seen any evidence of this.

    To me “repeatability” is testability.

    That is because all experiments must be able to be repeated in order to be verified.

    Say I do an experiment and come up with some result- X.

    Now in order to confirm X someone else has to be able to REPEAT what I did and come to the same conclusion.

    IOW my premise was TESTED- testability.

    If one cannot repeat what I did then what I did does not have “testability”.

    THAT has been my experience with Creationists.

    For example the origin of life- not repeatable, not testable.

    Cetaceans “evolving” from land mammals- not repeatable, not testable.

    Dropping weights from a tower- repeatable and therefore testable.

    Also you have to be careful with the word MACROevolution- “In evolutionary biology today, macroevolution is used to refer to any evolutionary change at or above the level of species.”- talk origins

    The way it is defined even YECs accept it.

  6. Sorry, off-topic, but this is really interesting.

    http://www.eurekalert.org/pub_.....021009.php

    Using OOL to teach general chemistry???

    Would a UD official please consider blogging on this? Thanks.

  7. Evolution is an historical process and, as such, like history, theoretically cannot be tested. For history, certain ideas can be hypothesized and then the historical record investigated to see if they are supported, as an historian looks at old records, artifacts, etc. to support an hypothesis.

    However, naturalistic evolution is actually more than an historical process and is supposed to be the result of the laws of nature playing out over time. As such it is testable today because there is an almost endless supply of organisms in the world to examine to see if some pattern exists which supports or falsifies this hypothesis. As such modern evolutionary theory is testable and is constantly tested every day in labs and in the wild by evolutionary biologists.

    And every day they fail to provide data that supports the latest evolutionary synthesis for Macro evolution with a big “M.” So each day they falsify their theory according to the criteria of repeatability or testability. It is why ID exists.

    I guess we will examine message theory; it may be interesting, and we can see if it relates in any way to what Kirk Durston is doing.

  8. Design detection is generally and practically falsifiable in biologic application.

    For instance we can form an ID hypothesis regarding the flagellum thus:

    The flagellum required an intelligent agency to produce in the first instance.

    A single observation, in vivo or in vitro, of a flagellum forming by law & chance alone will falsify the hypothesis. If anyone believes it is not practically possible to make this observation, then that would seem to be a tacit admission that it is not a practical possibility for law & chance alone. If anyone claims that, even if we were to make the observation, we could not know that “God” didn’t do it in some undetectable manner – that claim is a departure from science, which does not admit supernatural explanations.

    In the meantime the design hypothesis remains a perfectly valid scientific hypothesis.

    Design detection is rooted in statistical mechanics and can be applied to any physical system. Biotic message theory seems to be a departure from that or at the very least it is narrow in scope.

  9. jerry

    So each day they falsify their theory according to the criteria of repeatability or testability. It is why ID exists.

    This is the very definition of the Argument From Ignorance. “Not X, therefore Y.”

    DaveScot

    A single observation, in vivo or in vitro, of a flagellum forming by law & chance alone will falsify the hypothesis.

    Flipping it around (“X, therefore not Y”) and saying it’s a “test” of the theory doesn’t make it any less fallacious.

  10. KRiS,

    Dave’s point- excuse me Dave- is that if a non-telic solution is found there is a requirement for telic processes.

    Not that a telic process didn’t do it or could not have done it. It’s just that when there isn’t any difference don’t add something not required.

  11. Oops- “there is NOT a requirement for telic processes”

  12. Kris,

    I love those who accuse us of using the argument from ignorance and display their own ignorance in the process.

    If someone has an hypothesis and does a test of that hypothesis and the research fails to support the hypothesis, they are failing to support their theory. If the test is repeated ten thousand times, I will go out on a limb and say that the theory is being falsified. I realize that the proper conclusion is that it can never be falsified. So maybe the wording was not exactly correct but like all critics here, the best they can do is nitpick. Otherwise you would have offered the correct wording and interpretation. Or even better, research results.

    Let’s hear it for Kris, who has joined the ranks of nitpickers but never offers substance. We are picking up quite a collection since the moderation rules were changed.

  13. #9 KRiS

    jerry

    So each day they falsify their theory according to the criteria of repeatability or testability. It is why ID exists.

    This is the very definition of the Argument From Ignorance. “Not X, therefore Y.”

    I think he meant to say that’s why the theory of ID exists, that’s why it’s being considered. It’s more like “After many years of testing, not X yet, so what about Y?” At least that’s how I read it. And if that’s not what he said, then it’s what I’m saying :D

    To me, neo-Darwinism seems unfalsifiable. That is unless evolution is overturned altogether (we find human remains that date back to the Jurassic, for example). It seems to me to be impossible to PROVE that something didn’t happen. That’s how pop-evolution has come about…militant blogs become as scientific as multi-million dollar labs because they’re all doing the same thing: “Well, it could have happened this way…”

    All of the atoms in the universe, including those that make up every brain on Earth (if materialist consciousness turned out to be real), could have randomly assembled 5 seconds ago, forming all of our memories correctly in the process, as well as computer databases, films, and any other record of the past. This could have happened. It’s also impossible to prove that it didn’t.

    …That’s a bit more extreme than imaginative evolutionary theories about how bio-nanotechnology formed randomly, but it illustrates the point.

  14. #6 Landru

    …yeah that is amazing. Is that real?

    Klymkowsky and Clemson University chemistry Professor Melanie Cooper were recently awarded a $500,000 grant from the National Science Foundation for a three-year project titled Chemistry, Life, the Universe and Everything, or CLUE. The project includes developing a general chemistry curriculum using the emergence and evolution of life as a springboard to introduce and explain related chemistry concepts, Klymkowsky said.

    What is abiogenesis doing in any K-12 science classroom? I’m an ID advocate who doesn’t think ID should be taught in science classrooms yet, although what neo-Darwinism has failed to prove should be taught. But abiogenesis has less of a right to be in a science classroom than ID. That is a blatant example of a worldview being shoved down people’s throats w/o any evidence.

    I can’t get over how hypocritical that is. They blatantly do the exact thing they accuse ID-supporters of doing – bypassing the scientific method straight for the classroom. If abiogenesis is in classrooms, then ID should be. There is no defending that. It is 100% speculation at this point, and has not been scientifically proven to any reasonable extent. Yet the NSF, defender of almighty science, skips all of that and gives $500,000 to have its worldview fed to children.

  15. DaveScot @8

    Design detection is generally and practically falsifiable in biologic application.

    For instance we can form an ID hypothesis regarding the flagellum thus:

    The flagellum required an intelligent agency to produce in the first instance.

    That’s more of a prediction than an hypothesis. It also leads to the obvious question of what empirical observations are explained by the hypothesis that generated this prediction. With the observations, proposed explanation (hypothesis), and prediction(s) clearly stated, we can apply the tools of science to support or falsify the hypothesis.

    A single observation, in vivo or in vitro, of a flagellum forming by law & chance alone will falsify the hypothesis.

    That’s not the only way to do so, of course. Since the claim is that it is not possible in principle for a flagellum to come into being without intelligent input, the prediction can be refuted by a credible, naturalistic explanation of how it could have come about, even if that turns out not to be how it did come about.

    A Google search for “evolution flagella OR flagellum” yields a number of articles discussing exactly this. As an icon of the ID movement, the flagellum appears endangered.

    If anyone believes it is not practically possible to make this observation then that would seem to be a tacit admission that it is not a practical possibility for law & chance alone.

    That doesn’t follow. It could simply require too long a time scale to observe in the lab, so other methods of determining its provenance must be used.

    If anyone makes the claim that even if we were to make the observation how could we know that “God” didn’t do it in some undetectable manner. That claim is a departure from science which does not admit supernatural explanations.

    True, which is why ID theory must, well, evolve to include the mechanisms used by the designer. Without an hypothesized mechanism, research potential is limited.

    In the meantime the design hypothesis remains a perfectly valid scientific hypothesis.

    Without empirical evidence and a unifying hypothesis behind predictions such as this, ID is little more than an inspired conjecture, with good potential for further research.

    Before anyone gets upset at that characterization, it’s a good thing. We’re still in the early days. All the exciting discoveries are ahead of us!

    JJ

  16. That’s not the only way to do so, of course. Since the claim is that it is not possible in principle for a flagellum to come into being without intelligent input, the prediction can be refuted by a credible, naturalistic explanation of how it could have come about, even if that turns out not to be how it did come about.

    A Google search for “evolution flagella OR flagellum” yields a number of articles discussing exactly this. As an icon of the ID movement, the flagellum appears endangered.

    I agree that there are ways in which the flagellum could have come about by chance and law alone. I think just about any statement that includes “cannot” is probably false. The statement “the flagellum could not have come about by chance and law” is false, because there are proposed ways that it could have come about. Whether or not those pathways are true has absolutely nothing to do with whether or not they falsify the statement.

    But the interesting thing to me, the reason the flagellum is not an endangered ID icon, is the likelihood of proposed scenarios. It is the likelihood of natural generation of a flagellum that keeps it as an object of interest. If Earth and life on Earth were 10^500 years old, maybe all of the proposed scenarios of natural flagellum origin had enough time to make them likely. But in the time it had to come about (uncertain, but not a very long time by geological standards) it is my opinion that the proposed scenarios are unlikely. While they are not falsified (in fact, they are unfalsifiable, unless an accurate record of the origin of every flagellum that has ever existed on Earth can be produced), perhaps their likelihood could be discussed in a scientific manner, using the most up-to-date knowledge we have on every factor involved (nano-biomechanics, genetics, microbiological selection, chemistry, etc.).

  17. KRiS is yet another sock puppet from the Panda’s Thumb forum. Same one I ejected a couple months ago when his subtle mockery of Denyse became too obvious.

    Just so y’all know.

  18. Since the claim is that it is not possible in principle for a flagellum to come into being without intelligent input, the prediction can be refuted by a credible, naturalistic explanation of how it could have come about, even if that turns out not to be how it did come about.

    If explanations alone had any merit I would have cruised through school.

    That is why some testable demonstration is required.

    That is, take a population or two or three of flagella-less bacteria, and let them have at it- even try to “tempt” motility by moving the nutrition just away from them.

    A Google search for “evolution flagella OR flagellum” yields a number of articles discussing exactly this. As an icon of the ID movement, the flagellum appears endangered.

    Not when one takes a close look at one.

    True, which is why ID theory must, well, evolve to include the mechanisms used by the designer. Without an hypothesized mechanism, research potential is limited.

    Just how can we tell HOW the flagellum was designed?

    To me design is a mechanism. And artificial selection is one SPECIFIC design implementation mechanism.

    Genetic engineering and genetic algorithms programmed into genomes which direct the chemistry required.

    But this is like telling natives of the Amazon that unless they can tell me how my laptop was designed and how it was made, they can’t tell it was designed.

    Without empirical evidence and a unifying hypothesis behind predictions such as this, ID is little more than an inspired conjecture, with good potential for further research.

    And yet there exist several design hypotheses- complete with predictions and falsifications.

  19. JayM

    By what criteria do we decide what is a credible explanation of how the flagellum evolved? What you find credible I may find incredible and a just so story.

    Message theory is interesting, but I wonder if it isn’t confining to limit one’s argument to saying that it all points to just ONE designer. It makes it hard to analogize to the evolution of cars and computers by intelligent design where many designers are responsible for the diversity thereof.

  20. Davescot:

    KRiS is yet another sock puppet from the Panda’s Thumb forum. Same one I ejected a couple months ago when his subtle mockery of Denyse became too obvious.

    Just so y’all know.

    A sockpuppet lacks sincerity. KRis, in my opinion, is sincere in stating his opposition to ID and doesn’t deserve your name-calling.

  21. Davescot:

    In the meantime the design hypothesis remains a perfectly valid scientific hypothesis.

    Not if all you’re saying is “the intelligent designer did it.” A scientific hypothesis must be mathematically modellable.

  22. B L Harville, “A scientific hypothesis must be mathematically modellable.”

    Where do you get off creating yet another “definition of science”?

    I find a funny-shaped stone. I develop the scientific hypothesis that the stone was carved by non-natural processes (humans). Where is my mathematical model? Is my hypothesis bogus? We are still waiting for an effective mathematical model showing the validity of the neo-Darwinian hypothesis. As one has not been forthcoming, it would seem reasonable that there simply is no hypothesis of origins, right!?

    ‘Seems that “falsifiability” should be the only scientific criterion.

    I still contend, however, that ID is not a scientific theory but a framework, a metatheory. If Irreducible Complexity (a theory) is falsified, if CSI (a theory) is falsified, if Haldane’s dilemma (brought to our attention by Walter ReMine) is found vacuous, if Mr. ReMine’s “message theory” (which I don’t yet understand because I haven’t read his book yet — it’s on my short list) is falsified, then the ID framework will come up with another theory. The Irreducible Complexity theory can run out of proposed irreducibles, but ID can always come up with another theory. It is for this reason that I suggest that ID is not a theory: because it is not falsifiable.

  23. KRIS: (9):

    This is the very definition of the Argument From Ignorance. “Not X, therefore Y.”

    Darwin’s method of argumentation in the Origin fits your very definition of the “Argument From Ignorance”.

    Darwin argued, e.g., that if we believe that God is All-Knowing, then we must also believe that this All-Knowing God created animals in such a way as to allow man to “domesticate” species, thus leading the animal kingdom in directions that nature will not take it. This is proposition X.

    He then gives examples of what humans have done to wild species, the tremendous variations brought about via domestication. He concludes that this proposition contradicts the notion of an “All-Knowing” God, since it gives the appearance that man—who is NOT “All-Knowing”—is capable of what God Himself is not. Thus he rejects Proposition X and, instead, says that it is only reasonable to accept Proposition Y: natural selection and random variation.

    Per your very definition, this is an Argument From Ignorance. Which is the worse argument: arguing that evidence of design implies the presence of an intelligent agent, or arguing from one’s personal notions about what God can and cannot do, or, rather, what God would or would not do? I’m interested in your answer.

  24. B L Harville,

    Was Charles Darwin’s theory mathematically modellable at the time he proposed it? If not, then does it mean that his theory was not a scientific theory? If not, should it have been rejected as pseudo-science? Hint: Mr. Darwin admitted that he was terrible at math.
    PS: I’m not saying that evolution is not mathematically modellable or a scientific theory.

  25. DaveScot @17

    KRiS is yet another sock puppet from the Panda’s Thumb forum. Same one I ejected a couple months ago when his subtle mockery of Denyse became too obvious.

    Just so y’all know.

    That’s a classic example of the ad hominem fallacy. If we want to see ID succeed, we need to address the arguments (which KRiS does make well), not the man.

    JJ

  26. Joseph @18

    And yet there exist several design hypotheses- complete with predictions and falsifications.

    I have not encountered these, despite actively searching for them. Cites?

    I am sympathetic to the ideas of ID, and because of that I want to see some rigor applied in the process of changing from an interesting speculation to a full science. If those of us who see the potential of ID don’t do this, our detractors will destroy the nascent science.

    JJ

  27. Colling @19

    By what criteria do we decide what is a credible explanation of how the flagellum evolved? What you find credible I may find incredible and a just so story.

    If the process hypothesized relies solely on known natural mechanisms and does not require more than 4.5 billion years, it at least damages DaveScot’s prediction.

    JJ

  28. Several commentators in this thread and others have asserted (without corroboration) that although there is abundant evidence for microevolution (which they apparently accept), there is no evidence for macroevolution (which they do not accept, mainly because of its implications for their religious beliefs). I started to write a response to this, but it started to get very long, so I made it into a post on my own blog. You can read it here:

    http://evolutionlist.blogspot......dence.html

    After you do, I would appreciate any comments (and especially substantive criticisms) you might have…but please, save the ad hominems for each other. Thank you for goading me to write what will become yet another chapter in my forthcoming evolution textbook from John Wiley & Sons (due out in 2010).

  29. bFast:

    Where do you get off creating yet another “definition of science”?

    To say that a scientific hypothesis should be mathematically modellable is another way of saying that natural phenomena should be, at least in principle, reducible to physics. I did not invent this idea. As Ernest Rutherford said: “Physics is the only real science. The rest are just stamp collecting.”

    Collin:

    Was Charles Darwin’s theory mathematically modellable at the time he proposed it?

    To some degree, although I don’t know if he himself produced any. I’ll admit that I don’t know the history of evolution well enough to know when mathematical models for it were first developed. They are abundant now. And computer models are now widespread throughout science so that anyone who thinks they can replace a well-developed theory like evolution without computer-modelling is delusional.

  30. Whoever said they can’t distinguish between law & chance and human agency couldn’t even write down the thought if it were true, as that person wouldn’t be able to tell an information-bearing message apart from the randomly arranged letters in a bowl of alphabet soup. I suppose they can’t tell a sand dune apart from a space shuttle either. Like duh.

  31. “which they do not accept, mainly because of its implications for their religious beliefs”

    Don’t put that in your textbook because it is yet another bogus assertion. I have no problems with macro evolution from a religious point of view. I have a problem with it from an empirical point of view. I believe many others here have the same assessment.

    Since you are on record as saying there is no model that handles macro evolution, we will have to see what you have said.

    If you are honest in your textbook then any mention you make of ID should be passed by here for our input to make sure it is correct. You seem to make a lot of erroneous conjectures about people here.

  32. BL Harville

    Can you reduce things like a prokaryote acquiring a nucleus to a mathematical model?

    The problem is that when WE apply statistical mechanics to “evolution”, law and chance come up as wanting in capacity to produce molecular machinery as they are in producing microprocessors. The numbers just don’t work. The further problem is that “evolution” devotees just plain ignore the probabilistic problems. Because, for them, evolution is true, they simply accept any improbability, however remote, as something that MUST have happened, because that IS the TRUE way life came to be the way it is.

    This is not reducing things to physics. It’s faith. There aren’t two degrees of difference between faith in the chance & necessity narratives and faith in the biblical narratives.

  33. Allen MacNeill,
    I’ve long thought that there ought to be a book written for the layperson, such as me, wholly on the topic of speciation and the evolution of new forms and functions witnessed in the laboratory. (If you know a good book like this, let us know.)
    It’s late, so I don’t feel up to delving into your blog post at the moment, but it looks very interesting.

  34. BL Harville

    Speaking of mathematical models you might want to check out this one since the author of this article is involved in its development:

    http://mendelsaccount.sourceforge.net/

    When evolution of a complex genome is modeled, the result is not the decreasing entropy that the chance & necessity narrative describes but rather the increasing entropy that the laws of physics and statistical mechanics describe.
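
    For a sense of how such a population-genetics simulation is put together, here is a toy mutation-accumulation sketch in Python – emphatically NOT Mendel’s Accountant itself, and whether mean fitness declines or equilibrates depends entirely on the assumed parameters:

      import random
      from math import exp

      POP, GENERATIONS = 200, 500
      MUTATION_RATE = 2.0  # mean new deleterious mutations per offspring (assumed)
      EFFECT_SIZE = 0.001  # multiplicative fitness cost per mutation (assumed)

      def poisson(lam):
          # Knuth's simple Poisson sampler (avoids external dependencies).
          threshold, k, p = exp(-lam), 0, 1.0
          while True:
              p *= random.random()
              if p <= threshold:
                  return k
              k += 1

      population = [0] * POP  # deleterious-mutation count per individual
      for _ in range(GENERATIONS):
          offspring = [m + poisson(MUTATION_RATE)
                       for m in random.choices(population, k=2 * POP)]
          offspring.sort()              # fewer mutations = higher fitness
          population = offspring[:POP]  # truncation selection keeps the fitter half

      mean_load = sum(population) / POP
      print(f"mean mutation load: {mean_load:.1f}")
      print(f"mean fitness:       {(1 - EFFECT_SIZE) ** mean_load:.3f}")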

    Put that in your pipe and smoke it.

  35. B L Harville:

    To say that a scientific hypothesis should be mathematically modellable is another way of saying that natural phenomena should be, at least in principle, reducible to physics.

    In the same way that all engineering is reducible to physics, I fail to see that any ID hypothesis is not reducible to physics. Is it not true that all engineering is reducible to physics? Is it not true that all ID hypotheses suggest that life is the product of engineering?

  36. Allen,

    First, thank you for the work you did generating your examples. We should all learn from your effort.

    I quickly looked at the plants and you have to know two things. I am not a biologist let alone an evolutionary biologist or botanist. Each of your examples seemed interesting but before we start talking past each other, it seems that your examples are not what we would call macro evolution of complex new functional capabilities. We would call most if not all of the examples, micro evolution. In the end we may need more precise definitions for the terms micro and macro evolution and maybe a new definition for the type of change we say is impossible or highly improbable.

    ID does not challenge what seems to be happening in your examples, so we may end up agreeing that these are changes, but not the kind of changes that meet the threshold of new information. Before you go off in a huff and a puff that we are changing the goal posts or something like that, try to understand that the threshold of change we say has not happened does not seem to be passed in your examples. We are not saying changes didn’t happen; we are asking what the nature of the changes is. And maybe my assessment is very wrong, and in the end we may say that this is macro evolution.

    But also in the end we may continue to disagree. You should know that our opposition is based on the construction of new information that governs new systems – not on whether various combinations of current information can make interesting morphological changes, but on whether they are really creating new information and systems that did not exist before. Also, nearly all the discussions we have here concern animals and not plants. Animals seem to have more systems than plants. Tomorrow, I will give you my 2 cents on the animal examples.

    I am definitely not the best person to articulate this. Others should read your site and comment on it. Thank you for the work you put into the examples. I am sure we will all learn something, including yourself, about just what ID is about and what the limits of naturalistic processes are.

  37. Jerry:
    It seems that the species has always been considered sacrosanct, at least in creationist circles. He’s giving repeated examples, apparently, of new genetic variants appearing that are assigned new Latin names and can’t interbreed with the previous species. Consider the very first example he gives. (Although I’m still looking at his examples as well.)

    Are you saying he has to show major morphological change appearing instantaneously?

    Also the example from yesterday of lichens being the endosymbiotic combination of two different species seemed pretty compelling to me.

    But couldn’t ID look at any conceivable scenario and still say that intelligence is behind the whole process?

  38. Still going through it but starting to be underwhelmed a little. At one point it says “He found statistically significant assortative mating between populations raised on different media” and the phrase “statistically significant” is repeated five times in this presentation. It seems awfully vague and euphemistic, as in “could be argued to be slightly better than random.” Also, whenever someone makes you slog through a lot of technical minutiae regarding the experimental setup and lab details, it makes it seem like they’re trying to pad the presentation or overwhelm you with their thoroughness, rather than just presenting whatever significant findings there were in a succinct way. Of course it depends on the audience, but presumably it wouldn’t be professional biologists here.

  39. What journal has this work been published in? I can’t find it looking at the mathematics journals in the main.

  40. H’mm:

    As a first level of test — and Mr MacNeill, most of your examples are fairly obviously micro-level changes within forms — it seems to me that the core of the issue on Macro-Evolution is origination of major life systems and forms, i.e. body-plans, in light of the quantum leaps in information required to account for such.

    That starts with the first such body plan, i.e. origin of life.

    Mr MacNeill puts up a speculative account for the origins of different classes of cells, but that does not address the origin of the info level that is the core of the ID concern.

    So, why not start with giving us a good explanation with empirical evidence on origin of bio-information — not speculations and homologues on cell types etc [which would equally fit design with re-use of a library of components . . . as software engineering and civil engineering have long done . . . ] — as observed, on say the Cambrian fossil life revolution? [As say CRD was concerned about in his day . . . ]

    GEM of TKI

  41. PS: here are some excerpts from the just linked that give a picture of where some of us at least are coming from:

    ____________________

    The Cambrian explosion represents a remarkable jump in the specified complexity or “complex specified information” (CSI) of the biological world. For over three billion years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . [.]

    One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . .

    In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.6

    _______________________

    Similarly, Lonnig in his 2004 paper observed:

    —–

    . . . examples like the horseshoe crab [a 250 MY living fossil on form] are by no means rare exceptions from the rule of gradually evolving life forms . . . In fact, we are literally surrounded by ‘living fossils’ in the present world of organisms when applying the term more inclusively as “an existing species whose similarity to ancient ancestral species indicates that very few morphological changes have occurred over a long period of geological time” [85] . . . .

    Now, since all these “old features”, morphologically as well as molecularly, are still with us, the basic genetical questions should be addressed in the face of all the dynamic features of ever reshuffling and rearranging, shifting genomes, (a) why are these characters stable at all and (b) how is it possible to derive stable features from any given plant or animal species by mutations in their genomes? . . . .

    A first hint for answering the questions . . . is perhaps also provided by Charles Darwin himself when he suggested the following sufficiency test for his theory [16]: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” . . . Biochemist Michael J. Behe [5] has refined Darwin’s statement by introducing and defining his concept of “irreducibly complex systems”, specifying: “By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning” . . . [for example] (1) the cilium, (2) the bacterial flagellum with filament, hook and motor embedded in the membranes and cell wall and (3) the biochemistry of blood clotting in humans . . . .

    One point is clear: granted that there are indeed many systems and/or correlated subsystems in biology, which have to be classified as irreducibly complex and that such systems are essentially involved in the formation of morphological characters of organisms, this would explain both, the regular abrupt appearance of new forms in the fossil record as well as their constancy over enormous periods of time. For, if “several well-matched, interacting parts that contribute to the basic function” are necessary for biochemical and/or anatomical systems to exist as functioning systems at all (because “the removal of any one of the parts causes the system to effectively cease functioning”) such systems have to (1) originate in a non-gradual manner and (2) must remain constant as long as they are reproduced and exist. And this could mean no less than the enormous time periods mentioned for all the living fossils hinted at above. Moreover, an additional phenomenon would also be explained: (3) the equally abrupt disappearance of so many life forms in earth history . . . [,]

    The reason why irreducibly complex systems would also behave in accord with point (3) is also nearly self-evident: if environmental conditions deteriorate so much for certain life forms (defined and specified by systems and/or subsystems of irreducible complexity), so that their very existence be in question, they could only adapt by integrating further correspondingly specified and useful parts into their overall organization, which prima facie could be an improbable process — or perish . . . .

    According to Behe and several other authors [5-7, 21-23, 53-60, 68, 86] the only adequate hypothesis so far known for the origin of irreducibly complex systems is intelligent design (ID) . . . in connection with Dembski’s criterion of specified complexity . . . . “For something to exhibit specified complexity therefore means that it matches a conditionally independent pattern (i.e., specification) of low specificational complexity, but where the event corresponding to that pattern has a probability less than the universal probability bound and therefore high probabilistic complexity” [23]. For instance, regarding the origin of the bacterial flagellum, Dembski calculated a probability of 10^-234[22].

    _________________________

    So, in that context, what is there that would distinguish common descent without intelligent intervention from common design, perhaps in part implemented through evolutionary mechanisms used as useful heuristics, similar to today’s genetic algorithms?

    Thanks.

    GEM of TKI

  42. Clive

    We should forgive Allen MacNeil for being a little testy. If you’d been teaching the same course 33 years you’d be going batshit by now too. I mean, the poor guy’s been at Cornell forever, rubbing elbows with some really brilliant people, and hasn’t managed to get a doctorate or advance beyond a teacher’s aide in all that time. So when he tells someone else they’re ignorant just consider the source.

    http://lsc.sas.cornell.edu/lscstaff.html

    Allen MacNeill earned a BS in biology from Cornell in 1974 and an MA in science education from Cornell in 1977, and has taught the support course for introductory biology at Cornell University since 1976. As a senior lecturer for the Learning Strategies Center, Allen works with students taking both majors and non-majors introductory biology. In addition, he organizes and carries out in-service training for teaching assistants in biology and related fields. Allen also teaches evolution for the Cornell Summer Session, and has taught the introductory evolution course for non-majors at Cornell. He has served as a Faculty Fellow at Ecology House and as an honorary member and faculty advisor for the Cornell chapter of the Golden Key International Honour Society. He has served on numerous advisory committees and editorial boards at Cornell and in the Ithaca community.

  43. I am confused again and the FAQ doesn’t help me so I’ll ask the question here.

    Does ID say that macro-evolution is impossible? I was sure it didn’t but reading some of the comments here makes me wonder.

  44. Allen,

    I have said this before and will say it again:

    The way macroevolution is defined, not even YECs reject it.

    macroevolution from talk origins:

    In evolutionary biology today, macroevolution is used to refer to any evolutionary change at or above the level of species.

    However those YECs have their own definition:

    2) macroevolution—the theory/belief that biological population changes take (and have taken) place (typically via mutations and natural selection) on a large enough scale to produce entirely new structural features and organs, resulting in entirely new species, genera, families, orders, classes, and phyla within the biological world, by generating the requisite (new) genetic information. Many evolutionists have used “macro-evolution” and “Neo-Darwinism” as synonymous for the past 150 years.

    So which macro are you talking about?

    The one no one debates or the one that is being debated?

  45. Jay M,

    Design Hypotheses:

    The Positive Case for Design

    Darwinism, Design and Public Education page 92:

    1. High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.

    2. Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.

    3. Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.

    4. Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems.

    Casey Luskin has put together this positive case for design

    My case for intelligent design was made in many posts but summed up in intelligent design the design hypothesis- which is overdue for an upgrade.

  46. Allen:

    As jerry has rightly pointed out, most of the examples you give would fall under the label of ‘microevolution’.

    Additionally, in the case of polyploidy in plants, these are ‘jumps’, not the ‘gradual’ evolution that Darwin insisted on.

    Didn’t De Vries, for example, believe in saltation?

    One very recent example of a ‘jump’ is the case of the Adriatic Island lizard that, within thirty-something years of its introduction to a different island, developed a very different digestive system. Since you’re more of an ‘evo-devo’ person, I’m sure this is right up your alley. However, the lizard remains a lizard. This is ‘microevolution’ at best, but it is also ‘saltational’ change, a type of change that is not consistent with Darwin’s ideas. One only has to remember Darwin’s dealings with Huxley to be reminded that Darwin absolutely insisted on “gradualism”. We don’t see “gradualism”. So why is it, then, that Darwin is still taken seriously?

  47. GSV:

    Many major ID scientists and thinkers are proponents of common descent [e.g. Prof Michael Behe of Lehigh, who was comfortable theologically as well as scientifically with darwinism, but has come to see that the evidence does not fit the story].

    Some, too, are not.

    Indeed, some Creationists [of both young and old earth varieties; also I think we have Islamic creationists etc to look at . . . . and perhaps some Jews as well] are also ID thinkers, while some are actually hostile to ID.

    Some Design thinkers are platonic — thinking in terms of inner forms of nature. Others are agnostics, Buddhists, and at least one prominent ID thinker is a follower of the Unification church.

    In recent years the former leading philosophical atheist in the world, Antony Flew has become a deistic ID thinker.

    It will help to understand that:

    1 –> ID is a cause-of-information theory, which intersects biology since the cell has in it a sophisticated info system.

    2 –> From signs of intelligence such as FSCI, design thinkers infer that the cell shows signs of design.

    3 –> There are candidate mechanisms by which such design can be implemented through common descent, in part or in whole [cf. on front-loading].

    4 –> ID — and this cuts straight across the “ID is Creationism in a cheap tuxedo” caricature — is about reliably identifying the causal factors at work [across chance, necessity and design], not the implementing mechanism; nor even “whodunit.”

    Try this 101, popular level article, if the WACs did not help you enough.

    GEM of TKI

  48. To kairosfocus

    Thanks for your reply; it depends on who you speak to, then. This happens in science a lot.

    (I am at a loss as to why you started talking about religions and atheists – what have they got to do with it?)

  49. Re #45

    Joseph

    The subtle point that the argument leaves out is that both high information content and irreducible complexity are defined in terms of low probability of an alternative explanation. To see that this is true you only have to ask yourself whether you would still have high information content or irreducible complexity if you had a plausible explanation.

    What appears to be a positive argument is actually a negative argument in disguise.

  50. #46 Pav

    As jerry has rightly pointed out, most of the examples you give would fall under the label of ‘microevolution’

    Additionally, in the case of polyploidy in plants, these are ‘jumps’, not the ‘gradual’ evolution that Darwin insisted on.

    In the first sentence you criticise the examples for not representing large enough changes. In the second sentence you criticise them for representing too large a change.

    You criticise evolution for not explaining a certain level of change – let’s not worry about what we call it. Is it the creation of a new species or the introduction of entirely new structural features and organs? Where are you putting the goalposts?

    Presumably this is the lizard you are talking about. It did evolve some interesting new features in about 30 generations. But it was hardly saltation. A cecal valve is not an “entirely new digestive system” and there is no reason to believe it appeared fully formed in one generation. It is an example of a relatively significant change in the phenotype in just a few generations.

  51. Mark Frank,

    Nearly all the arguments in the evolution debate are negative. It is the lack of the positive that is at the heart of the debate.

    People criticize ID because its proponents cannot prove a designer, or because the design is inefficient for any designer, or because there is no known motive for the designer, or no design event, or no means of design, etc. You fill in the rest to suit yourself.

    People criticize naturalistic methods because there is no known mechanism for creating the information for life, or for creating major additions to life naturally, and because there is no trail of events from one species to another, either in the fossil world or in the known world, in which the changes are major.

    So both sides are using negative information against the other. This has been pointed out many times in the past so your point is not new. Some will then argue that certain things are positive but in essence the arguments are negative against the other side.

  52. Mark Frank,

    So if irreducible complexity, or FSCI, or nucleic sequencing were easy, they wouldn’t be hard?

    Thanks for the clarity.

    I rather enjoy the insertions of “only a negative argument” leveled against ID. It’s a sure sign of an industry tired of having to cover all its failing bases.

    As if pointing out that the explanation doesn’t fit the evidence is a weakness of some kind (see Copernicus, Einstein, Mendel, Wegener, Denton, Behe)

  53. FWIW, the following were the experiments that I noted as seeming relatively significant while going through the MacNeill website:
    de Vries, Digby, Gottlieb, Macnair, Crossley (1974), Kilias, Rice and Salt, Dodd.

    ———————————–

    Mark Frank:

    To see that this is true you only have to ask yourself whether you would still have high information content or irreducible complexity if you had a plausible explanation.

    So, someone has some long binary string y of length n and says the probability of getting this string by chance is 1 in 2^n, so n bits is its information content.

    So you say in effect, “No, because there could have been a program that existed that output that string, so you can’t compute the probability of y or its information content based on its length as if it had to come into existence at random.”

    I would agree. So you have to consider the probability of the program that output y (which itself would be a bit string – call it f(x), with f and x being two portions of this string, one portion being considered the “program” and the other portion the “input”). So needless to say you’ve just pushed back what needs to be explained. f(x) could itself be the output of some previous process, and so the regression continues. But at some point in this regression you’ll hit a string that has either always existed or came into existence by chance. And at that point you can just compute its information content from its length. But anyway, the probability of y to begin with was not determined by its own length, but by the length of the smallest program-input that could generate it.
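
    A minimal sketch of that idea in Python, using off-the-shelf compression as a computable stand-in for the (uncomputable) smallest program – the sample strings and the zlib proxy are my own illustrative assumptions, not anything proposed in this thread:

      import os
      import zlib

      def compressed_bits(s: bytes) -> int:
          # Compressed size in bits: a computable UPPER bound on the length
          # of the smallest program-plus-input that regenerates s.
          return 8 * len(zlib.compress(s, 9))

      random_string = os.urandom(128)  # 1024 raw bits, essentially incompressible
      patterned = b"01" * 64           # 1024 raw bits, but the output of a tiny "program"

      # The incompressible string shows no savings (header overhead even adds
      # a little); the patterned string drops far below its raw 1024 bits.
      print(8 * len(random_string), compressed_bits(random_string))
      print(8 * len(patterned), compressed_bits(patterned))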

    The problem in ID is positing “intelligence” as an explanation, because the ID conception of intelligence is meaningless.

    As for your “moving the goal posts” comment in 49, I agree.

  54. Material context:

    INFORMATION (per Shannon et al) is defined in probabilistic terms, not just FSCI or CSI.

    In praxis, we can count bits objectively enough, and when we do so in a functional context they are functionally specified [e.g. the number of bits in this message]. We do this all the time when we talk about file, memory stick or hard drive sizes.

    When we go over 1,000 bits, we have passed a threshold of complexity such that no random-walk based search on the gamut of the cosmos will have a reasonable chance of hitting the message. (Cf diagram at the just linked.)
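
    The arithmetic behind that threshold can be sketched quickly – a Python illustration using the standard generous estimates (10^80 atoms, ~10^45 state changes per second, 10^25 seconds; rough round numbers, not exact figures):

      from math import log2

      atoms       = 10**80   # rough atom count of the observable universe
      ops_per_sec = 10**45   # generous: ~Planck-rate state changes per atom
      seconds     = 10**25   # a deliberately over-generous timespan
      trials = atoms * ops_per_sec * seconds   # ~10**150 total "searches"

      print(log2(trials))      # ~498 bits' worth of trials, far short of 1,000
      print(2**1000 / trials)  # the 1,000-bit space outruns the trials ~10**151-fold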

    So, I am very comfortable in saying that if we have a functioning entity that works based on at least 1,000 bits of working info, it will be FSCI in the relevant sense. We may easily enough show many cases of such FSCI tracing to intelligent action. Notice that for all objections made to date, we have yet to see a case where chance + necessity have been observed to produce such a case.

    BOTTOMLINE: FSCI is a reliable, empirically justified, provisional – as are all scientific inductive inferences — sign of intelligence.

    GEM of TKI

  55. PS: When we try to shift to programs that “generate” observed — i.e. physical — strings, we must remember that such only happens when we have: algorithms, languages, coded programs, and executing machinery. These all reek of FSCI, and as well are known to be artifacts of intelligence. The “escape” from the force of observing FSCI is only apparent.

  56. Re #50 and #51.

    Joseph makes a comment with the title “The Positive Case for Design” and also links to a Casey Luskin article with a similar title. I point out the negative assumption underlying this. You respond:

    Nearly all the arguments in the evolution debate are negative. It is the lack of the positive that is at the heart of the debate

    and

    As if pointing out that the explanation doesn’t fit the evidence is a weakness of some kind (see Copernicus, Einstein, Mendel, Wegener, Denton, Behe)

    It appears that neither of you are arguing that there is a positive argument for ID.

  57. KF [54]: A binary string does not reek of FSCI regardless of the fact that if you throw it at a computer it can be executed. Furthermore, when you think of a computer, think “Turing machine” – an utterly simplistic device. All the optimized subsystems of a typical computer are not intrinsic to the computer as a concept and can be thought of as software. Any arbitrary binary string is software as well.

  58. Re JT #52.

    I am sorry, I don’t get the point of your post. All I am saying is that if you find a plausible natural explanation for an observation then it no longer has specified complexity or irreducible complexity. Are you disagreeing?

  59. “It appears that neither of you are arguing that there is a positive argument for ID.”

    Nor is there a positive argument for any naturalistic mechanism for macro evolution. So what else is new.

  60. Joseph @45

    1. High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.

    Unfortunately, as has been discussed extensively in multiple threads here, there is no rigorous definition of specified complexity that can be objectively applied to arrive at the same answer by different, independent individuals.

    Further, it has not been demonstrated that such information is uniquely the product of intelligence.

    2. Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.

    Again, the current information measurements such as CSI are insufficiently rigorous to support this claim.

    In addition, “irreducible complexity” is too often positioned as “ID of the gaps”, relying on lack of knowledge rather than positive knowledge.

    3. Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.

    Unproven, partly due to the lack of rigor in the measurement definitions.

    4. Therefore, intelligent design constitutes the best explanations for the origin of information and irreducible complexity in biological systems.

    Since the first three premises are unsupported, the conclusion is highly suspect.

    I would like to see the mathematical arguments for ID strengthened, but I suspect that support for ID is more likely to come from investigating the limits of MET mechanisms in the short to medium term. I’d be delighted to be proven wrong, of course.

    However, as long as it is so easy for even a sympathetic listener such as myself to shoot down the common ID argument you’ve presented, DaveScot and other ID proponents here should refrain from making the resulting unsupported claim. It makes it too easy for ID opponents to dismiss the whole idea.

    JJ

  61. Jerry @58

    “It appears that neither of you are arguing that there is a positive argument for ID.”

    Nor is there a positive argument for any naturalistic mechanism for macro evolution. So what else is new.

    Jerry,

    I generally make a point of reading your posts, but you’ve made similar claims to this on occasion and they don’t meet your usual level of intellectual quality.

    There is an enormous literature discussing possible mechanisms for macro evolution. Allen MacNeill has posted references to some of it elsewhere on this site. That literature discusses positive claims and the results of testing positive predictions.

    The question of where the limits of those mechanisms lie is still outstanding, but your statement that there is no positive argument is factually incorrect.

    JJ

  62. Mark Frank:

    I am sorry, I don’t get the point of your post. All I am saying is that if you find a plausible natural explanation for an observation then it no longer has specified complexity or irreducible complexity. Are you disagreeing?

    Don’t know. Are you saying your plausible natural explanation doesn’t have CSI either?

    Does a book not have CSI because we know someone wrote it? (I’m not appealing to metaphysical notions of intelligence, though.)

  63. Re DaveScot in #42:

    Thanks, Dave, for admitting publicly that you have no rational arguments or evidence with which to rebut my arguments and evidence, and so must resort once again to pure ad hominem attacks. This, indeed, says more about your confidence in your position than virtually anything else you might post.

    BTW, you might ask yourself why I have been repeatedly asked to teach the introductory evolution course and the upper level history of biology seminar at Cornell, given my seeming lack of qualifications to do so. Might it be that the department of ecology and evolutionary biology thinks that both my scholarship and teaching skills might just be adequate for the task?

    And while we’re on the subject, what are your professional qualifications to comment intelligently on the science of evolutionary biology? Also, at which Ivy League universities have you taught biology and evolution (and received multiple awards for doing so)? Just curious…

  64. you might ask yourself why I have been repeatedly asked to teach the introductory evolution course and the upper level history of biology seminar at Cornell

    Same reason someone else is repeatedly asked to clean the restrooms would be my guess. You might ask yourself why you think you’re qualified to argue with Cornell geneticist John Sanford or why you aren’t teaching courses in genetics or engaged in genetics research or generating the patents that Sanford did. Look up “The Peter Principle” and you’ll have the answer.

  65. Davescot,
    First, you responded with a model of evolution, not ID. Could you not think of a model of ID?
    Second, if the genomes of species are constantly degrading then it would not just be a problem for evolution; the species would need constant repair. If all the species all over the world, all throughout time, are in need of constant repair and are yet thriving, there is either a problem with the theory that they are falling prey to entropy or we have moved into the realm of the supernatural.

    bFast:

    Is it not true that all engineering is reducible to physics?

    Unless you’re going to be specific about the methods used by these engineers you’re making a statement about history, not forming a scientific hypothesis. And in the case of ID the historical statement can’t be verified since the engineers are nowhere to be found.

  66. It’s called tenure, Dave; look it up.

    John Sanford and I have debated on several occasions (including once when he was invited to give a presentation in our introductory evolution course), and never once did he question my authority to debate him. Neither did Michael Behe when he made a presentation in our course. Indeed, John thanked me and Will Provine profusely for our “gentlemanly” treatment of him in our course (something for which you are clearly never in danger of being thanked).

    I don’t teach courses in genetics because my field is evolutionary biology. They’re different departments at Cornell. What are they at the university where you teach, Dave?

    And as for the “Peter Principle”, clearly someone stopped promoting you to your level of incompetence long before reaching any position at any institution of higher learning anywhere. Or am I wrong? Are you, in fact, a tenured academic at the DaveScot Institute for Advanced Ad Hominem Argumentation?

  67. Time to end this. I will no longer respond to DaveScot’s vile screeds. Anyone who wishes to discuss the issues and is willing to act the way I have acted toward John Sanford, Michael Behe, and Hannah Maxson will be responded to with the respect they deserve.

  68. In case anyone is interested and has the time, I am teaching a one-week intensive seminar course this summer at Cornell on the evolution of the capacity for religion. You can find out more about it here:

    http://www.sce.cornell.edu/cau.....38;v=13012

  69. “The question of where the limits of those mechanisms lie is still outstanding, but your statement that there is no positive argument is factually incorrect”

    Since there is no evidence for any mechanism, how is that positive? People have proposed all sorts of mechanisms but when it comes to empirical data, the well is dry.

    I could posit one-eyed purple people eaters from the Andromeda galaxy as the source of DNA for earth and that is a positive assertion, but there is no evidence for it. So is that a positive argument or not? Now this silly comment is meant to show that no one can provide any evidence for any mechanism for macro evolution (macro evolution of complex new functional capabilities) other than assertion.

    So I stand by my comment. The argument is over information and they know it but only throw out trivia and then claim victory.

  70. “Could you not think of a model of ID”

    Craig Venter and MIT are both involved in modifying life forms. These are two you can use. There are many others who are trying to create something from scratch, so you may want to look at them.

  71. “the evolution of the capacity for religion.”

    Sounds like the begging of a few questions there.

  72. B L Harville:

    bFast:

    Is it not true that all engineering is reducible to physics?

    Unless you’re going to be specific about the methods used by these engineers you’re making a statement about history, not forming a scientific hypothesis.

    I saw a show a while back where scientists were using available tools to reproduce the arrowheads that they had found. Now, are you saying that there were no scientific theories about where the arrowheads came from until the scientists did this work?

    The ID position is as follows:
    1 – Engineering is capable of producing the class of complexity that we see in nature.
    2 – Non-engineering methods, we claim, are unable to produce the class of complexity that we see in nature.
    3 – Therefore either the complexity we see in nature is the product of engineering, or the theory is falsifiable by demonstrating that a non-engineering method can do the job.

    If engineering can do the job, the scientist is certainly called to seek out who the engineer(s) is/are. However, science is pretty darn lame if it can’t detect design without first identifying the designer. Is your science lame?

  73. Davescot:

    Can you reduce things like a prokaryote acquiring a nucleus to a mathematical model?

    Quite easily I expect, but then you’ll just attack the model.

    Evolutionary biologists do do math. It’s called population genetics, and while I had to bend a few legs to get them to admit it, it does look at exactly the kind of problems ID is talking about; modelling populations and modelling evolutionary change, selection, etc.

    The problem is when WE apply statistical mechanics to “evolution”, law and chance come up as wanting in capacity to produce molecular machinery, just as they would in producing microprocessors. The numbers just don’t work.

    But you don’t apply statistics to evolution. Remember Kirk’s math from a few weeks ago? He modelled evolution as a 10^42 long random walk. Dembski modelled the likelihood of a flagellum assembling by chance in one of his papers; I don’t know if he’s done better since then.

    These are tornado-in-the-junkyard scenarios. Evolution isn’t like that; it builds on previous successes. You can’t say you’re applying statistics to evolution if every time you leave out the stuff which makes it, well, evolution.
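
    The difference is easy to demonstrate with the classic toy program – a minimal Python sketch in the spirit of Dawkins’ “weasel” illustration (purely illustrative, and admittedly simplified: real selection has no fixed target): cumulative selection finds in tens of generations what blind drawing would need on the order of 27^28 tries to hit.

      import random, string

      TARGET = "METHINKS IT IS LIKE A WEASEL"
      CHARS  = string.ascii_uppercase + " "

      def fitness(s):
          # count positions that already match the target
          return sum(a == b for a, b in zip(s, TARGET))

      parent = "".join(random.choice(CHARS) for _ in TARGET)
      generations = 0
      while parent != TARGET:
          # 100 offspring; each letter mutates with 5% probability
          brood = ["".join(c if random.random() > 0.05 else random.choice(CHARS)
                           for c in parent) for _ in range(100)]
          parent = max(brood, key=fitness)   # selection keeps the best each round
          generations += 1

      print(generations)   # typically well under 100, vs ~27**28 blind draws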

    The further problem is that “evolution” devotees just plain ignore the probabilistic problems.

    Because they’ve heard lots of arguments from improbability before ID.

    Because, for them, evolution is true they simply accept any improbability, however remote, as something that MUST have happened because that IS the TRUE way life came to be the way it is.

    This is not reducing things to physics. It’s faith. There aren’t two degrees of difference between faith in the chance & necessity narratives and faith in the biblical narratives.

    Or, faith that scientists are decent and hard-working people who’ve spent their lives doing research and have no interest in lying. I mean, honestly, you’re accusing thousands of people either of being rubbish at their jobs, or lying every single day of their lives.

  74. For all those who are interested, ID is not a “negative” argument. In fact, every negative argument implies a reciprocal positive argument and vice versa. ID is no different. Indeed, it is the positive argument that frames the issue which allows the negative argument to take hold.
    The principal characteristic of intelligent causation is “directed contingency,” or choice. As most of you know, whenever an intelligent cause acts, it chooses from a range of competing possibilities. This applies to Divine intelligence, superhuman intelligence, human intelligence, and, some would say, animal intelligence.

    Whenever one of us writes a post, he/she chooses from among a wide range of possible permutations and combinations. ID always entails choosing some things and ruling out others. The same thing applies to the “designer” of the universe and of life. In that sense, we do know something about the way the designer acts in a very general sense, which is a point that our critics choose to ignore.

    In any case, the process by which we attain knowledge about these things can be either a DIRECT conclusion (syllogism) or an INDIRECT conclusion, (process of elimination) or “reductio ad absurdum.” The latter technique is not “negative,” it is merely the reciprocal of the other form.

    Example:

    Design is real [affirmation]

    Design is not an “illusion” [reciprocal]

    Now it is true that ID does have a powerful “negative” argument against Darwin’s “general theory” (not his “special theory”), but this argument has been emphasized for strategic reasons to dramatize the point that our adversary’s dogmatic pronouncements have no basis in science. But that negative argument depends on the positive reciprocal argument or it would have no logical force. The positive side of that argument is, “here are the conditions any theory about biodiversity must meet.”
    In truth, both arguments are positive in their formulation, if we refrain from including the negative reciprocal.

    [A] Intelligent design [A designer fashioned all life forms either directly or indirectly.]

    [B] Materialist Darwinism [“Life found a way” – a quote from Jurassic Park]

  75. #72 StephenB

    Are you saying the design is simply the “reciprocal” of modern evolutionary theory? i.e. that all that ID is proposing is that MET is false?

    To see what is negative about ID consider what counts as evidence for ID and what counts as evidence for MET.

    MET counts as evidence:

    - the fossil record shows the gradual change that is required
    - the method of inheritance is particulate, as required
    - the age of the earth is sufficient
    - specific episodes of microevolution and speciation have been observed
    - etc.

    I know there will be comments saying that this evidence is not true. That’s not the point of this comment. The point is that the evidence is positive. It is about observations that MET predicts – things that need to hold or are likely to hold.

    Now look at the “evidence” for ID. It is all about the impossibility of alternatives – then it jumps in with the conclusion that ID is true. CSI and IC are defined in terms of the improbability of alternatives – that’s what makes them complex and irreducible. The only evidence that is allowed under ID is an assessment of the implausibility of MET – anything else would require discussing who, why and how.

    To look at it another way – if a plausible alternative to ID were identified for any specific outcome e.g. bacterial flagellum, then ID goes away – there is no evidence left. If a plausible alternative to MET were identified then there would be a discussion about which is the more plausible explanation and how to test between them.

  76. Mark Frank,

    Your logic escapes me. You know what we are advocating here. The process is:

    Making some observations. And then, after we have made those observations, offering a tried and true way they could have happened.

    We then observe that no other mechanism has ever produced anything like these observations. And then we conclude that the most likely scenario is the one that has been shown capable of producing the observations, and not some hypothetical pie-in-the-sky mumbo jumbo.

    It is as simple as that. If the MET or whatever it is called today could do it, we would admit it. If some other hypothetical process could do it, we would admit it. We just know of one process that can do it. So we are stuck with that till someone can pull the rabbit out of the hat and produce another.

    Now there is no known intelligence before man. But somehow the universe was created and is so incredibly fine-tuned that it must have been the result of intelligence, so the likelihood of an intelligence existing after the universe came into being seems highly reasonable.

    We also know that the laws of nature existed before life appeared, and it is a possibility that these laws could have led to life. That is the position of a lot of people, but it has one failing: it has not left any evidence that such a thing ever happened or was even possible. And yet people cling to this illogical, low-probability hope with an incredible faith.

  77. Venus Flytrap,

    The whole point behind irreducible complexity is that there aren’t any previous successes to build on.

    And that means the larger the number of components, the more difficult it would be to bring them together – i.e., no success until the whole thing is not only assembled but functional.

    Also many scientists are specialists whose research has nothing to do with the debate.

    I would even go so far as to say many of them don’t understand what is being debated.

    For example, the NCSE seems to suggest that all of evolution is being debated.

    In Discover Sean Carroll stated the same thing.

    And Darwin started that myth!

    IOW most every scientist who opposes both Creation and ID thinks that both teach fixation of species.

    Bill Nye the science guy was on TV spewing that same garbage.

    So perhaps those scientists are lying – or maybe they are just ignorant of the issue.

  78. Venus Mousetrap:

    Evolutionary biologists do do math. It’s called population genetics

    Let’s get back to the point of this thread, Walter ReMine’s theorizing. Mr. ReMine has suggested that Haldane’s dilemma (Haldane’s calculation of just how few mutations could have been fixed since the common ancestor of human and chimp, compared to the number of differences recorded) is still an issue.

    If I understand ReMine correctly, he did his calculations assuming about a 1.5% difference between human and chimp. The most recent evolutionary value I have been able to find is more like a 6% difference in coding DNA. (The scientific community is coming to recognize that much of non-coding DNA is expressed in the phenotype, so it is also relevant.)

    ReMine’s premise is that the math does not work out between human and chimp via classic population genetic analysis. My understanding is that his position, Haldane’s position, has not been seriously addressed.
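
    For the flavor of the numbers, here is a rough back-of-envelope restatement in Python (my sketch of the commonly cited figures – roughly one substitution fixed per 300 generations, 20-year generations, 10 million years, a 3-billion-base genome – not ReMine’s full argument):

      years           = 10_000_000  # time typically allotted since the split
      generation_time = 20          # years per generation (a rough figure)
      haldane_limit   = 1 / 300     # substitutions fixed per generation (Haldane)

      substitutions = (years / generation_time) * haldane_limit
      print(round(substitutions))   # ~1,667 fixed beneficial mutations

      genome_size = 3_000_000_000   # base pairs in the human genome
      divergence  = 0.015           # the 1.5% figure ReMine assumed
      print(round(genome_size * divergence))  # ~45,000,000 nucleotide differences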

  79. Mark Frank,

    The complexity is positive evidence for design. I find it interesting that nobody arguing for evolution seems to address the arrowhead/Mt. Rushmore argument. How do you infer design from Mt. Rushmore? When someone claims that Mt. Rushmore was created by erosion and I counter that that is very unlikely and that it is more likely that someone design/created it am I only making a negative argument?

    Even Dawkins says that things evolved to appear to be designed. Isn’t that appearance positive evidence? And isn’t the attempt to explain why something isn’t designed, even though it looks designed, probative as well?

  80. This is OT from earlier, but worthy of attention here:

    This is what was discussed earlier:

    #6 – landru:

    Sorry, off-topic, but this is really interesting.

    http://www.eurekalert.org/pub_…..021009.php

    Using OOL to teach general chemistry???

    Here is what I posted…#14 uoflcard:

    …yeah that is amazing. Is that real?

    from article:

    Klymkowsky and Clemson University chemistry Professor Melanie Cooper were recently awarded a $500,000 grant from the National Science Foundation for a three-year project titled Chemistry, Life, the Universe and Everything, or CLUE. The project includes developing a general chemistry curriculum using the emergence and evolution of life as a springboard to introduce and explain related chemistry concepts, Klymkowsky said.

    What is abiogenesis doing in any K-12 science classroom? I’m an ID advocate who doesn’t think ID should be taught in science classrooms yet, although what neo-Darwinism has failed to prove should be taught. But abiogenesis has less of a right to be in a science classroom than ID. That is a blatant example of a worldview being shoved down people’s throats without any evidence.

    I can’t get over how hypocritical that is. They blatantly do the exact thing they accuse ID-supporters of doing – bypassing the scientific method straight for the classroom. If abiogenesis is in classrooms, then ID should be. There is no defending that. It is 100% speculation at this point, and has not been scientifically proven to any reasonable extent. Yet the NSF, defender of almighty science, skips all of that and gives $500,000 to have its worldview fed to children.

    I sent Dr. Klymkowsky an e-mail. Here is what I wrote:

    Please explain what abiogenesis is doing in high school. Did I miss the memo that anything other than speculation was proven to be possible regarding this process? If not, doesn’t this violate the exact reasons the science classroom has been so militantly defended from ID or creationism?

    It came out a little more pointed than I intended, but he was gracious, cordial and prompt in his response, which I appreciated. But that didn’t hide how shocking it was (emphasis added):

    Klymkowsky:

    Well, from a scientific perspective, life must have arisen from non-living physiochemical systems, and there are a growing number of hints as to how that may have occurred.

    This is a subject of some (but not a lot) of research – curing diseases is more pressing …

    I wrote back saying I think he substituted “science” for “naturalism”. Science is not a worldview, it is simply the methodological study of natural phenomena. It doesn’t require anything. NATURALISM requires life to have arisen from non-living physiochemical systems. (And btw, my Christian belief does not require anything either way regarding OOL, either natural or supernatural)

    I wouldn’t really care much about what one person says, but this is a guy who just received a $500,000 grant from the NSF to have this worldview (not science) taught to children.

  81. uoflcard,

    I encourage you in your communications to be uber-polite. And I think you made a really good point.

  82. Technical note:

    For two days now, I see a “comments closed” screen before — on a reload of the original page — I see the comment box open.

    GEM of TKI

  83. Re #76 Jerry

    Re #79 Collin

    I am going to try and avoid going round in circles and take a new tack. The problem is that “design” is not a hypothesis. If I proposed “chance” in the abstract as an explanation of something you would not be impressed. It is just too broad.

    Try thinking of it in the reverse. Take yourself back 300 years – before Darwin. A large number of intellectuals believe that life was created by God (this is not design – it is much more specific). I propose that “chance” is the cause of life. How do I know? God is the only plausible designer. I know that God does not exist. Therefore it is chance. Aha – you say. How did chance create life? I say – that is not my concern. I am a chance theorist. I just look for evidence of chance.

    Collin – Mount Rushmore looks like someone designed it because the chance of the rock weathering naturally to resemble four US presidents is vastly smaller than the chance that someone carved it that way.

    Compare this to a situation where, e.g., grass grows a different colour on my lawn in almost exactly the outline of a US president. Is this evidence of design? At first sight you might say so, because the chances of grass growing in that pattern seem very small. But then it turns out someone left a metal outline of the president on the lawn over the winter and recently removed it. As soon as a chance explanation exists, design goes away. You would have to go to a specific explanation of how design might have happened and compare it to the specific chance alternative.

  84. uoflcard,

    You are a hero of rational thought.

  85. JT:

    Re 57:

    A binary string does not reek of FSCI regardless of that fact that if you throw it at a computer it can be executed. Furthermore, when you think of a computer, think “turing machine”- an utterly simplistic device. All the optimized subsystems of a typical computer are not intrinsic to computer as a concept and can be thought of as software. Any arbitrary binary string is software as well.

    Let’s take this from the top:

    1 –> Have you ever designed, built, debugged and trouble-shot a computer or microcontroller, from a bag of chips and sheets of paper to lay out designs, timing diagrams, a monitor pgm [what is below the level of operating systems], up to getting it to successfully interface with the real world and fulfill a real-world function? (And, no, I don’t mean assembling a machine from pre-built components and pre-developed software, I mean rolling yer own from scratch.)

    [Obvious answer from the above: no. I have. So has DaveScot. We both have the soldering iron scars to prove it.]

    2 –> Once you move from the paper world of theoretical machines (useful as they are in their place) to real physical strings of bits stored in physical media and functioning in objects that physically realise and give effect to algorithms, you will know that bit strings don’t magically do anything by themselves; and that FUNCTIONING bit strings of any significant length don’t appear by chance.

    3 –> For an inputted bit string to trigger any functional algorithmic response, there have to be: [1] an algorithm, [2] an architecture for it to run on, [3] one or more coding languages, starting with a relevant machine code [ever coded in machine code or had to handle a core dump: FF BC CC DF AE 06 5A . . . ?], [4] hardware capable of carrying out input interface, storage, processing and output interface, [5] coded programs that execute the algor on the target machines (or equivalent hardware, but we are focussed on the softy side), [6] data structures, [7] handshaking protocols, and [8] sequencing and synchronisation, starting with system initialisation on turn-on. (I used to strongly stress to my students: get a clean robust initialisation to a known initial condition, and NEVER let the system get into an out-of-control condition. Regular “sanity checks” — hardware interrupt triggered (use a timer chip) . . . — if the system is potentially dangerous . . . don’t forget, ~ six sample-hold action points per key signal rise time if you are controlling a process . . . emergency handlers . . . )

    4 –> ANY significant mis-steps on any of these eight core components, and the function fails at some point — per Murphy [I firmly believe in the doctrine of Murphy], usually a point of maximum embarrassment. In short, the core of the system is irreducibly complex, for any given design.

    5 –> Within that PHYSICALLY INSTANTIATED context, we generally have stored information, and data strings flowing in, being processed and transformed ones flowing back out.

    6 –> Such strings take meaning from their structure relative to the conventions and architecture of the particular system, and as a rule are EXTREMELY vulnerable to perturbation. [NASA once had to blow up a rocket because somebody put a comma where a semicolon was required in a control program — Fortran, I think it was.]

    7 –> Now, we may indeed have fairly short bit strings at points, that trigger big events, e.g. a switch [one bit: on/off] or a keystroke or a mouse stroke or click or the input to an A/D converter yielding a given output state or the feed in from a UART receiving a serial data string [typically 1/2 or 1 - 4 bytes in these cases or down to 1 bit]. BUT THESE TAKE MEANING ONLY IN THE CONTEXT OF THE PHASE OF ALGOR EXECUTION IN VIEW, THE SPECIFIC POINT IN THE SYSTEM THEY APPEAR AT AND THE ASSOCIATED PROGRAMS AND DATA STRUCTURES THAT INTERACT WITH THEM.

    8 –> So, the cascade of co-ordinated programs, where a long one is called by successively shorter ones so that the final digital string is “caused” by a much shorter one [one that, lo and behold, has a reasonably high probability; e.g. a click on/off is 50-50, after all . . . ], is only viable in the context of an entity that as a whole requires a LOT of FSCI, and is itself irreducibly complex [IC]. The very PC you are using is an apt illustrative case in point, when you swoosh your mouse or click it or hit a key on the keyboard. (A short sketch just after point 9 below illustrates this in software terms.)

    9 –> Thus, once we are dealing with an identified algorithmic context, physically functional bit strings, even short ones, given that context, reek of FUNCTIONALLY SPECIFIC, COMPLEX INFORMATION.
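
    To make points 7 and 8 concrete on the software side, here is a minimal Python sketch (purely illustrative; the one-byte event and the handler names are invented for the example): a single byte “does” something only because a much larger, functionally specific dispatch context stands ready to interpret it.

      # One byte of "input" -- among the shortest interesting strings.
      event = b"\x01"

      # By itself the byte does nothing; the surrounding dispatch context
      # (the algorithms, conventions and handlers of points 3, 7 and 8)
      # is what turns it into function.
      handlers = {
          b"\x00": lambda: "system reset",
          b"\x01": lambda: "log sensor reading",
          b"\x02": lambda: "emergency shutdown",
      }
      print(handlers[event]())   # the 8-bit string "works" only via this context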

    So, pardon my itching soldering iron scars: physically functional digital strings in an algorithmic context are either FSCI themselves (which is the context under discussion) or, to function, require FSCI-rich data strings embedded in a system that makes the inputs function.

    In either case, once we observe FSCI, and we know the origin story directly, we observe intelligence. And, given that we have cut off at 1,000 bits that function, we are looking at such isolation in the relevant config spaces that the whole observed universe, working as a search engine per random search strategies, will not credibly be able to find the relevant islands or archipelagos of function.

    On needle in a haystack grounds.

    FSCI, I repeat, is a strongly warranted, reliable sign of intelligence.

    And, a Turing Machine, once physically instantiated, is anything but “simplistic.”

    GEM of TKI

  86. MF:

    @ 85: The problem is that “design” is not a hypothesis. If I proposed “chance” in the abstract as an explanation of something you would not be impressed. It is just too broad . . . . a situation where e.g grass grows a different colour on my lawn in almost exactly the outline of a US president. Is this evidence of design? At first sight you might say so because the chances of grass growing in that pattern seem very small. But then it turns out someone left a metal outline of the president on the lawn over the winter and recently removed it.

    1 –> Mark, do you not see the design implication in the situation? [Cf my highlight.]

    2 –> This also brings out that the inference for a given aspect of a situation, across chance, necessity and design, is not a WHODUNIT or a HOWTWEREDUN inference.

    3 –> You see independent specification, check.

    4 –> You simultaneously see complexity, check.

    5 –> You properly infer CSI, so design, check.

    6 –> The back-story comes out: someone left a metal outline of the silhouette of a US president on a lawn, triggering grass to grow in a certain way it would otherwise not have.

    7 –> You then infer: well, there was no direct intent to make grass grow that way, so no design. [ERROR, as (i) the key entity, the metal silhouette, was designed, and the presence of such design was detected. Also, (ii) detection of design does not rule out the presence of chance or natural regularities as well.]

    8 –> So, while indeed design, chance and necessity are GENERAL CATEGORIES of causal factors, once we focus on a given situation and its aspects, using the EF, we are dealing with alternative hypotheses and empirical data that allow reliable discrimination between them.

    GEM of TKI

  87. I’m not sure if this is technical difficulty, but judging by DaveScot’s statements about me being a sock puppet, I suspect I may have been locked out. I’ve tried to post the following multiple times, and it has yet to show up. If I am mistaken, please forgive me for even considering the idea that you may have caused this, DaveScot. Meanwhile, I’ve started this “sock puppet” to post what I had originally intended to post as a follow up as KRiS:

    Sorry for waiting too long to post my reply. I lost my internet connection and this is the first chance I’ve gotten to be online since my last post.

    Jerry

    If someone has an hypothesis and does a test of that hypothesis and the research fails to support the hypothesis, they are failing to support their theory. If the test is repeated ten thousand times, I will go out on a limb and say that the theory is being falsified.

    Unfortunately, this is exactly why a statement like “You will never see X” is not a good test of a hypothesis. To be considered actual support for the hypothesis, results must be found which agree with the statement. No results can ever agree with any statement that says “never”, because the search space is all of time and space. Anything less is an incomplete search, meaning that the actual test of the statement must continue (i.e. the test hasn’t finished yet, so we don’t know what the final result is).

    The way that I worded it above sounds kind of silly, and many people would immediately discount that argument simply because it sounds so ridiculous (all of time and space…hee hee hee), so let me be a bit more exact in how I present it now.

    Any statement in the form “You will never see X” can be more accurately stated as “Given the set of all possible Y, no Y will be found among them which is actually X.” It should be pretty clear when restated in this way that to be in agreement with the statement, and therefore supportive of the hypothesis, all of Y must be searched without finding X. Any number of searches through anything less than all of Y results in either a falsification (X is found) or a necessary continuation of the search (X is not yet found). Not finding X means the search is incomplete and therefore inconclusive. (Not sure of that? Just ask yourself, if X hasn’t been found yet, can you conclusively say that it will therefore never be found? If not, then it is by definition inconclusive)

    Now, since the test of the statement is thus far inconclusive (assuming X has not been found yet, of course), to claim that the statement is therefore supported is to say that an inconclusive result must be considered to be supportive of the statement. This means that it must be assumed to be true unless and until it is conclusively demonstrated to be false. In other words “It’s true because you haven’t proven it to be false.” This is the Argument From Ignorance. Now you can attempt to justify using such an argument (maybe you can claim that inconclusive is still “conclusive enough”, though I think that’d be a hard sell), but you can’t legitimately claim that it’s not an Argument From Ignorance at all, even if you do flip it around and call it a “test”.

    Of course, one can limit the search space to make it more manageable by using a limited set of Y, rather than the set of all possible Y. However, this necessarily changes the original statement from “You will never see X” to “You will not see X if you search through this limited set of Y”. There better be a very good reason for excluding those areas which are not to be searched. For instance, when you use the fact that X has not yet been found to try and support the original statement you are essentially creating a new statement which is a subset of the original that says “You will not see X if you search through the set of Y which has already been searched.” Obviously this statement is supported by the data, but that’s because it is a simple statement of fact. It’s not a prediction, but a post-diction. I think you’ll agree that limiting Y for the express purpose of making the statement true isn’t a very good reason for limiting Y.

    Let’s here it for Kris who has joined the ranks of nit pickers but never offer substance.

    You misspelled “hear”.

    uoflcard

    It’s more like “After many years of testing, not X yet, so what about Y?”

    What is strange to me is that 150 years is always considered such a very long time for such a test. Meanwhile the Cambrian Explosion is considered to be such an extremely short amount of time that it is almost considered a falsification on its own. In other words, in 150 years you believe it is highly likely that we should have witnessed something that took nature the amazingly short time of only 5 million years to do. This problem is exacerbated by the fact that only a natural observation counts, since any lab or man-made experiment is automatically rejected on the grounds that it is designed by man, and therefore not an accurate test (I’m thinking of Ev as an example).

    So I ask you this, is it your contention that evolution should reasonably be expected to create any kind of CSI in only 150 years? If so, what would that say about the rate of evolution in general?

    DaveScot

    KRiS is yet another sock puppet from the Panda’s Thumb forum.

    Actually I’ve never been on that forum. I view the Panda’s Thumb blog from time to time, but for the most part this is the only place that I post (it’s no fun debating with people that agree with you). Thank you for defending me, B L Harville and JayM.

    PaV

    Which is the worse argument: arguing that evidence of design implies the presence of an intelligent agent, or arguing from one’s personal notions about what God can and cannot do, or, rather, what God would or would not do? I’m interested in your answer.

    Obviously the first argument is the better argument. There are several arguments and ideas presented by Darwin that have been shown to be either false, or poorly argued (as you demonstrate so well). However, there are many many more arguments that are very persuasive and logically sound which also support evolution. It’s a good thing he didn’t allow himself to rely solely on one argument or even one type of argument.

  88. Could you possibly give me some explanation as to why I am unable to post? Was I rude? Was I disrespectful? Did I break some rule of commenting etiquette? You have my email address, so feel free to send an email explaining what I’ve done to get banned.

  89. PS: Switch Bounce.

    This is a case in point of the difference between paper models and reality. A switch is a 1-bit entity, i.e. the shortest digital string.

    A simple, easy case of how a short string can trigger a much longer one, nuh?

    Nope.

    For, a switch is a mechanical device and has dynamics, so that flicking it physically triggers multiple contacts across milliseconds, which can easily derange a system. (One workaround is to use conductive, soft rubber switches [nice, overdamped behaviour, no bounces] . . . and there are tradeoffs on the number of expected operations in the system’s reasonable working life.)

    My favourite solution was to use a JK flipflop, with the switch set so that once it goes to the on state, it latches the f/f to its storage state. That is, tie J to K to NOT-Q o/p, so that it is inactive high. (On system turn-on, force a reset on the reset input for the f/f.)

    The actual mechanical switch then feeds the clock i/p. On triggering, we not only go to the 1/0 o/p state but latch the JK in its storage state, with J = K = 0. On handling the interrupt so triggered, reset the f/f. (This also automatically prevents a further interrupt on the switch triggering before you are ready to handle it — and your design has to factor in “lost” inputs like that. That way you do not get out of control. An easy case for this: the IRQ handler takes a few tens of ms, and you do not expect real signals to repeat like that, and if they do, it won’t make a serious difference.)

    On paper, real simple. On the ground, a lot more subtle than that.

    GEM of TKI

  90. Switch Bounce.

    A bouncy switch isn’t a binary string generator; it is a variable-length bit stream generator.

    If you design a switch with a debounce circuit then, for the purposes of a paper model, you can treat the ‘switch’ as a single entity and ignore the details of how the debounce properties are implemented. From the point of view of the system receiving the switch input it is not necessary to ‘know’ if the switch has a particular debounce circuit, just as long as it behaves as a debounced switch.

    Taking your approach, you could argue that the paper model must take into account the exact structure of the switch right down to the atomic level. No two switches would be alike even if they were functionally equivalent and their subtle differences had no effect on their function. Your point about the differences between paper models and reality is important, but it is not always pertinent to every aspect of every model. It is important to pay attention to the subtleties of something you are trying to model, but it is also important to understand what is relevant to the task in hand. Taking every atom into account whenever you model something is practically impossible.
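
    That black-box view is easy to sketch in software, too. A minimal Python illustration (mine; the debounce routine and its stable_count parameter are invented for the example): report a new state only after N consecutive agreeing samples, so the rest of the system can treat the ‘switch’ as a clean one-bit entity.

      def debounce(samples, stable_count=3):
          # Report the switch state, changing only after `stable_count`
          # consecutive samples agree -- the contact bounce stays hidden inside.
          state = candidate = samples[0]
          run = 0
          out = []
          for s in samples:
              if s == candidate:
                  run += 1
              else:
                  candidate, run = s, 1
              if run >= stable_count and candidate != state:
                  state = candidate
              out.append(state)
          return out

      bouncy = [0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]   # raw, bouncing contact
      print(debounce(bouncy))   # settles to 1 only after the bouncing stops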

  91. “How did chance create life? I say – that is not my concern. I am a chance theorist. I just look for evidence of chance.”

    The hypocrisy of statements like this really strains credulity.

    No one today is inhibiting the study of chance as an explanation of anything. And no one, after hundreds of years of study, has yet found any phenomenon in this world due to chance that results in the kind of complexity we see in life. Nor has anyone found this type of complexity to be the result of law, either.

    What is being done is that chance as an explanation is being shoved down the throats of the students of this world as the explanation for a major issue in science and life when there is no evidence to support it. And why is this nonsense being imposed on others, and why is it defended by the people who come here, when there is no basis for it and they cannot provide any? Maybe we could have a contest to find the proper words to describe this behavior.

    Are you not aware of the debate? Of course you are, and given that, the statement made is ultra illuminating.

  92. Laminar:

    I cited the bouncy switch to show precisely the gap between the “simplicity” of on-paper and in-theory modelling, vs the on-the-ground complexities of real physical systems.

    The precise context is one in which someone is trying to get around the import of observing FSCI by suggesting ever shorter calling strings/pgms, until you find one short enough to say FSCI has “vanished.”

    I have pointed out that, taking the shortest possible string length, 1 bit, the realities of PHYSICAL functionality make for a lot more complexity than is evident on the surface. In short, FSCI has NOT been “disappeared.”

    GEM of TKI

  93. Jerry:

    Point.

    And the case in view — US$ 1/2 mill for indoctrination in materialism under false, Lewontinian colour of science — is sobering.

    GEM of TKI

  94. PS: Laminar, note where there is a 1-bit string in my JK event-switch example: the Q/NOT-Q outputs and internal latch.

  95. Mr. MacNeill

    I was in the commercial sector all my adult life so, not surprisingly, your question about why I, like you, failed to become a professor at a university is nonsensical. In the commercial sector I was quite successful and retired young to pursue less serious things like spanking evolution lecturers from Cornell.

  96. On MacNeill’s page: I was first excited, then thoroughly disappointed, when reading it. I thought that since MacNeill is constantly active in this area he would have known of better examples that perhaps I was unaware of.

    Oh, well.

    Joseph at #44 highlighted the main issue: definitions. MacNeill starts with an overly broad definition. That definition may have served well enough 20+ years ago, but we’re not just considering “large-scale pattern[s] of change over time”; we’re looking at the specific informational basis for these patterns, which we now have access to.

    I’ve been having discussions about macroevolution over the last month or so and the examples being highlighted would probably qualify as macroevolution under MacNeill’s definition but unfortunately from an informational basis they were all well under 100 informational bits and also did not result in IC objects. Stripped of their informational basis, these examples become meaningless talking points that do not belong in a modern debate.

    BTW, Dave, why insult MacNeill over this? Just point out how he’s incorrect.

    Venus Mousetrap #73

    Already discussed before. In short, the major problem is that we don’t have any indirect pathway to use as a starting basis for a hypothesis.

    bfast #78

    If I understand ReMine correctly, he did his calculations assuming about 1.5% difference between human and chimp. The most evolutionary value I have been able to find these days is more like about 6% difference in coding DNA.

    To put that in perspective here is a quote about the draft Neandertal genome:

    So far the results indicate that there is a roughly eight to 12.8 percent divergence between Neandertals and human reference sequences.

    Although I should note the caveat that the current sequence data represent only about 63 percent of the genome. And the informational difference may only correspond to data which controls systems irrelevant to the debate at hand. As in, we don’t know the exact percentage of information that defines the key differences between chimp and human. But to know that we’ll first have to figure out how the biological code works in its entirety…

    At the same time everything I’ve read about Neandertals indicates that their technology, music, culture, etc. was pretty much the same as “humans”. Considering the changes we see in dogs I’ve felt that Neandertals should probably be considered just another variant of human, sort of like how labradors and golden retrievers are both dogs.

    EDIT: Joseph and I discussed the repercussions of the informational divide in a previous thread.

  97. Especially when you consider that St. Bernards are dogs, as well as toy poodles, etc.

  98. Mark Frank sez:

    The subtle point that the argument leaves out is that both high information content and irreducible complexity are defined in terms of low probability of an alternative explanation. To see that this is true you only have to ask yourself whether you would still have high information content or irreducible complexity if you had a plausible explanation.

    What appears to be a positive argument is actually a negative argument in disguise.

    That is incorrect. The PROOF – that is, mathematical proof – is the IMPROBABILITY.

    The POSITIVE case comes from experience –

    That is, every time we have observed X degree of IC and known the cause, it has ALWAYS been via an intelligent agency.

    CSI is the same – EVERY time we have observed CSI and known the cause, it has ALWAYS been via an intelligent agency.

    IOW those are POSITIVES.

    And in both cases we have NEVER observed nature, operating freely, doing so.

    So yes, all that has to be done to falsify the inference is to demonstrate that nature, operating freely CAN account for it.

    That said, all YOUR position amounts to is “we haven’t observed the designer(s) in action, therefore nature did it.”

  99. JayM,

    You haven’t “shot down” anything.

    Fortunately it has been demonstrated that such information requires agency involvement.

    And as you have been told empty claims of MNs do not amount to evidence.

    Do you think we determine artifacts by flipping a coin? Believe it or not we have tried and true design detection methods.

    And until someone, ANYONE, can demonstrate IC or CSI coming into existence via nature, operating freely, I say it is safe to infer it cannot do so.

    Add to that we have direct observational knowledge of agencies bringing both into existence and the design inference is solidified.

    And yes, as with ALL scientific inferences, the design inference can either be confirmed or refuted with future knowledge.

  100. To Patrick
    “I’ve been having discussions about macroevolution over the last month or so and the examples being highlighted would probably qualify as macroevolution under MacNeill’s definition but unfortunately from an informational basis they were all well under 100 informational bits and also did not result in IC objects.”

    Fantastic. I am having an email conversation with someone about this very subject and I need all the help I can get. Can you show me the calculations to get the ‘under 100 bits’ number please? It would be very useful.

  101. I just noticed

    “Unfortunately, as has been discussed extensively in multiple threads here, there is no rigorous definition of specified complexity that can be objectively applied to arrive at the same answer by different, independent individuals.

    Further, it has not been demonstrated that such information is uniquely the product of intelligence.”

    We have been getting a lot of incredible statements from people here lately. Thank God for the moderation changes that let in the people who have problems with ID and who demonstrate how ill informed they are about the debate, so that those who never comment and are honestly seeking information can see what one side has to offer versus the other. We have to assume that those who come here represent the best out there, and as such it means the fights are easier than we thought. A great example is Allen MacNeill, an evolutionary biologist, and his thoughts on macro evolution. The people who come here cannot use the usual ad hominems or the inane arguments used elsewhere and expect to get away with them to distract from the content of their arguments. They are forced to deal with facts and logic and it is amazing to see how they dodge both.

    Take the comment in question above. How many times have we said that relevant to biology the concept is FSCI not CSI and yet this comment repeats the non issue again. And it is easy to assign some calculations to this information and those here using the concept bend over backwards to be conservative on the magnitude of the calculations.

    If one wants to question the proposition that this type of information is the product of intelligence alone, then I suggest they find even one small example of something similar that is not the product of intelligence. Of course one never gets an answer to this, but only that it is unfair to say it cannot be due to non-intelligent causes. Well, we do not actually say that. We say it is highly likely that it has an intelligent origin, not that it is absolute. We say it is highly unlikely that it is due to non-intelligent origins. And we have the data to show that.

    The objections are getting infantile.

  102. jerry:

    You have raised a serious point. Cf here from 172 on, as a current case in point.

    G

  103. GSV #98

    Can you show me the calculations to get the ‘under 100 bits’ number please? It would be very useful.

    Calculating is actually very easy. The short version is that 2 bits are required to represent each nucleotide. The examples being touted typically consist of only 3 amino acid changes. 6 informational bits should be enough to encode each amino acid, but I personally bump it up to 8 bits for ease of calculation (which should also help account for any minor data compression). Thus they’re 24-informational-bit indirect or direct pathways. More specifically, the trypsinogen gene in Antarctic notothenioid fish – which I think is probably the best example available at this time – consists of repeats of three amino acids.

    I’ll copy over my English-word explanation that should make things easy to understand.

    The Explanatory Filter can take multiple types of inputs (which also makes it susceptible to GIGO and thus falsification). Two are (a) the encoded digital object and (b) hypothetical indirect pathways that lead to said objects. My name “Patrick” is 56 informational bits as an object [each letter is represented by 8 bits]. My name can be generated via an indirect pathway in a GA. An indirect pathway in a word-generating GA is likely composed of steps ranging from 8 to 24 informational bits.

    Let’s say you take this same GA and have it tackle a word like “Pseudopseudohypoparathyroidism” which is 30 letters or 240 informational bits. It can be broken down into functional components like “pseudo” (48 informational bits) and “hypo” (32 informational bits). Start with “thyroid” (56 informational bits). For this example I’m not going to check if these are actual words, but add “ism”, then “para”, and then “hypo”. “hypoparathyroidism” is a functional intermediate in the pathway. The next step is “pseudohypoparathyroidism”, which adds 48 informational bits. Then one more duplication of “pseudo” for the target.

    That may be doable for this GA, but what about “Pneumonoultramicroscopicsilicovolcanoconiosis” (360 informational bits) or, better yet since it’s more relevant to Dembski’s work (UPB), the word “Lopado­temakho­selakho­galeo­kranio­leipsano­drim­hypo­trimmato­silphio­karabo­melito­katakekhy­meno­kikhl­epi­kossypho­phatto­perister­alektryon­opto­kephallio­kigklo­peleio­lagōio­siraio­baphē­tragano­pterýgōn” (1464 informational bits)? I’m not going to even try and look for functional intermediates.

    And I’d add that none exist, although the entire word consists of functional components. So someone could argue that an indirect pathway could duplicate all of them from other words and somehow assemble them into a coherent whole.

    Now the hard part is looking at the raw code and figuring out what biological information corresponds to which biological functionality. Never mind if the entire system is like a self-decompressing executable…so keep in mind these are estimates for the “true” informational content.
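
    The bookkeeping itself is easy to mechanize. A small Python sketch (mine – the info_bits helper is invented for illustration, and as noted above this convention is only a crude proxy for the “true” informational content):

      def info_bits(s, bits_per_symbol=8):
          # the simple convention above: 8 bits per letter/amino acid by default
          return bits_per_symbol * len(s)

      print(info_bits("Patrick"))                         # 56
      print(info_bits("Pseudopseudohypoparathyroidism"))  # 240
      print(info_bits("AAA"))                             # 24: a 3-amino-acid change
      print(info_bits("ACGT", bits_per_symbol=2))         # 8: nucleotides at 2 bits each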

  104. Patrick,
    Given your method of calculation, how did you apply it to Allen’s examples? I didn’t see one piece of sequence data in the whole post, so how did you do your calculations?
    Second, why don’t you count each repeat of the amino acid triplet in the Antarctic fish in your informational equation? If they repeat over 4 times (which they do), they will exceed 100 informational bits.
    My last few comments have been hung up in moderation, but I hope this one won’t be.

    I was gone for a couple days, and much conversation has taken place, so I will embed my response here for future readers:

    In the other link I gave I noted that the “ice fish carr[ies] a partially deleted copy of alpha1 and lack the beta globin gene altogether. These deletions are inextricably linked to its lower blood viscosity…” IOW, a destructive mutation that gives a benefit in this limited environment. The number of repeats apparently required for this “functionality” is 4 repeats, or 96 informational bits. AFAIK additional repeats are unnecessary duplications. As I mentioned, tying function to biological information is the hard part, so I may be wrong on this and this example might require more than 100. No big deal either way. Not to mention, I suppose it could be argued that a degenerative change like this should not even count as FCSI, although I’d leave that determination to the experts. I personally believe special exceptions will be found where 500+ informational bits can be exceeded by non-foresighted processes, and ID theory will need to account for them, but that’s just my opinion. – Patrick

  105. jerry:

    Take the comment in question above. How many times have we said that, relevant to biology, the concept is FSCI not CSI, and yet this comment repeats the non-issue again.

    Perhaps your annoyance should be directed toward Joseph, to whom JayM was responding. Joseph said:

    Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.

  106. R0b,

    I think your reading comprehension skills need improvement. If JayM knew the difference, and he should know the difference given all the comments that have been made here, the proper response is not to chastise Joseph with a meaningless comment but to say something like:

    Joseph, you should use FCSI and not just plain specified information.

    So Joseph was just fine, because we all knew what he was talking about, but JayM was showing his stripes. If you did not see this then, as I said, your reading comprehension needs some work.

  107. jerry, so when Joseph says “specified complexity”, we all know that he means FCSI. But when JayM uses the same term and its synonymous (according to Dembski and Meyer) term CSI in his response, we know that he doesn’t mean FCSI. Got it. I’ll remember that, lest my reading comprehension be disparaged again.

    Does the same rule apply to the Stephen Meyer quote in #41?

  108. R0b,

    FSCI is a subset of CSI. Is that hard to understand? Dembski is attempting to model all intelligent actions, not just those contained in biology, which are more easily modeled because they are so obvious.

    Meyer uses the examples of language and computer software to describe the information in DNA, so it probably would have been best if he had used FCSI to make the distinction, so the slow-witted can understand better.

    Just trying to improve everybody’s reading comprehension skills so this point does not come up again.

    Just so you can understand it better: if Craig Venter puts his name in the DNA using some code, a different functional relationship is being used. He is using the nucleotides to specify a name, while a gene is specifying a protein. It is likely that a different specifying scheme is being used for a lot of the remaining part of the genome. If you do not understand this, I or someone else will spell it out in more detail so these misconceptions don’t go on and get repeated.

  109. FSCI is a subset of CSI. Is that hard to understand?

    No, I understand. What I don’t understand is that a lot of ID proponents, including Meyer and Joseph, fail to choose what you think is the best term, and it’s no big deal. But when JayM mirrors Joseph’s terminology, it’s cause for complaint.

    Just trying to improve everybody’s reading comprehension skills so this point does not come up again.

    Whose reading comprehension are you trying to improve?

    If Craig Venter puts his name in the DNA using some code, a different functional relationship is being used. He is using the nucleotides to specify a name while a gene is specifying a protein.

    You realize that non-watermarked DNA is routinely used for identification in forensics, just as Venter’s watermarks are used for identification. Does that mean that all DNA, including non-coding, has FSCI?

  110. Also, jerry, in regards to FCSI vs. CSI, you said in another thread:

    In FCSI the information under analysis is doing the specifying and is easily understood when expressed that way. In CSI the information is what is specified (the opposite of FCSI) and does not necessarily have a function nor a logical connection to anything and that is where the morass is.

    So answer me this: Is it possible for information to specify something but not be specified by something else?

  111. Rob:

    There is a history there on the terminology. It starts with the noted origin-of-life researcher Orgel, in 1973:

    Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]

    Thus, even CSI had its modern discussion roots in a molecular-biological context.

    Dembski, recognising the general applicability of the concepts implicated in the above sort of remarks, and in light of his observations on how statistical and mathematical reasoning sought to distinguish two types of contingency: directed and credibly un-directed, has sought a general mathematical framework and associated models. (How successfully may be debated, but I suspect he has been significantly more successful than his detractors will admit.)

    Going beyond that, over the past several years at UD, some of the commenters and contributors — going back to the sort of remarks that we read in Thaxton et al’s TMLO ch 8 on Yockey, Wickens etc — have begun to use FSCI as a descriptive term for just what is stated: functionally specified, complex information, such as we see not only in DNA but in computers, cybernetics, telecomms etc.

    In parallel with all of that, since 2005, Trevors & Abel and now Durston, Chiu et al have been using functional sequence complexity [FSC] as a related and measurable concept, contrasted with orderly and random sequence complexity. As at 2007, 35 measured values of FSC in Fits — functional bits — have been published for proteins and related molecules.

    As to the use of DNA sequences in a context of recognising their uniqueness — but not necessarily having identified function — that is simply high tech fingerprinting.

    When the regulatory language[s?] increasingly evident in DNA begin to be cracked, then we will be in a position to say a lot more on FSCI in DNA. (I for one look forward to being in a position to reverse engineer the self-assembling, self-directing factory technology at work here. I can think of a lot of possible areas of application of such science and technologies! Just, this time around, please, let’s keep it out of the hands of the generals!)

    But already, just the protein coding portions are telling us plenty.

    GEM of TKI

  113. Rob:

    Re: Is it possible for information to specify something but not be specified by something else?

    Read here, on lucky noise.

    What is in principle possible — chance can access any configuration of a contingent system whatsoever — becomes, under the relevant circumstances, so maximally improbable on the gamut of our observed universe that, even absent direct evidence, we confidently infer on best, observationally anchored explanation that intelligence is responsible for FSCI.

    GEM of TKI

  114. kairosfocus:

    There is a history there on the terminology. It starts with noted origin of Life researcher, Orgel, in 1973:
    Living organisms are distinguished by their specified complexity.

    I would love to see someone make a case that Orgel meant the same thing by “complexity” that Dembski does.

    As to the use of DNA sequences in a context of recognising their uniqueness — but not necessarily having identified function — that is simply high tech fingerprinting.

    So do Venter’s watermarks have function?

  115. kairosfocus:

    Re: Is it possible for information to specify something but not be specified by something else?

    Read here, on lucky noise.

    What is in principle possible…

    You’re saying that it’s logically possible but not probable? I think we’re talking past each other. I’m curious to see jerry’s answer.

  116. Mark Frank (50):

    Presumably this is the lizard you are talking about. It did evolve some interesting new features in about 30 generations. But it was hardly saltation. A cecal valve is not an “entirely new digestive system” and there is no reason to believe it appeared fully formed in one generation. It is an example of a relatively significant change in the phenotype in a just a few generations.

    Dear Mark:

    Please think before answering. What does this last sentence mean: “It is an example of a relatively significant change in the phenotype in … just a few generations.”?

    This is an observation, devoid of thought.

    Where did the information come from for all these changes? Did all this “new” information come in just 36 generations (or was it 30, or 20, or 10, or 5, since no one was looking)? There is no way on earth that all this “new” information could come about so quickly. Therefore, it is safe to assume the information was already present. This moves us, then, into the realm of “evo-devo”, and major genetic networks being turned on and off. (In PZ Myers’s take on all this, he points out that behavior had changed, and that skull size had changed to allow larger bite size, etc. That is, coordinated changes.) Well, “evo-devo” and “gradualism” cannot coexist. And Darwinism, if it represents ANYTHING AT ALL, represents “gradualism”. So, we now have two problems represented by this lizard: the death of Darwinism, and the formation of this “new” information, since Darwinism (= the Modern Synthesis) is now dead.

    Notice I’ve thought about the implications of what we know about this “new” phenotype and how it happened. You have not.

  117. “Is it possible for information to specify something but not be specified by something else?”

    I am not aware of any. That is the issue under debate. In some cases we do not know what is doing the specifying, but we assume, with reason, an intelligence. In the case of language, it is a person who is doing the speaking or writing, though the person may be unknown, as with a cave drawing. In the case of computer code it is a person doing the specifying. For DNA, we do not know what specified it.

    But since there is no known example of any natural process that specifies or leads to anything with FCSI, we assume it is not a natural process. It remains a possibility, but the only known specifiers of FCSI are intelligences.

  118. jerry @117

    But since there is no known example of any natural process that specifies or leads to anything with FCSI,

    First, what is this FCSI of which you speak? Do you have a rigorous mathematical definition for it? Have you shown that the rigorous definition is applicable to biological systems? Does it reflect known evolutionary mechanisms?

    I thought not.

    we assume it is not a natural process. It remains a possibility, but the only known specifiers of FCSI are intelligences.

    You can’t even show that. Further, you are again assuming your conclusion. If we assume, for the sake of argument, that FCSI is rigorously defined and that it has been demonstrated to exist in biological systems, you can no longer say “we assume it is not a natural process.” There is no valid, rational, scientific reason for making that assumption. From the perspective of methodological naturalism, the operating philosophy of modern science, confirming the existence of FCSI in a biological system would more strongly suggest that it is not a unique product of intelligence than that some intelligence created biological systems.

    Proof by repeated assertion is unconvincing.

    JJ

  119. “You realize that non-watermarked DNA is routinely used for identification in forensics, just as Venter’s watermarks are used for identification. Does that mean that all DNA, including non-coding, has FSCI?”

    You should be able to answer this yourself. The answer is more than likely no. There could be lots of examples, such as a retrovirus. Much of the so-called junk DNA is now thought to specify a function, even if we do not know what that function is. This is based on the fact that most of the genome is transcribed. But it is likely that some or a lot of it may be just what it was called, junk, and may not specify anything.

    As research findings accumulate, there may be a different conclusion. So for DNA, some is definitely FSCI, some is likely FSCI, and some is probably not, and the proportions will probably change over time as biologists figure out more about genomes.

    “Intelligence” doesn’t do anything. Intelligent agents do things. That is, intelligent agents do things that require “intelligence”, such as making choices, specifying means, and directing processes toward outcomes (and compensating for deviations produced by objects and processes in the environment). Programs don’t write themselves; they are written by “intelligent agents”. Ergo, if “intelligent design” exists, it exists because of the actions of an “intelligent agent“.

    The same is the case for natural selection. Natural selection doesn’t do anything. It’s an outcome, not a “process”. To be specific, natural selection is an outcome of three separate, but related processes:
    1) variation,
    2) inheritance, and
    3) reproduction.
    Given these three processes, the outcome in an environment with limited resources is:
    4) unequal, non-random survival and reproduction. This outcome is what evolutionary biologists mean by the term “natural selection”.

    Ergo, natural selection cannot be both a “creative process” by which biological entities and processes come into being and an outcome of such a process. On the contrary, processes 1 through 3 listed above are the means by which biological entities and processes come into being, and #4 is what we perceive as the outcome: change in the characteristics present in a population of organisms over time (i.e. evolution). [1]
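
    As an aside, the outcome-not-process point can be made concrete with a deliberately toy simulation (a purely illustrative sketch, not anyone’s published model): only variation, inheritance and reproduction are coded as processes; the “selection” is simply read off afterward.

    import random

    random.seed(1)
    population = [0.5] * 50  # each value is a heritable trait

    for generation in range(100):
        offspring = []
        for trait in population:
            for _ in range(2):  # reproduction: surplus offspring
                # variation on an inherited trait
                child = trait + random.gauss(0, 0.02)
                offspring.append(min(max(child, 0.0), 1.0))
        # limited resources: survival chance tracks the trait value
        offspring.sort(key=lambda t: t * random.random())
        population = offspring[-50:]

    # the outcome, "natural selection": the mean trait has risen above 0.5
    print(sum(population) / len(population))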

    This means that if “intelligent design” happens, it must happen by means of the actions of an intelligent agent in one or more of the processes listed above. I think that most people who post and comment at this website would agree that it probably operates as part of #1 (variation). This was Asa Gray’s belief upon reading Darwin’s Origin of Species.

    Ergo, ID isn’t even in complete opposition to the concept of evolution by natural selection. ID supporters simply disagree with evolutionary biologists on the source of the variations which provide the “raw material” for the demographic “sorting and preservation” that produces the outcome we refer to as “natural selection”.

    So, what “intelligent agents” are proposed to explain the origin of “intelligently designed” variations, and via what mechanism(s) do such agents operate?

    [1] Note that this “change in the characteristics present in a population of organisms over time” may be either gradual or episodic (the fossil record inclines toward the latter conclusion).

  121. Allen_MacNeill:

    Ergo, ID isn’t even in complete opposition to the concept of evolution by natural selection. ID supporters simply disagree with evolutionary biologists on the source of the variations which provide the “raw material” for the demographic “sorting and preservation” that produces the outcome we refer to as “natural selection”.

    Yes! That is correct, and very clearly stated. For many of us, natural selection is not the problem; random (non-foresighted) variation is the problem.

    Allen_MacNeill:

    So, what “intelligent agents” are proposed to explain the origin of “intelligently designed” variations…

    Certainly one proposed “intelligent agent” is this character known as God. The God that has been proposed would seem to have some of the essential criteria: all-knowing, ever-present, timeless.

    That said, one certainly encounters some fundamental differences between the common view of “God” and the nature of a designer of nature. If there is a designer of nature, then said designer is an experimenter (weren’t there about 100 phyla generated during the Cambrian explosion, of which only about 20 have survived?). Said designer of nature uses violence and aggressiveness to his own ends. Others, of course, have proposed other “designers”.

    , and via what mechanism(s) do such agents operate?

    I actually propose that the designer has influenced individual mutational events to pull off what we see in nature. I personally suspect that the designer’s playground is the quanta. That said, while Ken Miller also suggests that God dances in the quanta, I do not agree with him, in that I believe that God’s activity is detectable, that there is good evidence for design, for foresight.

  122. JayM:

    Re 118: First, what is this FCSI of which you speak? Do you have a rigorous mathematical definition for it? Have you shown that the rigorous definition is applicable to biological systems? Does it reflect known evolutionary mechanisms?

    Perhaps you are not monitoring the Thesaurus thread anymore, but from 106 – 108, 112, and especially 172 – 173 on Thursday Feb 19, your point was answered, complete with links to and citations from the peer-reviewed literature. In that literature, there is a published table of 35 values of functional sequence complexity in Fits, i.e. functional bits [one of the relevant quantitative metrics].

    And, that is after the matter is already addressed in the weak argument correctives and the associated glossary, with a link to the relevant paper.

    Besides, FSCI — “functionally specified complex information” — is not a mysterious quantity requiring special definition; instead, the phrase is a simple and accurate DESCRIPTION of something that is very familiar, as near as your hard disk drive’s files, which have in them X bits or bytes [8-bit clumps] of functional data. Indeed, when you post a typical comment here, you are using in excess of 1,000 7-bit ASCII characters, at 7 bits per character. It is no fault of modern digital technology that DNA happens to have in it 4-state digitally coded data strings that specify the protein sequences that will fold, agglomerate and function in life systems.
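
    A trivial sketch of that measurement for any snippet of comment text (Python; 7-bit ASCII assumed, as above, and the sample string is arbitrary):

    comment = "FSCI is as near as your hard disk drive's files."
    print(7 * len(comment))  # functional bits at 7 bits per ASCII character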

    In short, you seem to be trying to make a rhetorical mountain out of an easily flattened out mole-hill.

    And, when it comes to the wider concept, CSI, I find the Dembski mathematical models coherent and relevant, though challenging to apply to DNA or the like.

    You will see in the WACs and the glossary a brief discussion of his metric for CSI, including a calculated value for a hand of 13 cards, a simple case raised in this blog by Mark Frank. If Dr Dembski’s metric were incoherent or irrelevant to the real world, it would not have been possible to make such a calculation.

    Similarly, much heavy weather has been made by Darwinist advocates over the “dubious” practice of using flat distributions of probabilities across what in statistical thermodynamics are called microstates.

    Of course, that is simply the commonplace default Laplacian indifference criterion at work, in a context where we often have no real reason to move away from such a default. (E.g. in DNA chaining, we have no reason to see the side chains as strongly blocking successions between A, G, C, T monomers; and in proteins the constraints are not decisive on chaining — the real constraints are post-chaining, i.e. on folding especially.)

    But in fact, from e.g. Bradley’s June 2003 Cytochrome-C value for ICSI [and note whose work he is building on; this has been in App 1 of my always linked for quite some time now . . . ] and from the simple fact of the H-metric, i.e. a standard info theory equation for ENTROPY, i.e. average info per symbol,

    H = – [SUM over i] pi log2 pi,

    we see that we can address non-equiprobable cases. [BTW, this is also connected to the thermodynamic version of entropy, which is why the name. Cf. discussion in my always linked, with onward links. Robertson's Statistical Thermophysics is a useful read.]
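
    For onlookers, that average-info-per-symbol metric is a one-liner to compute; a minimal sketch in Python (the non-flat distribution is made up purely for illustration):

    import math

    def H(probs):
        # average information per symbol, -SUM p_i * log2(p_i), in bits
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(H([0.25, 0.25, 0.25, 0.25]))  # flat 4-state code: 2.0 bits/symbol
    print(H([0.4, 0.3, 0.2, 0.1]))      # non-equiprobable: ~1.85 bits/symbol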

    Bradley’s value (after first dealing with an equiprobable distribution model, and then factoring in the observed non-even distribution of aa’s in the protein) is:

    ICSI = log2 (4.35 x 10^74) = 248 bits

    [with] Wo/W1 = 1.85 x 10^137 / 4.26 x 10^62 = 4.35 x 10^74 [this "easily" converts from statistical weights of macrostates to a probability estimate . . . ]

    [Where he also cites that] Two recent experimental studies on other proteins have found the same incredibly low probabilities for accidental formation of a functional protein that Yockey found

    1 in 10^75 (Strait and Dewey, 1996) and

    1 in 10^65 (Bowie, Reidhaar-Olson, Lim and Sauer, 1990).
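
    Converting such “1 in N” probabilities into bit measures is mechanical: take log2 of N. A quick illustrative sketch, using just the values cited above:

    import math

    def bits_from_odds(one_in):
        # convert a "1 in N" probability into informational bits
        return math.log2(one_in)

    print(round(bits_from_odds(4.35e74)))  # Bradley's Cytochrome-C: ~248 bits
    print(round(bits_from_odds(1e75)))     # Strait and Dewey: ~249 bits
    print(round(bits_from_odds(1e65)))     # Bowie et al: ~216 bits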

    In short, we see here highlighted the key difference between rhetoric and scientific dialogue towards truth. Rhetoric seeks to persuade of a given conclusion; science at its best is seeking the truth about our world, in light of the facts, wherever they may point.

    GEM of TKI

  123. JayM:

    Also re yr: If we assume, for the sake of argument, that FCSI is rigorously defined and that it has been demonstrated to exist in biological systems, you can no longer say “we assume it is not a natural process.” There is no valid, rational, scientific reason for making that assumption. From the perspective of methodological naturalism, the operating philosophy of modern science, confirming the existence of FCSI in a biological system would more strongly suggest that it is not a unique product of intelligence than that some intelligence created biological systems.

    1 –> The nature and demonstrated status of FSCI is not an assumption; it is a fact of routine experience and observation, as just pointed out. Rhetoric flying determinedly in the teeth of the evidence does not turn a fact into an assumption.

    2 –> We have many observed cases of the origin of FSCI — empirical data is the foundation of inductive, scientific reasoning — and, in all of these, it is the product of observed intelligence in action. (If you dispute this, simply produce a credibly observed counterexample, e.g. 143+ ASCII characters in contextually responsive English, credibly produced by chance + necessity, e.g. zener diode noise digitised, evened off and spewed across a disk’s surface. [If it’s good enough to run lotteries, it’s good enough for me as a random source.])

    3 –> This is multiplied by the needle-in-a-haystack search challenge that hypothetical chance + necessity mechanisms face before they can get to the beaches of islands of function, once we are in excess of about 1,000 functional bits; e.g. natural selection “responds” to differences in degree of FUNCTION. [Onlookers, note how this keeps on getting ducked — a well known tactic of the rhetor: pass by, as if it does not exist, whatever does not fit your case. Including Behe’s observed edge of evolution, based on more reproductive events per year — across the better part of a century — than are credibly true of all Mammalia for its entire existence on earth.]

    4 –> We also see here the injection of Lewontinian-NAS a priori materialism in the name of “modern” science. In case you have not been watching in recent weeks, let us again cite the former, noting that as at 2005 – 2008 the NAS has plainly made it “official” dogma:

    We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [1997, NY review of books]

    5 –> In case you didn’t get the memo that this is now official dogma, courtesy the US NAS acting as friendly local magisterium, let me cite:

    In science, explanations must be based on naturally occurring phenomena. Natural causes are, in principle, reproducible and therefore can be checked independently by others. If explanations are based on purported forces that are outside of nature, scientists have no way of either confirming or disproving those explanations. Any scientific explanation has to be testable — there must be possible observational consequences that could support the idea but also ones that could refute it. Unless a proposed explanation is framed in a way that some observational evidence could potentially count against it, that explanation cannot be subjected to scientific testing. [Science, Evolution and Creationism, 2008, p. 10]

    6 –> Translating: (i) explanations must either be in terms of chance + mechanical deterministic forces or else reducing on origin to the spontaneous action of such, and (ii) the only possible contrast to “natural” is “supernatural” — strictly verboten!

    7 –> But a simple read of Newton’s 1688 General Scholium to his famous Principia [the point of departure work for true modern science] will show that he GROUNDS modern science on a theistic worldview, including the vision that the Pantokrator has set an ordering law for the realm of nature that he intelligently created and sustains, which we may study through natural philosophy, giving rise to reliable knowledge of the world, i.e. science. [Lewontin is simply grossly wrong when he goes on to assert that a world in which miracles are possible is one in which nature would be chaotic.]

    8 –> In fact, Newton goes on to infer that the project of what is called natural theology [cf e.g Rom 1:19 - 20 etc] is either integral to or a reasonable extension from Natural Philosophy:

    This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being. And if the fixed stars are the centres of other like systems, these, being formed by the like wise counsel, must be all subject to the dominion of One; especially since the light of the fixed stars is of the same nature with the light of the sun, and from every system light passes into all the other systems: and lest the systems of the fixed stars should, by their gravity, fall on each other mutually, he hath placed those systems at immense distances one from another [i.e. grounds the uniformity principle of science on God's universal dominion; hence, LAWS of nature]. . . .

    We know [God] only by his most wise and excellent contrivances of things, and final cause [i.e from his designs]: we admire him for his perfections; but we reverence and adore him on account of his dominion: for we adore him as his servants; and a god without dominion, providence, and final causes, is nothing else but Fate and Nature. Blind metaphysical necessity, which is certainly the same always and every where, could produce no variety of things. [i.e necessity does not produce contingency] All that diversity of natural things which we find suited to different times and places could arise from nothing but the ideas and will of a Being necessarily existing . . . And thus much concerning God; to discourse of whom from the appearances of things, does certainly belong to Natural Philosophy.

    9 –> Of course, some would object (in the teeth of massive history) that this founding father of modern science is not representative of “modern” science. To which, the proper rejoinder is that the materialism presented to us in the name of science is not modern science either — most of its chief views, attitudes, agendas, conclusions and difficulties were more than anticipated in Lucretius’ philosophical poem on the nature of things something like 2,000 years ago. As Wiki summarises:

    The poem opens with a magnificent invocation to Venus, whom he addresses as an allegorical representation of the reproductive power, after which the business of the piece commences by an enunciation of the great proposition on the nature and being of the gods, which leads to a grand invective against the gigantic monster religion, and a thrilling picture of the horrors which attends its tyrannous sway. Then follows a lengthened elucidation of the axiom that nothing can be produced from nothing, and that nothing can be reduced to nothing (Nil fieri ex nihilo, in nihilum nil posse reverti); which is succeeded by a definition of the Ultimate Atoms, infinite in number, which, together with Void Space (Inane), infinite in extent, constitute the universe . . . .

    The problem that arises from an entirely deterministic and materialistic account of reality is free will. Lucretius maintains that the free will is possible through the random tendency for atoms to swerve (Latin: clinamen).

    10 –> It is a simple point to observe that randomness is no more rational than raw, Sir Francis Crick style reductionistic determinism. Reppert’s summary on the problem of trying to get to a credible mind from chance + necessity alone is apt:

    . . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

    11 –> But, it is notorious that we rely — for excellent reason — on our ability to think rationally at least some of the time [and Plantinga has shown how NS, by rewarding behaviour not belief, has a hard time supporting accuracy of especially abstract beliefs, as many different contradictory beliefs are behaviourally equivalent without being true]. So, materialism is both factually challenged and self-referentially incoherent. [A challenge that is glossed over in ever so much of the confident presentation of materialism as the chief foundation stone of that epitome of rationality, science.]

    12 –> And that fact-challenged status comes right out in JayM’s plainly question-begging assertion that if FSCI is found in a biosystem, then that somehow in effect “proves” that it had to come about by, in effect, evolutionary materialistic processes.

    13 –> For, we already see that there is a massive probabilistic challenge to get to functional complex info of the magnitude found in DNA by chance + necessity, starting with any empirically credible pre-biotic environment. Then, we also routinely observe such FSCI being produced by intelligent agents. So on inference to best explanation, design is a far better candidate for explaining DNA than C + N in some imaginary pre-biotic soup. But, the NAS acting as magisterium will have none of this unfettered inference to best explanation nonsense!

    And therein lieth the REAL issue.

    GEM of TKI

  124. To Patrick
    “I’ll copy over my English word explanation that should makes things easy to understand.”

    Thank you for the reply, but can you show me the math instead of the words, please? I am a computer programmer with a mathematics degree; words are not my strong point, numbers are.

    I was gone for a couple days, and much conversation has taken place, so I will embed my response here for future readers:

    Machine code is binary, thus one bit per symbol. The biological code is a quaternary code, thus 2 bits per nucleotide. I was just explaining the overall concept and using an easy example. I didn’t bother pulling up any sequence data, but in short

    informational bits = (length of functional sequence) X 2

    For the ice fish example I was assuming that the 3 amino acids were encoded via ~12 nucleotides. So 12 X 2 = 24 informational bits. Then 24 X 4 repeats = 96 informational bits. As I said, easy.
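
    Since you asked for math rather than words, here is that formula as a minimal code sketch (Python, purely illustrative; the ~12-nucleotide length and the 4 repeats are the assumed values from above):

    def informational_bits(sequence_length_nt):
        # 2 informational bits per nucleotide (quaternary code)
        return 2 * sequence_length_nt

    per_repeat = informational_bits(12)  # ~12 nucleotides -> 24 bits
    print(per_repeat, 4 * per_repeat)    # 24 96 (4 required repeats -> 96 bits)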

    Also, the biological examples I gave on this thread were based upon generalizations. So the accuracy could be questioned, but I highly doubt the numbers are going to change dramatically. And in general I prefer to deal in straight informational bits instead of probabilities (1 in 10^150 corresponds to 500 informational bits).

    Here are 2 other examples where I ran the numbers: here and here

    - Patrick

  125. Just so that I am clear-

    Specified complexity/ CSI as it relates to biology equates to biological function as stated by Wm Dembski in “No Free Lunch”:

    Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- Wm. Dembski page 148 of NFL

    In the paper “The origin of biological information and the higher taxonomic categories”, Stephen C. Meyer wrote:

    Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information–that is, specified complexity from mere complexity. This review will use this term as well.

    In order to be a candidate for natural selection a system must have minimal function: the ability to accomplish a task in physically realistic circumstances.- M. Behe page 45 of “Darwin’s Black Box”

    He goes on to say:

    Irreducibly complex systems are nasty roadblocks for Darwinian evolution; the need for minimal function greatly exacerbates the dilemma. – page 46

    IC- A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, non-arbitrarily individuated parts such that each part in the set is indispensable to maintaining the system’s basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system. Page 285 NFL

    Numerous and Diverse Parts If the irreducible core of an IC system consists of one or only a few parts, there may be no insuperable obstacle to the Darwinian mechanism explaining how that system arose in one fell swoop. But as the number of indispensable well-fitted, mutually interacting, non-arbitrarily individuated parts increases in number & diversity, there is no possibility of the Darwinian mechanism achieving that system in one fell swoop. Page 287

    Minimal Complexity and Function Given an IC system with numerous & diverse parts in its core, the Darwinian mechanism must produce it gradually. But if the system needs to operate at a certain minimal level of function before it can be of any use to the organism & if to achieve that level of function it requires a certain minimal level of complexity already possessed by the irreducible core, the Darwinian mechanism has no functional intermediates to exploit. Page 287

  126. Allen:
    Ergo, if “intelligent design” exists, it exists because of the actions of an “intelligent agent“.

    Bingo!

    Allen:
    The same is the case for natural selection. Natural selection doesn’t do anything. It’s an outcome, not a “process”. To be specific, natural selection is an outcome of three separate, but related processes:
    1) variation,
    2) inheritance, and
    3) reproduction.
    Given these three processes, the outcome in an environment with limited resources is:
    4) unequal, non-random survival and reproduction. This outcome is what evolutionary biologists mean by the term “natural selection”.

    How can NS be non-random if it depends on random inputs?

    Variation is random.

    What gets inherited is random.

    And fecundity can only be judged after the fact.

    Also added to ID would be some type of artificial selection.

    Now this AS could be part of the built-in programming.

    This programming allows parts to be kept even though they do not yet provide any advantage. And this allows for construction to occur and keep occurring until the final product is put into play.

  127. jerry:

    “You realize that non-watermarked DNA is routinely used for identification in forensics, just as Venter’s watermarks are used for identification. Does that mean that all DNA, including non-coding, has FSCI?”

    You should be able to answer this yourself.

    Okay, I will, by rephrasing the question as a statement:

    Non-watermarked DNA is routinely used for identification in forensics, just as Venter’s watermarks are used for identification. So non-watermarked, noncoding DNA has function and specifies something, just like watermarked DNA. Therefore, it has FSCI.

    What’s wrong with that logic?

  128. kairosfocus @123

    The nature and demonstrated status of FSCI is not an assumption, it is a fact of routine experience and observation; as just pointed out.

    I’m afraid that you didn’t point anything out; you simply continued to make the same baseless assertions I’ve been challenging here. FSCI is not a rigorous measurement that can be applied to biological systems, not least because it assumes creation ex nihilo. It does not take into consideration the known evolutionary mechanisms that build on previous success.

    FSCI, as you described it, boils down to “Gee, it’s pretty unlikely that this protein or gene came together all at once, therefore some intelligence must be behind it.” Of course it’s unlikely to have come together all at once, that’s why no biologist suggests that.

    Nowhere in your prodigious amount of text have you addressed the core issues I raised:

    1) What is the rigorous definition of CSI (or whatever other measure you wish to use)? Note that this must allow anyone with the requisite mathematical skills to compute the same value in the same units from the same object.

    2) Demonstrate that the measurement is applicable to biological systems. Assumptions of uniform distribution or creation ex nihilo are an indication that the measurement is not applicable. MET mechanisms must be accounted for.

    3) Demonstrate that this measurement uniquely identifies intelligence. The argument that “We see CSI in human artifacts and we see CSI in biological systems so biological systems must require intelligent input.” is begging the question.

    Neither you nor any other CSI/FSCI proponent here has addressed any one of these issues, yet you continue to repeat your baseless assertions. That is a significant reason why mainstream scientists do not take ID seriously — we make it too easy to ignore us.

    JJ

  129. Attn: Moderators

    I see that my posts are still being subject to moderation delays. I would like to request that you allow this one through, as a courtesy to the people with whom I’m conversing.

    I am bowing out of all threads in this forum due to the delays related to moderation. It is difficult enough to keep up with all the threads and respond to everyone who has taken the time to participate in the discussions without the added overhead of delays and posts that simply do not appear. I appreciate the time and effort you have all contributed, even when we don’t agree. If you wish to continue the conversation in an unmoderated forum, please suggest one and I’ll join you there.

    JJ

  130. “Non-watermarked DNA is routinely used for identification in forensics, just as Venter’s watermarks are used for identification. So non-watermarked, noncoding DNA has function and specifies something, just like watermarked DNA. Therefore, it has FSCI.

    What’s wrong with that logic?”

    Because it is not logical. The connection is mediated by an intelligent person and does not automatically specify something else, as DNA does with a protein. The connection would disappear without the intelligent intermediary, who is the one actually making the connection. You could use the same argument with a rock you found in the woods or at a crime scene: it could be used to build a house or be a murder weapon.

    Come on, don’t you see what you are doing? You must. You appear desperate to find a gotcha and are not directing your energies to understanding the issues.

    If FSCI is not a valid argument, find an alternative and show how it arose, and do not use an example that takes an intelligence to make the connection. If there were such an example, after all these thousands of years someone would have noticed it and made a big deal of it.

    At best the DNA in your example points to itself and does not beget another entity with a function.

  131. Hi Mr. ReMine. Just wanted to say that I would be interested in learning more about Message Theory.

    Have a good weekend everyone.

  132. jayM:

    First, what is this FCSI of which you speak? Do you have a rigorous mathematical definition for it? Have you shown that the rigorous definition is applicable to biological systems? Does it reflect known evolutionary mechanisms?

    Let me wade into this one.

    FCSI is complex information (having too much information to reasonably have occurred by chance {chance: 1 in 10^150}) that specifies (is a map used to make) something that functions.

    Now, the definition of FCSI does not ipso facto establish cause. It has been established that intelligence (human) can produce FCSI (technical drawings of machinery, for example).

    The neo-Darwinists argue that evolution can also. For them to make their case, they must first show that an evolvable situation can naturally occur that requires less than complex information (such as a reproducing molecule set whose chance of occurrence is better than 1 in 10^150). Second, they must show a reasonable and statistically supportable path from this simple reproducer to modern complexity in 4 billion years. Can they do that? They certainly have not yet done it — far from it.

  133. bFast:

    JayM has for several days been putting up remarks on FSCI not being a well-defined or measurable concept.

    He has been answered several times in several contexts [e.g. cf. the WAC and glossary on CSI, FSCI and measures, also excerpts from peer-reviewed papers as linked at 102 above, as well as the one you picked up at 118 or so]. But he still keeps on saying effectively the same thing, dismissing all answers and onward links as though they are meaningless.

    That includes dismissing a 2007 Durston et al peer reviewed paper that publishes a table of 35 values of FSC in Fits; as well as a wider discussion of the OSC vs RSC vs FSC concept and its application by Trevors and Abel.

    To give one instance from the table:

    Flu PB2, with 608 amino acid residues [aa], has 1,692 sequences with 2,628 bits in the null state, so FSC is 2,416 Fits, and FSC density is 4.0 Fits/aa.

    Not to mention, the core FSCI concept is quite simple and obvious [i.e. it is a description of a common fact in today’s information age]: we have situations where information functions in systems and can be measured in bits, so that when the bit length gets beyond say 1,000, even if the observed universe were regarded as a search for the FSCI, it could not sample more than 1 in 10^150 of the config space. (That gives teeth to your 1 in 10^150 remarks just above.)
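
    For concreteness, the null-state figure and density for that Flu PB2 case can be checked in a couple of lines (a sketch, assuming 20 equiprobable amino acids in the null state, per the published method):

    import math

    aa_count = 608                        # Flu PB2 residues
    null_bits = aa_count * math.log2(20)  # null state: ~4.32 bits per residue
    print(round(null_bits))               # ~2628 bits, matching the table

    fsc_fits = 2416                       # published FSC for PB2, in Fits
    print(round(fsc_fits / aa_count, 1))  # density: ~4.0 Fits per residue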

    Sad to say, JayM’s behaviour over the past few days does not come across as a serious dialogue based on addressing empirically grounded facts, towards understanding and truth.

    Let us hope that we see a serious engagement from him over the next day or so, in light of the above and the linked.

    GEM of TKI

  134. JayM:

    re 128:FSCI is not a rigorous measurement that can be applied to biological systems, not least because it assumes creation ex nihilo. It does not take into consideration the known evolutionary mechanisms that build on previous success.

    1 –> Could you kindly specifically document this claim?

    2 –> For instance, kindly explain how the TA and Durston et al papers fail to address biological contexts, and fail to produce valid FSCI values; including how such a presumably gross error escaped the attention of the peer reviewers.

    3 –> Also, please point out just how the FSCI concept — that we have empirically observed information that functions and so is functionally specific, and that is complex as it takes up significant storage [and is not simply compressible or easily discoverable by a random-walk search] — ASSUMES creation ex nihilo?

    (FYI, a view of Creation in which God used a big bang at 13.7 BYA and guided OOL and macroevolution thereafter qualifies as creation ex nihilo — i.e. it is a view that God creates the physical cosmos, and that matter (in whatever form) is not eternal. [Contrast, say, the fairly common circa C1 - 3 AD Gk concept of the Demiurge forming recalcitrant but eternally existing matter to crudely reflect the eternal forms; leading to a messed-up physical world, so that the body becomes the prison of the soul and salvation is the business of acquiring secret knowledge so we can escape being re-imprisoned. (Try Simon Magus and his First Thought, Helen the former slave and lady of the night.)])

    4 –> Now, in general, once coded info of significant complexity is used, MOST configs are non-functional. E.g. words of 1,000 bits length or equivalent will specify a space of ~10^301 configs, so that only a tiny fraction can be functional. Evolutionary mechanisms as proposed deal with differential success of functional configs. But the first challenge is getting to an island of function in a pre-biotic context. So, how does a measure of the config-space challenge that starts BEFORE evolutionary mechanisms may apply fail to account for such mechanisms?

    5 –> Similarly, post OOL, we are looking at body-plan level transformations for macroevo. These credibly require increments in DNA — which is a functional, complex, digital data string — of order 10′s to 100′s of millions of bits, many dozens of times over. How, then, does pointing out that functional islands in such astonishingly large config spaces will be very hard to find, fail to address the capacity of evolutionary mechanisms — mechanisms [RV + NS etc, so we have differential reproductive success] that focus on incremental improvements WITHIN islands of function?

    GEM of TKI

  135. All:

    At this stage, I suspect that we are seeing a Panda’s Thumbster or Talk Origins [or ilk] attempt to mischaracterise FSCI in order to then — without proper warrant — claim that it is a confused, and useless concept; brushing it aside rhetorically.

    But in fact, it is at root a simple descriptive phrase, one that is pretty much self-defining if anything:

    1 –> Some things require/use information to function, and that info is specified by the functionality it achieves.

    2 –> That information is sometimes fairly complex, and when that happens, it requires a fair quantity of storage, which can be measured in bits at work, i.e. bits that are functional.

    3 –> To illustrate, think here of a CD which is empty — 700 or so MBytes of bits that in that context are set up to provide storage — i.e. formatting and precise organisation. Then load some files, of reasonable size.

    4 –> That will be complex and functionally specific in some externally recognisable context, requiring hardware, algorithms, programs, programming/storage languages and associated onward target applications to read it and put it to work; i.e. info storage systems require FURTHER FSCI to work.

    5 –> Look at the DNA-ribosome-enzyme etc. info storage and processing system in cells: DNA stores, ribosomes etc. read and translate, creating amino acid chains that then fold, agglomerate and are transported to use-sites, where they may for instance self-assemble into a functioning flagellum. (Three weeks back, Feb 3, there was a video hosted here at UD on that onward self-assembly, which is in part based on the precise structures and capacities of the assembled proteins.)

    6 –> Then, look back to the OOL researchers in the 1970′s to 80′s, to see that this has been recognised for a generation at least as applying to life in the cell, leading them naturally to form the concepts — NB: definitions try to give precise borders to concepts, i.e. concepts are based on abstracting key commonalities of examples and are logically/epistemologically prior to precising definitions — CSI and FSCI:

    Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]

    [TBO summarise in TMLO ch 8, 1984:] Yockey7 and Wickens5 develop the same distinction, that “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future. [I add: save in obfuscatory rhetoric]

    7 –> So, by 1984, the CONCEPTS for FSCI and CSI were identified and exemplified with “this is that” and “this other is NOT that” cases, by leading — and non-ID — OOL researchers.

    8 –> So, the initial generation of ID thinkers and researchers, starting with Thaxton et al, built on EXISTING concepts that were known to be relevant to the context of OOL and the functioning of the cell based on information rich organisation.

    9 –> In particular, we may observe that FSCI and CSI are actually fairly familiar to those who have had to design, develop, debug or troubleshoot information-based technological systems. [Mystery solved on why such a high proportion of ID thinkers and workers come from fields that use CSI- and FSCI-based systems, making us familiar with the concepts and their most credible causes. Let’s just note that biologists as such are usually not familiar at design and development level with such complex info-based systems.]

    10 –> And so, the inference that where we see such systems design is a known cause, and therefore a serious candidate for best explanation, is a very obvious one. But, how does one make such a distinction on a reasonable and objectively warranted basis?

    11 –> Already in TMLO there is a hint, for Thaxton et al address not only classical thermodynamics but statistical thermodynamics in trying to work out the likelihood of forming proteins and/or DNA on a planetary scale, thus the equilibrium concentration in a hypothetical [and very generous] pre-biotic soup. For, they bring in Brillouin information.

    12 –> This brings up the info school of thermodynamics, and the astonishing parallel between thermodynamic entropy and the H-metric of average info per symbol in info theory. For, as Harry Robertson summarises in his Statistical Thermophysics:

    . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability should be seen as, in part, an index of ignorance] . . . .

    [deriving] S({pi}) = – C [SUM over i] pi*ln pi

    [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . .[pp.3 - 6]

    . . . . S, called the information entropy, . . . correspond[s] to the thermodynamic entropy, with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context [p. 7] . . . .

    Jayne’s [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [p. 36.]

    13 –> In the 1990′s Drs Dembski and Behe enter the picture, the latter focusing on the origins of complex, multi-part organisation, and the former on the associated information.

    14 –> In effect, first, it becomes very hard to get to complex multi-part functionality without intentional creation of parts and/or deliberate adaptation of existing parts to interface and work together in a new whole. (And there is as a rule a core of parts that are necessary if a function is to work at all.)

    15 –> This is a major — and (rhetoric notwithstanding) unsurmounted — hurdle for RV + NS based schemes of thought on origination of body-plan level and micro-functional life systems.

    16 –> Dembski’s CSI models helped us to quantify CSI, providing a metric. The universal probability bound puts up a conservative threshold where logically possible organisations become so improbable that they are unlikely to have formed on the scope of our observed cosmos by chance. The explanatory filter, especially when focused on ASPECTS of the entity under investigation, uses a reasonable extension of statistical inference to infer to the best explanation across the three long-known causal factors: chance, necessity, intelligence.

    17 –> In that general context, FSCI at one level is a simple way to look at the bottomline: if an OBSERVED function uses 500 – 1,000 bits or more of storage capacity, it is, beyond reasonable doubt, beyond the credible reach of chance on the gamut of our observed cosmos, so we may confidently infer to design as its best explanation; see the short sketch just after this list. (That is, once we refuse to censor out the possibility that design can give rise to systems; i.e. we refuse to go with Lewontinian a priori materialism as the NAS has now explicitly imposed for “science” in the US. This silences material facts and factors before they can speak, and subverts science from being an empirically based, unfettered exploration of the truth about our world.)

    18 –> And now, as pointed out at 133 above, several professional workers are giving a more formal approach to FSCI, and have now published a table of 35 measured values of FSC for proteins and related molecules.
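
    (For concreteness on point 17, here is a minimal Python sketch of the threshold test. It is my own illustration; the function name, packaging and sample values are mine, not part of any published metric:)

        def fsci_design_inference(observed_function_bits, threshold_bits=500):
            """Crude threshold test: is an observed function's storage capacity
            beyond the credible chance resources of the observed cosmos?
            500 bits corresponds to ~3.3 x 10^150 configs, matching the
            1-in-10^150 odds of the universal probability bound."""
            if observed_function_bits >= threshold_bits:
                return "infer design as best explanation"
            return "no design inference on this criterion"

        print(fsci_design_inference(96))     # e.g. 96 bits: below threshold
        print(fsci_design_inference(1200))   # above threshold: infer design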

    ____________

    So, we can see for ourselves the true state of the balance on the merits, and it is plainly not in favour of the obfuscators.

    GEM of TKI

  136. jerry:

    “Is it possible for information to specify something but not be specified by something else?”

    I am not aware of any.

    The reason I asked is this: According to your understanding, CSI is, by definition, specified by something, and FSCI, by definition, specifies something, although it is not necessarily specified by anything. Under those definitions, FSCI is not defined in a way that makes it a subset of CSI.

  137. Sorry for being so late to reply. This thread has run its course, but I thought I’d answer these 2 questions quickly. But I feel that kairosfocus did a better job of covering the topic in depth, anyway. So if you guys don’t understand the concept at this point, I don’t know what else to add.

    Khan #104,

    In the other link I gave I noted that the “ice fish carr[ies] a partially deleted copy of alpha1 and lack[s] the beta globin gene altogether. These deletions are inextricably linked to its lower blood viscosity…” IOW, a destructive mutation that gives a benefit in this limited environment. The number of repeats apparently required for this “functionality” is 4, or 96 informational bits. AFAIK additional repeats are unnecessary duplications. As I mentioned, tying function to biological information is the hard part, so I may be wrong on this, and this example might require more than 100 bits. No big deal either way. Not to mention, I suppose it could be argued that a degenerative, and repetitive, change like this should not even count as FSCI, although I’d leave that determination to the experts. I personally believe special exceptions will be found where 500+ informational bits are produced by non-foresighted processes, and ID theory will need to account for them, but that’s just my opinion.

    GSV #124,

    Machine code is binary, thus one bit per symbol. The biological code is a quaternary code, thus 2 bits per symbol. I was just explaining the overall concept and using an easy example. I didn’t bother pulling up any sequence data, but in short:

    informational bits = (length of functional sequence) X 2

    For the ice fish example I was assuming that the 3 amino acids were encoded via ~12 nucleotides (if I’m wrong in this assumption please correct me). So 12 X 2 = 24 informational bits. Then 24 X 4 repeats = 96 informational bits. As I said, easy. Although, as I’ve mentioned, this is just an estimate of the true biological information content, since we’re still trying to figure out exactly how everything is encoded. So I’m sure there are plenty of caveats my quick example does not take into account. For example, how to account for frameshifting, or for an encoding scheme where the same information is reused for multiple different applications?
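
    (The same back-of-envelope arithmetic in a short Python sketch, in my own packaging, keeping the ~12-nucleotide assumption above:)

        def informational_bits(nucleotides, repeats=1, bits_per_nucleotide=2):
            """informational bits = (length of functional sequence) x 2, per repeat."""
            return nucleotides * bits_per_nucleotide * repeats

        # Assumes ~12 nucleotides per repeat, the assumption stated above
        print(informational_bits(12))              # 12 x 2 = 24 bits per repeat
        print(informational_bits(12, repeats=4))   # 24 x 4 = 96 informational bits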

    Also, the biological examples I gave on this thread were based upon generalizations. So the accuracy could be questioned, but I highly doubt the numbers are going to change dramatically. And in general I prefer to deal in straight informational bits instead of probabilities (1 in 10^150 corresponds to 500 informational bits) when it comes to measuring complexity in biology, since these are information-based replicators, not pebbles on a beach, where probabilities would be more appropriate.

    Here are 2 other examples where I ran the numbers: here and here

  138. Rob:

    Kindly look at the glossary on FSCI. You will see that it is used in several expanded but sufficiently comparable senses, so that “specifying” vs “specified” makes no effective difference.

    FSCI is a subset of CSI in any case, as the issue is that the specification is tied to observed functionality. And indeed, both concepts arise from the same context: observing the functionality of the nanomachines in living cells.

    CSI went more general; FSCI sticks to the OBSERVED functionality focus for specifying the complex organisation in question. So DNA exhibits FSCI; so does a string of 143 ASCII characters giving a message in English; so does an arrowhead or a Jumbo Jet — the design spec relative to their functions.

    GEM of TKI

  139. kairosfocus:

    Kindly look at the glossary on FSCI.

    Kindly look at my comment where I say, “According to your [jerry's] understanding”. I’m not talking about the FAQ’s characterization, I’m talking about jerry’s.

  140. Patrick:

    1,000 bits is a better “practical” limit, as the search window set by our observed cosmos is 10^150 states.

    So, since 1 k bits has a config space of 2^1,000, i.e. about 1.07 x 10^301, which is roughly 10 times the square of 10^150, a cosmic-scope search could not sample more than about 1 in 10^151 of the space.

    That comfortably goes beyond the worse-than-1-in-10^150 odds of Dembski’s UPB.
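
    (The arithmetic is easy to check; a quick Python sketch of my own:)

        import math

        # Config space of 1,000 bits vs a cosmic search window of 10^150 states
        print(1000 * math.log10(2))   # ~301.03: config space ~ 1.07 x 10^301

        # Fraction of the space a 10^150-state search could sample
        print(10**150 / 2**1000)      # ~9.33e-152, i.e. under 1 in 10^151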

    GEM of TKI

  141. Rob:

    Pardon, but I must insist: the context is that you are discussing a concept that is broader than any one person, and which, as I note, has been used in several phrasings.

    Over the years I have been using FSCI as a DESCRIPTIVE term for what Orgel, Yockey, Wickens etc. were getting at. I emphasised “functionally specific/specified complex info,” and recently I find I like the alternate “function-specifying complex info” — esp. in contexts of algorithms or structures that are precisely organised or shaped to function, like 747s and arrowheads.

    In ALL cases a subset of CSI is intended; it is just that the function in question is what gives the de facto, observable specification. (Ef it ain’t wuk, it ain’t wot we does want . . . [Cf. the philosophical concept that, e.g., “food” is functional stuff.])

    GEM of TKI

  142. kf,

    I’d have to agree that 1,000 bits is more practical, since even relatively “simple” biological systems exceed that, and it prevents any “gotcha” moments where Darwinists may attempt to trumpet aloud any special exception that might be found.

    BTW, did you find any errors in my own explanation in #103 and #137? I believe I’m correct but it’d be nice to be double-checked by an expert so I don’t go around repeating errors.

  143. Patrick:

    Took a look.

    Last I checked, each aa is coded for with 3 nucleotide bases, and in turn each base has 4 states, so in effect we are looking at up to six bits or so per aa. [Actually, there will be various slight modifications due to amino acid constraints and observations, but that is good enough for rough work.]
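
    (In rough figures, a short sketch of my own:)

        import math

        bits_per_base = math.log2(4)   # 4 nucleotide states: 2 bits each
        print(3 * bits_per_base)       # 3-base codon: 6.0 bits per aa, as an
                                       # upper bound before code redundancy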

    I see no material fault with your work.

    But that is not the real problem. The real issue is that we are facing a situation where people developed a pre-info age theory, which threw out an unexpected bridge to info theory in 1948 – 53. And, once we did the studies on DNA and proteins, we see that we are dealing with very sophisticated info systems. And info systems, we know, come from intelligence. But that “cannot” be permitted under the dominant paradigm of evo mat.

    So — with all due respect to those to whom this does not apply — there is no end of delaying, foot-dragging and even temper-tantrum tactics over the obvious conclusion of the matter.

    GEM of TKI
