The Image of Pots and Kettles ….

I was just reading this fairly well-written article and came upon one of the last paragraphs.

It’s an interesting take by a, shall we say, “non-scientist”:

“These scientists argue that only ‘rational agents’ could have possessed the ability to design and organise such complex systems.

Whether or not they are right (and I don’t know), their scientific argument about the absence of evidence to support the claim that life spontaneously created itself is being stifled – on the totally perverse grounds that this argument does not conform to the rules of science which require evidence to support a theory.”

You have to like this logic: the scientific community refuses to entertain ID, with its implicit argument that there is no evidence to support RM+NS, on the grounds that ID is not a scientific theory because it has no evidence to support it.

Yes, indeed, the “image of pots and kettles”!

Here’s the link.
Arrogance, dogma and why science – not faith – is the new enemy of reason

86 Responses to The Image of Pots and Kettles ….

  1. That lady sure has an excellent grip on the situation.

  2. May I draw Denyse O’Leary’s attention to an article in today’s (6 August) London Daily Telegraph by Dr. James Le Fanu. Le Fanu, a medical doctor, refers to “five cardinal mysteries of the mind” unknowable to science, and suggests these provide “one in the eye for the noisy professor Richard Dawkins and his chums.”

  3. Is it really the argument of ID (or of anyone here) that there is no evidence to support RM+NS?

    And as long as I’m asking — is ID on the same footing as any other scientific investigation or program (as, I think, is implied when Dembski and others say that we make inferences to design all the time), or does it require relaxation of methodological naturalism?

  4. Is it really the argument of ID (or of anyone here) that there is no evidence to support RM+NS?

    No, of course not. What is under contention is the limitations–the “edge”–of these Darwinian mechanisms.

  5. mgarelick: “Is it really the argument of ID (or of anyone here) that there is no evidence to support RM+NS?”

    There’s no evidence that it can explain the fossil record.

  6. mgarelick,
    And as long as I’m asking — is ID on the same footing as any other scientific investigation or program (as, I think, is implied when Dembski and others say that we make inferences to design all the time), or does it require relaxation of methodological naturalism?

    When you refer to methodological naturalism, you are really referring to the materialistic philosophy. Yet pure scientific investigation is supposed to have no preconceived philosophical bias prior to investigation. In actuality, the track record of methodological naturalism (materialism) in making predictions is terrible. Please let me illustrate this problem more clearly for you.
    There are two prevailing philosophies vying for the right to be called the truth in man’s perception of reality. These two prevailing philosophies are Theism and Materialism. Materialism is sometimes called philosophical (methodological) naturalism. Materialism is currently entrenched over science as the dominant hypothesis guiding scientists. Materialism asserts that everything that exists arose from chance acting on an eternal material basis. Whereas, Theism asserts everything that exists arose from the purposeful will of the spirit of Almighty God, who has always existed in a timeless eternity. A hypothesis in science is supposed to give proper guidance to scientists and make somewhat accurate predictions. In this primary endeavor, as a hypothesis, Materialism has failed miserably.

    1. Materialism did not predict the big bang. Yet Theism always said the universe was created.

    2. Materialism did not predict a sub-atomic (quantum) world that blatantly defies our concepts of time and space. Yet Theism always said the universe is the craftsmanship of God who is not limited by time or space.

    3. Materialism did not predict the fact that time, as we understand it, comes to a complete stop at the speed of light, as revealed by Einstein’s special theory of relativity. Yet Theism always said that God exists in a timeless eternity.
    4. Materialism did not predict the stunning precision of the underlying universal constants of the universe, found in the Anthropic Principle. Yet Theism always said God laid the foundation of the universe, so the stunning, unchanging, clockwork precision found in the various universal constants is not at all unexpected for Theism.

    5. Materialism predicted that complex life in this universe should be fairly common. Yet statistical analysis of the many parameters required to enable complex life on earth reveals that the earth is extremely unique in its ability to support life in this universe. Theism would have expected the earth to be extremely unique in this universe.

    6. Materialism did not predict the fact that the DNA code is, according to Bill Gates, far, far more advanced than any computer code ever written by man. Yet Theism would have naturally expected this level of complexity in the DNA code.

    7. Materialism presumed an extremely beneficial and flexible mutation rate for DNA, which is not the case at all. Yet Theism would have naturally presumed such a high, if not almost completely, negative mutation rate for an organism’s DNA.

    8. Materialism presumed a very simple first life form. Yet the simplest life ever found on Earth is, according to geneticist Michael Denton, PhD, far more complex than any machine man has made through concerted effort. Theism would have naturally expected this level of complexity.

    9. Materialism predicted that it took a very long time for life to develop on earth. Yet we find evidence for photosynthetic life in the oldest sedimentary rocks ever found on earth (Nov. 7, 1996, study in Nature). Theism would have naturally expected this sudden appearance of life on earth.

    10. Materialism predicted the gradual unfolding of life to be self-evident in the fossil record. The Cambrian Explosion, by itself, destroys this myth. Yet Theism would have naturally expected the sudden appearance of the many different and completely unique fossils of the Cambrian explosion.

    11. Materialism predicted that there should be numerous transitional fossils found in the fossil record. Yet fossils are characterized by sudden appearance in the fossil record and stability as long as they stay in the fossil record. There is not one clear example of unambiguous transition between major species out of millions of collected fossils. Theism would have naturally expected fossils to suddenly appear in the fossil record with stability afterwards as well as no evidence of transmutation into radically new forms.

    Now I ask you, mgarelick: which prevailing philosophy has been more accurate in its predictions, and which philosophy has earned the right to be the prevailing hypothesis of science?

  7. In his Intelligent Design: The Bridge Between Science and Theology, Dembski argues that intelligent design theory requires that one relax the demand for methodological naturalism.

    One can set up the terms as something like this:

    metaphysical naturalism holds that everything that exists can be understood in terms of “the natural world,” i.e. through some present or future physical theory.

    methodological naturalism holds that all scientific knowledge is knowledge of “the natural world,” i.e. some present or future physical theory.

    One of the key differences is methodological naturalism leaves open the possibility of non-scientific knowledge of non-natural entities. That is, science yields only knowledge of nature, but there are other epistemological and metaphysical discourses.

    Methodological naturalism without metaphysical naturalism is critical for “theistic evolution.”

    Whereas Dembski is very clear that he regards ID as rejecting methodological naturalism as well as metaphysical naturalism. It must do this in order to yield scientific knowledge of entities that transcend the natural world.

    I take it, then, that when we make ‘design inferences’ all the time, we are making inferences that transcend, or take us beyond, the natural world.

  8. bornagain77 — I hope you didn’t spend too much time composing that answer; there is a lot there, but it is nonresponsive. I didn’t ask about the merits of methodological naturalism. Are ID and methodological naturalism inherently incompatible? If so, is it disingenuous to appeal to analogies with forensic science and archeology to support the scientific pedigree of design inference?

  9. Whereas Dembski is very clear that he regards ID as rejecting methodological naturalism as well as metaphysical naturalism. It must do this in order to yield scientific knowledge of entities that transcend the natural world.

    I take it, then, that when we make ‘design inferences’ all the time, we are making inferences that transcend, or take us beyond, the natural world.

    The examples, or at least some of the examples, I’ve heard him use, though, tend to be solidly within the natural world. And the phrase “all the time” implies that there is nothing unusual about what he is proposing.

  10. Prof Sachs:

    First, I have commented over at the Scoville thread.

    Next, I must take issue with two things: [1] your implied definition of science as being in effect physicalist and methodologically naturalistic, and [2] the implied, distorting re-definition of design theory in your:

    “Dembski is very clear that he regards ID as rejecting methodological naturalism as well as metaphysical naturalism. It must do this in order to yield scientific knowledge of entities that transcend the natural world . . . . when we make ‘design inferences’ all the time, we are making inferences that transcend, or take us beyond, the natural world.”

    –> For the first, a good place for us to start in this semi-popular forum is with Dan Peterson’s article here. For the point is that Science does not have to be and historically has not normally been understood as being methodologically naturalistic or metaphysically rooted in evolutionary materialism.

    –> Specifically, it is highly question-begging, historically inaccurate, unphilosophical and unreasonable to try to redefine science as in effect, that which reductionistically explains everything from hydrogen to humans by the cascade of materialistic evolutions, and associated blind forces tracing to physics in the end.

    –> Instead, let’s start with what commonly available highly respected dictionaries pre-ID debates said about science and its methods:

    science: a branch of knowledge conducted on objective principles involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [Concise Oxford, 1990 -- and yes, they used the "z" Virginia!]

    scientific method: principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [Webster's 7th Collegiate, 1965]

    –> These definitions do not beg worldviews questions, and have abundant historical foundation and reflect the common praxis of real world science.

    –> Now, too, in speaking of “making inferences that transcend, or take us beyond, the natural world” you are dangerously ambiguous. If one wishes, he can speak of mind/agency as beyond the natural world, but the terms you have used are freighted with the unnecessary issue/ confusion/ implication: SUPERNATURAL — i.e. miracle-working — world. (I hardly need to again underscore the rhetorical, legal and public policy contexts of how frequently ID is misrepresented as an improper injection of the supernatural into science.)

    –> Let us then go to Dembski’s summary of the project of ID: intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? . . . Proponents of intelligent design, known as design theorists, purport to study such signs formally, rigorously, and scientifically. Intelligent design may therefore be defined as the science that studies signs of intelligence.

    –> I am sure that you are aware that in every case where we directly observe the origin of entities that are functionally specified as exhibiting/using complex information, they are seen to be artifacts of intelligent agents. That is, CSI is a reliable, empirically well warranted sign of agency.

    –> And so, the study of signs of intelligence is a reasonable scientific project; indeed, it appears in many guises in day-to-day science. It is only when the agents implicated in certain notorious cases may run counter to the demands of evolutionary materialism that there is a controversy on the matter.

    –> I suggest a fairly simple, step by step solution:

    [1] recognise, with Plato and others ever since, that objects that are caused are contingent in the first sense of not being necessary beings [BTW, so much for Mr Dawkins' attempt to dismiss God as more complex than the systems of life, requiring a causal explanation; for if God is, he is a Necessary Being], and

    [2] that the relevant causal forces trace to one or more of [a] natural regularities, [b] chance, [c] agency. Thence,

    [3] Where we observe the further degree of contingency expressed through intense potential multiplicity of states/configurations, we may rule out NR as the primary relevant cause, as the essential feature is potential variety, e.g. the wording of this post. So,

    [4] Where the configuration is not only complex [in the Dembski UPB sense or substantially equivalent formulations] but also specified independently [especially by being functional in an information processing situation], we may also see that such a state is relatively sparse in the configuration space of possible configs. [E.g. an English language text string of say 200 characters.] Thence,

    [5] In such a case, we normally infer to agency as the best causal explanation, as it cuts down on the search volume and makes the outcome feasible relative to available probabilistic resources, a major challenge facing a chance-based null hypothesis.
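    Step [4] invokes the Dembski UPB (universal probability bound) of 1 in 10^150. For readers unfamiliar with it, the bound is simply the product of Dembski’s published estimates of the universe’s probabilistic resources; the arithmetic can be sketched as follows (illustrative only, with Dembski’s figures taken as given, not as measured values):

```python
# Sketch of Dembski's universal probability bound (UPB), as referenced in
# step [4].  The three factors below are Dembski's own published estimates
# (not measured values) of the universe's probabilistic resources.
particles = 10**80            # estimated elementary particles in the observable universe
transitions_per_sec = 10**45  # maximum state transitions per second (inverse Planck time)
seconds = 10**25              # generous upper bound on the universe's age in seconds

upb_events = particles * transitions_per_sec * seconds
print(upb_events == 10**150)  # True: the familiar 1-in-10^150 bound

# Step [4]'s example: a specified 200-character English string over a
# 27-symbol alphabet (26 letters plus a space) has 27^200 configurations,
# roughly 10^286, vastly more than the available probabilistic resources.
config_space = 27**200
print(config_space > upb_events)  # True
```

    On this argument, any independently specified outcome whose probability falls below 1 in 10^150 is taken to be beyond the reach of chance on the gamut of the observed cosmos.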

    GEM of TKI

  11. Carl,

    You have admitted in the past that no science conducted under the premise that there is a God and that this God may have intervened at some point and time in the natural world would be any different from science conducted under the premise that there is no God.

    So why do some make the distinction and eliminate God as a possible explanation? That is what current science does, and it is unnecessary. In fact, one could argue that the range of hypotheses is greater with the assumption that God exists than without Him. The specious argument that appealing to God interferes with science never interfered before, when most scientists assumed God, so why would it now? You know and I know the real reason, and it has nothing to do with the quality and purity of science.

    I am not sure what you are trying to get at. Most here are not used to arguing philosophy of science but are rather people used to dealing with the implications of facts. There are engineers, doctors, computer programmers, and lawyers here, all used to arguing the implications of facts, since on a practical basis this is what their occupations have been about. ID is an attempt to draw inferences from facts, and that is what good science has always been. It does not necessarily take us beyond the natural world, but it could.

    Unfortunately, ever since Laplace made a fool of Newton, scientists have been reluctant to admit God into the cause and effect of the natural world. Hence the frequent invoking of the “God of the Gaps” argument by anyone who wants to eliminate the idea that God may have intervened somehow at some point in time. To many it is a religion that God has never intervened, and some of these actually believe in Him. Funny contradiction.

  12. First, I’d like to stress my recognition of the fact that science is itself a dynamic and evolving (if you will) social practice — and therefore it cannot be rigidly defined. (In other words, I think that Laudan is right to think the “demarcation problem” is a red herring.)

    So I don’t insist that science necessarily be methodologically naturalistic — not at all — because I don’t think concepts of social practices, such as “science”, have necessary or sufficient conditions at all. So it is entirely possible that some future science will abandon methodological naturalism. The only debate ID proponents will get from me is over whether or not we are now at a point where a commitment to methodological naturalism is presently “blocking the path of inquiry” (as Peirce would say).

    In any event, I only introduced that terminology because someone above asked if ID accepts methodological naturalism, and Dembski, for what it’s worth, explicitly denies methodological naturalism. (My books are all packed, otherwise I’d provide a citation.)

    Second, I’ll acknowledge that design inferences by themselves do not lead us out of the natural world (i.e. the world as represented in our best physical theories).

    However, intelligent design theory is coupled together, in most presentations of it, with an explicit denial of the claim that intelligence or rationality or mind (take your pick) could be a mere effect of matter-in-motion. Clearly, if intelligence is itself merely natural, then the dichotomy of intelligent causation vs. natural causation cannot be sustained.

  13. mgarelick,
    All I know, mgarelick, is that materialism is false. I hope you are not confusing the scientific method with methodological naturalism, which is truly only materialism. Let me point out two areas where materialism is hampering scientific progress.

    1. Transcranial Magnetic Stimulation studies, hemispherectomy studies, and scientific after-life studies by Dr. Pim Van Lommel all establish consciousness as being separate from the brain. Yet materialists cannot admit this clear point of evidence, for they would give away their whole game if they did. Yet man is in a good position technologically to investigate thoroughly how consciousness interacts with the matter of the brain. Needless to say, outstanding scientific breakthroughs are possible in this area, yet man is hampered because of the false materialistic philosophy.

    2. And now, gravity. Scientists and mathematicians have had to invent “missing dark matter” to account for an “excessive” amount of gravity in the universe, to keep the equations of gravity from becoming ineffective. Theism is not committed to inventing such hypothetical matter and is free to expect the force of gravity to arise, independent of matter, from a “primary higher dimension” in order to enable life to exist in this universe.

    Scientists estimate that 90 to 99 percent of the total mass of the universe is missing matter. Bruce H. Margon, chairman of the astronomy department at the University of Washington, told the New York Times, “It’s a fairly embarrassing situation to admit that we can’t find 90 percent of the universe.”

    The philosophy of Materialism has a huge problem, to put it mildly, if it can’t find 95% of the material of this universe that it insists is supposed to exist. What’s more, the problem may be intractable for materialism, because the “matter” had to be “invented” to keep the equations that tie gravity (space/time curvature) to a material basis from becoming ineffective. Yet there very well may be a way around this problem with the general relativity equations. If scientists and mathematicians were to treat the force of gravity as a primary constituent of the universe and were to treat matter as subordinate to gravity (as Theism postulates), then the equations that explain gravity may very well be able to be reconfigured, or reinterpreted, to reflect this proposed truth found from the Theistic perspective. The Theistic postulation would state that space is curved from a higher dimension to enable matter to exist in the first place, and to have an existence that is conducive for life to exist in this universe. In fact, gravity is already found to be conducive (finely-tuned) for life at the level of star formation. That is to say, gravity is found in the anthropic principle (which is a Theistic principle) to be exactly what it needs to be in order to allow the right type of stars to form, for the right duration of time, to allow life to be possible in this universe. Thus, the Theistic postulation for gravity has already found preliminary validation.

    The question that truly needs to be asked, to solve this missing matter mystery, is not the vain materialistic question of “Where is the missing matter in this universe?” but the Theistic question of “Why is it necessary for this precise amount of gravity to emanate from a higher dimension in order for life to exist in this universe?” It seems a preliminary answer to this question is already found in the anthropic principle once again. If gravity were not at its “just right” value in the big bang, a universe conducive to life would not exist. That is to say, gravity is found to act as the counterbalance of the big bang. If gravity were weaker, the big bang would have been “too explosive” and matter would have been too thinly spread out to allow the formation of galaxies, stars and planets. Thus, life in this universe would not have been possible. If gravity were a bit stronger at the big bang, matter would have collapsed in on itself shortly after the big bang. Again, life as we know it would not have been possible. Thus in the anthropic principle, which is actually a naturally occurring postulation of the Theistic philosophy, we already find a preliminary reason for the huge amount of “missing matter” to exist, whereas the materialistic philosophy can postulate no reason why the matter is missing and is left vainly searching for non-existent matter in this universe to account for the “excessive” gravity that is found in this universe.

    I believe the amount of “missing matter” can be further refined to the anthropic principle. For instance, the missing matter may be further refined to reflect the fact that the huge amount of missing matter actually allows us the truly fortunate privilege of scanning the universe unimpeded with our telescopes (“The Privileged Planet” by Guillermo Gonzalez, PhD). That is to say, if the huge amount of missing matter actually did exist, the universe would be a lot less “see through” than it currently is. Our knowledge of the history of the universe would suffer dramatically as a result of this reduced visibility. As well, it is very likely that an answer for why the galaxies rotate at the much greater “unpredicted” value that they do will be found in the anthropic principle instead of the materialistic philosophy. As pointed out earlier, the Theistic postulations in science have already provided many correct predictions with stunning empirical validations. These are predictions that materialism not only failed to make but was blatantly incorrect on.

    Thus, it is only natural to look to the Theistic postulations to answer the many remaining questions we have about the universe. To give further evidence of this “missing matter” problem, all matter is reducible to energy, as illustrated by Einstein’s famous equation E=mc^2. Thus it may be plainly said that all matter has been created out of energy. Yet energy in and of itself does not produce the force of gravity (space-time curvature). In fact, energy has exactly the opposite effect of gravity. According to the anthropic principle, energy actually makes space “expand,” by “exactly the right amount,” to allow life to be possible. Put simply, matter is not justified by the overall empirical evidence in science to have a totally equal status with gravity in gravity equations. Theism is free to expect gravity to arise independently of material objects from a higher dimension, without ever having to “invent” matter that will, by all current indications of empirical evidence, never be found in the “physical” dimension of this universe, but will only be found when taking into consideration the “primary higher dimension” of the Theistic philosophy.
    The following is a released statement from experts that gives further illustration to this “missing material” problem of the universe.

    The abstract of the September 2006 Report of the Dark Energy Task Force (which “was established by the Astronomy and Astrophysics Advisory Committee [AAAC] and the High Energy Physics Advisory Panel [HEPAP] as a joint sub-committee to advise the Department of Energy, the National Aeronautics and Space Administration, and the National Science Foundation on future dark energy research”) says: “Dark energy appears to be the dominant component of the physical Universe, yet there is no persuasive theoretical explanation for its existence or magnitude. The acceleration of the Universe is, along with dark matter, the observed phenomenon that most directly demonstrates that our (materialistic) theories of fundamental particles and gravity are either incorrect or incomplete. Most experts believe that nothing short of a revolution in our understanding of fundamental physics will be required to achieve a full understanding of the cosmic acceleration. For these reasons, the nature of dark energy ranks among the very most compelling of all outstanding problems in physical science. These circumstances demand an ambitious observational program to determine the dark energy properties as well as possible.”

    The first law of thermodynamics states that energy can neither be created nor destroyed. As well, light has been proven to be timeless by Einstein’s special theory of relativity. Therefore energy most likely, from an honest appraisal of empirical evidence, arose from some other “higher timeless” dimension prior to the big bang. As such, since the fundamental force of gravity does not arise from energy and also travels at the “timeless” speed of light, it stands to reason that gravity must also arise from this other “primary higher timeless” dimension. Many people who do not believe in God say, “Just show me God and I will believe!” Yet the foundation of this “material” universe that is found in relativity and quantum mechanics blatantly displays actions that defy our concepts of time and space. Defying time and space is generally regarded by most people to be a miraculous occurrence. It is considered to be a miraculous occurrence because it blatantly defies all materialistic presumptions that have been put forth! Indeed, the foundation of this universe has the fingerprints of God all over it.
    Many times materialists object to theists by saying, “‘God did it that way’ is not a scientific answer.” Well, I have news for the materialists: God DID do it that way, and the scientific answer is to try and figure out how God did it that way! As demonstrated repeatedly by the failed predictions of materialism, the materialistic philosophy is a blatant deception that only impedes further true scientific progress.
    To remedy the gravity problem, it is necessary to define, as best we can, this “primary higher dimension” that our universe came from, and to shed the last vestiges of materialism that are blinding us to what is right in front of us! Having a proper mathematical foundation for gravity in science may very well enable even more wonderful breakthroughs. This problem of missing matter is a blatant gap in man’s knowledge, and my assertion is simply that the mathematical remedy for the problems in the gravity equations will not be found until the proper Theistic approach is used in solving them.

    Colossians 1:17
    He was before all else began, and it is His power that holds everything together.

  14. bornagain77: Sadly, I’m much too busy to read your entire post carefully, so I hope I’m not missing something fundamental that would render my comment nonsensical.

    You said:

    Many times materialists object to theists by saying, “‘God did it that way’ is not a scientific answer.” Well, I have news for the materialists: God DID do it that way, and the scientific answer is to try and figure out how God did it that way!

    One of the main reasons why I can’t take ID seriously as science is that almost nobody seems to be involved in trying to “figure out” the application of design. The most obvious explanation of this curious reticence is the Chinese wall between the “scientific” face of ID and its religious/philosophical underpinning — which is, as I see it, of a piece with the discord of Behe and Gonzalez et al. expecting to be taken seriously by the science academy while Johnson inveighs against naturalism.

  15. Mgarelick,
    True science is conducted with no preconceived philosophical bias. You are blinded to the bigger picture of reality if you will only accept material explanations. I’ve cited several studies indicating that consciousness is indeed separate from the brain. Man’s technology is at the point where this fascinating topic can truly be investigated thoroughly, yet the materialistic dogma currently entrenched over our scientific institutions prevents the wide-scale study that would unleash this knowledge. Science is being done a vast disservice by materialism. Ever since the big bang, materialism has only sent scientists down blind alleys as far as cutting-edge knowledge is concerned. It truly is sad that you can’t see that your materialistic bias is blinding you.

  16. Jumping on the back of bornagain77’s last post, with the hope of getting back to the original cast of this thread, here is the state of evolutionary dialogue:

    IDer to NDEr: “I don’t see any evidence showing that RM+NS can lead to macroevolution. Yes, there is such a thing as microevolution, that validates a mechanism such as RM+NS. But where is the evidence for macroevolution?”

    NDEr to IDer: “Evolution is a fact. There is enormous evidence pointing to evolution in action, such as moths and finch beaks. There is a scientific consensus among all scientists who are doing the actual work in these areas. It’s just a matter of time before any questions that haven’t already been answered, will be answered.”

    IDer to NDEr: “Well, I just don’t see any evidence for RM+NS, except for small, very minor phenotypic changes that occur among species. When it comes to macroevolution, the fossil record shows just the opposite. I’m thinking of the Cambrian Explosion.”

    NDEr to IDer: “Oh, the Cambrian Explosion is no kind of explosion at all. We see a great number of intermediate forms in the Ediacaran, which is just as Darwin expected.”

    IDer to NDEr: “But those intermediate forms arise at a much quicker pace than anything Darwin imagined. And, when you look at the complexity of organisms, how could that have happened by chance? Just look at the DNA code itself: it’s a code, just like intelligent beings have created. All the evidence certainly suggests intelligence is responsible for the DNA code. It gives all the evidence of design.”

    NDEr to IDer: “Well, then, who is the Designer? If we don’t know who the Designer is, then we can’t know anything about the design. You have no proof. And your attempt to talk about a Designer just proves that ID is no more than creationism. Unless you can come up with evidence for ID, ID is no more than an updated form of creationism; it’s certainly not science. Science deals with evidence. If ID doesn’t have any evidence, then it isn’t a science.”

    In this hypothetical discussion, notice that the IDer begins by asking for evidence for evolution, conceding microevolution; then, when the discussion turns to macroevolution, no evidence at all is given by the NDEr, except to confute the Cambrian Explosion, and the NDEr does this using evidence that contradicts Darwin’s predictions about the fossil record; finally, the NDEr pleads complete ignorance when it comes to a Designer, and, without having presented ANY evidence at all for macroevolution, without having any argument whatsoever to counter the rather obvious inference of intelligence that DNA elicits, simply dismisses ID out of hand with the allegation that: “Science deals with evidence. If ID doesn’t have any evidence, then it isn’t a science.”

    As they quip: this is rich!

  17. Well said PaV,
    Science does deal with evidence, and ID does have extensive evidence. Yet ID’s evidence is automatically dismissed as invalid since it does not lead to a purely materialistic explanation.
    Yet as I pointed out in my earlier post leading scientists admit their current theories are insufficient to explain Dark Matter and Dark Energy.

  18. Prof Sachs:

    However, intelligent design theory is coupled together, in most presentations of it, with an explicit denial of the claim that intelligence or rationality or mind (take your pick) could be a mere effect of matter-in-motion. Clearly, if intelligence is itself merely natural, then the dichotomy of intelligent causation vs. natural causation cannot be sustained.

    The Spartans had a classic reply to this sort of reasoning, in one word:

    IF.

    For:

    1] We have excellent reason to infer that on the gamut of the observed cosmos, the functionally specified, complex information that is embedded in the nanotechnology of life, and in the explosion of FSCI required to give rise to the scope of biodiversity we see, chance + necessity alone are on excellent grounds blatantly causally inadequate. (And, the speculative, after-the-fact, ad hoc proposed quasi-infinite expansion of the cosmos is blatant metaphysics, and without a serious instance of empirical support.)

    2] But, we also know independent of any theorising, through much direct experience, that intelligent agents produce such FSCI-bearing systems and structures every day — even this thread is a case in point.

    3] Moreover, we know that in EVERY case of FSCI for which we know the cause directly, that is the case.

    4] So, relative to what we do know, on inference to best explanation, the FSCI in the nanotech of life and in the onward diversification required to account for biodiversity at body plan level, such agency is the best explanation.

    5] Further to this, agency makes a very good explanation of the observed fine-tuning of the cosmos; and,

    6] Unlike evolutionary materialism’s cascade from hydrogen to humans, it has no truly serious difficulties accounting for (a) the credibility of mind and (b) the binding nature of morals.

    7] Integrating the above themes and issues from a philosophical perspective, a very compelling understanding is that our Cosmos as we experience it is the product of an intelligent agent who intended to implement a cosmos habitable by life such as we enjoy it, and has then proceeded to create and sustain such life. (Some would argue that the insistence by many among the scientific elites and the phil of sci elites on begging the relevant questions by imposing so-called methodological naturalism is motivated by their particular view of the integrative view just now outlined. And, not without evidence.)

    8] Finally, I wish to hear of a credible, non-question-begging reason for inferring that “[verbalising, abstractly conceptualising] intelligence is itself merely natural” as opposed to what we see: an artifact of mind. The very minds that have to be credible to even reason to and draw out conclusions of materialism.

    In short, “IF . . .” has to be substantiated, not begged.

    GEM of TKI

    PS: Onlookers may wish to look at my always linked through my name, and here for an overview of some of the issues by Dallas Willard of USC. This issue was also put in the Scoville Scale thread of July 26, in response to some of the same challenges.

  19. I would slightly dispute the above interpretation of the current state of play.

    As I see it, the whole contested terrain is one of “inference to the best explanation” or “abduction.” That is, what should be posited in order to explain what is observed? In this case, should we posit the existence of a rational and intelligent mind, one that guides and directs natural processes, in order to explain what is observed? Or is such an inference unnecessary, superfluous, i.e. ruled out by Occam’s razor?

    The problem I have with intelligent design, insofar as I understand it, is this. The argument seems to proceed as follows:

    1) Organisms and artifacts (i.e. tools, machines) both exhibit some feature (“FSCI,” “irreducible complexity,” etc.).

    2) The origin of such information, in the case of artifacts, is best explained in terms of intelligent behavior of the tool-maker.

    3) Therefore, it is at least possible that the origin of such information, in the case of organisms, is also best explained in terms of intelligent behavior of the ‘organism-maker.’

    That part of the argument looks like a valid abductive inference. I don’t have any problem with it.

    However, I do have a problem with attempts to generate substantial metaphysical conclusions from (3). (3) provides at most a suggestion for research. It does not tell us anything about what actually is or is not the case. It offers mere possibility.

    To generate the desired conclusion, ID arguments require an additional premise:

    4) It is impossible that anything other than the activity of a rational, intelligent mind could have caused the kind of information observed in organisms.

    (In other words: it is possible that p; it is impossible that not-p; therefore p.)

    And it is here, with premise (4), that I have a disagreement. The root of that disagreement is that the arguments for (4) don’t fully establish it. What they show instead is something much weaker than (4). I’ll call this (5):

    5) For some specified natural process, it can be calculated that it is highly improbable that such a process could generate FSCI, etc.

    The weakness of (5) is that it depends on what one takes the relevant specified natural process to be. Subsequent discoveries can always require a change in one’s calculations.

    So (5) cannot license the inference to (4), and so there does not seem to be any way to warrant the further inference that only a rational and intelligent mind could have caused the features of organisms that they share with artifacts.

  20. Friendly amendment to Carl Sachs #19: an additional weakness with premise (5) is the failure to consider the possibility of ID. Even if one assumes that ID is the only alternative to a natural process, it can only reap the benefit of the status of logical complement if it is possible that “the activity of a rational, intelligent mind” could be responsible for organisms. I would argue that in order to assess this possibility, you would need some direct evidence (not an inference, because the goal of the argument is to justify that inference as a legitimate methodology) that design or creation of a living organism by an intelligent mind has ever occurred.

  21. Sachs and Mgarelick,
    To me it seems that you do not even believe it possible for there to be a spiritual realm in the first place, so I submit these “few” evidences as proof that reality is far grander than any materialistic scenario can expect or imagine.

    Neuro-physiological (brain/body) research is now being performed, using a new scientific tool, trans-cranial magnetic stimulation (TMS). This tool allows scientists to study the brain non-invasively. TMS can excite or inhibit normal electrical activity in specific parts of the brain, depending on the amount of energy administered by TMS. This tool allows scientists to pinpoint what is happening in different regions of the brain (functional mapping of the brain). TMS is wide-ranging in its usefulness; allowing the study of brain/muscle connections, the five senses, language, the patho-physiology of brain disorders, as well as mood disorders, such as depression. TMS may even prove to be useful for therapy for such brain disorders. TMS also allows the study of how memories are stored. The ability of TMS for inhibiting (turning off) specific portions of the brain is the very ability which reveals things that are very illuminating to the topic we are investigating. Consciousness and the brain are actually separate entities.
    When the electromagnetic activity of a specific portion of the brain is inhibited by the higher energies of TMS, it impairs the functioning of the particular portion of the body associated with the particular portion of the brain being inhibited. For example; when the visual cortex (a portion of the brain) is inhibited by higher energies of TMS, the person undergoing the procedure will temporarily become blind while it is inhibited. One notable exception to this “becoming impaired rule” is a person’s memory. When the elusive “memory” portion of the brain is inhibited, a person will have a vivid flashback of a past part of their life. This very odd “amplification” of a memory indicates this fact; memories are stored in the “spiritual” consciousness independent of the brain. All of the body’s other physical functions which have physical connections in the brain are impaired when their corresponding portion of the brain loses its ability for normal electromagnetic activity. One would very well expect memories to be irretrievable from the brain if they were physically stored. Yet memories are vividly brought forth into consciousness when their corresponding locations in the brain are temporarily inhibited. This indicates that memories are somehow stored on a non-physical basis, separate from the brain in the “spiritual” consciousness. Memory happens to be a crucially integrated part of any thinking consciousness. This is true, whether or not consciousness is physically or spiritually-based. Where memory is actually located is a sure sign of where the consciousness is actually located. It provides a compelling clue as to whether consciousness is physically or spiritually-based. Vivid memory recall, upon inhibition of a portion of brain where memory is being communicated from consciousness, is exactly what one would expect to find if consciousness is ultimately self-sufficient of brain function and spiritually-based. The opposite result, a deadening of memories, is what one would expect to find if consciousness is ultimately physically-based. According to this insight, a large portion, if not all, of the one quadrillion synapses that have developed in the brain as we became adults, are primarily developed as pathways for information to be transmitted to, and memories to be transmitted from, our consciousness. The synapses of the brain are not, in and of themselves, our primary source for memories. Indeed, decades of extensive research by brilliant, Nobel prize-winning, minds have failed to reveal where memory is stored in the brain. Though Alzheimer’s and other disorders affect the brain’s overall ability to recover memories, this is only an indication that the overall ability of the brain to recover memory from the consciousness has been affected, and does not in any way conclusively establish that memory is actually stored in the brain.
    In other developments, Dr. Olaf Blanke recently described in the peer-reviewed science journal “Nature” a patient who had “out of body experiences (OBEs)”, when the electrical activity of the gyrus-angularis portion of the brain was inhibited by higher energy TMS. Though some materialists try to twist this into some type of natural explanation for spiritual experiences, by saying the portion of the brain is being stimulated, it is actually a prime example clearly indicating consciousness is independent of the brain; for the portion of the brain is, in fact, being inhibited, instead of stimulated! This patient Dr. Olaf Blanke described should be grateful that consciousness is independent of the brain. If consciousness were truly dependent on the brain for its survival, as materialists insist, then the patient would have most likely died; at least while that particular portion of the brain was being inhibited. Obviously, that portion of the brain which was inhibited in the patient is the very seat of the brain’s consciousness.
    In other compelling evidence, many children who have had hemispherectomies (half their brains removed due to life threatening epileptic conditions) at Johns Hopkins Medical Center, are in high school; and one, a college student, is on the dean’s list. The families of these children can barely believe the transformation; and not so long ago, neurologists and neuro-surgeons found it hard to believe as well. What is surprising for these people is that they are having their overriding materialistic view of brain correlation to consciousness overturned. In other words; since it is presumed by Materialism that the brain is the primary generator of consciousness, it is totally expected for a person having half their brain removed to be severely affected when it comes to memory and personality. This is clearly a contradiction between the Materialistic and Theistic philosophies. According to Materialistic dogma, memory and personality should be affected just as badly, or at least somewhat as badly, as any of the other parts of the body, by removal of half the brain. Yet, as a team of neuro-surgeons that have done extensive research on the after effects of hemispherectomy at Johns Hopkins Medical Center comment: “We are awed by the apparent retention of the child’s memory after removal of half of the brain, either half; and by the retention of the child’s personality and sense of humor.” Though a patient’s physical capacities are impaired, just as they were expected to be immediately following surgery, and have to have time to be “rewired” to the consciousness in the brain, the memory and personality of the patient come out unscathed in the aftermath of such radical surgery. This is exactly the result one would expect if the consciousness is ultimately independent of brain function and is spiritually-based. This is totally contrary to the results one would expect if the consciousness were actually physically-based, as the materialistic theory had presumed. In further comment from the neuro-surgeons in the Johns Hopkins study: “Despite removal of one hemisphere, the intellect of all but one of the children seems either unchanged or improved. Intellect was only affected in the one child who had remained in a coma, vigil-like state, attributable to peri-operative complications.” This is stunning proof of consciousness being independent of brain function. The only child not to have normal or improved intellect is the child who remained in a coma due to complications during surgery. It is also heartening to find that many of the patients regain full use, or almost full use, of their bodies after a varying period of recuperation in which the brain is “rewired” to the consciousness.
    II Corinthians 5:1
    For we know that if our earthly house, this tent (Our Body), is destroyed, we have a building from God, a house not made with hands, eternal in the heavens.

    Yet more evidence for the independence of consciousness is found in Dr. Pim van Lommel’s study of sixty-two of his cardiac patients who had near-death experiences (NDEs). NDEs are the phenomena of someone being physically dead for a short time; yet, when they are revived, they report they were in their spiritual bodies, outside of their physical bodies, and taken to another dimension. Dr. Lommel’s research found no weakness in the Theistic presumption of a spiritually independent consciousness. He and his colleagues published their research in the peer-reviewed journal (Lancet, Dec. 2001). Not only did their research not find any weaknesses in the Theistic presumption; their findings severely weakened or ruled out all Materialistic presumptions that had been put forth, such as anoxia in the brain, release of endorphins, NMDA receptor blockage, or medications given. Their findings also ruled out suspected psychological explanations as well, such as a coping mechanism brought on by the fear of imminent death or fore-knowledge of the NDE. They even had a patient in the NDE study who identified the exact nurse who took his dentures while he was in cardiac arrest. This is something only someone who was conscious of the operating room, even though he was physically dead, could have seen the nurse doing (many NDErs report floating above their bodies, observing the operating room from the ceiling, before going to another dimension). In other similar studies, cases in which blood was extracted at the time of the NDE did not support the anoxia or hypercarbia theories. It is also established that the drugs administered to the patients, such as painkillers, appeared to inhibit and confuse rather than cause the NDE. The combination of all data from recent and retrospective research provides a large amount of evidence, which can no longer be ignored or explained away. The fact that clear, lucid experiences were reported during a time when the brain was proven to be devoid of activity (Aminoff et al., 1988, Clute and Levy 1990, de Vries et al., 1998) does not sit easily with the current scientific belief system of materialism. In another fascinating study (Kenneth Ring and Sharon Cooper, 1997) of thirty-one blind people who had an NDE, twenty-four of the blind people reported that they could see while they were out of their physical bodies. Many of them had been blind since birth. Likewise, many deaf people reported they were able to hear while they were having an NDE.
    So, in answer to the question: “Is consciousness a physically or spiritually-based phenomenon?”, we can, with the assurance of scientific integrity backing us up, reply that consciousness is indeed a spiritual phenomenon capable of living independently of the brain, once the brain ceases to function. Dr. Lommel illustrates in his paper that the real purpose of the brain is as a mediator of the physical world to the spiritual consciousness. He compares the brain to such things as a television, radio and cell phone, to illustrate the point. The point he is trying to make clear is this; the brain is not the end point of information. It is “only” a conveyor of information to and from the true end point, our spiritually-based consciousness, which is independent of the physical brain and able to live past the death of our brains.

    Genesis 2:7
    And the Lord God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living being.

    It is clear from these recent developments, the materialistic philosophy will only severely impede further scientific progress in this very promising area. Instead of scientists investigating how the consciousness actually interacts with the material brain, and making important discoveries of how the spiritual realm actually interacts with the material realm, scientists will be forced into blind goose chases trying to explain how consciousness arises from a purely materialistic basis.
    The hard evidence makes it clear that the presumptions of Materialism have been proven to be false at both the level of the universe’s foundational reality and at the level of consciousness in human beings. Whereas the Theistic presumptions of the universe’s creation from a transcendent Creator, and of the consciousness’s ability to live completely separate from the brain, are both strongly supported by the hard evidence that has been brought forth by recent discoveries in science. Now, having established that Materialism has an extremely shaky foundation to begin with, if indeed it can be found to have any foundation at all, let us take a hard look at one of the more famously documented Near Death Experiences: the NDE of Pam Reynolds. This is the account of that NDE.

    A team in Phoenix specializes in an extreme form of neurosurgery called hypothermic cardiac arrest that has been created to allow operation on aneurysms deep in the brain. A 35-year-old woman undertook this surgery. Her eyes were taped shut to prevent them from drying out. They put electrodes in the auditory section of the brainstem and put molded speakers in her ears which played a constant beep, a setup designed to gauge responsiveness in the brainstem. These speakers prevented her from hearing anything in the room besides the beeps. They cooled her body to 60 degrees, which lowered her metabolic rate enough so that the surgeons could operate for a long time deep in the brain. They then rerouted her blood from a femoral artery into a heart-pump, though they had to switch legs because the first vessel was too small, thereby prolonging the surgery. When the EEG was flat and the brainstem stopped responding, she was, by most standard medical criteria, dead. Blood flowed out into the heart-pump and back into the body. Next they shut off the pump and tilted the table up so that all the blood drained out of her brain. Only then was it safe to open her skull to clip off the aneurysm. The time of anesthetization in this procedure is about 90 minutes.

    The woman reported leaving her body and hearing a D-natural buzzing sound. She watched the surgery and was puzzled by what appeared to be an electric toothbrush which one member of the team was using on her head. She also reported hearing the woman doctor say, “These vessels are too small. We can’t use them for the pump.” At that point, she got distracted, saw the light, went through a tunnel, saw a deceased grandmother and a few other deceased relatives who told her she had to go back. As she was coming out of the surgery, she had a cardiac arrest and they had to shock her twice to get her back. When the procedure was all over, she described to the neurosurgeon everything she saw, including the strange electric toothbrush and the box that it came in with several different attachable heads. It turned out she had accurately described a Midas Rex saw, which is used only for this procedure, and which makes a buzzing sound. So, with this case we have an example of someone who was visually and auditorily isolated, had a flat EEG, and should not have been able to think, and yet she commented that she had never thought so clearly in her life.

    The paragraph below is a quote of the same event from an anonymous writer in a NDE newsgroup:

    Such is the case of Pam Reynolds, who is quite well known in the NDE community. She was having surgery performed to remove an aneurysm from her brain. Her body was cooled to below 60 degrees F. and all of the blood was drained from her body. Her EEG and brain stem response showed no activity, the definition of brain death in many states. During all of this, she reported rising from her body and seeing the operation performed below her. She also reported contact with “The Light” and many of her deceased relatives in heaven. Remember, she had no brain activity whatsoever. Even hallucinations register brain activity. It is interesting that upon recovering she recounted accurately many details of her operation, including conversations heard and a description of the surgical instruments. It has been postulated by a NDE skeptic that Pam overheard the sounds in the room and generated a “mental map” of things around her. What the skeptic failed to acknowledge, though, is that instruments were inserted into Pam’s ears that generated clicks to measure brain stem response. Her brain stem response throughout the surgery was inactive. If conversations were heard, her brain stem response should have registered them.
    According to Pam, she was present, above her body, viewing the whole surgical operation; her consciousness, memory, and personality, her whole individuality, intact. She proved this with an accurate, detailed description of the instruments, conversation, and procedures used during the surgery. At the same time, science, using scientific monitoring instruments, was proving that her body was dead. No brain response, no heart response, no response of any kind. Obviously, neither the brain nor any other organ of the body was needed to sustain her life, and this account is just one example of the many that exist in NDE literature.

    I believe this is as conclusive as proof gets. Clear, solid proof that man is a “spiritual” being inhabiting a physical body.

  22. It is my intention here to criticize the arguments made by IDers at those places where I see difficulties in the arguments. I care about good reasoning and hate bad reasoning. But apart from that I have no intention to speak in my own voice or to defend any set of epistemological or metaphysical claims.

    I am interested only in being a “gadfly,” and I’ll leave if my services as skeptic and questioner are not desired.

    By corollary, the rest of you are free to entertain whatever hypotheses seem most reasonable to you with regards to my philosophical or religious attitudes and commitments.

    Thus, in response to (21), I would like to say only that I am uncertain as to the basis of the confidence you appear to have in the opposition between “the spiritual” and “the physical.” How do you know that humanity is essentially one and not the other, or that the one is the opposite of the other? On what grounds, and with what confidence, can we be sure that one is not an aspect of the other, or that both are not different aspects of something else, or that both are abstractions from a more basic kind of reality?

    Surely such metaphysical thoughts are neither absent from the history of human speculation nor inconsistent with whatever empirical research has thus far revealed.

  23. Prof Sachs:

    I appreciate your taking time to respond on points in 19 above. I note in further response:

    1] CS: should we posit the existence of a rational and intelligent mind, one that guides and directs natural processes, in order to explain what is observed? Or is such an inference unnecessary, superfluous, i.e. ruled out by Occam’s razor?

    The point of Occam, of course, is that assumptions should not be multiplied WITHOUT NECESSITY. (That is, he is inter alia adverting to the fact that we do not spin our theories out of connexion with observed facts.)

    And, as from Plato on [The Laws, Bk X] we have on the record, it is a well observed fact that:

    Ath. . . . we have . . . lighted on a strange doctrine.
    Cle. What doctrine do you mean?
    Ath. The wisest of all doctrines, in the opinion of many.
    Cle. I wish that you would speak plainer.
    Ath. The doctrine that all things do become, have become, and will become, some by nature, some by art, and some by chance . . .

    Indeed, this precisely forms the underlying context for Monod’s famous 1970′s work on his materialistic view of evolution, “Chance and Necessity.” But, he overlooked a major consideration: in all cases of observed functionally specified, complex information, where we do observe the cause directly, agency is the source. So, relative to quite impressive empirical data it is a well-warranted inference that once the available probabilistic resources are exhausted we have grounds for inferring to agency as cause.

    Thus, the idea that inference to agency is “unnecessary” and so superfluous, is simply a begging of the question at stake in the teeth of well-known facts. [Indeed, the point is further that agency is not to be confused with “human agency,” as the accident of association is not the same as the essence of a matter.]

    The implication of this comes out when we see your premise 2 on your view of the ID case:

    2] The origin of such information, in the case of artifacts, is best explained in terms of intelligent behavior of the tool-maker.

    This is quite significantly different from my bolded above: in all cases of observed functionally specified, complex information, where we do observe the cause directly, agency is the source.

    For, too, we have evidence that agents — not just agents that happen to be human — make FSCI. That is, we have no good, non-question-begging reason to confine agency to humans.

    We further have no direct observation that chance and necessity only produce FSCI. Also, we see, on the same analytical grounds of exhaustion of available probabilistic resources that we use to reconcile the classic with the statistical form of the 2nd law of thermodynamics, or to reject the null hypothesis of chance and/or necessity in statistical inference, etc., that there is an excellent probability-theory-related reason for the pattern of our observations.

    3] Therefore, it is at least possible that the origin of such information, in the case of organisms, is also best explained in terms of intelligent behavior of the ‘organism-maker.’

    The conclusion as you worded it is also poorly stated relative to what design thinkers hold, and in the context of, say, Hume’s attempt to distinguish artifacts and organisms, is loaded. For, the specific phenomenon that is at stake is that in the nanotechnology of the living cell lies the DNA, a coded digital string of length 500,000 to 3-4 bn [at the low end of the range, 500k 4-state elements encode from a configuration space of 4^500k ~ 9.9 * 10^301,029, comfortably beyond the range in which we can argue that functional states will be sufficiently close that we can island-hop from an arbitrary initial point to get to functionality]. This is informational, it is specified in terms of a partly understood code, and it is functional in an information system that algorithmically applies that code to produce the key molecules of life, which in turn function based on their composition as specified through DNA.

    In short, the DNA molecule is an instance of a now very familiar entity, one much studied in computer science and information theory: a digital storage device, albeit of course it uses 4-state not 2-state elements. So, the Humean distinction artifact/organism, plainly fails.
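    As a quick arithmetic check (my own illustration, not part of the original comment), the configuration-space figure quoted above can be verified in a few lines of Python. Nothing here is from the comment except the numbers 500,000 and 9.9 * 10^301,029:

    ```python
    import math

    # 500,000 four-state (A/C/G/T) positions give a configuration
    # space of 4^500000 distinct sequences. Work in log10 to avoid
    # constructing the astronomically large integer.
    positions = 500_000
    log10_states = positions * math.log10(4)    # base-10 exponent of 4^500000

    exponent = int(log10_states)                # integer part of the exponent
    mantissa = 10 ** (log10_states - exponent)  # leading factor

    print(f"4^{positions} ~ {mantissa:.1f} * 10^{exponent}")
    # prints: 4^500000 ~ 9.9 * 10^301029
    ```

    So the comment’s figure of roughly 9.9 * 10^301,029 configurations checks out.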

    Further to this, there is the fallacy of selective hyper-skepticism, in its Cliffordian evidentialist form, at work, as seen in:

    . . .

  24. 4] (3) provides at most a suggestion for research. It does not tell us anything about what actually is or is not the case. It offers mere possibility.

    Of course, any fact-based reasoning does not offer proof beyond rational dispute. However, the process of abduction in a factually well-anchored setting offers sufficient credibility and reliability that its deliverances are routinely treated as knowledge, with the caveat that its conclusions are provisional. But, with suitable safeguards against hasty and unsafe conclusions – e.g. the criminal law’s “proof” beyond reasonable doubt, and the civil law’s “proof” on the preponderance of evidence, etc., and the scientific and engineering praxis of empirical testing and openness to re-testing — we are able to routinely function in a world that is less than absolutely certain.

    So, on the explanatory filter and design inference, we have a broad empirical base to infer to agency, we are looking at a case that is well beyond exhausting credibly available probabilistic resources, and it also instantiates a well-known phenomenon that is directly known to be the product of agency. Why, then, do we see all of a sudden the refusal to accept that this is “good enough for government work”? (Ans: because of the dominant institutional ethos being implicitly challenged, evolutionary materialism.)

    Thus too, we see the problem with your discussion of 4 and 5:

    5] The weakness of (5) is that it depends on what one takes the relevant specified natural process to be. Subsequent discoveries can always require a change in one’s calculations.

    What I have bolded is of course now trivially known to be true of any significant research programme in science. So, why is what is an acceptable risk in all other fields suddenly a crucial defect in this case? Ans: because of selective hyper-skepticism.

    This also obtains for the amendment proposed in 20:

    6] MG, 20: an additional weakness with premise (5) is the failure to consider the possibility of ID. Even if one assumes that ID is the only alternative to a natural process, it can only reap the benefit of the status of logical complement if it is possible that “the activity of a rational, intelligent mind” could be responsible for organisms

    The key problem here is of course that this misses the point that DNA is a known case of a specific type of entity as discussed above, one which is of course routinely — and only — observed to be the product of agency. That is, a discrete-state, chained, information-storage unit of large capacity. This is not mere analogy, it is identity.

    So also, we may respond to:

    7] MG, 20: you would need some direct evidence (not an inference, because the goal of the argument is to justify that inference as a legitimate methodology) that design or creation of a living organism by an intelligent mind has ever occurred.

    Notice the substitution of “living organism,” meant to underscore the Humean claim of a weak analogy.

    But the problem is that we have good reason to trust knowledge by inference in cases of like kind; so the insistence that, because the information-system component is in a living organism, that fact should prevail over the observed fact that it is a known digital entity, is again selective hyper-skepticism.

    Let us reverse the challenge a bit: MG and CS, kindly provide a case where such an entity with a complex functional code of at least 2048 bits capacity [2 kbits] has ever been created by chance and/or necessity, or where it is reasonably feasible that it could be. 2^2048 ~ 3.23 * 10^616.

    To illustrate what I am saying, let us imagine that we can take 1 million floppies on PCs using Windows, and spew bit patterns across them at random in a 2048-bit block, say using a Zener diode-driven noise source; and let us say we can do that and test each machine once every second. In a year we would scan 10^6 x 365.25 x 24 x 3600 ~ 3.16*10^13 codes. At that rate it would take ~1.02*10^603 years to scan the configuration space. So even if there are say 10^500 recognisable and runnable short programs of that length, we would have to wait ~10^103 years, i.e. far longer than the known universe will last.
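    As a sanity check on the thought experiment, the arithmetic can be reproduced in a few lines (a sketch assuming the 2048-bit blocks of the challenge above and one random trial per machine per second; logs are used because 2^2048 overflows a float):

```python
import math

machines = 10**6
trials_per_year = machines * 365.25 * 24 * 3600  # one trial per machine per second
log10_space = 2048 * math.log10(2)               # log10 of the 2^2048 config space

log10_years_to_scan = log10_space - math.log10(trials_per_year)
print(f"codes tried per year ~ {trials_per_year:.2e}")
print(f"years to scan 2^2048 ~ 10^{log10_years_to_scan:.0f}")

# Even granting 10^500 functional programs somewhere in the space,
# the expected wait to the first hit is still fantastically long:
log10_years_to_hit = log10_years_to_scan - 500
print(f"years to first functional hit ~ 10^{log10_years_to_hit:.0f}")
```

The scan takes on the order of 10^603 years, and even with 10^500 functional targets the expected wait remains around 10^103 years.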

    8] On the four big bangs

    Notice the underlying issue: no credible empirically anchored reason [as opposed to questionable epistemology and underlying metaphysics] has been advanced for us to accept a chance-plus-necessity account of the origin of a fine-tuned cosmos, of life based on cellular nanotechnologies, of body-plan level biodiversification, and of the credible mind and conscience.

    GEM of TKI

  25. Sachs,
    The PLAIN reason materialism and Theism are in opposition is that they are indeed in opposition to one another as far as basic claims of reality are concerned. (One claims a spiritual origination of reality, the other claims a material origination of reality.) Theism has been overwhelmingly validated empirically as far as the foundation of reality is concerned, whereas materialism has been found wanting over and over again. If you want to claim the “Shaman” position that material and spiritual are two sides of the same coin, then you would expect the empirical findings on the foundation of the universe to fall in between the two extreme postulations of the materialistic and theistic philosophies; but that is not the case! All the evidence has backed up the theistic presumptions. Prof. Sachs, science is ultimately evidence driven. Clearly you cannot deny that the evidence found in relativity and quantum mechanics, not to mention the big bang and the anthropic principle, overwhelmingly supports theism. If you do deny as much then you are being unreasonable. I believe kairosfocus pointed out as much in his excellent response to your assertion that it is unreasonable to infer an intelligent agent for the stunning complexity we find in DNA. I appreciate your trying to find weaknesses in the ID theory, and may the ID theory become stronger for it. Yet there comes a time when such denial of empirically driven evidence becomes unreasonable in the extreme.

  26. If we confine ourselves, initially, to “direct observation,” then the only agents relevant for FSCI are humans. So restricting the inference to human agency is not arbitrary.

    I’m willing to grant that organisms and machines both exhibit FSCI. And I’m willing to grant that in all cases that we’ve observed, agency is required for FSCI. But that is only because we’ve observed human agents in the process of making machines and ‘artificial organisms’.

    What, then, does this tell us about the origins of ‘natural organisms’? It tells us, I think, at most that agency could have been involved, not that it was (or is).

    Put otherwise: from the fact that “chance” + “necessity” have not been observed to produce FSCI, it does not follow that they could not have. Instead one must inquire into which scenario seems most warranted, on the basis of available evidence and theories.

    The last thing I want is to be seen as someone who wants to block the path of inquiry. But I also do not think that inquiry can be replaced by a priori considerations about what “chance” or “necessity” or “matter” can or cannot do.

    So I do not find it unreasonable to pose as a hypothesis for investigation, “was an intelligent being responsible for the origins of the universe and/or life?” I only find it unreasonable to think that the design inference is sufficient to allow us to answer that question in the affirmative.

    The design inference opens up a route for inquiry, but it does not settle it.

  27. Bornagain77,

    My principal objections to what you say are epistemological, not metaphysical. In other words, I have nothing to say for or against materialism or theism.

    Firstly, I do have some skeptical worries about the basis for that opposition as starkly conceived. Suppose it is the case that human beings have “spiritual properties” and “material properties.” Does it follow that they are opposed to one another? A cube made of steel has spatial properties, mass, chemical properties — they are distinct but not opposed.

    Secondly, there are general epistemological problems with strict empiricism. One problem is “the underdetermination of theories by evidence.” In a nutshell, the problem is this: for any set of observations, the data are consistent with more than one theory. (The argument for this comes from the philosopher Quine, if you want to look it up.) Several things follow from this.

    For one, the selection of a theory cannot be determined solely by data. For another, what the data are taken to be evidence for is also influenced by theoretical considerations.

    I do not doubt for a moment that a non-theist could present an interpretation of trans-cranial magnetic stimulation and of near-death experiences which is as consistent with the data as are the interpretations you’ve presented here. Likewise with respect to the origins of life, the “fine-tuning problem,” etc.

    The reason why I work in epistemology and ethics, and avoid metaphysics as much as I can, is because, unlike with scientific theories, I have no idea what would count as a reason to think that a given metaphysical view is true. (Although I find metaphysics fascinating and enjoyable!)

  28. On a closely related note: the lesson I’ve taken from James’ “The Will to Believe” is that Cliffordian evidentialism is appropriate with respect to scientific questions — James’ complaint is that one could not live as a Cliffordian.

    On the one hand, what works well for us as scientists does not work well for us as human beings. On the other hand, what is necessary for us to live well as human beings does not always make for good science.

  29. GEM,

    Thanks for the link to the Willard article!

  30. Prof. Sachs

    You stated:
    For one, the selection of a theory cannot be determined solely by data. For another, what the data are taken to be evidence for is also influenced by theoretical considerations.

    I do not doubt for a moment that a non-theist could present an interpretation of trans-cranial magnetic stimulation and of near-death experiences which is as consistent with the data as are the interpretations you’ve presented here. Likewise with respect to the origins of life, the “fine-tuning problem,” etc.

    It is very interesting that you say a theory cannot be selected solely by data, when it is the very basis of the scientific method that evidence has the primary authority in science to determine which of the competing theories is true. Are you saying that the scientific method is not first and foremost driven by evidence? This seems a blatant exercise in obfuscation to me if you do. Second, the across-the-board interpretation of the evidence from all branches of knowledge (cosmology, biology, etc.) conforms to the theistic postulations best. I’ve debated both materialists and what I call material/spiritual dualists, and all their conjectured explanations of the evidence fail in crucial areas of logic in key areas of the evidence! ONLY Theism stays true to the evidence throughout comparative examination. So your assertion that a non-theist can come up with rational explanations to explain the evidence across the board is simply false.

  31. Re #24:

    The key problem here is of course that this misses the point that DNA is a known case of a specific type of entity as discussed above, one which is of course routinely — and only — observed to be the product of agency. That is, a discrete-state, chained, information-storage unit of large capacity. This is not mere analogy, it is identity.

    But the very question before us is whether DNA is the product of agency. How can it be persuasive to say that it must be the product of agency because it is the sort of thing that is only the product of agency?
    And I would vigorously dispute that it is “identity.” For every member of the set you are designating as “observed to be the product of agency,” the more precise description is “observed to be the product of human agency.”

  32. Prof Sachs and BA (et al),

    It seems reasonable to pick up the thread of discussions on points, first noting that Prof Willard has a most interesting collection of articles here.

    On points of interest:

    1] CS, 26: If we confine ourselves, initially, to “direct observation,” then the only agents relevant for FSCI are humans.

    Not at all. First, a bit of a side-note: there are millions who would dispute — e.g., based on their personal encounter with God in the face of the risen Christ, going back to a certain famous, course- of- history changing Sunday morning just outside Jerusalem’s Northern Gate — any and all claims that the only agents known to exist through direct observation are humans! (I cite this to show that the full scope of evidence and the issues of selectivity and inconsistent, question-begging skepticism are a bit more widespread on this issue than we like to think.)

    But more directly, we know notoriously that humans are intentional, intelligent agents. Interesting things follow from that that are crucial:

    –> We therefore credibly know that agents exist, and so we must be open to the possibility that humans do not exhaust the list of existing or possible agents. Then, when we encounter evidence such as a complex digital storage unit functioning in a complex information system based on code-bearing, algorithm-expressing nanotechnologies that are self-replicating, that should be allowed to speak for itself relative to what we know about information systems and their origins.

    –> All worldviews that claim to give a credible account of reality must be able to account for the existence of agency and associated mentality. That means that we must be able to account for, precisely, the non-material, mental aspects of agency. Things like propositions, truth, implication, moral obligation, intent, one idea in two or more locations at the same time, etc, etc.

    –> It further means that when a worldview, such as evolutionary materialism, notoriously and on multiple grounds, struggles to account for the credibility of mind and the binding nature of moral obligation, then that worldview is plainly seriously factually inadequate; and since mentality is a self-referential issue, it is arguably self-refuting as well. [Hence, the previously linked piece from Willard as one of many phil arguments on those lines.]

    2] CS, 26: organisms and machines both exhibit FSCI. And I’m willing to grant that in all cases that we’ve observed, agency is required for FSCI . . . . What, then, does this tell us about the origins of ‘natural organisms’? It tells us, I think, at most that agency could have been involved, not that it was (or is) . . . . from the fact that “chance” + “necessity” have not been observed to produce FSCI, it does not follow that they could not have.

    First, we are dealing with a specific, common point across both the cell and the computer: digital storage functioning in a code-bearing, algorithm implementing system. We know that agents can create such, but we also have not only the point that chance + necessity have not been observed to produce such, but also an excellent, physically anchored reason why: the configuration spaces involved in creating such systems through “lucky noise” so vastly outstrip opportunities in the observed cosmos, that we can comfortably look at the C + N null hyp and eliminate it. At the same time we know agents routinely produce such systems and components. And, after 2,400+ years of thought on the matter, there is no credible fourth alternative.

    Thus, relative to the empirical facts and observations, there is excellent inductive/scientific warrant to infer to agency on the origin of the nanotech in the cell. Of course that does not amount to demonstrative proof, but that is a characteristic of all scientific inference of consequence. Besides, post-Gödel, not even Mathematics is capable of proof beyond reasonable doubt. (So, the issue of selective hyper-skepticism, Cliffordian evidentialism form, again surfaces.)

    3] CS, 26: I also do not think that inquiry can be replaced by a priori considerations about what “chance” or “necessity” or “matter” can or cannot do.

    The limits on what we may reasonably expect chance and necessity to do, are not a priori, but are a posteriori, in the context of the rise of modern probability and inferential statistics, as well as the related development of modern statistical thermodynamics as an extension of that same reasoning into precisely the world of the molecule and other microparticles.

    What has been presented is a comparison, and an observationally anchored elimination [of course subject to type I and II errors, as are all such arguments] by exhaustion of observed available probabilistic resources. That is why many-worlds interpretations of the cosmos are suddenly popular among physicists of materialistic bent, as they know all too well the force, success and credibility of the underlying reasoning in statistical thermodynamics. [Cf my already linked microjets thought experiment.]

    4] CS, 26: I only find it unreasonable to think that the design inference is sufficient to allow us to answer that question in the affirmative.

    The warrant for inference to design as the most credible explanation of say the nature and function of DNA, is at least as good as many another warrant in science, pure and applied. The excerpted is, sadly, simply a statement of selective hyper-skepticism.

    . . .

  33. 5] CS, 27: Suppose it is the case that human beings have “spiritual properties” and “material properties.” Does it follow that they are opposed to one another? A cube made of steel has spatial properties, mass, chemical properties — they are distinct but not opposed.

    That various complementary properties may be held by a given entity and are interacting and compatible — cf. here the effect of a large enough dose of, say, barbiturates — has nothing to do with the essential divergence between mind and matter.

    Let it stand as a case in point, that if one knows that a given train of thought has credibly been in effect simply physically [etc] caused rather than being anchored in evidence and reasoning, that is usually enough to discredit it. That is why Marxists loved to use dialectical materialism to discredit their class-interested foes, why Freudians were quick to point to potty training of their uptight critics, and why Skinnerians were quick to point to operant conditioning, etc., etc. But in each case, there is a self-reference involved, and it undermines their own case. The problem here is that materialism, in general, as a monistic system open only to chance and necessity as causal forces at origins, credibly runs into insuperable difficulties accounting for the credibility of the mind and morals relative to its premises. [The link is to a 101 level discussion for those new to the issues; Willard and many more have more technical discussions; some are linked in the just linked.]

    6] CS, 27: general epistemological problems with strict empiricism. One problem is “the underdetermination of theories by evidence.” In a nutshell, the problem is this: for any set of observations, the data are consistent with more than one theory . . .

    Beautiful. So you are willing to agree with me that reason and belief are inextricably intertwined in the roots of all of our worldviews, and particularly our scientific theorising? That, therefore, we ALL live by faith, whether Christian or Buddhist, atheist, materialist, scientist or mathematician?

    (Thence, that we should look at comparative difficulties and make warranted though provisional in principle conclusions as to what and why we believe? Thus, too, that we should embrace diversity in thought at the table of serious discussion, including in our teaching and praxis of science in general and of the science of origins in particular? So, also, what have you been doing about NCSE-style, Judge Jones-enforced censorship?)

    7] CS, 28: the lesson I’ve taken from James’ “The Will to Believe” is that Cliffordian evidentialism is appropriate with respect to scientific questions — James’ complaint is that one could not live as a Cliffordian.

    Actually, the specific lesson James drew relative to strictly scientific issues in general, was that they are not the sort of issues where one is existentially forced to make a momentous commitment, so one can afford to be tentative and non-committal on such. [But note that at research programme level, scientific disciplines often embed implicit and potentially momentous and forced worldview commitments, and may even be held to “warrant” them. As my always linked discusses, what is happening with NDT and associated evolutionary materialism, is that its warrant is being undermined by advances in information theory relevant to the origin of systems exhibiting what I have abbreviated as FSCI, a subset of WD's CSI.]

    My more direct concern is the later objections, implicated in the sort of claim Sagan popularised: “extraordinary claims require extraordinary evidence.” The first problem being that the claims perceived as requiring more than merely ADEQUATE evidence/warrant are those that happen to cut across one’s worldview expectations [cf above], so one is often begging the question. Next, if one claim is only to be believed relative to extraordinary evidence, that evidence with such implications is in turn most extraordinary and so requires even more extraordinary evidence, leading to an infinite regress. Thus, we are also implicated in a self referential inconsistency as the finite and fallible mind cannot handle such a regress.

    Thence, a better solution is to accept the reality that we live by faith, seek adequate evidence and warrant, being open to comparative difficulties across live options – which sweeps question-begging and selective hyper-skepticism off the table. [And, if one's credible evidence, say, involves personal knowledge of God through encounter in the face of the risen Christ, that of course shifts the balance of the evidence to be addressed rather dramatically!]

    8] BA: I’ve debated both materialist and what I call material/spiritual dualist, and all their conjectures for explanations of the evidence fail in crucial areas of logic in key areas of the evidence!

    You are here pointing to the importance of comparative difficulties-based worldviews analysis, across factual adequacy, coherence and explanatory power: a sound worldview must account for the relevant facts, must hang together logically and dynamically, and must be elegantly simple, not either simplistic or an ad hoc patchwork.

    This is precisely where science is revealed as a process within the wider discourse of philosophy, and in which it must be reasonably tentative and open-minded to fresh evidence and reasoning. NDT as a research programme, and associated broader evolutionary materialism models for the claimed cascade from hydrogen to humans, is manifestly failing at that bar.

    GEM of TKI

  34. mgarelick

    And I would vigorously dispute that it is “identity.” For every member of the set you are designating as “observed to be the product of agency,” the more precise description is “observed to be the product of human agency.”

    I fail to see how adding “human” to agency changes the identity. Humans are intelligent agents. All the products mentioned are still the result of agency.

    If you find something that has many of the same properties as an apple, and you have observed apples growing on apple trees, the most reasonable inference is that what you found was produced by something like an apple tree. Indeed, to infer that what you found just spontaneously formed on the ground from inanimate matter would be entirely unsupported.

    The only abstract coded information processing and manufacturing machinery whose origin can be determined is a product of human intellect. There is one other known instance of this type of machinery; it is found only in living things, and its origin is undetermined. We have only one confirmed producer of these kinds of machines, and that producer is human intellect. The only reasonable inference that can be made is that machines of unknown origin were produced by something like a human intellect.

    I don’t see how this isn’t a perfectly valid scientific inference. It is exactly the kind of hypothesis that Popper exemplified with white swans.

    Popper’s Hypothesis: All swans are white.

    ID Hypothesis: All abstract code driven information processing and manufacturing machinery, which isn’t simply a replication of preexisting machinery of the same type, was produced by intelligent agency.

    Popper’s hypothesis, he said, could never be proven because there could never, even in principle, be a way of knowing that a black swan doesn’t exist somewhere. Popper said the key thing that made it a scientific hypothesis was that it could be falsified in principle by observing a single black swan.

    ID’s hypothesis can never be proven because we can never know, even in principle, that no non-intelligent process is able to design these kinds of machines. ID’s hypothesis however can be falsified by observing a single non-intelligent process creating these kinds of machines.

  35. DS – I’ll follow you over to the next thread.

  36. Re Mgarelick,
    Following Dave Scot’s excellent response to you, I would like to point out that the information in the DNA had to originate somehow. The options for how the information came to be in DNA are severely limited. Basically there are only two options: Chance or Design. Can you think of any other option? If you can, please do tell. Thus, by logic, if chance is overwhelmingly ruled out (as it has been repeatedly), that leaves design as the only viable option.

  37. bornagain77, the “information” in DNA – namely, the amino acid-codon correspondence – is a manifestation of perfectly normal interactions between RNA and amino acid. Interactions that fall way on this side of “chance” (whatever that is).

    FYI.

  38. Art2

    amino acid-codon correspondence – is a manifestation of perfectly normal interactions between RNA and amino acid

    Wow. Someone solved the problem of homochirality and I missed it. I need to get out more I guess.

    I hope you can provide a link to this great achievement. FYI

  39. DaveScot, the work I cryptically refer to has nothing to do with homochirality. The amino acid-codon correspondence is rather a different ball of wax.

    The best I can do by way of pointers is a citation that gives a good (if somewhat incomplete, as it’s already “old”) overview of the experiments and their implications. A cut-and-paste from the pdf:

    Annu. Rev. Biochem. 2005. 74:179–98
    doi: 10.1146/annurev.biochem.74.082803.133119
    Copyright c 2005 by Annual Reviews. All rights reserved
    First published online as a Review in Advance on February 18, 2005
    ORIGINS OF THE GENETIC CODE: The Escaped
    Triplet Theory
    Michael Yarus,1 J. Gregory Caporaso,2 and Rob Knight2
    1Department of Molecular Cellular and Developmental Biology, 2Department of
    Chemistry and Biochemistry, University of Colorado, Boulder, Colorado 80309-0347

    As always, enjoy.

  40. art

    The article you reference is interesting but hardly conclusive and far reaching in implication.

    The gist of it appears to be that amino acid binding sites were clipped out of a ribosome randomly assembled, amplified, and some statistical anomalies were identified in that triplets (2 codon and 6 anticodon) coding for certain amino acids appear more often in the binding site for that residue than pure chance would predict.

    They then use this sequence bias to suggest there is biochemical rhyme and reason behind the genetic code mapping rather than it merely being some arbitrary result of pure chance that it is what it is.

    I’d put forward that rhyme and reason is more characteristic of design than chance. What was your point again?

  41. Hi Dave:

    It seems that Art is raising a “new” version of the old Kenyon Biochemical Predestination thesis. (Of course, DK abandoned it after he saw the evidence and analysis by Bradley et al, and has subsequently become a design thinker.)

    That is not irrelevant, as Art would have to first answer to the evidence on distributions and interlocking, codes etc. Then, he would need to answer: how is it that, written into the chemistry of the various molecules of life, lo and behold, there is a natural regularity that is life-promoting? How do you think that nature as we observe it just happened to be in such an interestingly life-promoting state? [You hint at that in 40 . . .]

    Also, the issue of chirality cannot be dodged so easily: how do you get to a credible prebiotic soup with sufficiently complex ingredients, and the balance of homochiralities that we observe, relative to chance + necessity only on plausible planetary or comet environments etc? [TBO's TMLO has an extensive assessment of this that is a good starting point.]

    I think Art has some serious analysis and persuasive but fair on the merits summarising to do.

    GEM of TKI

    PS: let’s see what Prof Sachs has to say further.

  42. Thanks Dave Scot,

    Art you stated “Chance whatever that is”

    Dr. Dembski has set the universal probability bound at 1 in 10^150. This number represents far more quantum events than will ever happen in the universe. This bound is violated many times over in molecular biology. You cite a very hypothetical interpretation of the RNA world scenario, then scoff at me and expect your very superficial treatment of the evidence to carry more weight than what I can produce. Let me suffice it to say that chance, “whatever it is,” is not a defense of the molecules-to-man hypothesis!
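    For reference, the 10^150 figure is Dembski's universal probability bound, obtained by multiplying three cosmological upper bounds; the product is easy to verify:

```python
# Dembski's standard derivation of the universal probability bound,
# reproduced here as an arithmetic check:
particles = 10**80     # elementary particles in the observable universe
transitions = 10**45   # maximum state transitions per particle per second
seconds = 10**25       # generous upper bound on the universe's duration in seconds

max_events = particles * transitions * seconds
print(f"maximum possible elementary events ~ 10^{len(str(max_events)) - 1}")
# -> maximum possible elementary events ~ 10^150
```

Any specified event of probability below 1 in 10^150 thus exceeds, on this reasoning, every opportunity the observable universe affords.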

  43. There are several threads going on here, all of which deserve careful examination.

    One that interests me quite a bit is what seems to me an assumption made by “Bornagain”: that our epistemological situation is one of beginning with a “blank slate,” as it were, utterly devoid of not only prejudices but seemingly all concepts altogether, and that mere observation alone can determine the correctness of a world-view.

    Like Bornagain and GEM, I’m interested in how one can compare world-views, and I’m also interested in how one can justify the assertion that one world-view is more rational than another.

    However, I am skeptical of several assumptions that I think are at work in this discussion.

    For one thing, I think that world-views are very different from theories — and so criteria of theory-choice are different from criteria of world-view choice. (If anyone ‘chooses’ his or her world-view!)

    For another, I simply do not see the debate between ID and neo-Darwinism as a debate between competing world-views. This is why I’ve been trying to steer clear of “materialism vs theism,” except to offer skeptical challenges to the way in which this opposition is starkly posed. Likewise I’m frankly puzzled by the emphasis on the putative ethical or political consequences of neo-Darwinism or ID.

    Instead, I see this debate as one between competing “research programmes” in Lakatos’ sense. And so I’m puzzled and fascinated by the emotional energy with which ‘both’ sides (as if there were only two sides to any interesting question!) invest this problem.

    In a related note, I’m gearing up to teach Plato in a few weeks, so I’m thinking a lot about how ‘theoretical’ discourses (metaphysics, epistemology) take on practical significance in response to cultural crisis.

  44. Art you state:
    chance means different things to different people. I don’t want to deconvolute the term, just note that it is a pretty empty one IMO.

    Ahh, but chance means a very specific thing on this blog. Since chance is the only viable option to design, chance is rigorously defined and debated on this site. So if you want to say chance did something biologically on this blog, be prepared to rigorously defend your assertion, I assure you, you will be called on it most strenuously.

  45. the notion, often put forth by ID proponents, that the genetic code is an arbitrary mapping of amino acid upon DNA sequence

    Well I certainly don’t agree with that. An arbitrary mapping would be something I’d expect from a random chance mechanism. A mapping that exploits subtle electro-chemical properties to best advantage is something I’d expect in an engineered system.

  46. I’m concerned that, once the only viable options are starkly posed as “chance” (or “chance” + “necessity”) vs. “design,” too great a chokehold has been placed on our metaphysical imagination. And yet I’m also fascinated by the need to accept this constraint.

  47. Art you state that:

    As for homochirality, any system that involves surface-based catalysts is inevitably, inexorably going to move towards a homochiral state.

    Even if you did overcome the homochirality problem (you cited no study for me to refute), you would still face the insurmountable hurdle of probabilities, i.e.:

    It is commonly presumed in many grade school textbooks that life slowly arose in a primordial ocean of pre-biotic soup. Yet there is absolutely no hard evidence, such as chemical signatures in the geologic record, indicating that an ocean of this pre-biotic soup ever existed. The hard physical evidence scientists have discovered in the geologic record is stunning in its support of the anthropic hypothesis. The oldest sedimentary rocks on earth known to science originated underwater (and thus in relatively cool environs) 3.86 billion years ago. Those sediments, which are exposed at Isua in southwestern Greenland, also contain the earliest chemical evidence (fingerprint) of “photosynthetic” life [Nov. 7, 1996, Nature]. This evidence has been fought by naturalists, since it is totally contrary to their evolutionary theory. Yet Danish scientists were able to bring forth another line of geological evidence to substantiate the primary line of geological evidence for photo-synthetic life in the earth’s earliest known sedimentary rocks (“Indications of Oxygenic Photosynthesis,” Earth and Planetary Science Letters 6907 (2003)). Thus we have two lines of hard conclusive evidence for photo-synthetic life in the oldest known sedimentary rocks ever found by scientists on earth! The simplest photosynthetic bacterial life on earth is exceedingly complex, too complex to have arisen by chance even if the primeval oceans had been full of pre-biotic soup.
    The smallest cyano-bacterium known to science has hundreds of millions of individual atomic molecules (not counting water molecules), divided into nearly a thousand different species of atomic molecules, and a genome (DNA sequence) of 1.8 million bits, with over a million individual complex protein molecules which are divided into hundreds of different kinds of proteins. The simplest of all bacteria known to science which is able to live independently of a more complex host organism is Candidatus Pelagibacter ubique, with a DNA sequence of 1,308,759 bits. It also has over a million individual complex protein molecules which are divided into several hundred separate and distinct protein types. The complexity found in the simplest bacterium known to science makes the complexity of any man-made machine look like child’s play. As stated by Geneticist Michael Denton PhD, “Although the tiniest living things known to science, bacterial cells, are incredibly small (10^-12 grams), each is a veritable micro-miniaturized factory containing thousands of elegantly designed pieces of intricate molecular machinery, made up altogether of one hundred thousand million atoms, far more complicated than any machine built by man and absolutely without parallel in the non-living world”. So, as you can see, there simply is no simple life on earth as naturalism had presumed – even the well known single celled amoeba has the complexity of the city of London and reproduces that complexity in only 20 minutes. Here are a couple of quotes on the complexity found in any biological system, including simple bacteria, from two experts in biology:

    “Most biological reactions are chain reactions. To interact in a chain, these precisely built molecules must fit together most precisely, as the cog wheels of a Swiss watch do. But if this is so, then how can such a system develop at all? For if any one of the specific cog wheels in these chains is changed, then the whole system must simply become inoperative. Saying it can be improved by random mutation of one link, is like saying you could improve a Swiss watch by dropping it and thus bending one of its wheels or axis. To get a better watch, all the wheels must be changed simultaneously to make a good fit again.” Albert Szent-Györgyi von Nagyrapolt (Nobel prize for Medicine in 1937). “Drive in Living Matter to Perfect Itself,” Synthesis I, Vol. 1, No. 1, p. 18 (1977)

    “Each cell with genetic information, from bacteria to man, consists of artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction and a capacity not equaled in any of our most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours” Geneticist Michael Denton PhD.

    To give an idea of how impossible “simple” life is for naturalistic blind chance, Sir Fred Hoyle calculated the chance of obtaining the required set of enzymes for just one of any of the numerous types of “simple” bacterial life found on the early earth to be one in 10^40,000 (that is a one with 40 thousand zeros to the right). He compared the random emergence of the simplest bacterium on earth to the likelihood that “a tornado sweeping through a junkyard might assemble a Boeing 747 therein”. Sir Fred Hoyle also compared the chance of obtaining just one single functioning protein (out of the over one million protein molecules needed for that simplest cell), by chance combinations of amino acids, to a solar system packed full of blind men solving Rubik’s Cube simultaneously.
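    For readers who wonder where Hoyle’s figure comes from: as usually reported, his estimate assumed roughly 2,000 enzymes, each with an independent chance of about 1 in 10^20 of arising by random assembly, and multiplied the odds together. A minimal sketch (the 2,000 and 10^20 inputs are the assumptions behind Hoyle’s published figure, not derived here):

```python
# Sketch of Hoyle's combined-odds estimate (assumed inputs: ~2,000
# enzymes, each with an independent ~1-in-10^20 chance of random assembly).
num_enzymes = 2000
per_enzyme_exponent = 20  # each enzyme: 1 chance in 10**20

# Independent probabilities multiply, so the powers of ten add:
combined_exponent = num_enzymes * per_enzyme_exponent
print(f"combined odds: 1 in 10^{combined_exponent}")  # 1 in 10^40000
```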

    The simplest bacterium ever found on earth is constructed with over a million protein molecules. Protein molecules are made from one-dimensional sequences of the 20 different L-amino acids that can be used as building blocks for proteins. These one-dimensional sequences of amino acids fold into complex three-dimensional structures. Proteins vary in the length of their amino acid sequences. The average sequence of a typical protein is about 300 to 400 amino acids long, yet many crucial proteins are thousands of amino acids long. Proteins do their work on the atomic scale. Therefore, proteins must be able to identify and precisely manipulate and interrelate with the many differently, and specifically, shaped atoms, atomic molecules and protein molecules at the same time to accomplish the construction, metabolism, structure and maintenance of the cell. Proteins are required to have the precisely correct shape to accomplish their specific function or functions in the cell. More than a slight variation in the precisely correct shape of a protein molecule type will be fatal for the life of the cell. It turns out there is some tolerance for error in the sequence of L-amino acids that make up some of the less crucial protein molecule types. These errors can occur without adversely affecting the precisely required shape of the protein molecule type. This would seem to give some wiggle room to the naturalists, but as the following quote indicates this wiggle room is an illusion.

    “A common rebuttal is that not all amino acids in organic molecules must be strictly sequenced. One can destroy or randomly replace about 1 amino acid out of 100 without doing damage to the function or shape of the molecule. This is vital since life necessarily exists in a “sequence—disrupting” radiation environment. However, this is equivalent to writing a computer program that will tolerate the destruction of 1 statement of code out of 100. In other words, this error-handling ability of organic molecules constitutes a far more unlikely occurrence than strictly sequenced molecules.” Dr. Hugh Ross PhD.

    It is easily demonstrated mathematically that the entire universe does not even begin to come close to being old enough, nor large enough, to randomly generate just one small but precisely sequenced 100 amino acid protein (out of the over one million interdependent protein molecules of longer sequences that would be required to match the sequences of their particular protein types) in that very first living bacteria. If any combinations of the 20 L-amino acids that are used in constructing proteins are equally possible, then there are (20^100) = 1.3 x 10^130 possible amino acid sequences for proteins composed of 100 amino acids. This impossibility, of finding even one “required” specifically sequenced protein, would still be true even if amino acids had a tendency to chemically bond with each other, which they don’t despite over fifty years of experimentation trying to get amino acids to bond naturally (the odds of a single 100 amino acid protein overcoming the impossibilities of chemical bonding and forming spontaneously have been calculated at less than 1 in 10^125 (Meyer, Evidence for Design, pg. 75)). The staggering improbability of the universe ever generating a “required” specifically sequenced 100 amino acid protein by chance would still be true even if we allowed that the entire universe, all 10^80 sub-atomic particles of it, were nothing but groups of 100 freely bonding amino acids, and we then tried a trillion unique combinations per second for all those 100 amino acid groups for 100 billion years! Even after 100 billion years of trying a trillion unique combinations per second, we still would have searched less than one part in 10^19 of the entire total combinations possible for a 100 amino acid protein! Even a child knows you cannot put any piece of a puzzle anywhere in a puzzle. You must have the required piece in the required place!

    The simplest forms of life ever found on earth are exceedingly far more complicated jigsaw puzzles than any of the puzzles man has ever made. Yet to believe a naturalistic theory we would have to believe that this tremendously complex puzzle of millions of precisely shaped, and placed, protein molecules “just happened” to overcome the impossible hurdles of chemical bonding and probability and put itself together into the sheer wonder of immense complexity that we find in the cell.
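    The arithmetic behind those figures is easy to reproduce. A minimal sketch (the 20^100 sequence space, the 10^80 particle count, the trillion-trials-per-second rate, and the 100-billion-year window are the numbers quoted in the paragraph above):

```python
# Reproduce the sequence-space count and the fraction of it searched
# under the generous scenario described above.
sequence_space = 20 ** 100   # all 100-residue chains of 20 amino acids
# sequence_space is about 1.27 x 10^130 (rounded to 1.3 x 10^130 above)

seconds = 100 * 10**9 * 31_556_952  # 100 billion years, in seconds
groups = 10 ** 80                   # every sub-atomic particle as one 100-acid group
rate = 10 ** 12                     # a trillion unique trials per second per group

trials = groups * rate * seconds
fraction = trials / sequence_space  # fraction of the space ever sampled
print(f"fraction of sequence space searched: {fraction:.1e}")  # about 2.5e-20
```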

    Instead of just looking at the probability of a single protein molecule occurring (a solar system full of blind men solving the Rubik’s Cube simultaneously), let’s also look at the complexity that goes into crafting the shape of just one protein molecule. Complexity will give us a better indication of whether a protein molecule is, indeed, the handiwork of an infinitely powerful Creator.
    In the year 2000 IBM announced the development of a new super-computer, called Blue Gene, that was 500 times faster than any supercomputer built up until that time. It took 4-5 years to build. Blue Gene stands about six feet high and occupies a floor space of 40 feet by 40 feet. It cost $100 million to build. It was built specifically to better enable computer simulations of molecular biology. The computer performs one quadrillion (one million billion) computations per second. Despite its speed, it is estimated it will take one entire year for it to analyze the mechanism by which JUST ONE “simple” protein will fold onto itself from its one-dimensional starting point to its final three-dimensional shape. In real life, the protein folds into its final shape in a fraction of a second! The computer would have to operate at least 33 million times faster to accomplish what the protein does in a fraction of a second. That is the complexity found for JUST ONE “simple” protein. Based on the total number of known life forms on earth, it is estimated that there are some 50 billion different types of unique proteins today. It is very possible the domain of the protein world may hold many trillions more completely distinct and different types of proteins. The simplest bacterium known to man has millions of protein molecules divided into, at bare minimum, several hundred distinct protein types. These millions of precisely shaped protein molecules are interwoven into the final structure of the bacterium. Often, specific proteins within a distinct protein type will have very specific modifications to a few of the amino acids in their sequence, in order for them to more precisely accomplish their specific function or functions in the overall parent structure of their protein type. To think naturalists can account for such complexity by saying it “happened by chance” should be the very definition of “absurd” we find in dictionaries. 
Naturalists have absolutely no answers for how this complexity arose in the first living cell unless, of course, you can take their imagination as hard evidence. Yet the “real” evidence scientists have found overwhelmingly supports the anthropic hypothesis once again. It should be remembered that naturalism postulated a very simple “first cell”. Yet the simplest cell scientists have been able to find, or to even realistically theorize about, is vastly more complex than any machine man has ever made through concerted effort !! What makes matters much worse for naturalists is that naturalists try to assert that proteins of one function can easily mutate into other proteins of completely different functions by pure chance. Yet once again the empirical evidence we now have betrays the naturalists. Individual proteins have been experimentally proven to quickly lose their function in the cell with random point mutations. What are the odds of any functional protein in a cell mutating into any other functional folded protein, of very questionable value, by pure chance?

    “From actual experimental results it can easily be calculated that the odds of finding a folded protein (by random point mutations to an existing protein) are about 1 in 10 to the 65 power (Sauer, MIT). To put this fantastic number in perspective imagine that someone hid a grain of sand, marked with a tiny ‘X’, somewhere in the Sahara Desert. After wandering blindfolded for several years in the desert you reach down, pick up a grain of sand, take off your blindfold, and find it has a tiny ‘X’. Suspicious, you give the grain of sand to someone to hide again, again you wander blindfolded into the desert, bend down, and the grain you pick up again has an ‘X’. A third time you repeat this action and a third time you find the marked grain. The odds of finding that marked grain of sand in the Sahara Desert three times in a row are about the same as finding one new functional protein structure (from chance transmutation of an existing functional protein structure). Rather than accept the result as a lucky coincidence, most people would be certain that the game had been fixed.” Michael J. Behe, The Weekly Standard, June 7, 1999, Experimental Support for Regarding Functional Classes of Proteins to be Highly Isolated from Each Other
    “Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially… These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its ‘new duties’. It is at this point it will be destroyed – along with the organism carrying it.” Maxim D. Frank-Kamenetski, Unraveling DNA, 1997, p. 72. (Professor at Brown U. Center for Advanced Biotechnology and Biomedical Engineering)
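    As a rough sanity check on Behe’s desert analogy above: if one assumes on the order of 5 x 10^21 grains of sand in the Sahara (an illustrative figure of our own; Behe’s article does not state the count he used), three independent blind finds do come out near his 1-in-10^65:

```python
# Rough check of the Sahara analogy. The grain count is an assumed,
# illustrative figure; Behe's article does not state the number he used.
grains = 5 * 10 ** 21            # assumed grains of sand in the Sahara
p_one_find = 1 / grains          # blindly picking the one marked grain
p_three_finds = p_one_find ** 3  # three independent finds in a row
print(f"odds of three finds in a row: {p_three_finds:.1e}")  # on the order of 10^-65
```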

  48. All:

    Several thematic, selective, summary points (insomnia power can do only so much across several threads . . .):

    1] Art, 43: Kenyon (and Thaxton), quite frankly, had and hasn’t an inkling of the work I point to . . . As for homochirality, any system that involves surface-based catalysts is inevitably, inexorably going to move towards a homochiral state. This issue is a non-starter as far as the OOL is concerned.

    On the contrary, it is you who have to account – without burden-of-proof shifting or the rhetoric of dismissal – for getting to a plausible scenario for creating the monomers for proteins and nucleic acids etc in a reasonable pre-biotic cosmos. If on Earth, you have to account for a credible atmosphere [not the circa 1953 conveniently reducing one for the spark-in-gas expts etc], and then the resulting chaining has to be faced in light of the thermodynamics that supports racemic, not chiral, soups. If in space, you have the further burden of getting to earth through a credible panspermia mechanism and arriving here in time for the first fossils to be present as soon after formation of a crust as we see just above, on the usual timelines.

    In short, on the evidence in hand, planetary etc systems don’t spontaneously form the antecedents to the molecules of life, much less the now partly elucidated information systems including their algorithms and codes. [Onlookers, cf my always linked for more.]

    2] “chance” means different things to different people

    Let’s use a simple example to specify a point of reference: heavy objects tend to fall under the natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes! [From Section A, my always linked.]
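    The die illustration is easy to make concrete. A minimal simulation sketch (seeded for reproducibility): under “natural regularity plus chance” a fair die shows each face about one time in six, while an agent can exploit the very same regularities and chance, e.g. by loading the die, to bias the outcome toward a purpose.

```python
import random

random.seed(42)  # reproducible runs

# Chance: a fair die settling under gravity -- each face ~1/6 of the time.
fair_rolls = [random.randint(1, 6) for _ in range(60_000)]
fair_sixes = fair_rolls.count(6) / len(fair_rolls)

# Agency exploiting regularity and chance: a die loaded toward 6.
loaded_faces = [1, 2, 3, 4, 5, 6, 6, 6]  # face 6 weighted three-fold
loaded_rolls = [random.choice(loaded_faces) for _ in range(60_000)]
loaded_sixes = loaded_rolls.count(6) / len(loaded_rolls)

print(f"fair die P(6) ~ {fair_sixes:.3f}; loaded die P(6) ~ {loaded_sixes:.3f}")
```

    The same chance process runs in both cases; the agent’s purpose shows up only in the deliberately shifted distribution.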

    The relevant point is that on RM, chance is the prime claimed innovator, as NS only selects from what has been innovated. But Behe’s observed edge of evolution points to the sharp limitation on chance as innovator. Not surprising, in light of, say, longstanding findings of statistical mechanics and information theory on the creation/searching out of specified functional complexity by chance-dominated searches across configuration spaces.

    3] CS, 4: I think that world-views are very different from theories — and so criteria of theory-choice are different from criteria of world-view choice.

    I am not so sure of that, as many theories are quite ideological, and at research programme level, deeply embed metaphysical commitments. Further to this, we do have significant power of choice on worldviews as we do on reasoning in general – or else we are back at the point that we have a self-referential frame of thought that discredits all thought.

    4] I’ve been trying to steer clear of “materialism vs theism,”

    Problem is, that through games like imposing the unwarranted concept of methodological naturalism, and even hints at determinism and chance driving the life of the mind, materialism and its fellow travellers are being injected by the power brokers in today’s academy.

    5] I see this debate as one between competing “research programmes” in Lakatos’ sense.

    So do I and I see that therein lieth much of the ideological matter and the meta level worldviews and associated epistemology questions. Sometimes I think Feyerabend was right to say in effect that across time and schools, in aggregate, anything goes – as a matter of fact. So we need to sort out the chaos.

    It might help you to see that I cut my intellectual eye-teeth in the 1970′s in a Marxism-dominated intellectual climate as an objector to the ideological hothouse mentality. All the current NCSE, ACLU etc tactics are very very familiar to me – and I predict a very similar outcome probably over the next decade.

    In short, as the Peloponnesian war showed, “democracies” subjected to Plato’s Cave power games and manipulation are on a road to suicide. But that can be very costly to those who try to say “stop the madness” before it is evident to all from its consequences – cf here the literature on group-think. And, even decades after the fact there are still many who are trying to make excuses and to repackage the old empirically discredited notions and agendas. [E.g. too much of environmentalism is Watermelon: green outside, red inside.]

    6] CS, 47: I’m concerned that, once the only viable options are starkly posed as “chance” (or “chance” + “necessity”) vs. “design,” too great a chokehold has been placed on our metaphysical imagination.

    At core level, ever since at least Plato in The Laws, X, ~ 2,400 ya, we do have that interacting trichotomy, as I noted on above. Can you identify a fourth generic alternative that does not reduce to one or more of the three causal forces? Give us a concrete case in point, kindly.

    However, in concrete situations, the general theme is fleshed out as in e.g. my appendix 1 to my always linked, point 6 on the microjets in a vat. Similarly, you can see the debate over Caputo and the Flagellum in that ever so prolonged thread on Padian etc.

    So, please do not let summary remarks distract from that context of examining what is meant by necessity, chance and agency in specific settings. And I would love to see the fourth alternative, with an example that is concrete.

    GEM of TKI

  49. PS: The points for a prebiotic earth also extend to hydrothermal vents etc, as I discuss in my always linked.

    You have one tough row to hoe to connect information to chemistry. The work you reference is similar to Miller’s amino acid results of 1953. It hints, at most, at the possibility of generating a negligible and very questionable few “words” of CSI, if that. You do this while ignoring the sheer wall of complexity that has to be scaled to get to a functional life form whose information content is measured, at bare minimum, in kbits. Plus you ignore the evidence for the interrelated whole of the genome! When you quote such suggestive work and then say you have refuted ID, it reveals your bias toward the naturalistic/materialistic philosophy, especially considering the overwhelming weight of evidence from the other sciences that conforms to the anthropic hypothesis.
    i.e., The human genome, according to Bill Gates the founder of Microsoft, far, far surpasses in complexity any computer program ever written by man. The data compression (multiple meanings) of some stretches of human DNA is estimated to be up to 12 codes thick (Trifonov, 1989)! No line of computer code ever written by man approaches that level of data compression (poly-functional complexity). Further evidence for the inherent complexity of the DNA is found in another study. In June 2007, an international team of scientists, named ENCODE, published a study that indicates the genome contains very little unused sequence and, in fact, is a complex, interwoven network. This “complex interwoven network” throughout the entire DNA code makes the human genome severely poly-constrained to random mutations (Sanford; Genetic Entropy, 2005). This means the DNA code is now much more severely limited in its chance of ever having a hypothetical beneficial mutation since almost the entire DNA code is now proven to be intimately connected to many other parts of the DNA code. Thus even though a random mutation to DNA may be able to change one part of an organism for the better, it is now proven much more likely to harm many other parts of the organism that depend on that one particular part being as it originally was. This “interwoven network” finding is extremely bad news for naturalists!

    Naturalists truly believe you can get such staggering complexity of information in the DNA from some process based on blind chance. They cannot seem to fathom that any variation to a basic component in a species is going to require precise modifications to the entire range of interconnected components related to that basic component. NO natural law based on blind chance, would have the wisdom to implement the multitude of precise modifications on the molecular level in order to effect a positive change from one species to another. Only a “vastly superior intelligence” would have the wisdom to know exactly which amino acids in which proteins, which letters in the DNA code and exactly which repositioning of the 25 million nucleosomes (DNA spools) etc .. etc .. would need to be precisely modified to effect a positive change in a species. For men to imagine blind chance has the inherently vast wisdom to create such stunning interrelated complexity is even more foolish than some pagan culture worshipping a stone statue as their god and creator. Even if evolution of man were true, then only God could have made man through evolution. For only He would have the vast wisdom to master the complexity that would be required to accomplish such a thing. Anyone who fails to see this fails to appreciate the truly astonishing interwoven complexity of life at the molecular level. Even though God could have created us through “directed evolution”, the fossil record (Lucy fossil proven not ancestral in 2007) and other recent “hard” evidence (Neanderthal mtDNA sequenced and proven “out of human range”) indicates God chose to create man as a completely unique and distinct species. But, alas, our naturalistic friend is as blind and deaf as the blind chance he relies on to produce such changes and cannot bring himself to face this truth. 
Most naturalists I’ve met, by and large, are undaunted when faced with such overwhelming evidence for Divine Intelligence and are convinced they have conclusive proof for naturalistic evolution somewhere. They will tell us exactly what it is when they find it. The trouble with this line of thinking for naturalists is they will always take small pieces of suggestive evidence and focus on them, to the exclusion of the overriding vast body of conclusive evidence that has already been established. They fail to realize that they are viewing the evidence from the wrong overall perspective to begin with. After listening to their point of view describing (with really big words) some imagined evolutionary pathway on the molecular level, sometimes I think they might just be right. Then when I examine their evidence in detail and find it wanting, I realize they are just good story tellers with small pieces of “suggestive” evidence, ignoring the overwhelming weight of “hard” evidence that doesn’t fit their naturalistic worldview. Instead of them thinking, “WOW, look how God accomplishes life on the molecular level,” they think, “WOW, look what dumb and blind chance accomplished on the molecular level.” Naturalists have an all too human tendency to over-emphasize and sometimes even distort the small pieces of suggestive evidence that are taken out of context from the overwhelming body of “conclusive” evidence. This is done just to support their own preconceived philosophical bias of naturalism. This is clearly the practice of very bad science, since they have already decided what the evidence must say prior to their investigation.
    I could help them find the conclusive proof for evolution they are so desperately looking for if they would just listen to me. For I know exactly where this conclusive proof for evolution is; it is right there in their own imagination. What really amazes me is that most naturalists are people trained in exacting standards of science. Yet, they are accepting such piddling and weak suggestive evidence in the face of such overwhelming conclusive evidence to begin with. This blatant deception, that dumb, blind chance has the inherent wisdom to produce staggering complexity, is surprisingly powerful in its ability to deceive! That it should ensnare so many supposedly rational men and women is remarkable. Then again, I have also been easily misled by blatant deception many times in my life, so maybe it is not that astonishing after all. Maybe it is just a painful and all too human weakness we all share that allows us to be so easily deceived.

  51. Art:

    1] Labelling issues as “misconceptions” does not answer them on the merits. In particular, you still have to get TO the first cell with chiral molecules fulfilling a geometrically linked role in a code-based information system, with empirical anchoring.

    2] Meyer is saying in effect that there is no universally forced, crystal-like bonded sequence in nucleotides in DNA chains, or of amino acid residues in proteins; otherwise the first would not work well as a storage medium, and the latter would lose its flexibility as the workhorse molecule in the cell. [Hence the comparison to inked glyphs on paper.]

    3] Where there is a point is that the amino acid residues are not evenly distributed in proteins, and that may also be so for DNA strands. That is, we do not see 5% each for 20 amino acids in every case, and it is possible to deviate from 25% each for G, C, A, T too.

    4] And, if the laws of chemistry have “life” written into them, that simply opens up another level of design inference.

    –> In short for evolutionary materialism to properly prevail, you need to have an empirically credible way to life by chance plus natural regularities that are not themselves also “fishy.”

    –> Let’s see you lay it out in summary and link it. [Onlookers cf my always linked, esp. Sections A and B, for a 101 level intro.]

    GEM of TKI

    PS: I have no interest in joining ARN’s debates.

  52. art

    Yarus’ work indicates that chemistry in fact does underlie the genetic code, and thus “the arrangement of nucleotide bases”. This is my point in citing Yarus et al., to disabuse readers here of the notion that the genetic code is in some way disconnected from chemistry (as Meyer does).

    Thus the particular mapping of codons to amino acids (to be more precise).

    At least for some codon/amino acid mappings there’s some sort of chemical affinity. This seems to be a reasonable conclusion if there was no bias introduced by the experimental equipment or procedures.

    Interpretation is still largely a matter of confirmation bias. Engineered systems exploit properties of materials to best effect. If you see it as an engineered system this is an expected finding.

    The larger problems for OOL remain untouched. No known natural environment can produce the purified and concentrated homochiral reagent mixture used in the experiment. A fair claim is that Yarus’ experiment is further evidence that intelligent agency is required to get these kinds of biochemical reactions to happen.

    “An honest man, armed with all the knowledge available to us now, could only state that in some sense, the origin of life appears at the moment to be almost a miracle, so many are the conditions which would have had to have been satisfied to get it going.” (Crick, Francis H.C. [Co-discoverer of the structure of DNA, Nobel laureate 1962, Professor at the Salk Institute, USA], “Life Itself: Its Origin and Nature,” Simon & Schuster: New York NY, 1981, p.88).

    This is only more true now than it was when Crick first penned it.

    I realize this goes beyond your point that there is experimental evidence of chemical affinity between some amino acids and the codons they map to in the universal genetic code. I write it because I don’t want readers misled into thinking this is any significant breakthrough for abiogenesis hypotheses. It’s actually more like clutching at straws.

  53. mg

    Are ID and methodological naturalism inherently incompatible?

    Only if you consider intelligent agency to be an unnatural phenomenon.

    This should be a source of cognitive dissonance for chance & necessity pundits. They purport that human intelligent agents self-assembled over the course of a few billion years from inanimate elements and in the same breath say it’s unscientific to propose any precursor intelligence was involved in the process. If intelligence can arise “naturally” then intelligent agency is by definition a force of nature that must be considered as a possible factor.

    Moreover the Copernican principle of mediocrity implies we should expect intelligent agents to exist elsewhere in time and space (no special creation). It’s a good thing for a lot of Darwinists that I’m not the president of the materialist club, because I’d revoke a lot of their memberships for spitting in the eye of the Copernican paradigm. What should we call that: post-enlightenment or dark-age thinking?

  54. A [Yodaish?] note:

    A look at the ENV discussion on abiogenesis here, worth taking, it is.

    Material, indeed, it is . . .

    GEM of TKI

  55. art

    Comments sometimes get eaten for reasons unknown.

    I have a question regarding the ARN link you gave. Has anyone duplicated Sidney Fox’s thermal protein experiment and where is it published?

    This seems to be the stuff of urban legend but I didn’t spend long searching for more.

  56. Fox’s studies were replicated by many people.

    Soap bubbles were replicated by other people too. Fox’s protocells are doublespeak. They are nothing like real cells.

    This is like baking a rectangular piece of clay and suggesting it’s like a computer chip by alluding to some superficial similarities.

    Fox did create something and publish it in peer-reviewed journals. He was rumored to be on the short list for the Nobel prize.

    However ID proponents Thaxton, Bradley, and Olsen gave Fox’s theories a total thrashing.

  57. Art, Dave and Sal:

    The drubbing is online, in the three chapters of TBO’s TMLO, esp. Ch 9, though Ch 10, available only on paper, is also relevant to the sort of scenarios that are so often touted but do not stand up to even first-level serious scrutiny from a less-than-credulous perspective.

    I excerpt:

    Sidney Fox31 has pioneered the thermal synthesis of polypeptides, naming the products of his synthesis proteinoids. Beginning with either an aqueous solution of amino acids or dry ones, he heats his material at 200°C for 6-7 hours.

    [NOTE: Fox has modified this picture in recent years [i.e. to 1984] by developing “low temperature” syntheses, i.e., 90–120°C. See S. Fox, 1976. J Mol Evol 8, 301; and D. Rohlfing, 1976. Science 193, 68].

    All initial solvent water, plus water produced during polymerization, is effectively eliminated through vaporization. This elimination of the water makes possible a small but significant yield of polypeptides, some with as many as 200 amino acid units. Heat is introduced into the system by conduction and convection and leaves in the form of steam. The reason for the success of the polypeptide formation is readily seen by examining again equations 8-15 and 8-16. Note that increasing the temperature would increase the product yield through increasing the value of exp(−ΔG/RT) [Cf Chs 7-8 on this; my always linked App 1 gives a short version]. But more importantly, eliminating the water makes the reaction irreversible, giving an enormous increase in yield over that observed under equilibrium conditions by the application of the law of mass action.

    Thermal syntheses of polypeptides fail, however, for at least four reasons. First, studies using nuclear magnetic resonance (NMR) have shown that thermal proteinoids “have scarce resemblance to natural peptidic material because beta, gamma, and epsilon peptide bonds largely predominate over alpha-peptide bonds.”32

    [NOTE: This quotation refers to peptide links involving the beta-carboxyl group of aspartic acid, the gamma-carboxyl group of glutamic acid, and the epsilon-amino group of lysine which are never found in natural proteins. Natural proteins use alpha-peptide bonds exclusively].

    Second, thermal proteinoids are composed of approximately equal numbers of L- and D-amino acids in contrast to viable proteins with all L-amino acids. Third, there is no evidence that proteinoids differ significantly from a random sequence of amino acids, with little or no catalytic activity. [It is noted, however, that Fox has long disputed this.] Miller [of Miller-Urey!] and Orgel have made the following observation with regard to Fox’s claim that proteinoids resemble proteins:

    The degree of nonrandomness in thermal polypeptides so far demonstrated is minute compared to nonrandomness of proteins. It is deceptive, then, to suggest that thermal polypeptides are similar to proteins in their nonrandomness.33

    Fourth, the geological conditions indicated are too unreasonable to be taken seriously. As Folsome has commented, “The central question [concerning Fox's proteinoids] is where did all those pure, dry, concentrated, and optically active amino acids come from in the real, abiological world?”34 . . .

    Maybe Art can enlighten us further on these issues and the recent developments he links, etc?

    How do they overcome the four issues identified, and other challenges to OOL scenarios and models?

    GEM of TKI

  58. Kairofocus, if you had read the ARN post, you would know that most of the points you mention are irrelevant, red herrings of sorts. The exception is the claim that thermal proteinoids do not possess catalytic activities. In this, TBO are wrong. Plain and simple. The discussions on the ARN board, as well as two of the three recent references I posted, are quite explicit in this regard.

    The bit about catalytic activity is important and interesting. I have no idea why TBO would make such an obviously incorrect claim, but the fact that relatively low-information collections of polymers would possess catalytic activities of various sorts (the list is interesting, even provocative) tells us something about ID theorists’ claims about CSI, etc. Indeed, one might see Fox’s work as a presaging of sorts of the experiments that have shown that functional proteins are actually low-information entities.

    Here are a couple of ISCID threads that elaborate on the latter point. The first is long and detailed, but it pretty nicely lays to rest this mistaken ID tenet (you must read all 6 pages, otherwise you will not get the points). The second is my own twist on a theme that pops up from time to time. Enjoy.

    http://www.iscid.org/ubb/ultim.....000145;p=1

    http://www.iscid.org/ubb/ultim.....035#000000

  59. art

    Thanks. I found what I was looking for with Google Scholar shortly after I wrote the urban legend question.

    The ARN link seemed to be saying that protocells had been produced by Fox, and that appears to be urban legend. Microspheres were produced that resembled the exterior of certain cells, but that’s as far as it went. Spheres are a very common shape in nature, especially when boiling liquids are involved. The polypeptides formed were racemic. Homochirality is perhaps the biggest single problem for OOL. As far as I know, the only way anyone found to produce homochiral monomers is by forming them in a strong laser beam, where the polarization of the coherent light aligns the reactants. Natural lasers of high intensity and stability are known to occur (rarely) in some young solar systems, so it isn’t out of the question as a source of homochiral monomers. Francis Crick’s opinion is still spot on today:

    An honest man, armed with all the knowledge available to us now, could only state that in some sense, the origin of life appears at the moment to be almost a miracle, so many are the conditions which would have had to have been satisfied to get it going.

  60. Homochirality is perhaps the biggest single problem for OOL.

    Agreed. As far as I know the latest attempt has been studying circularly polarised ultraviolet light in space and even that was only capable of producing an excess of 2.6% left-handed amino acids.

    Perhaps there have been more studies on that issue, and that’s why they’re saying stuff like this:

    http://www.sciencedaily.com/re.....093819.htm

    The researchers calculate the odds of life starting on Earth rather than inside a comet at one trillion trillion (10 to the power of 24) to one against.

    Professor Wickramasinghe said: “The findings of the comet missions, which surprised many, strengthen the argument for panspermia. We now have a mechanism for how it could have happened. All the necessary elements – clay, organic molecules and water – are there. The longer time scale and the greater mass of comets make it overwhelmingly more likely that life began in space than on earth.”

    Then there is “Spontaneous emergence of homochirality in noncatalytic systems”, November 2004, Proceedings of the National Academy of Sciences:

    http://www.pnas.org/cgi/conten…..1/48/16733

    Their theoretical model describes a dynamic system of amino acids joining and disjoining with a free flow of energy and ingredients. In the best-case scenario, provided that all the ingredients are present in the right conditions, this system might produce about 70% of one hand in a few centuries (a value that stabilizes and does not rise higher). Even this does not form polypeptide chains, only an excess of one hand in the amino acids. They say that the formation of the first prebiotic peptides is not a trivial problem, as free amino acids are poorly reactive (peptide bonds tend not to form in water). To solve this part of the problem, they imagine alternate wetting and drying periods and the presence of N-carboxyanhydrides to activate the amino acids. The tests required fairly high concentrations of ingredients, and specific temperature and acidity. They couldn’t get any single-handed chains to result, but still feel their model is better than the usual direct autocatalytic reaction models, which they view as “dubious in a prebiotic environment.”

    Old thread on the subject:

    http://www.uncommondescent.com.....the-horse/

    Comments sometimes get eaten for reasons unknown.

    To add to that, I’d suggest to everyone that if you’re writing a long response that you copy it somewhere safe before hitting the “submit comment” button. It’s maddening to write an article-length response just to have it zapped.

  61. All:

    I have hatches to batten down, so to speak [trees beginning to moan and wave a bit more than usual at evening time now; but I doubt we'll see more than 50 mph here in M'rat]; and the projections for my fam back in Jamaica are not so good at all — Cat 4 on or about Sunday is no fun. Maybe, DV, this one will do an Ivan jump and miss . . . time for a bit of kneeology [as one of the Weather experts comments on . . .]?

    So, pardon my being a bit summary, esp with Art’s comment.

    On that, I think it a bit amusing that he could zero in on one point, while missing the major pattern of the gaps between what SF did as reported by the mid-’80s and the realities of proteins in cell-based life; as well as the extensions thereto in proteinoid microspheres, which TBO specifically discuss in ch 10, BTW; cf. Table 10-1.

    If he glances back, he will also see that I was giving a quick response to Sal’s remark in 60, so that was no red herring at all – note the non-proteinaceous bonding, the racemic forms, and much more in the excerpt, and the ref to Ch 10, sadly not online.

    On the stress on catalytic activity etc in his proteinoids, let’s just say that enzymes have extreme functional specificity based on highly specific coded chaining, so “catalytic ability” in the abstract is not exactly deeply relevant to the issue of forming life as we observe it – science fiction alternative possible worlds notwithstanding.

    (Any credible detailed pathway from microspheres to cells thence life as we know it; esp where codes, data storage, information and functionally specified complexity come from? Absent these we are just looking at so many imaginative just so stories . . . )

    On the various other protocell claims, the TBO remarks are in the Ch 10, which is unfortunately not online, and I have little time to type out from text. What I can first excerpt briefly and remark on is Nakashima’s:

    “(p)roteinoids or proteinoid microspheres have many activities. Esterolysis, decarboxylation, amination, deamination, and oxidoreduction are catabolic enzyme activities. The formation of ATP, peptides or oligonucleotides is synthetic enzyme activities.” [NB, cf TMLO table 10-1 on this; nothing truly new there beyond what was reckoned with and addressed by TBO circa 1984, IMHCO, tellingly.]

    Proteinoids, as TBO pointed out very relevantly [as excerpted above], are simply not proteins, starting with bonding patterns across monomers in the chains.

    TBO also observe, and I am excerpting desperately:

    . . . microspheres are simply proteinoids attracted together (by physical forces) into a somewhat ordered spherical structure. Here, too, the spherical structure is due to the attraction of the hydrophilic parts of the proteinoids to water and of the hydrophobic parts to each other . . . catalytic activity of the microspheres is not due to any special structure the microsphere possesses . . . much of the rate increase seen in proteinoids is due to the amino acids themselves, not the proteinoids . . .

    In short, neither the agglomeration of amino acids into proteinoids nor the further agglomeration into microspheres seems to add materially to the existing properties of the amino acids [and where do these credibly come from in an OOL scenario, relative to the geological, atmospheric and astronomical situations?], save of course for the effects of effectively random, and predominantly non-protein-like, chaining triggered by heat.

    As to the wider pattern of activities listed in the first ARN link, Art glided over the algorithmic, stored-data-controlled framework of actual cell-based life. So he changed the subject from the real world to a science-fiction world. It is the real world that we need to explain.

    Gotta go now, stole a few moments after checking Accuweather and WU . . .

    Back in touch maybe on the morrow, DV, depending.

    GEM of TKI

  62. PS The excerpt fr TMLO Ch 10 was from p 174, my paperback edn.

  63. Patrick

    The researchers calculate the odds of life starting on Earth rather than inside a comet at one trillion trillion (10 to the power of 24) to one against.

    That’s better than the odds of falciparum coming up with any structure requiring more than 3 interdependent mutations in a trillion trillion chances.

    There may be hope that evolutionary biologists will figure out that statistical probability isn’t something you get to ignore when it doesn’t agree with your dearly held convictions.

  64. kairosfocus, three things.

    1. Sorry about messing up your name above. For some reason, I had “Karo Syrup” in my mind. Go figure.

    2. Good luck with the upcoming weather.

    3. Please take the time to read the various things I have pointed to. It is not appropriate to assume that I am thinking of proteinoids as linear predecessors of proteins as we know them, or of Fox’s living protocells as linear predecessors of life as we know it. This is putting the cart way before the horse, and I am not trying to do this. If you can understand this, then you will better see the gist of my essay.

    As far as the statement

    “enzymes have extreme functional specificity based on highly specific coded chaining, so “catalytic ability” in the abstract is not exactly deeply relevant to the issue of forming life as we observe it”

    this is just plain wrong. The 6-page discussion on ISCID that I pointed to spells out the many ways that this is so.

    The same discussion provides a wet-bench, experimentally-supported contrast to the assertion (unsupported by direct experimentation) regarding “the algorithmic, stored-data controlled framework of life in actual cell-based life.”

    Finally, I’ll add one more ISCID discussion that brings into play some concepts that confound the clean, engineering POV even more. As always, enjoy (and again, stay out of harm’s way as much as is possible).

    http://www.iscid.org/ubb/ultim.....551#000000

  65. Hang in there KF.

  66. All,

    First, thanks on expressed concerns. On weather situation M’rat: Seems Dean is cutting into the region just N of Barbados, so so far just wind here, maybe up to 40 – 50 in gusts. Rains have just now begun, with a bang. (Now my concern is that it may have done a number on the farmers in Dominica and St Lucia. But moreso, projections put it very near Jamaica at Cat 4 Sunday. Let’s hope and pray it does an Ivan if so – ducks away from Jamaica by a mysterious swerve. And onward let’s hope it does not do a Katrina etc.)

    On a few notes:

    1] DS, 67: There may be hope that evolutionary biologists will figure out that statistical probability isn’t something you get to ignore when it doesn’t agree with your dearly held convictions

    True, true.

    But let us also note that the issue is not really whether life started on earth or in a comet by the mechanisms discussed, but the relative likelihood of same! [In short, it is EVEN MORE IMPROBABLE, by 10^24:1 against, that life started on a comet than on earth. That life started on earth based on chance + necessity based abiogenesis scenarios on the geology at work and the atmosphere that is plausible, is improbable in the extreme.]

    2] Art, 68: It is not appropriate to assume that I am thinking of proteinoids as linear predecessors of proteins as we know them, or of Fox’s living protocells as linear predecessors of life as we know it

    I am pointing out, by excerpting TMLO and remarking, that the comparisons between proteinoids and proteins, and between properties of microspheres of proteinoids and living cells were arguably greatly exaggerated, circa 1984. I have but little reason to infer further that today’s situation has done much to revise that judgement.

    Indeed, being a little less rushed just now, here is a bit more from p 174, on catalysis:

    Fox et al state that “microparticles possess in large degree the rate enhancing activities of the polymers of which they are composed” 47 . . . If the protein by itself has a catalytic property, it seems very logical that the protein would retain that property when put into a micelle. The catalytic property is not due to any special property the microsphere possesses. The increase in reaction rate observed in microspheres is very small by comparison to the increase seen in true enzymes (where the rate increase factors are in the billions – 10^9). Furthermore, much of the rate increase is due to the amino acids themselves, not the proteinoid . . .

    What empirical data do you have, Art, that overthrows that observation?

    3] Re my: “enzymes have extreme functional specificity based on highly specific coded chaining, so “catalytic ability” in the abstract is not exactly deeply relevant to the issue of forming life as we observe it “, this is just plain wrong

    I hear your assertion. Am I to understand that you hold that, in effect, any fairly easily random-walk-accessed protein or proteinoid chain will function more or less “just as well” as the specific cluster of DNA-controlled enzymes we see in cells? (I.e., bio-functionality is a matter of easily accessed, closely spaced stepping stones in the config spaces relevant to the matter at hand? That is how I seem to see your arguing at ARN, e.g. in attempted rebuttal to Meyer, Axe et al.)

    At least, that is what your remarks I now excerpt from March 2000 seem to suggest:

    Recall, now, one of the failed predictions of ID – that the information content of functional macromolecules would be high, enough so as to support some sort of design inference. As a matter of fact, direct experimental measurements as well as the success of bioinformatics tools for identifying function in newly-sequenced genes tell us that the informational content of proteins is inherently low. From a practical perspective, this means that any sizeable population of randomly-assembled chains of L-amino acids will likely have a large, diverse range of catalytic activities. As indicated by Nakashima (above), this will also hold for thermal proteins and their protocell products, entities that could readily (probably copiously) form in prebiotic conditions. And, as indicated in Nakashima’s review, one such property would include the ability to synthesize oligonucleotides.

    Do you mean that the precise DNA-controlled sequences that form enzymes are a matter of low-information accident, and more or less any random pattern of monomers and/or a sloppy replication system would have worked just as well or at least adequately?

    If my summary is anywhere nearly accurate to your view, kindly explain to me on this basis the origin, function and preservation of the cellular process of DNA-reading, code- and codon-based protein chaining and the enzymes it uses, then, on your thesis.

    -> Power has dropped now, so I cut here and post, or at least try.

    GEM of TKI

  67. Picking up:

    Power back on for now, we are plainly fringish.

    4] From ARN quote: any sizeable population of randomly-assembled chains of L-amino acids will likely have a large, diverse range of catalytic activities. As indicated by Nakashima (above), this will also hold for thermal proteins and their protocell products, entities that could readily (probably copiously) form in prebiotic conditions

    If you redefine and broaden the target zone enough [sounds familiar?], it becomes meaningless, of course – note how even amino acids already may have some catalytic properties.

    More to the point, Art needs to show, not just assert, how the relevant amino acids – much less proteins, much less cells based on the DNA-RNA-enzyme-ribosome etc. system – credibly and with sufficient probability originated in reasonable prebiotic environments. Otherwise this is yet another just-so story.

    5] Latest ARN cites . . .

    Seem, unfortunately, to be more of the same: reiterating assertions rather than actually substantiating beyond the level of just-so stories with huge gaps, the last one being openly a tentative suggestion.

    6] Sal, 63 (still worth a follow-up): The ARN link seemed to be saying that protocells had been produced by Fox and that appears to be urban legend.

    Actually, in TMLO, it seems that the terms “protocells” and the like were introduced among the researchers who were impressed with the sort of list of parallels to cells in table 10-1:

    Because of the many similar properties between microspheres and contemporary cells, microspheres were confidently called protocells, the link between the living and nonliving in evolution. Similar structures were given the names plasmogeny 44 (plasma of life) and Jeewarnu 45 (Sanskrit for “particles of life”) . . .

    But the problem, as already highlighted, is that the actual processes bear little relationship to life-processes at the cellular level.

    So, again, an overstated resemblance highlighted prematurely.

    GEM of TKI

  68. Recall, now, one of the failed predictions of ID – that the information content of functional macromolecules would be high, enough so as to support some sort of design inference. As a matter of fact, direct experimental measurements as well as the success of bioinformatics tools for identifying function in newly-sequenced genes tell us that the informational content of proteins is inherently low.

    Eh? I must have missed that one. Where did anyone in the ID movement make a specific prediction on the information content of proteins?

  69. Hi Patrick:

    I didn’t comment on that one because I thought the loud trumpeting of an elephant being hurled made the point eloquently.

    (Let’s hope this claim will not be backed up by a lit bluff!)

    The nanotech of the cell constitutes an information system well, well beyond 500–2,000 bits’ worth of informational complexity, and the latter corresponds to a configuration space of something like 10^600. So we can comfortably infer that it is beyond Dembski’s UPB relative to finding islands of relevant functionality.

    In short, we are in effect back to the point underlying Hoyle’s comment that the odds of getting to the 2,000 enzymes of life by chance are of order 1 in [10^20]^2,000 ~ 1 in 10^40,000.
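    As a quick sanity check on the exponents cited above, here is a minimal Python sketch. The figures are simply the ones quoted in this thread (2,000 bits; Hoyle’s 1 in 10^20 per enzyme across 2,000 enzymes), not an independent derivation:

```python
from math import log10

# 2,000 bits of informational complexity correspond to 2**2000
# distinct configurations; express that count as a power of ten.
bits = 2000
print(f"2^{bits} is about 10^{bits * log10(2):.0f}")  # about 10^602

# Hoyle's figure: 2,000 enzymes, each at odds of 1 in 10^20,
# multiplies out to (10^20)^2000 = 10^40000.
print(f"(10^20)^2000 = 10^{20 * 2000}")
```

    This only checks the arithmetic on the quoted numbers; whether the chance-assembly model behind them is apt is, of course, the very point under debate.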

    Art’s confident assertion on failed predictions, fails to pass the common-sense test.

    GEM of TKI

  70. Art

    1. Some time in the past, 4+ billion years ago, the earth was a lifeless place. A bit more recently, 3+ billion years ago, life existed on earth. These two statements are unremarkable and true. And they say something pretty simple – abiogenesis, life from non-life, happened.

    One can also say that 4 billion years ago the earth was a violent molten piece of rock and 3.5 billion years ago life appeared. Neither statement is absolutely true; both are provisionally true, the former with a lot more confidence than the latter. There are only suggestions in the strata that life was around that long ago.

    2. Before there was life on earth, there was chemistry. Every scientist who studies these things would agree with this.

    Before there were stars and galaxies there was chemistry and quantum mechanics and electricity and mechanical forces too, so while the statement is true, I’m not sure what point it makes.

    3. Every process in a cell that has been studied has been found to be a matter of chemistry.

    Not at all true. Electrical, chemical, and mechanical forces are all at play in cellular processes. Information storage, retrieval, modification, and translation are processes in a class by themselves, with electrical, chemical, and mechanical structures and processes serving as media and mechanisms.

    4. There is still a gray area, a time between the epoch

    Point 4 is invalid as it’s based on the false premise in point 3.

    By the way, there are no such things as “Fox protocells” or anyone else’s protocells. This appears to be urban legend concocted by armchair abiogenesis pundits. No one has fabricated a protocell including Fox.

  71. Art,

    Patrick, I’m happy to learn that the ID camp has abandoned the notion that proteins are information-rich moieties. Of course, this means that DNA is also low-information. I’ve been waiting for more than 10 years for IDists to catch up to this realization.

    I’m waiting for you to tell me exactly when and where someone considered to be part of the ID movement made the prediction that individual proteins would contain larger amounts of information than previously estimated. The only similar prediction I remember over the years was relevant to “junk DNA” and we all know how that’s turning out.

    Does anyone else know the prediction he is talking about?

  72. Art,
    What gives a protein its “complexity” is its specificity of requirement! This is a somewhat detailed response, but you will clearly see the point I am making in regards to protein specificity.

    (What evidence is found for the first life on earth?) We will look at the evidence for the first appearance of life on earth, as well as the chemical activity of the first life on earth. Once again, the presumption of naturalistic blind chance being the only reasonable cause must be dealt with. It is commonly presumed in many grade school textbooks that life slowly arose in a primordial ocean of pre-biotic soup. Yet there is absolutely no hard evidence, such as chemical signatures in the geologic record, indicating that an ocean of this pre-biotic soup ever existed.

    The hard physical evidence scientists have discovered in the geologic record is stunning in its support of the anthropic hypothesis. The oldest sedimentary rocks on earth known to science originated underwater (and thus in relatively cool environs) 3.86 billion years ago. Those sediments, which are exposed at Isua in southwestern Greenland, also contain the earliest chemical evidence (fingerprint) of “photosynthetic” life [Nov. 7, 1996, Nature]. This evidence has been fought by naturalists, since it is totally contrary to their evolutionary theory. Yet Danish scientists were able to bring forth another line of geological evidence to substantiate the primary line of geological evidence for photosynthetic life in the earth’s earliest known sedimentary rocks (“Indications of Oxygenic Photosynthesis,” Earth and Planetary Science Letters 6907 (2003)). Thus we have two lines of hard, conclusive evidence for photosynthetic life in the oldest known sedimentary rocks ever found by scientists on earth!

    The simplest photosynthetic bacterial life on earth is exceedingly complex, too complex to have arisen by chance even if the primeval oceans had been full of pre-biotic soup. Thus naturalists try to suggest panspermia (the theory that pre-biotic amino acids, or life itself, came to earth from outer space on comets) to account for this sudden appearance of life on earth. This theory has several problems. One problem is that astronomers, using spectral analysis, have not found any vast reservoirs of biological molecules anywhere they have looked in the universe. Another problem is, even if comets were nothing but pre-biotic amino acid snowballs, how are the amino acids going to molecularly survive the furnace-like temperatures generated when the comet crashes into the earth? If the pre-biotic molecules were already a life-form on the comet, how could this imagined life-form survive the extremely harsh environment of space for many millions of years, not to mention the fiery crash into the earth? Did this imagined super-cell wear a cape like Superman?

    The first actual fossilized cells scientists have been able to recover in the fossil record are 3.5 billion year old photosynthetic cyano(blue-green)bacteria, from western Australia, which look amazingly similar to a particular type of cyano-bacteria still alive today. The smallest cyano-bacterium known to science has hundreds of millions of individual atomic molecules (not counting water molecules), divided into nearly a thousand different species of atomic molecules, and a genome (DNA sequence) of 1.8 million bits, with over a million individual complex protein molecules which are divided into hundreds of different kinds of proteins. The simplest of all bacteria known to science which is able to live independent of a more complex host organism, Candidatus Pelagibacter ubique, has a DNA sequence of 1,308,759 bits. It also has over a million individual complex protein molecules, divided into several hundred separate and distinct protein types. The complexity found in the simplest bacterium known to science makes the complexity of any man-made machine look like child’s play. As stated by geneticist Michael Denton PhD, “Although the tiniest living things known to science, bacterial cells, are incredibly small (10^-12 grams), each is a veritable micro-miniaturized factory containing thousands of elegantly designed pieces of intricate molecular machinery, made up altogether of one hundred thousand million atoms, far more complicated than any machine built by man and absolutely without parallel in the non-living world”. So, as you can see, there simply is no simple life on earth as naturalism had presumed; even the well-known single-celled amoeba has the complexity of the city of London and reproduces that complexity in only 20 minutes. Here are a couple of quotes on the complexity found in any biological system, including simple bacteria, from two experts in biology:

    “Most biological reactions are chain reactions. To interact in a chain, these precisely built molecules must fit together most precisely, as the cog wheels of a Swiss watch do. But if this is so, then how can such a system develop at all? For if any one of the specific cog wheels in these chains is changed, then the whole system must simply become inoperative. Saying it can be improved by random mutation of one link, is like saying you could improve a Swiss watch by dropping it and thus bending one of its wheels or axis. To get a better watch, all the wheels must be changed simultaneously to make a good fit again.” Albert Szent-Györgyi von Nagyrapolt (Nobel prize for Medicine in 1937). “Drive in Living Matter to Perfect Itself,” Synthesis I, Vol. 1, No. 1, p. 18 (1977)

    “Each cell with genetic information, from bacteria to man, consists of artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction and a capacity not equaled in any of our most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours” Geneticist Michael Denton PhD.

    To give an idea of how impossible “simple” life is for naturalistic blind chance, Sir Fred Hoyle calculated the chance of obtaining the required set of enzymes for just one of any of the numerous types of “simple” bacterial life found on the early earth to be one in 10^40,000 (that is, a one followed by forty thousand zeros). He compared the random emergence of the simplest bacterium on earth to the likelihood that “a tornado sweeping through a junkyard might assemble a Boeing 747 therein”. Sir Fred Hoyle also compared the chance of obtaining just one single functioning protein (out of the over one million protein molecules needed for that simplest cell) by chance combinations of amino acids to a solar system packed full of blind men solving Rubik’s Cubes simultaneously.

    The simplest bacterium ever found on earth is constructed with over a million protein molecules. Protein molecules are made from one-dimensional sequences of the 20 different L-amino acids that can be used as building blocks for proteins. These one-dimensional sequences of amino acids fold into complex three-dimensional structures. Proteins vary in the length of their amino acid sequences; a typical protein is about 300 to 400 amino acids long, yet many crucial proteins are thousands of amino acids long. Proteins do their work on the atomic scale. Therefore, proteins must be able to identify, and precisely manipulate and interrelate with, the many differently and specifically shaped atoms, molecules and protein molecules at the same time to accomplish the construction, metabolism, structure and maintenance of the cell. Proteins are required to have the precisely correct shape to accomplish their specific function or functions in the cell. More than a slight variation in the precisely correct shape of a protein molecule type will be fatal to the life of the cell. It turns out there is some tolerance for error in the sequence of L-amino acids that make up some of the less crucial protein molecule types; these errors can occur without adversely affecting the precisely required shape of the protein molecule type. This would seem to give some wiggle room to the naturalists, but as the following quote indicates, this wiggle room is an illusion.

    “A common rebuttal is that not all amino acids in organic molecules must be strictly sequenced. One can destroy or randomly replace about 1 amino acid out of 100 without doing damage to the function or shape of the molecule. This is vital since life necessarily exists in a “sequence—disrupting” radiation environment. However, this is equivalent to writing a computer program that will tolerate the destruction of 1 statement of code out of 1001. In other words, this error-handling ability of organic molecules constitutes a far more unlikely occurrence than strictly sequenced molecules.” Dr. Hugh Ross PhD.

    It is easily demonstrated mathematically that the entire universe does not even begin to come close to being old enough, nor large enough, to randomly generate just one small but precisely sequenced 100-amino-acid protein (out of the over one million interdependent protein molecules of longer sequences that would be required to match the sequences of their particular protein types) in that very first living bacterium. If all combinations of the 20 L-amino acids that are used in constructing proteins are equally possible, then there are 20^100 ≈ 1.3 x 10^130 possible amino acid sequences for a protein composed of 100 amino acids. This impossibility of finding even one “required” specifically sequenced protein would still be true even if amino acids had a tendency to chemically bond with each other, which they don’t, despite over fifty years of experimentation trying to get amino acids to bond naturally (the odds of a single 100-amino-acid protein overcoming the impossibilities of chemical bonding and forming spontaneously have been calculated at less than 1 in 10^125 (Meyer, Evidence for Design, pg. 75)). The staggering impossibility of the universe ever generating a “required” specifically sequenced 100-amino-acid protein by chance would still be true even if we allowed that the entire universe, all 10^80 sub-atomic particles of it, were nothing but groups of 100 freely bonding amino acids, and we then tried a trillion unique combinations per second for all those 100-amino-acid groups for 100 billion years! Even after 100 billion years of trying a trillion unique combinations per second, we still would have tried only about one billion-trillionth of the total combinations possible for a 100-amino-acid protein! Even a child knows you cannot put any piece of a puzzle anywhere in a puzzle. You must have the required piece in the required place!
The simplest forms of life ever found on earth are far more complicated jigsaw puzzles than any puzzle man has ever made. Yet to believe a naturalistic theory we would have to believe that this tremendously complex puzzle of millions of precisely shaped, and placed, protein molecules “just happened” to overcome the impossible hurdles of chemical bonding and probability and put itself together into the sheer wonder of immense complexity that we find in the cell.
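The search arithmetic above can be checked with a short script. This is only an illustrative sketch of the numbers as quoted in the text (10^80 particles, a trillion trials per second, 100 billion years); the figures are the text’s assumptions, not measurements:

```python
import math

# Number of possible 100-residue chains built from 20 amino acids
seq_space = 20 ** 100                        # ~1.3 x 10^130

# The text's hypothetical search: every sub-atomic particle in the
# universe (10^80) trying a trillion combinations per second
searchers = 10 ** 80
rate = 10 ** 12                              # trials per second each
seconds = int(100e9 * 365.25 * 24 * 3600)    # 100 billion years

trials = searchers * rate * seconds
fraction = trials / seq_space

print(f"sequence space ~ 10^{math.log10(seq_space):.0f}")    # 10^130
print(f"fraction searched ~ 10^{math.log10(fraction):.0f}")  # ~10^-20
```

Even under these generous assumptions the search covers only a vanishing fraction of the space, which is the point the paragraph is making.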

    Instead of just looking at the probability of a single protein molecule occurring (a solar system full of blind men solving Rubik’s Cubes simultaneously), let’s also look at the complexity that goes into crafting the shape of just one protein molecule. Complexity will give us a better indication of whether a protein molecule is, indeed, the handiwork of an infinitely powerful Creator.
    In the year 2000 IBM announced the development of a new supercomputer, called Blue Gene, 500 times faster than any supercomputer built up until that time. It took 4–5 years to build, stands about six feet high, occupies a floor space of 40 feet by 40 feet, and cost $100 million. It was built specifically to better enable computer simulations of molecular biology, and it performs one quadrillion (one million billion) computations per second. Despite its speed, it is estimated it will take one entire year for it to analyze the mechanism by which JUST ONE “simple” protein folds onto itself from its one-dimensional starting point to its final three-dimensional shape. In real life, the protein folds into its final shape in a fraction of a second! The computer would have to operate at least 33 million times faster to accomplish what the protein does in a fraction of a second. That is the complexity found for JUST ONE “simple” protein. Based on the total number of known life forms on earth, it is estimated that there are some 50 billion different types of unique proteins today, and it is very possible the domain of the protein world may hold many trillions more completely distinct types. The simplest bacterium known to man has millions of protein molecules divided into, at bare minimum, several hundred distinct protein types. These millions of precisely shaped protein molecules are interwoven into the final structure of the bacterium. Often, specific proteins within a distinct protein type carry very specific modifications to a few of the amino acids in their sequence, in order to more precisely accomplish their specific function or functions in the overall parent structure of their protein type. To think naturalists can account for such complexity by saying it “happened by chance” should be the very definition of “absurd” we find in dictionaries.
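The “33 million times faster” figure works out from simple arithmetic, assuming (my assumption, since the text only says “a fraction of a second”) that the real fold takes roughly one second while the simulation takes a year:

```python
# Seconds in a year vs. ~1 second for the real fold
seconds_per_year = 365.25 * 24 * 3600        # ~3.16e7
fold_time = 1.0                              # assumed real folding time, seconds

speedup_needed = seconds_per_year / fold_time
print(f"required speedup ~ {speedup_needed:.2e}")   # ~3.16e+07, tens of millions
```

A sub-second fold time pushes the ratio past the quoted 33 million.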
Naturalists have absolutely no answers for how this complexity arose in the first living cell unless, of course, you can take their imagination as hard evidence. Yet the “real” evidence scientists have found overwhelmingly supports the anthropic hypothesis once again. It should be remembered that naturalism postulated a very simple “first cell”. Yet the simplest cell scientists have been able to find, or to even realistically theorize about, is vastly more complex than any machine man has ever made through concerted effort !! What makes matters much worse for naturalists is that naturalists try to assert that proteins of one function can easily mutate into other proteins of completely different functions by pure chance. Yet once again the empirical evidence we now have betrays the naturalists. Individual proteins have been experimentally proven to quickly lose their function in the cell with random point mutations. What are the odds of any functional protein in a cell mutating into any other functional folded protein, of very questionable value, by pure chance?

    “From actual experimental results it can easily be calculated that the odds of finding a folded protein (by random point mutations to an existing protein) are about 1 in 10 to the 65 power (Sauer, MIT). To put this fantastic number in perspective imagine that someone hid a grain of sand, marked with a tiny ‘X’, somewhere in the Sahara Desert. After wandering blindfolded for several years in the desert you reach down, pick up a grain of sand, take off your blindfold, and find it has a tiny ‘X’. Suspicious, you give the grain of sand to someone to hide again, again you wander blindfolded into the desert, bend down, and the grain you pick up again has an ‘X’. A third time you repeat this action and a third time you find the marked grain. The odds of finding that marked grain of sand in the Sahara Desert three times in a row are about the same as finding one new functional protein structure (from chance transmutation of an existing functional protein structure). Rather than accept the result as a lucky coincidence, most people would be certain that the game had been fixed.” Michael J. Behe, The Weekly Standard, June 7, 1999, Experimental Support for Regarding Functional Classes of Proteins to be Highly Isolated from Each Other
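Behe’s Sahara analogy can be roughly checked in a few lines. The grain count is my own back-of-envelope assumption (on the order of 10^22 grains), not a figure from the quote:

```python
grains = 10 ** 22            # assumed grains of sand in the Sahara (rough)
p_one = 1 / grains           # chance one blind pick finds the marked grain
p_three = p_one ** 3         # three independent picks in a row

print(f"~1 in {1 / p_three:.0e}")   # ~1 in 1e+66, comparable to the quoted 1 in 10^65
```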
    “Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially… These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its ‘new duties’. It is at this point it will be destroyed – along with the organism carrying it.” Maxim D. Frank-Kamenetski, Unraveling DNA, 1997, p. 72. (Professor at Brown U. Center for Advanced Biotechnology and Biomedical Engineering)

    From 3.8 to 0.6 billion years ago, photosynthetic bacteria, and to a lesser degree sulfate-reducing bacteria, dominated the geologic and fossil record (that’s over 80% of the entire time life has existed on earth). The geologic and fossil record also reveals that during this time a large portion of these very first bacterial life-forms lived in complex symbiotic (mutually beneficial) colonies called stromatolites. Stromatolites are rock-like structures that the photosynthetic bacteria built up over many years (much like coral reefs are slowly built up over many years by the tiny creatures called corals). Although stromatolites are not nearly as widespread as they once were, they are still around today in a few sparse places like Shark Bay, Australia. Contrary to what naturalistic thought would expect, these very first photosynthetic bacteria scientists find in the geologic and fossil record are shown to have been preparing the earth for more advanced life to appear from the very start of their existence, by reducing the greenhouse gases of earth’s early atmosphere and producing the necessary oxygen for higher life-forms to exist. Photosynthetic bacteria slowly built up the oxygen in the earth’s atmosphere by removing carbon dioxide (and other greenhouse gases) from the atmosphere; they separated the carbon from the oxygen, then released the oxygen back into the atmosphere (and into the earth’s ocean and crust) while retaining the carbon. Interestingly, the gradual removal of greenhouse gases corresponds exactly to the gradual 15% increase of light and heat coming from the sun during that time (Ross; PhD. Astrophysics; Creation as Science, 2006). This “lucky” correspondence of the slow increase of heat from the sun with the perfectly timed slow removal of greenhouse gases from the earth’s atmosphere was absolutely necessary for the bacteria to continue to live and do their work of preparing the earth for more advanced life to appear.
Bacteria obviously depended on the temperature of the earth remaining relatively stable during the billions of years they prepared the earth for higher life forms to appear. More interesting still, the byproducts of greenhouse gas removal by these early bacteria are limestone, marble, gypsum, phosphates, sand, and to a lesser extent, coal, oil and natural gas (note: though some coal, oil and natural gas are from this early era of bacterial life, most coal, oil and natural gas deposits originated on earth after the Cambrian explosion of higher life forms some 540 million years ago). These natural resources produced by these early photosynthetic bacteria are very useful to modern civilizations. Interestingly, while the photosynthetic bacteria were reducing greenhouse gases and producing natural resources that would benefit modern man, the sulfate-reducing bacteria were also producing their own natural resources that would be very useful to modern man. Sulfate-reducing bacteria helped prepare the earth for advanced life by “detoxifying” the primeval earth and oceans of “poisonous” levels of heavy metals, depositing them as relatively inert metal ore deposits (iron, zinc, magnesium, lead, etc.). To this day, sulfate-reducing bacteria maintain levels of these metals in the ecosystem high enough to be available to the biological systems of the higher life forms that need them, yet low enough not to be poisonous to those very same higher life forms. Needless to say, the metal ores deposited by these sulfate-reducing bacteria in the early history of the earth’s geologic record are indispensable to man’s rise above the stone age to modern civilization. Yet even more evidence has been found tying other early types of bacterial life to the anthropic hypothesis.
Many different types of bacteria in earth’s early history lived in complex symbiotic (mutually beneficial) relationships, in what are called cryptogamic colonies, on the earth’s primeval continents. These colonies “dramatically” transformed the “primeval land” into “nutrient filled soils” that were receptive to future advanced vegetation. Naturalism has no answer for why all these different bacterial types and colonies found in the geologic and fossil record would start working in precise concert with each other, preparing the earth for future life to appear.

Since oxygen readily reacts and bonds with almost all of the solid elements making up the earth itself, it took photosynthetic bacteria over 3 billion years before the earth’s crust and mantle were saturated with enough oxygen to allow an excess of oxygen to build up in the atmosphere. Once this was accomplished, higher life forms could finally be introduced on earth. Moreover, scientists find the rise in oxygen percentages in the geologic record to correspond exactly to the sudden appearance of large animals in the fossil record that depended on those particular percentages of oxygen. The geologic record shows a 10% oxygen level at the time of the Cambrian explosion of higher life-forms in the fossil record some 540 million years ago. The geologic record also shows a strange and very quick rise from the 17% oxygen level of 50 million years ago to a 23% oxygen level 40 million years ago (Falkowski 2005). This strange rise in oxygen levels corresponds exactly to the appearance of large mammals in the fossil record, who depend on high oxygen levels. Interestingly, for the last 10 million years the oxygen percentage has been holding steady around 21% – which happens to be the exact percentage that is of maximum biological utility for humans.
If the oxygen level were only a few percentage points lower, large mammals would be severely hampered in their ability to metabolize energy; if it were three to four percentage points higher, there would be uncontrollable outbreaks of fire across the land. Because photosynthetic bacterial life must establish and help maintain the proper oxygen levels for higher life forms on any earth-like planet, this gives us further reason to believe the earth is extremely unique in its ability to support intelligent life in this universe. All these preliminary studies of early life on earth fall right in line with the anthropic hypothesis, and no naturalistic theory based on blind chance can explain why the very first bacterial life found in the fossil record would suddenly, from the very start of its appearance on earth, start working in precise harmony to prepare the earth for future life to appear. Nor can naturalism explain why, once the bacteria had helped prepare the earth for higher life forms, they continue to work in precise harmony with each other to help maintain the properly balanced conditions that primarily benefit the complex life above them.

Though it is impossible to reconstruct the DNA of the earliest bacteria fossils scientists find in the fossil record and compare it to that of their descendants today, many ancient bacteria recovered from salt and amber crystals have been compared to their living descendants. Some bacteria preserved in salt crystals, dating back as far as 250 million years, have had their DNA recovered, sequenced and compared to their offspring of today (Vreeland RH, 2000, Nature). Scientists accomplished this using a technique called the polymerase chain reaction (PCR). To the disbelieving shock of many scientists, the ancient and modern bacteria were found to have almost exactly the same DNA sequence.
Thus the most solid scientific evidence available for the most ancient DNA scientists are able to find does not support evolution happening on the molecular level to the DNA of bacteria. According to the prevailing naturalistic evolutionary dogma, there “HAS” to be “significant mutational drift” in the DNA of bacteria within 250 million years, even though the morphology (shape) of the bacteria could have remained the same. In spite of their preconceived naturalistic bias, scientists find there is no detectable “drift” from ancient DNA, according to the best evidence we have so far. I find it interesting that the naturalistic theory of evolution “expects” and even “demands” a significant amount of drift in the DNA of ancient bacteria while the morphology is expected to remain exactly the same as that of their descendants. Alas for the naturalists once again, the hard evidence of ancient DNA has fallen in line with the anthropic hypothesis.
    Many times naturalists will offer “conclusive” proof for evolution by showing bacteria that have become resistant to a certain antibiotic such as penicillin. When penicillin was first discovered, all the gram-positive cocci were susceptible to it. Now 40% of the bacteria Strep pneumo are resistant. Yet the mutation to DNA that makes Strep pneumo resistant to penicillin results in the loss of a protein function for the bacteria (the protein is called, in the usual utilitarian manner, penicillin-binding protein). A mutation occurred in the DNA leading to a bacterial protein that no longer interacts with the antibiotic, and the bacteria survive. Although they survive well in this environment, it has come at a cost: the altered protein is less efficient in performing its normal function. In an environment without antibiotics, the non-mutant bacteria are more likely to survive, because the mutant bacteria cannot compete as well. So as you can see, the bacteria did adapt, but it came at a loss of function in a protein of the bacteria, a loss of genetic information in the DNA of the bacteria, and a lessening of the bacteria’s overall fitness for survival. Scientifically, it is better to say that the bacteria devolved in accordance with the principle of genetic entropy, instead of evolving against this primary principle of how “poly-constrained information” acts in organisms (Sanford; Genetic Entropy, 2005). As well, all other observed adaptations of bacteria to “new” environments have been proven to be the result of such degrading of preexisting molecular abilities. Sometimes a complex adaptation in bacteria is exhibited by naturalists (Hall, gene knockout experiments) that defies tremendous mathematical odds.
Yet far from confirming evolution as they wish it would, the demonstration of a complex adaptation of a preexisting protein actually indicates another, higher level of complexity in the genetic code of the bacteria, which somehow found (calculated) how to adapt a preexisting protein, with the very same ability as the protein that was knocked out, to the new situation (Behe, Evidence for Design, pg. 138). To make matters worse for the naturalists, the complex adaptation of the protein still obeys the principle of genetic entropy, since the adapted bacteria have less overall functionality than the original bacteria did. Thus even the naturalists’ supposed strongest proof for evolution in bacteria is found wanting, since it still has not violated the principle of genetic entropy. Even in the most famous cases of adaptations in humans, such as lactase persistence, the sickle cell/malaria adaptation (Behe, The Edge of Evolution, 2007), and immune system responses, genetic entropy is still obeyed when looked at on the level of overall functional genetic information. For naturalists to “conclusively prove” evolution they would have to clearly demonstrate a gain in genetic information. Naturalists have not done so, nor will they ever. The overall interrelated complexity of the integrated whole of a life-form simply will not allow the generation of meaningful information to happen in its DNA by chance alone.

    “But in all the reading I’ve done in the life-sciences literature, I’ve never found a mutation that added information… All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not increase it.” Lee Spetner (Ph.D. Physics – MIT)

    “There is no known law of nature, no known process and no known sequence of events which can cause information to originate by itself in matter.” Werner Gitt, “In the Beginning was Information”, 1997, p. 106. (Dr. Gitt was the Director at the German Federal Institute of Physics and Technology) His challenge to scientifically falsify this statement has remained unanswered since first published.

    Naturalists also claim stunning proof for evolution because bacteria can quickly adapt to detoxify new man-made materials, such as nylon and polystyrene. Yet once again, when carefully looked at on the molecular level, the bacteria still have not demonstrated a gain in genetic information; i.e., though they adapt, they still degrade preexisting molecular abilities in order to do so (genetic entropy). Indeed, it is not nearly as novel as they think, for the bacteria are still only complacently detoxifying the earth of toxins, as they have been doing for billions of years. Even though naturalists claim this is something brand new that should be considered stunning proof for evolution, I’m not nearly as impressed with their stunning proof as they think I should be (Answers in Genesis; Nylon Eating Bacteria; 2007)! This overriding truth of never being able to violate the entropy of poly-constrained information by natural means applies to the “non-living realm” of viruses, such as bird flu, as well (Ryan Lucas Kitner, Ph.D., 2006). I would also like to point out that scientists have never changed any one type of bacteria into any other type of bacteria, despite years of exhaustive experimentation. In fact, it is commonly known that the further scientists deviate any particular bacteria type from its original state, the more unfit for survival the manipulated population quickly becomes. As the esteemed French scientist Pierre-P. Grassé stated, “What is the use of their unceasing mutations, if they do not change? In sum, the mutations of bacteria and viruses are merely hereditary fluctuations around a median position; a swing to the right, a swing to the left, but no final evolutionary effect.” Needless to say, this limit to the variability of bacteria is extremely bad news for the naturalists.

    Psalm 104:24
    O Lord, how manifold are your works! In wisdom you have made them all. The earth is full of Your possessions -

  73. Hi Patrick, BA and Art:

    Maybe it would be helpful to refocus a bit on the article PaV was commenting on:

    It was GK Chesterton who famously quipped that “when people stop believing in God, they don’t believe in nothing – they believe in anything.” So it has proved. But how did it happen?

    The big mistake is to see religion and reason as polar opposites. They are not. In fact, reason is intrinsic to the Judeo-Christian tradition.
    The Bible provides a picture of a rational Creator and an orderly universe – which, accordingly, provided the template for the exercise of reason and the development of science.

    Dawkins pours particular scorn on the Biblical miracles which don’t correspond to scientific reality. [Snip here; Bible-believing Christians see God as creating an orderly world in which science can operate, but with the option to intervene beyond those usual patterns for good reasons of his own . . .]

    The heart of the Judeo-Christian tradition is the belief in the concept of truth, which gives rise to reason. But our postreligious age has proclaimed that there is no such thing as objective truth, only what is “true for me”.

    That is because our society won’t put up with anything which gets in the way of ‘what I want’. How we feel about things has become all-important. So reason has been knocked off its perch by emotion, and thinking has been replaced by feelings.

    This has meant our society can no longer distinguish between truth and lies by using evidence and logic. And this collapse of objective truth has, in turn, come to undermine science itself which is playing a role for which it is not fitted.

    Sobering thoughts, and well worth following up. They also explain why it is that we see so strong a clinging to straws floating in the alleged prebiotic soup, to shore up the worldview of evolutionary materialism, which is increasingly challenged to address four big bangs: origin of the fine tuned observed cosmos, origin of life within that same observed cosmos, origin of body plan level diversity as observed here on our planet, and origin of a credible, truth-apprehending mind [including conscience and morals].

    Now on a few specifics:

    1] Patrick, 76, to Art: I’m waiting for you to tell me exactly when and where someone considered to be part of the ID movement made the prediction that individual proteins would contain larger amounts of information than previously estimated.

    I have highlighted the key operative word.

    For, proteins do not exist in isolation, but in complex, integrated algorithmic, code-based information systems that function based on genetic and epigenetic structures in the cell. When we pull those strands together we see a very different picture:

    –> DNA is code-based and serves as information store, mediated through RNA, enzymes and ribosomes etc., through a step by step read process that assembles proteins, which are then folded, transported to the right location, and put to work. All of this is plainly information-intensive, including information not directly coded in the DNA itself but in the structures and functional architecture of existing cells. [DNA by itself is inert.]

    –> DNA strands go down to about 1 million base pairs for independent living organisms and to 500k or so for parasitic ones, with functional disintegration at 360k or so. That is, we are looking at minimal config spaces of order 4^360,000 ~ 3.95*10^216,741. Even if we say that only 10% of that is “truly” functional, we are still well above the reach of random walk searches in any generous prebiotic soup in oceans, ponds, hydrothermal vents, comets, etc., as 4^36,000 ~ 1.44*10^21,674.

    –> Even on proteins, the problem is that we have not just one but hundreds in the simplest plausible cell; let us conservatively say 100, with a reasonable length of 300 each. Folding regions are often relatively insensitive to shifts in amino acids, so let’s take 10% as the effective length, towards making a lower-bound estimate for their information content, for the sake of argument on Art’s premises.

    –> We would then need to account for 10% of 100 * 300 = 3,000 20-state elements. 20^3,000 ~ 1.23*10^3,903, and these would be coded for in three-base codons each, in the right relative places in the strand, or 9,000 DNA base pairs. 4^9,000 ~ 3.47*10^5,418. In each and every case the resulting configuration space is well beyond the reasonable upper bounds for random searches to be plausible on the scope of the observed cosmos, much less a planetary body within it. [And, I have not yet got into the selecting and sorting work needed to pick amino acids and nucleic acids of the right chirality out of the many other potential monomers in the environment. Cf. TBO's discussion. Not to mention the challenges of creating the relevant monomers under plausible prebiotic conditions, especially certain nucleic acids, as Shapiro so ably discussed in his recent Sci Am article.]
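    The config-space exponents quoted in the points above are easy to reproduce with base-10 logarithms (the numbers themselves are far too large to enumerate); a quick sketch:

```python
import math

def exponent_of(base, power):
    """Base-10 exponent (floor) of base**power."""
    return math.floor(power * math.log10(base))

print(exponent_of(4, 360_000))   # 216741  (4^360,000 ~ 3.95*10^216,741)
print(exponent_of(4, 36_000))    # 21674   (4^36,000  ~ 1.44*10^21,674)
print(exponent_of(20, 3_000))    # 3903    (20^3,000  ~ 1.23*10^3,903)
print(exponent_of(4, 9_000))     # 5418    (4^9,000   ~ 3.47*10^5,418)
```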

    –> In short, the points Art has been riding are a distraction at best. Proteinoids forming microspheres have little to do with creating the right information rich polymers in the right configs to form functioning cells, and the information content of proteins in aggregate is well beyond the reach of the sort of random searches that would obtain under chance + necessity only prebiotic scenarios.

    . . .

  74. 2] DS, 75: Electrical, chemical, and mechanical forces are all at play in cellular processes. Information storage, retrieval, modification, and translation is a process in a class by itself with electrical, chemical, and mechanical structures and processes serving as media and mechanisms.

    Again, an astute comment. It is not the chemistry and physics of a motherboard or the CPU and memory chips in it that defines what it is and is about, but the information system architecture, which is here physically expressed and implemented as algorithms are executed based on coding.

    In every case where we certainly know the causal process in action, all of these are rooted in agency, not chance + necessity only; and on the same statistical thermodynamics grounds, we do not expect tornadoes in junkyards to throw out functioning 747 jetliners [cf my always linked App 1, point 6 for why] – nb on the new generation that looks like giving the latest Airbus a run for its money.

    So, relative to what we do know, when we see even more sophisticated cases, that invites the very reasonable inference that the cases are produced by similar but more expert agents.

    3] there are no such things as “Fox protocells” or anyone else’s protocells. This appears to be urban legend concocted by armchair abiogenesis pundits.

    Apparently, this was a term used in the early discussions some 30 years ago, from TBO’s discussion. The term protocell has plainly turned out to be premature. Cf. excerpts above.

    4] Art 73: It’s telling that you equate “low information” with “accident”. Direct experimental determinations plainly show that enzymes are low-information.

    Not in aggregate, and not in the context in which they function, cf. above.

    When config spaces are relatively small compared to probabilistic resources, random walks can hit targets with reasonable plausibility. That is why low-/high- information-carrying capacity [i.e. implied in configuration spaces: one 2-state element or bit has 2 states, 2 have 4, but 100 have ~1.27*10^30] is a key issue. This I will examine below.

    5] Fox protocells:

    No, I am not thinking of linear or other links from so-called fox protocells to anything. I am pointing out that they are plainly and utterly irrelevant – since 25 – 30+ years ago — and should not have been raised at all.

    The misnomer, protocells, you use is telling in this regard.

    6] You focus on experiments not yet done. I focus on what we know – and we know that direct experimental measurement has established that enzymes are low-information moieties.

    First, no experiment has established, or could ever establish, that monochiral enzymes [whatever may obtain for folding regions etc], which are the core of the functioning of the cell, are low-information structures, given what we already know about them.

    For, just simply chaining 300 or so L-monomers from a field of 80 to several hundred possible candidates goes well beyond the Dembski-type bound. Let’s use the 40 from 80 available to TBO in the mid 80′s as a reasonable estimate. (I won’t bother with the achiral nature of glycine, as 39 vs. 40 is immaterial.)

    [½]^300 ~ 1 in 2.04*10^90, and we are definitely dealing with 100% L-type chains here. (As a yardstick for onlookers: there may be 10^80 atoms in our observed universe, so to get something that is of order ten billion times more rare in its config space is far harder than marking one atom in the universe and then picking it at random on the first shot; that is an example of a unique specification within a large config space in action.) Something that isolated in a config space of plausibly available racemic amino acids – just a random 300-length amino acid polymer with “all” L-monomers! – is itself a high information constraint. We would have to do this hundreds of times over for a cell.
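    The all-L odds quoted above are a straightforward coin-flip computation; a minimal Python sketch (assuming a 50/50 racemic mix and 300 independent chain positions, as in the text):

    ```python
    # Odds that every one of 300 positions is L-handed, given 50/50 L/R odds.
    n_positions = 300
    odds = 2 ** n_positions           # exact integer: one chance in 2^300
    print(f"1 in {float(odds):.3g}")  # 1 in 2.04e+90
    ```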

    Then to get to sequencing the amino acid chain for bio-function, impresses a lot more information directly and by implication of its role in the cell’s nanotechnology. Cf my discussion on clumping and configuring the microjet’s functional parts so they work, in App 1, point 6 my always linked.

    Nope: the idea that enzymes are “low-information” structures was never viable, save as a way of thinking made plausible within a questionable paradigm.

    7] this means that DNA is also low-information.

    This, being premised on demonstrably false and/or highly questionable and misleading premises, is also plainly false and highly misleading. (It also fails the common-sense test, once we simply estimate the carrying capacity of DNA chains that are observed.)

    Okay, I was busy the past few days so took time to catch back up.

    GEM of TKI

  75. Art sez:
    3. Every process in a cell that has been studied has been found to be a matter of chemistry.

    Not JUST chemistry. There is more at work than that. However, I do understand why you would want people to think that it is just chemistry. That is the only way to make living organisms appear simple.

    Too bad reality refutes that premise.

    There is also command and control of the chemical reactions.

    For example DNA not only unzips to replicate but also different segments unzip to form other molecules that are then directed to where they are needed.

    Also the age of the Earth can only be determined once we figure out HOW the Earth was formed.

    And DNA is an information-rich molecule- that is the DNA of living organisms. That will never change.

  76. If it’s just chemistry it should be pretty easy to make a cell from scratch. Right?

  77. kairosfocus,
    Hope and pray the hurricane does you no harm.
    I just want to thank you for laying out the foundational math for the protein specificity that gives a protein its inherent and obvious complexity.
    Is it ok if I quote what you wrote in the future if I need to defend this point against Darwinists “just so” stories?

  78. Okay:

    Thank God, it was “only” a side-swipe for Jamaica.

    DV, later this morning I’ll call my folks and see how they fared. Now, my family over in Cayman are under the gun, but it is an even more distant side-swipe that they most likely face. [BTW, the just linked has in it my set of rules of thumb on where hurricanes are likely to go in 1 – 3 days . . . of course, monitor weather and disaster officials and heed their counsel, noting that any probabilistic or likelihood estimate -- apart from 1 or 0 -- is by definition an index of ignorance to one extent or another, as modified by our leaning, on whatever grounds, towards occurrence or non-occurrence. This of course bridges into our discussion below too.]

    Now, on points of note:

    1] BA, 83: Is it ok if I quote what you wrote [on: “foundational math for the protein specificity that gives a protein its inherent and obvious complexity”] in the future . . .

    Of course you can quote what I wrote, but it is a simple matter to do the math yourself and IMHCO, far more effective.

    For instance, DNA uses 4-state elements in a chain that runs from about 500k – 1 mn up to 3 – 4 bn. For the first item, X, that is 4 states. For each of those, the second item, Y, can take 4 values as well, so X-Y has 4 * 4 = 16 possible states.

    Chaining, for N elements, we have 4^N possible configurations. Using base-10 logs, log [4^N] = N log 4; call the result ABCD.EFGH = ABCD + 0.EFGH. The easy way for big numbers is to split off the integer part ABCD and report it as a power of ten, 10^ABCD, then multiply by antilog 0.EFGH, say LMNO. So 4^N ~ LMNO*10^ABCD. This defines the scale of the configuration space, and illustrates the power of scientific notation for compactly expressing large and small numbers.
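    The log-and-antilog procedure just described can be written out directly; a sketch in Python (the function name is mine, not standard):

    ```python
    import math

    def config_space(states, length):
        """Approximate states**length as (mantissa, power_of_ten) via base-10 logs."""
        n_log = length * math.log10(states)   # N log b = ABCD.EFGH
        power = math.floor(n_log)             # ABCD: the power of ten
        mantissa = 10 ** (n_log - power)      # antilog of 0.EFGH
        return round(mantissa, 2), power

    print(config_space(4, 1000))   # (1.15, 602): a 1,000-element DNA chain
    print(config_space(2, 100))    # (1.27, 30): the 2^100 figure quoted earlier
    ```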

    Once we have a space equivalent to more than about 500 – 1,000 bits of information-carrying capacity [think of a 500-, 1,000- or 2,000-bit memory, say a one-bit-wide slice off a typical eight-bit or byte-wide RAM chip], we are dealing with 10^150 to 10^300 to 10^600 possible states. With 10^600 states, it would be hard to argue that any reasonable island or archipelago of functional configurations is accessible to a random-walk or exhaustive search that begins at any arbitrary initial point. Also, one DNA 4-state element encodes up to 2 bits of information, so chains of 250 to 500 to 1,000 DNA elements correspond more or less to 500 to 1,000 to 2,000 bits of information storage capacity. [WD discusses the effect of, among other things, fractional occurrence of given states of the elements in the chain in his latest paper here, i.e. a confining to a specified zone in the config space.]

    Now, cells use a nanotechnology that implements step-by-step algorithms to create and use the macromolecules of life, embedding the blueprint as a part of the system, i.e. DNA. So, in aggregate we are looking at vastly more possible configurations than even 10^600 or so. This propagates into proteins, esp. enzymes, esp. if we look at the large cluster of such molecules required for life. So even if 90% of the chains in these molecules are relatively unconfined as to specific residue, we are looking at a large number of 20-state elements in aggregate, and again that puts us rapidly beyond the reach of reasonable random-walk or exhaustive searches. Then, thirdly, we have to address the issue of chirality, which leads to a very similar result for the chains, which are L- or R- for proteins and nucleic acid polymers respectively. Just 2 to 3 or so typical 300-residue proteins gets us close to the upper limit for any reasonable random-walk search or search by exhaustion.

    You will note the attempted dismissal of the issues without addressing them on the merits. This I turn to in a moment:

    . . .

  79. 2] Art, 82: how does one measure information in this sense? Not assert (which is what the assemblage of numbers and calculations seen in this thread and in the ID literature are), mind you, but experimentally measure?

    This is of course a dismissal attempt, without actually addressing the issue on the merits.

    Relative to measuring “information,” onlookers will note that I have stressed the term information-carrying capacity, which is directly measurable in bits once we see empirically the length of a digital chain and the number of states the elements in that chain may take. This is not a bare “assert[ion]”; it is an empirically anchored measurement and/or calculation that is commonly used in work with information and communication systems. (For instance, one can measure a volume by taking certain linear measurements and calculating through well-founded formulae, not just by pouring in a liquid and observing in a measuring cylinder how much it took to fill up. Indeed, the cylinder was designed using the same sort of formulae.)

    If you want to make an example of direct empirical “measurement” of information carrying capacity and scale of config space, set up a 16-bit ripple-carry counter based on JK flip flops and step it through all accessible states, counting the number of clock-ticks and negative edges [for say a good old 7476 dual JK TTL chip] till it recycles to the initial reset state; you will immediately see the exact number calculated for a 16-bit system below. You can then reconfigure the counter as a 4-decade binary coded decimal counter and see how you have reduced the effective carrying capacity by specifying the zone of the config space that it can access. [I doubt that we would bother with such an exercise even at High School level these days!]
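    For onlookers without TTL chips to hand, the counter exercise can be simulated in software; a rough Python stand-in for the ripple counter (a simplification — real JK flip-flops have clocking and set/reset details this ignores):

    ```python
    def ticks_to_recycle(n_stages, states_per_stage):
        """Step a ripple counter until it returns to the all-zeros reset state."""
        stages = [0] * n_stages
        ticks = 0
        while True:
            i = 0
            while i < n_stages:           # propagate the carry up the stages
                stages[i] += 1
                if stages[i] < states_per_stage:
                    break
                stages[i] = 0
                i += 1
            ticks += 1
            if all(s == 0 for s in stages):
                return ticks

    print(ticks_to_recycle(16, 2))   # 65536 states for a 16-bit binary counter
    print(ticks_to_recycle(4, 10))   # 10000 states for a 4-decade BCD counter
    ```

    The BCD reconfiguration shows exactly the point made above: specifying a zone of the config space (each decade confined to 0–9) cuts the accessible states from 65,536 to 10,000.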

    Similarly, the related deduction of the number of states in the resulting configuration space is a common, real-world measure – it is the reason why an old-fashioned 16-bit address space in the old 8-bit microprocessors was of maximum length 65,536, and why moving to a 32-bit address space (and internal bus widths) on the 68000 opened this up to 4,294,967,296, but using only 20 address lines on the 8088 (16-bit internal data bus, 8-bit external!) left this address space at 1,048,576. From that hangs much of the story of the kludgish evolution of the PC over the 1980′s, to find workarounds – including address segmentation. [Then in the 1990's, when Motorola failed to come through with the 68050 in good time, Apple went PowerPC; I wish the common hardware reference platform had won the day then. Now that Intel has fully dominated the market, Apple has gone to the Pentium. A real pity.]

    Had Art looked at my always linked, Appendix 1, point 9, he would have seen also excerpts from Bradley’s recent discussion on the case of Cytochrome C, showing the measurement or calculation of information in it relative to the observed frequency of occurrence of specific monomers, leading to the average information per residue, via i = – ∑ pi log2 pi, yielding 4.139 bits per residue, or 455 bits of Shannon Information, and a config space of 1.85 x 10^137.

    He then further adjusts as per Yockey on observing: “. . . . Some amino acid residues (sites along chain) allow several different amino acids to be used interchangeably in cyctochrome-c without loss of function, reducing i from 4.19 to 2.82 and I (i x 110) from 475 to 310 (Yockey). This yields, “Wo / W1 = 1.85 x 10^137 / 2.1 x 10^93 = 8.8 x 10^44,” which is a non-log information metric. He concluded by citing two experimental studies that produced similar low probabilities for getting to a functional protein from a racemic prebiotic soup. This accumulates across the multitude of required proteins etc. into the DNA. And, we have not yet addressed the issue that all of this is in a functionally integrated system architecture irreducible to mere chemistry . . .
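    The per-residue figure comes from the standard Shannon formula; here is a hedged Python illustration on a made-up toy string (not actual cytochrome-c data — the real calculation uses observed residue frequencies across many sequences):

    ```python
    from math import log2
    from collections import Counter

    def bits_per_residue(sequence):
        """Shannon information i = -sum(p_i * log2 p_i) over observed frequencies."""
        counts = Counter(sequence)
        total = len(sequence)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # Hypothetical 8-residue chain, for illustration only:
    toy = "ACDEACDF"
    i = bits_per_residue(toy)
    print(round(i, 2))             # 2.25 bits per residue for this toy string
    print(round(i * len(toy), 2))  # 18.0 bits total carrying capacity
    ```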

    3] Art, 82: “electricity” and mechanical operations in living cells are all matters of chemistry. My assertion that all of what we do know about cells reduces to chemistry is a spot-on, completely accurate, if very abbreviated statement.

    First, as a physicist: chemistry is a function of physics [it is in effect a consequence of that property of particles we call charge], and electricity is physics, not chemistry; indeed, with quantum effects and related magnetism [which traces to relativity, BTW], physics grounds the chemistry in fact. So, should we next argue by your logic that chemistry is not “real,” it is physics only . . . and so on, with philosophy and psychology lurking behind physics, as the empirical sciences rely on those factors to function, and so on to infinity in an absurd regress . . .?

    More to the point, you are missing the key issue, just as materialist reductionism tends to miss the mind in the midst of its fascination with the meat and the conditioning. Namely, a cell is no more reducible to a cluster of chemical reactions than a PC is reducible to the electrical currents, emfs and resistances etc. in its components.

    It is the specific, functional organisation of carefully designed and assembled components that makes all the difference, and it is the production of that configuration that is characteristically a known artifact of mind. This, I pointed out above, and this the ancients knew, in how they distinguished material cause and cause tracing to active intent of agents, who USE the materials and forces of nature to achieve their ends. And, that is also what DS pointed out.

    4] Dismissive aside to BA:

    Again, sadly, Art resorts to dismissal by elephant hurling, rather than addressing the merits.

    GEM of TKI

    OOPS: WD’s June 07 paper is here.

  80. kf

    On measuring information, it seems there’s a twist in figuring out the information content of a gene. The problem is that the gene specifies a protein, but the protein is a reference to further information entailed by the physical properties of the protein. So the gene is analogous to references into an encyclopedia of physics. Wouldn’t that mean that one has to include the information in the references instead of just the raw bit capacity of the media containing the references?

    On another topic you were of course exactly right to tell Art that everything is physics not chemistry. Unfortunately it appears it will fall on deaf ears. Art’s convictions are both false and immutably held.

  81. Hi Dave,

    I was about to go.

    By focussing on the information carrying capacity of the digital strings involved, I am in effect saying: how many yes-no Q’s are required to specify what is to be done?

    1] If there are no y/n elements – if all is necessity – then we have no capacity to convey information; we are in effect forced to use all AAAAAAAAA’s and cannot encode information at all. But, once we have at least two alternatives at any one step or element, and the capacity to chain, we can store a lot of information, including where to go in our encyclopaedia of the forces, effects, properties and materials of nature to — purpose! — get what we want done. DNA uses 4-state elements and proteins use 20-state ones with very interesting chemical and physical properties that act like a super-Meccano set or super Swiss-army knife.

    2] Now of course in the latter case, some of these properties of the links in the chain may in part constrain which elements can be where relative to other elements, and may leave certain chain sections relatively free – e.g. one hydrophobic amino acid may substitute for another and permit the same type of folding to happen, i.e. we see here don’t-care elements and degeneracy, common enough in discrete-state control systems. These may partly constrain the utilisation of the ideally available config space, and so the in-praxis equivalent yes-no chain information content per symbol, but it does not materially alter the overall picture. Indeed, the idea that we are looking at and measuring “encyclopaedia”-indexing information – which BTW is by def’n information – tells us that life systems are MORE complex than even these measures indicate, i.e. we have a lower-bound estimate – which is quite okay for our purposes!

    3] So now we come to the issue of where did such code-bearing, algorithm-implementing chains come from?

    4] As my always linked discusses, in principle it is logically and physically possible that any and every digital string we have seen is the product of lucky noise. But when the resulting config spaces exhaust reasonably available probabilistic resources, then we see that functionally specified complex information is best explained as the result of agent action. Just as we do not revert to chance to explain the strings in this blog thread.

    5] Additionally, once we observe WD’s UPB as a reasonable limit, 1 in 10^150, and give room for the sort of freeness and constraints above, we see that a string equivalent to 1,000 to 2,000 bits or more [yes/no steps] is beyond the credible reach of chance on the gamut of this observed cosmos. That is easy to pass in this blog’s threads, and – sand kicked up to blind onlookers and cloud the issue notwithstanding — it has long since been passed in the nanotech of life.
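    The threshold test in 5] reduces to a one-line comparison; a sketch in Python (the 10^150 bound and the bit counts are taken from the text above):

    ```python
    from math import log10

    UPB_EXPONENT = 150  # WD's universal probability bound, 1 in 10^150

    def beyond_upb(bits):
        """True if a 2^bits config space exceeds the 10^150 bound."""
        return bits * log10(2) > UPB_EXPONENT

    print(beyond_upb(1000))  # True: ~10^301 states
    print(beyond_upb(400))   # False: ~10^120 states
    ```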

    6] Then finally observe that in every case of FSCI that we directly know the causal story for, we see that such originates in agency. Thus, it is very reasonable indeed to infer that in the cases where we happen not to have seen the process, such is the most likely cause to a degree of reliability that exceeds many things we routinely bet our life and limb on.

    GEM of TKI

  82. Art you state:
    Bornagain77, I don’t mean to be rude, but your lengthy diatribe has been refuted in considerable detail. Much of what I have pointed to details the problems with the your claims, and there is much more that I won’t cite that also does. You are free to ignore what I have mentioned, but you should know that your claims are long-ago laid to rest.

    Thank you for so solidly refuting my claims – NOT! Or, as KF pointed out, hurling an elephant!
    This has to be the most lame rebuttal I have ever seen in my life. You address no specific evidence I cite and, worse yet, you cite absolutely no references to back your claims of fraud.
    I truly respect kf’s solid refutation of your “non-complexity issue” and respect his integrity as a scientist/mathematician, but you sir have lost any respect you may have had from me because of your shoddy technique of discerning the truth.
    Remember, Art, science follows the evidence wherever it leads, no matter if it is distasteful to our biases. You, Sir, are guilty of favoring your biases over evidence!

  83. My assertion that all of what we do know about cells reduces to chemistry is a spot-on, completely accurate, if very abbreviated statement.

    If that is what we “know” about cells then I would have to say that we don’t “know” very much at all.

    To close (at least for now – we’ll see where things stand at the end of the month), I think discussants here need to face the fact that, somewhere in any discussion of information, evolution, the OOL, and whatever, one is going to have to bring actual chemistry and biology into the picture.

    And the same holds for Art2- IOW he is also going to have to bring chemistry and biology into the picture if he is going to assert that all of life’s diversity owes its collective common ancestry to some unknown population of single-celled organisms.

    Because as of this moment there isn’t anything in chemistry or biology which demonstrates such transformations are even possible.

    I’m seeing more than a reluctance, but even a disdain here for chemistry and biology.

    Nice projection.

    So tell us Art2 what is the biological data which can account for the physiological and anatomical differences observed between chimps and humans?

  84. BA and Joseph:

    It seems Art is incommunicado for a week or so, and that the above was in effect his last post, absent the thread keeping going for the week intervening. So, in effect we are looking at wrap-up, methinks – unless someone else steps up to the plate.

    BTW, BA, thanks for the kind words. [I do quietly note my last formal qualification in Math is the third major in my u/grad degree, supplementing my “home” double physics major. Beyond that, I am an applied physicist, who also broadened to take in an MBA with a focus on strategic change.]

    A few quick thoughts:

    1] Elephant Hurling, Literature Bluffing and Selective Hyper-Skepticism:

    These are of course colourful names apparently originating in the apologetics and ID movements for persuasive but misleading arguments often directed at them/us. Some have objected that adverting to the issues by those names is improper, I suppose by extending the “taint” of those movements. But if targets/victims of a certain tactic give it a convenient name, the substance stands/falls on the merits, not the name. So, to definition:

    ELEPHANT HURLING: giving a one-sided summary or declaration of “expert” or “credible” claimed “consensus” opinions on a matter in dispute as if that settles the matter without having to refer to the discussion and resolve the question on the merits of fact and logic. It persuades by the “credibility” of the authorities on the favoured side, thus is an example of improper – because biased — appeal to authority. This is of course to be distinguished from citing authorities on “your” side of a dispute in a context that is either engaging in the discussion or is balancing remarks made on the other side, so that onlookers may see for themselves well enough both sides of the issue.

    LIT BLUFFING: On being challenged on hurling, some artful debaters will proceed to do a literature-reference “dump” that may sometimes be hard to track down or address in a live debate; i.e. they are claiming, through merely piling up numbers of cites, that the “weight”/“consensus” of “credible” scholarship is on their side. However, on tracking down the references, it soon turns out that the cites are irrelevant to the matter at stake, i.e. the cites may use terms that happen to show up in a search engine’s results, or may brush at the issue tentatively, or may be speculative, or the like. In any case, the pile of cites is insufficient to settle the matter, and the matter should be addressed on the merits clearly and fairly to come to a well-warranted conclusion.

    SELECTIVE HYPER-SKEPTICISM: My own modest contribution is here: to give a descriptive title to a common skeptical debate tactic long since addressed by the likes of a Simon Greenleaf and frequently used in theological, historical, statistical, philosophical and scientific contexts, e.g. Sagan’s “extraordinary claims require extraordinary evidence.” No, they only require ADEQUATE and reasonable evidence! The fallacy works by asserting, in effect, that if I can doubt your claim [as opposed to my own claim] relative to arbitrarily high standards of proof, I can dismiss it. The core issue is, of course, that by consistent application of that standard the whole field of knowledge vanishes (including those knowledge claims that lurk under the assertions of such skepticism), poof; radical and universal skepticism is self-referentially absurd. But, if one SELECTIVELY applies skepticism to ideas one is inclined to disbelieve – skepticism that one in fact does not apply to similarly supported ideas one accepts, e.g. where both are based on similarly warranted claimed matters of fact or use similar scientific or statistical approaches – one can pretend to be “rigorous” while begging the question and being inconsistent in one’s handling of issues. This of course frequently backs up the other two fallacies just above. If we insist on adequate and consistent standards of warrant, especially by giving clearly parallel cases where the same approach and substantially the same conclusion are generally accepted, then that suffices to expose the selectiveness of the radical skepticism being used.

    2] Joseph, 89: the same holds for Art2- IOW he is also going to have to bring chemistry and biology into the picture if he is going to assert that all of life’s diversity owes its collective common ancestry to some unknown population of single-celled organisms.

    Actually, this misses the core challenge; on evolutionary materialist views, one properly has to account for the ORIGIN of life relative to the plausible physical, geological and chemical factors present in the observed cosmos and especially earth at the relevant time – this is to get TO the claimed population of last universal common ancestral unicellular organisms. THEN, on examining the information systems, storage media and scale, and molecular nanomachines in the cell, one also has to credibly and empirically account for body-plan level biodiversity. In short, there are challenges on abiogenesis and on macroevolution.

    Cf my always linked for a balancing discussion – IMHCO, neither of these challenges has been adequately acknowledged much less faced squarely and addressed properly on the merits by the many evolutionary materialism advocates – especially on the origin of biologically relevant information.

    3] Art2 what is the biological data which can account for the physiological and anatomical differences observed between chimps and humans?

    In Art’s absence, are there any takers? [Remember, this is not even really a serious step toward answering the challenges just identified – we share a fundamentally similar body plan with the chimps. But there are certain issues to be addressed: Haldane's dilemma, Genetic Entropy, Behe's observed “edge” of evolution, the infamous “98% similarity” in genes – and I hear (kindly address this) that the 2% difference is mostly in inconsequential stuff too, also that we have very similar genes to worms, fish and the like – was the banana our chimp has in its mouth in the picture in on that surprising degree of overlap, too? If NDT is the biological equivalent of atomic theory, surely it can give us a good account here.]

    GEM of TKI

  85. Kairosfocus-

    I know I haven’t said it recently but I have always maintained that if living organisms didn’t arise from non-living matter via stochastic/ blind watchmaker-type processes then there would be no reason to infer those processes have sole dominion over any subsequent evolution.

    Also I should note that dead organisms have the SAME chemicals as their living counterparts. Yet they are still dead.

    That alone refutes Art2′s premise about chemistry and living organisms.

  86. Hi Joseph:

    You are right that we should not beg questions by imposing methodological naturalism with the underlying philosophical materialism that lurks therein — denials notwithstanding. However, all that requires is that we be open to the three major causal possibilities [which can interact], i.e. chance, necessity, agency. In the end, that is what ID asks for, then it puts on board a reliable tool for identifying certain important cases of the last of these three mechanisms. Reliable? [In every case where the explanatory filter votes “design” and we have independent knowledge of the causal story, it is accurate.]

    On the second point, it is worse for the chemistry-reductionist thesis than that, for dead organisms may have living tissues and cells in them – the whole living organism is plainly more than, and different from, the simple sum of the physical parts.

    Then, when it comes to minds that we need to think credibly about such matters . . . it seems evolutionary materialist thinkers have inescapably undercut their own ability to think. We are looking at self-referential inconsistency here.

    GEM of TKI
