Transparent Lunacy

The resolution of the debate about the creative powers of natural selection is dead simple and utterly trivial to figure out.

1) Natural selection throws stuff out. Throwing stuff out has no creative power.

2) Existing biological information, mixed and matched, can be filtered by natural selection, as in sexual reproduction, but nothing inherently new is created.

3) Random errors can produce survivability quotients, but only in circumstances in which overall functional degradation supports survival in a pathological environment (e.g., bacterial antibiotic resistance), and only given massive probabilistic resources and a few trivial mutational events capable of producing the survival advantage.

4) Random errors are inherently entropic, and the more complex a functionally-integrated system becomes, the more destructive random errors become. Anyone with any experience in even the most elementary engineering enterprise knows this.

Yet, we are expected by Darwinists to believe that throwing a sufficient number of monkey wrenches into the complex machinery of living systems, over a long enough period of time, can turn a microbe into Mozart.

This is transparent lunacy.


20 Responses to Transparent Lunacy

  1. material.infantacy

    Random mutations acted upon by natural selection is about as honest a proposed external mechanism as one could hope to get, given that everything living systems do, in order to reproduce with variation, is a result of their internal, specified, and irreducibly complex configuration.

    “Random mutations” at least relies only on the robustness of the system to error, so that mutations don’t often outright destroy the organism; whereas “reproduction with variation” attempts to co-opt an entire mechanism, an engineering marvel, the very one in need of explaining.

    “Natural selection” seems to introduce an external environmental factor, so that the environment can place external pressures on populations; but it can do nothing on its own; it must await some other process to produce the variations to be “selected;” and we have no reason to believe that feedback from warm, cold, wet, dry, feast, or famine, is in some way linked to the incremental assembly of wings, eyes, hands, feet, motors, signal processors, data replicators, or information translators.

    So I have some sympathy for attempting to credit factors external to the machine in question, in order to explain the observed changes — as implausible as this is, at least it’s honest at the start. Less respect do I have for the outright question begging that results from labeling the purposeful, configured behavior of the system itself as “evolution,” as if we can take it entirely for granted as a natural explanation.

  2. A hammer hits the piano string, inducing random vibrations.

    The resonance of the piano string throws vibrations out.

    Nevertheless, piano players seem to be able to get music out of what remains.

  3. A hammer hits the piano string, inducing random vibrations.

    Dear Neil,

    The vibrations are not at all random; they are the antithesis. They produce an integer-related overtone series based on the physics of sound, which is why the octave is divided into 12 half-steps. This explains consonance and dissonance. Our perception is based on the physics of sound.
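    A quick sketch makes the integer relationship concrete. The 110 Hz fundamental (the A2 string) is an assumed illustration, and equal temperament’s 2^(1/12) spacing is why 7 half-steps approximate the 3:2 perfect fifth:

```python
# Sketch: the overtone series of a vibrating string is integer-related,
# and 12-tone equal temperament approximates those ratios with powers of
# 2^(1/12). The 110 Hz reference (A2) is an illustrative assumption.

def overtones(fundamental, n=6):
    """First n partials of a string: integer multiples of the fundamental."""
    return [fundamental * k for k in range(1, n + 1)]

def equal_temperament(fundamental, semitones):
    """Pitch after a number of half-steps in 12-tone equal temperament."""
    return fundamental * 2 ** (semitones / 12)

partials = overtones(110.0)          # 110, 220, 330, 440, 550, 660 Hz
fifth = equal_temperament(110.0, 7)  # 7 half-steps ~ the 3:2 perfect fifth
```

    The partials come out as exact integer multiples, and the equal-tempered fifth lands within a fraction of a hertz of the 3:2 ratio (165 Hz).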

    The universal power of music in human experience is so evident that it puts Darwinistic explanations to the ultimate test.

    Throw in a whole bunch of random mutations and a few million years, and a tree-dwelling primitive simian is transformed into a being capable of composing Beethoven’s Ninth Symphony?

    This is essentially the claim Darwinists have made. Is it any wonder to you that most people think that this hypothesis is absurd?

    Absurd is not sufficient. Transparent lunacy is more appropriate.

    Sorry if this offends, but after my liberation from a lifetime of indoctrination in junk Darwinian pseudo-science, I feel compelled to express my view honestly.

  4. ‘A hammer hits the piano string, inducing random vibrations.’

    Deep sigh . . . And that sort of ‘lack of wonder’ thinking is exactly why atheists constantly fail to grasp the miracle that infuses all of creation:

    It was not a fortuitous meeting of chordal atoms that made the world. If order and beauty are reflected in the constitution of the universe, then there is a God.
    - Beethoven

    The Deep Connection Between Sound & Reality – Evan Grant – Allosphere – video
    http://www.metacafe.com/watch/4672092

    Here is an interesting discussion on the link between math and the birth of musical scales

    Math and Musical Scales
    http://mathforum.org/library/d.....52470.html

    Amazingly, beautiful music comes from pi, tau, and e, dimensionless constants, which is not at all to be expected in a ‘random vibrations’ universe:

    What pi sounds like
    http://www.youtube.com/watch?v=YOQb_mtkEEE

    What Tau (The Golden Ratio; Fibonacci Number) Sounds Like
    http://www.youtube.com/watch?v=3174T-3-59Q

    (Pi jam) Pi & e – put to music
    http://www.youtube.com/watch?v=3OIfYOi19XY

    Further note:

    Nature by Numbers – Fibonacci
    http://vimeo.com/9953368

    Verse and Music:

    Matthew 26:30
    After singing a hymn, they went out to the Mount of Olives.

    High School Musical 2 – You are the music in me
    http://www.youtube.com/watch?v=IAXaQrh7m1o

  5. The vibrations are not at all random; they are the antithesis.

    That’s the effect of the filtering due to the resonance. There is nothing about the way the hammer strikes the string that would result in anything other than random. If you retune the string to a different pitch, then the string will vibrate at that different pitch, even though the way the hammer strikes it is unchanged.

    If the hammer could strike the string in a way that produced a single pitch, then after retuning the string you would get nothing (just vibrations that quickly die out). The filtering due to resonance is the important issue here.
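    That retuning point follows from the standard formula for a string’s fundamental, f = sqrt(T/mu)/(2L); the length and linear density below are illustrative numbers, not measurements of any real piano string:

```python
import math

# Sketch of why retuning changes the pitch: a string's fundamental is
# f = sqrt(T/mu) / (2L). The length (0.7 m) and linear density
# (0.006 kg/m) are illustrative assumptions.

def fundamental(tension, length=0.7, mu=0.006):
    return math.sqrt(tension / mu) / (2 * length)

before = fundamental(500.0)   # original tuning
after = fundamental(600.0)    # higher tension -> higher pitch, same hammer
```

    The hammer strike is unchanged in both cases; only the string’s own parameters determine what pitch survives.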

    With evolution, the randomness of the mutations is also filtered out. It’s like using a Monte Carlo method to explore the fitness landscape. What will emerge depends very little on the details of the randomness of the mutation, and is mostly a consequence of the fitness possibilities.
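    A toy version of that Monte Carlo picture can be sketched in a few lines; the bit-counting fitness function is purely an illustrative assumption, not a model of any real organism:

```python
import random

# Toy sketch of "random variation filtered by selection": random one-bit
# mutations are kept only when fitness does not decrease. The bit-counting
# fitness function is an illustrative assumption.

def fitness(genome):
    return sum(genome)  # count of 1-bits

def evolve(length=20, steps=500, seed=1):
    rng = random.Random(seed)
    genome = [0] * length
    for _ in range(steps):
        mutant = genome[:]
        i = rng.randrange(length)
        mutant[i] ^= 1                        # random mutation
        if fitness(mutant) >= fitness(genome):
            genome = mutant                   # selection keeps non-worse variants
    return genome

result = evolve()
```

    The outcome depends almost entirely on the fitness function (the filter), and hardly at all on which particular random flips occurred.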

    You are able to see this with the piano, because you are a pianist. But your preconceptions against it prevent you from seeing it for evolution.

    This is very similar to what happens with plants. They randomly spread their seeds around (either wind-carried, or spread by animals). It’s a way of exploring for places where the plant can grow. And you see the effect every spring when those weeds pop up in your flower garden.

  6. Neil,

    BREAKING NEWS- Pianos are DESIGNED. They operate by design.

    With your version of evolution, whatever is good enough makes it through that “filter”. And with cooperation even the weak get through.

    Also fitness, wrt biology, means reproductive success which is an after-the-fact assessment. And you could definitely have a less than physically-fit female that is more fertile than all the others.

  7. material.infantacy

    Neil writes,

    “There is nothing about the way the hammer strikes the string that would result in anything other than random. If you retune the string to a different pitch, then the string will vibrate at that different pitch, even though the way the hammer strikes it is unchanged.”

    The “anything other than random” effects you mention would be the pitch produced by the vibrating string, which is based on the specific and deliberate sizing and tuning of the piano’s many strings. That pitch, as a result of the string’s specific length and tensioning, is produced by the natural regularities described by the laws of physics and mathematics, which are non-random themselves. The non-random relation of a vibrating piano string to its pitch is subject to the non-arbitrary laws which govern it. Moreover, the specific pitch of each string arrayed within a piano, expressing the purpose of the piano’s builder and tuner, is an example of specified complexity, and the result of intelligent design. There may be random effects with regard to our ability to predict attributes of the waves at a given point in time; but to suggest that the hammer strike produces a purely random effect upon the string is unsupportable. The effect might be random with respect to predictability within a very narrow range of constraints, but it’s those very constraints which cry out for an explanation. They can’t simply be taken for granted without begging the question.

    Piano string images

    “With evolution, the randomness of the mutations is also filtered out. It’s like using a Monte Carlo method to explore the fitness landscape. What will emerge depends very little on the details of the randomness of the mutation, and is mostly a consequence of the fitness possibilities.”

    I agree in part. What emerges depends almost not-at-all upon a mutation’s randomness, but rather upon a sequence’s specificity, which implies its conformance to positive function. The fitness landscape of any given polypeptide sample space is likely to be sparse. There is empirical support for this in Douglas Axe’s estimate of 10^-77 for stable folds in a chain of 150 residues. The implication is that for these stable folds, we can imagine a monochrome graphic representation of a landscape, with one white dot amongst every 10^77 black ones. Of course this is one single step in evaluating fitness for an arbitrary sequence. From there it only gets worse.
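    The arithmetic behind that picture can be checked directly; the 20^150 raw sequence space and the cited 10^-77 fraction are the only inputs:

```python
# Rough arithmetic behind "one white dot among 10^77 black ones":
# the raw space of 150-residue sequences over 20 amino acids is 20^150,
# and the cited estimate (Axe) is that ~1 in 10^77 yields a stable fold.

space = 20 ** 150                 # all 150-residue sequences
folds_fraction = 10 ** -77        # cited estimate of the functional fraction
expected_folds = space * folds_fraction
```

    The space itself runs to nearly 200 digits, so even the surviving fraction is astronomically large in absolute terms; the sparseness claim is about the ratio, not the count.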

    Randomness just can’t be vindicated for any effect which is dependent upon a specific configuration of its internal parts and properties, in any significant quantity. I’m surprised this is still controversial. Better would be a natural law which can explain the apparent contingency of polypeptide sequences, but such a law is nowhere to be found. So while I can applaud that random mutations and natural selection attempt to provide a basis for biological change which does not beg the question, those are completely incapable of producing the credited effects (vast quantities of functional novelty) barring empirical demonstration.

  8. What will emerge depends very little on the details of the randomness of the mutation, and is mostly a consequence of the fitness possibilities.

    Dear Neil,

    You’re a nice guy, and are always congenial and respectful. You are obviously very intelligent and articulate. But this statement is simply ridiculous.

    You’ve completely ignored probabilistic resources.

    What will emerge depends very little on the details of the randomness of the mutation…

    How can you be serious in proposing such a preposterous idea? What will emerge depends very little on the details of throwing monkey wrenches in highly functionally integrated information-processing machinery?

    These kinds of irrational, empirically unsupported, and probabilistically absurd claims on the part of Darwinists only contribute to skepticism on the part of those of us who have figured out that Darwinism was never about science or truth, but the long-awaited creation myth of the religion of materialistic atheism.

  9. You’ve completely ignored probabilistic resources.

    I have studied probability theory in a mathematics department. I have studied graduate textbooks in probability theory. I have taught classes in probability theory. I have a published research paper (jointly with another author) in the field.

    I have never come across the term “probabilistic resources” until I started following ID sites.

  10. material.infantacy

    Neil, is there a more relevant, accurate, and descriptive term to replace “probabilistic resources”?

    For example, to break a 256 bit encryption key, a simulation running a brute force attack upon the key would need to make around 10^77 iterations to have a better than even chance of success, as far as I can tell. Are the “probabilistic resources” available for us to break it?

    What concise description would replace “probabilistic resources” in that context? I’m happy to make an adjustment in favor of a better descriptor; but that one seems perfectly up to the task, as far as I can tell.
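    The figure in the example checks out: a 256-bit keyspace has 2^256 possibilities, and a better-than-even chance of a brute-force hit needs about half of them:

```python
# Checking the comment's figure: a brute-force attack on a 256-bit key
# needs about 2^255 trials for a better-than-even chance of success,
# which is on the order of 10^77.

keyspace = 2 ** 256
half = keyspace // 2     # trials for > 50% chance ~ 2^255
```

    2^256 is roughly 1.16 × 10^77, so "around 10^77 iterations" is the right order of magnitude.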

  11. NR:

    I have never come across the term “probabilistic resources” until I started following ID sites.

    First, let me agree with Gil, that you have come across in a refreshingly positive vein. That is important as it helps us all move beyond the sort of polarised debate that characterises all too much of discussion on matters connected to the emerging design revolution in science.

    My terms are deliberate. I am echoing Kuhn, on paradigmatic examples and how they shift frames of thought. (I think here of how the relativistic view threw what had hitherto been a minor thing into sharp light: the proportionality/”equality” relationship between mass as source of inertia and mass as reflected in gravitational interaction.)

    A new paradigm gives us a new way of seeing, of conceiving and of solving.

    Now, back to the specific point.

    Recall the concept of expected value of an outcome, where we multiply a probability by a payoff? What happens if you take this exercise and do it a certain number of times? Why is it that you tend to move to the average expectation times the number of opportunities to observe? What happens to the less probable outcomes of an experiment — those in the far tails or another special and separately describable zone of possible outcomes — as the number of attempts rises?

    Now, shift to a common phrase: a hundred year storm. What are the odds that in a 20-year span, we will NOT see such a storm?

    Similarly, let us imagine that something has a 1 in 10 chance of occurring in each case (I think here of a typical estimate of condom failure rates.) What happens after ten successive attempts, assuming independence? (It can be shown that the chance that the low probability event did not occur in any of the ten tries is about 35%.)
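    Both the storm question and the 35% figure follow from the same complement rule, P(never in n independent trials) = (1 − p)^n; a quick sketch:

```python
# Complement rule: the probability an event of probability p never occurs
# in n independent trials is (1 - p)^n. This reproduces both figures above.

def p_never(p, n):
    return (1 - p) ** n

storm_free_20yr = p_never(1 / 100, 20)   # no "hundred-year storm" in 20 years
no_failure_10 = p_never(1 / 10, 10)      # the ~35% figure cited above
```

    A hundred-year storm is missed in a given 20-year span about 82% of the time, and the 1-in-10 event fails to occur in ten tries about 35% of the time, as stated.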

    Shift again, to drawing a large bell-distribution graph, say a Gaussian and cutting it out of bristol board and tacking it on a particle-board backing. Then let us get high enough that the odds of hitting any given point by dropping a dart on it would be more or less even. Drop darts, and count hits by evenly spaced stripes.

    The hits of course, once sample size is more than about 25 – 30, will reflect the pattern of the distribution, especially its bulk. Until a sufficient number of opportunities are on the table, it will be unlikely to see something in the very far tail regions.
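    The dart exercise can be simulated directly; the 3-sigma cutoff below is an illustrative choice for "the very far tail":

```python
import random

# Monte Carlo version of the dartboard exercise: sample from a standard
# Gaussian and count how often we land more than 3 standard deviations
# out. The 3-sigma cutoff is an illustrative assumption.

def tail_fraction(n, cutoff=3.0, seed=42):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if abs(rng.gauss(0, 1)) > cutoff)
    return hits / n

small = tail_fraction(25)        # small samples rarely show the far tails
large = tail_fraction(100_000)   # large samples approach the true ~0.0027
```

    With only 25 drops the far tail is usually invisible; with 100,000 the observed fraction settles near the theoretical two-sided 3-sigma value.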

    That is, within a certain limit of opportunities, the highly improbable is unlikely to be seen by chance. Which is of course the heart of the old Fisherian approach to hypothesis testing, which is still quite common in the field, never mind the grumbles of the academic statisticians.

    We are now quite close to the idea of probabilistic resources in the context of design theory.

    Now, in our solar system there may be about 10^57 atoms, of course mostly H and He and mostly in the Sun. In our observed cosmos, there may be about 10^80 atoms. [We don't know what dark matter is but it does not act like atoms.]

    These atoms act and interact.

    The fastest clock-tick for that is the Planck time, about 5 * 10^-44 s. It takes about 10^20 P-times for the fastest particle interactions per the strong force, and about 10^30 for the fastest (ionic) chemical interactions. Organic chemical interactions — highly relevant to life — as a rule are millions or billions of times slower.

    If we take the solar system, there have been about 10^102 Planck-time states of its atoms since a plausible date for its origin. For the observed cosmos, about 10^150 states.

    So, suppose we see a complex and specific configuration that is distinctly describable and/or functional in some way dependent on configuration (e.g., it bears information based on symbol sequences, like the letters in this post). If that configuration is sufficiently isolated in the field of possibilities that a random walk drawing on a solar system’s or an observed cosmos’s worth of resources is unlikely to hit on it, or on a zone close enough that the function can thereafter emerge through hill-climbing reward to incremental success, then it is not plausible that the function, or the configuration that gives rise to it, will emerge by blind chance and blind necessity on the gamut of our observed solar system or cosmos.

    This is of course another way of speaking of the million-monkeys type of thought exercise, where typing at random is to give rise to coherent sentences, from Shakespeare or whatever. Think about converting our solar system into monkeys, keyboards, piles of paper, tables and forests, paper factories etc., and of course banana plantations to keep them going. Then bang away for 12 billion years. Would we be likely to see something like the first 73 ASCII characters of this post, or any other coherent sentence in English?

    Nope.

    In fact, here is a report — courtesy Wiki speaking inadvertently against known ideological interest — on the exercises that have been done using random number generators (impossibly fast and numerous monkeys):

    The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation.

    One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[25]

    A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

    RUMOUR. Open your ears; 9r”5j5&?OWTY Z0d…

    That is, search spaces of order 10^50 can be successfully scanned by chance to hit on function, but those of order 10^150 or so, no dice.
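    An order-of-magnitude check on those reports: matching k specific characters from an alphabet of size A takes about A^k random tries. The 27-symbol alphabet (letters plus space) is an assumed simplification:

```python
import math

# Order-of-magnitude check on the monkey-typing reports: matching k
# specific characters from an alphabet of size A takes about A^k random
# tries. The 27-symbol alphabet (letters plus space) is an assumption.

def expected_tries(k, alphabet=27):
    return alphabet ** k

nineteen = expected_tries(19)       # the reported 19-character match
hundred = expected_tries(100)       # even a short passage is hopeless
log10_nineteen = math.log10(nineteen)
```

    A 19-character match costs on the order of 10^27 tries, consistent with the reported billions of billions of monkey-years, while 100 characters already blows past the 10^140 range.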

    Dembski provided a metric, which the undersigned, with inputs from Torley, Mung and Giem, reduced on a solar system scale to a threshold metric:

    Chi_500 = I*S – 500, bits beyond the solar system threshold.
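    A minimal sketch of that metric as quoted; reading I as information in bits and S as a 0/1 specificity flag is an interpretation drawn from the surrounding discussion, not a definitive statement of the formula’s semantics:

```python
# Sketch of the quoted threshold metric: Chi_500 = I*S - 500.
# Interpretation (an assumption from the surrounding discussion): I is
# information in bits, S is 1 for a specified configuration and 0
# otherwise; a positive Chi_500 is "beyond the solar system threshold".

def chi_500(i_bits, specified):
    s = 1 if specified else 0
    return i_bits * s - 500

random_noise = chi_500(1000, specified=False)   # unspecified: never positive
short_spec = chi_500(300, specified=True)       # specified but short
long_spec = chi_500(1000, specified=True)       # specified and long
```

    On this reading, only configurations that are both specified and longer than 500 bits cross the threshold.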

    But of course, if we are Wiki, we have to have something to keep up the side, so we see shortly thereafter an inadvertent testimony to the power of intelligent design:

    More sophisticated methods are used in practice for natural language generation. If instead of simply generating random characters one restricts the generator to a meaningful vocabulary and conservatively following grammar rules, like using a context-free grammar, then a random document generated this way can even fool some humans (at least on a cursory reading) as shown in the experiments with SCIgen, snarXiv, and the Postmodernism Generator.

    That is, with intelligent design, we can move our search to an island of function, instead of swanning about fruitlessly in a sea of mostly non function. Then, we can climb hills much more easily.

    But the problem is to get to the deeply isolated islands of function within the search resources available. Think in terms of a randomly drifting leaky life raft. Will it sink or will it get to an isolated island at random? That depends on how much air is in the raft, and how common and large the islands are.

    Which is the problem, for we know that functionally specific and complex structures almost by definition are going to be very atypical of random clusters of components.

    Probabilistic resources is about that sort of constraint, and its implications for blind chance and mechanical necessity in a space of possible configs, given the available resources to build trials or attempts. Knowing, as background, that the two main sources of high contingency are chance (“fyidyrd . . .”) and design (“a designed sequence”), while mechanical necessity yields mere repetition (“atatatat . . .”). Knowing also, that the living cell uses digitally coded information to guide the step by step, algorithmic, assembly of the proteins that are the workhorse molecules of life.

    The bottom line is simple: we have as possible explanations for cell-based life either a staggering statistical miracle, or intelligence. And, if you sit down to a poker game and get a very good hand, sufficient to bet the farm, and see everyone else doing the same, and then the dealer’s good friend tops you all in the same hand, you are very unlikely to accept the statistical-miracle explanation. The reason the same reasoning is suppressed in discussions on the root of life is that there is an ideologically enforced institutional bias in science and science education against anything that just might let a Divine Foot in the door.

    That is, design is refused not because it is not a possible explanation, but because there is a bias against a possible designer. And, that can be documented in copious detail. But we don’t need to do that, just look at the emotional intensity of objectors to the design inference and ask yourself, why.

    For more details, cf Dembski here on.

    GEM of TKI

  12. material.infantacy:

    For example, to break a 256 bit encryption key, a simulation running a brute force attack upon the key would need to make around 10^77 iterations to have a better than even chance of success, as far as I can tell. Are the “probabilistic resources” available for us to break it?

    I would just call that “brute force” or “trial and error”.

    It is a serious misunderstanding of evolution, to think that it requires that sort of brute force.

  13. Recall the concept of expected value of an outcome, where we multiply a probability by a payoff? What happens if you take this exercise and do it a certain number of times? Why is it that you tend to move to the average expectation times the number of opportunities to observe? What happens to the less probable outcomes of an experiment — those in the far tails or another special and separately describable zone of possible outcomes — as the number of attempts rises?

    But that’s a misdirection. Evolution isn’t about finding averages (expected values). It is about taking random walks, some of which might turn out to be successful.

    We are now quite close to the idea of probabilistic resources in the context of design theory.

    It is vacation time. Instead of asking a travel agent to plan a vacation for you, you decide to just drive somewhere interesting and see what happens. So you explore a road or a mountain path. Near evening you look for a motel sign, and stay there for the night. You spend a week of unplanned sightseeing.

    You didn’t need to do a brute force search of all possible vacation plans. You just took a random walk and had a great time. Your design theory thinking doesn’t apply, because you deliberately eschewed design when you decided to skip the travel agent.

    What this random walk vacation does require, is a way of evaluating possibilities to see which ones are interesting. The randomness of mutations finds possibilities, and the feedback systems of biology provide a natural purposefulness to “judge” which of the possibilities are interesting.

    I’m not a big fan of neo-Darwinism, and would prefer something more like James Shapiro’s ideas. It’s not that there is anything seriously wrong with neo-Darwinism as a research framework. But too many people (non-biologists) misunderstand it, and don’t see the possibilities of that “random walk vacation” idea that are implicitly there.

  14. Gil,

    Elizabeth Liddle seems to think that since sculptors throw away the stuff they carve out of a piece of marble (for example), that refutes your claim that throwing stuff out has no creative power.

    IOW designers designing = natural selection….

  15. Neil,

    “Success” is relative. Populations can take a real walk and end up in an environment in which once detrimental traits are now advantageous.

  16. Hi NR:

    It is a serious misunderstanding of evolution, to think that it requires that sort of brute force [256 bit keys etc]

    Sorry, but the simplest plausible cells come in at about 100 – 300 kbases. Allowing just 1 bit per base, that is 100,000 – 300,000 bits of functional info to get to first life in that warm little pond or equivalent.

    But, OOL is not evo.

    Indeed.

    But to get to the metabolising, von Neumann self-replicating, algorithm implementing C-chemistry, aqueous medium molecular nanotech systems like that, we are looking at design as the most plausible candidate. So, design is in the door. (Actually, to get to the physics of a cosmos amenable to that sort of life, starting with getting to water and organic chemistry, we are already looking at very good reasons to infer to design.)

    And if design is already in the door it does not make sense looking elsewhere to explain what seems designed. In for a penny . . .

    But, even in the world of already living systems, novel body plans look to require 10 million or more new bases to explain them, typically 100 million or so. The requisite space to be searched by chance and mechanical necessity is vastly beyond that for OOL.

    Incremental evolution with intermediate steps functioning can readily explain modification of life forms within a body plan and adaptation to niches. But, it runs into stiff challenges to explain novel body plans.

    And if you wish to argue for a smoothly graded world of life where incremental functional changes join body plans, that needs to be shown, and shown in the teeth of our knowledge that functionally specific organisation normally comes in discrete islands.

    Evidence for that is of course still missing in action 150 years after Darwin et al hoped to find the fill-in links. Starting with the Cambrian revolution, with its top-down variation.

    Your empirically observed fossil or field or lab evidence for the smooth, incremental step by step links between major body plans is (a) _________, and it is on display in the (b) ____________ museum, having won Nobel [or similar] prizes for (c) ___________, et al., in the year (d) ________ ?

    Let’s hear your fills for the blanks.

    KF

  17. material.infantacy

    Neil,

    material.infantacy:

    For example, to break a 256 bit encryption key, a simulation running a brute force attack upon the key would need to make around 10^77 iterations to have a better than even chance of success, as far as I can tell. Are the “probabilistic resources” available for us to break it?

    I would just call that “brute force” or “trial and error”.

    Neither of those terms is a valid replacement for “probabilistic resources.” I’m asking if there’s a better descriptor for the concept, which seems perfectly reasonable. If not, I can’t see the basis for your objection to the term.

  18. F/N: NR, a disappointing side-step on the earlier matter. You asked for justification of the concept of probabilistic resources, and I pointed you to a path from familiar concepts, ending up with a chapter from a book published by Cambridge University, authored by a certain Wm A D, that addresses replicational and specificational resources and shows by several examples just how relevant the concept is as a probability issue. (Think about the prisoners set the task of tossing 23 vs 100 heads in a row as their job, before they can get out of gaol.) Please address that.

    The relevance to the origin of life and to the origin of major body plans is second to that, but should be quite evident from the challenge of getting to islands of function in config spaces, given just what the DNA requisites are in terms of coded info. 100 kbit and 10 – 100 Mbit search spaces are not trivial matters, given the atomic resources of our solar system or observed cosmos. And the notion of smooth incremental variation to form body plans is something that will require serious evidence in the face of what we know about complex, functionally specific organisation of components to make a working whole. KF

  19. Devil’s Advocate-

    1- Throwing stuff out creates more space in the area the stuff once occupied

    2- Throwing stuff out creates something to do

    3- Throwing stuff out creates a feeling of doing something

  20. …sculptors throw away the stuff they carve out of a piece of marble…

    Sculptors do so with intent, purpose, foresight, and design. Thanks to Liz for presenting yet another example in support of my case! I’ll add that example to my repertoire in defense of the logic of ID theory.
