
RNA Designed to Evolve?

I’m currently working through Robustness and Evolvability in Living Systems, and came across the following information which seems to be right in line with Denton’s evolution by natural law ideas:

A final, especially counterintuitive feature of RNA sequence space is that all frequent structures are near each other in sequence space. Consider a randomly chosen sequence that folds into a frequent structure and ask how far one has to step away from the original sequence to find a sequence that folds into this second structure…For instance, for RNAs of length n = 100 nucleotides, a sphere of r = 15 mutational steps contains with probability one a sequence for any common structure. This implies that one has to search a vanishingly small fraction of sequence space…to find all common structures.
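A quick back-of-the-envelope check on why that fraction is "vanishingly small" (my own sketch, not from the book — it assumes the "sphere" is a Hamming ball in which each of the r mutational steps substitutes one of the three alternative bases at a position):

```python
from math import comb

# RNA sequence space for n = 100 nucleotides over the alphabet {G, C, A, U}
n, r = 100, 15
space = 4 ** n  # about 1.6e60 possible sequences

# Sequences within r mutational (Hamming) steps of a given sequence:
# choose which k positions differ, times 3 alternative bases at each position.
ball = sum(comb(n, k) * 3 ** k for k in range(r + 1))

fraction = ball / space
print(f"sequence space    : {space:.2e}")
print(f"radius-{r} sphere : {ball:.2e}")
print(f"fraction searched : {fraction:.2e}")
```

The radius-15 ball holds on the order of 10^24 sequences, yet that is only about 10^-36 of the 4^100 possible 100-mers — which is the sense in which one only has to search a vanishingly small fraction of sequence space.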


43 Responses to RNA Designed to Evolve?

  1. “Designed to evolve”- That is a phrase that every IDist should use regularly.

    That one simple phrase makes it clear that ID is NOT anti-evolution, along with demonstrating the debate is all about mechanisms- culled genetic accidents vs. design.

    From the “Contemporary Discourse in the Field Of Biology” series I am reading Biological Evolution: An Anthology of Current Thought, edited by Katy Human (perhaps related to Mike Gene ;) ).

    The old, discredited equation of evolution with progress has been largely superseded by the almost whimsical notion that evolution requires mistakes to bring about specieswide adaptation. Natural selection requires variation, and variation requires mutations- those accidental deletions or additions of material deep within the DNA of our cells. In an increasingly slick, fast-paced, automated, impersonal world, one in which we are constantly being reminded of the narrow margin for error, it is refreshing to be reminded that mistakes are a powerful and necessary creative force. A few important but subtle “mistakes,” in evolutionary terms, may save the human race. -page 10 ending the intro by KH (bold added)

    And by design it is meant that there is, at a minimum, a goal/ target, ie purpose to the evolutionary process.

  2. I got introduced to Wagner’s work via one of my nicer critics at Newton’s Binomium weblog.

    It is a good survey of the cutting edge theoretical investigations going on out there.

    My major reservation is when Wagner resorts to the usual circular arguments to justify his view that mindless evolution created such and such a system.

    You’ve managed to find some gems in his book which I overlooked.

    Sal

  3. “This implies that one has to search a vanishingly small fraction of sequence space…to find all common structures.”

    Who is the “one” who searches the space?

    Does this not rather imply that in naturalistic models of origins a vanishingly small fraction of sequence space must be found by chance in the first place to make any one of the known useful structures? In real living systems the structures are only of any use at all when they are organised in a coordinated group.

    Does this not imply that RNA is designed around a tightly constrained sequence space because of constructional and functional constraints?

  4. idnet.com.au –

    Exactly.

  5. Designed to evolve? This just seems silly. You reduce ID to a tautology. First you argue, it couldn’t have evolved by random chance. Then when random chance is not a problem, you argue it was designed to evolve.

    As for the phrase “RNAs of length n = 100 nucleotides, a sphere of r = 15 mutational steps contains with probability one a sequence for any common structure.” Can you elaborate on what that means?

  6. Jehu:
    Designed to evolve? This just seems silly.

    Is it more silly than “evolved by culled genetic accidents”?

    We have to weigh the data against the options.

    Jehu:
    You reduce ID to a tautology.

    I call it a starting point from which to launch our investigation. Design is an impetus.

    Jehu:
    First you argue, it couldn’t have evolved by random chance.

    First we argue there isn’t any evidence to support random chance. Then we say we know intelligent agencies can produce things like that. It’s the ole “data v options” thingy.

    Jehu:
    Then when random chance is not a problem, you argue it was designed to evolve.

    This is what is said:

    Intelligent design is a good explanation for a number of biochemical systems, but I should insert a word of caution. Intelligent design theory has to be seen in context: it does not try to explain everything. We live in a complex world where lots of different things can happen. When deciding how various rocks came to be shaped the way they are a geologist might consider a whole range of factors: rain, wind, the movement of glaciers, the activity of moss and lichens, volcanic action, nuclear explosions, asteroid impact, or the hand of a sculptor. The shape of one rock might have been determined primarily by one mechanism, the shape of another rock by another mechanism.

    Similarly, evolutionary biologists have recognized that a number of factors might have affected the development of life: common descent, natural selection, migration, population size, founder effects (effects that may be due to the limited number of organisms that begin a new species), genetic drift (spread of “neutral,” nonselective mutations), gene flow (the incorporation of genes into a population from a separate population), linkage (occurrence of two genes on the same chromosome), and much more. The fact that some biochemical systems were designed by an intelligent agent does not mean that any of the other factors are not operative, common, or important.

  7. “First you argue, it couldn’t have evolved by random chance. Then when random chance is not a problem, you argue it was designed to evolve.”

    Well, no, because ‘random chance’ doesn’t cease to be an issue here. The argument is that what looks like random chance actually is not. Someone here has used the casino argument before: If you have 1000 slot machines, ‘random chance’ is determining pretty much every pull of the lever. But the ultimate result – a profitable casino – was intentional.

    In other words, “random v design” is a false argument, because design doesn’t argue the lack of randomness. I may be wrong, but even in the case of IC structures I don’t believe that the ID camp argues that the results are impossible without a miracle. They just argue that the presence of such structures indicates that certain results of evolution weren’t happy accidents, but may have been – through whatever process – intentional.

    (For the record, even full-on darwinists would have to argue that not all known ‘evolution’ is random chance. They just think intelligent design occurred vastly later than when ID proponents think it showed up. Random chance as commonly defined didn’t result in scottish terriers.)

  8. “Designed to evolve” = the front-loading hypothesis. There’s a lot to be said for the front-loading hypothesis, but I personally am more convinced of frequent acts of agency. Though I think that life is designed to withstand, even periodically benefit from, random accidents, I don’t believe that random accidents + the great cull engine in any way account for life’s diversity.

    “evolved by culled genetic accidents” Now that defines the RM+NS hypothesis beautifully!

  9. This is a fascinating concept, one that we shouldn’t pass up when putting together a comprehensive ID Theory. It’s one of the reasons why I criticize not considering the intentions of the designer – because through hypothesizing about those intentions, and their ability to fulfill those intentions, we can actually test our ideas.

    In order to consider RNA and other aspects of organisms as “designed to evolve,” then we must postulate that the designer(s) intended for their designs to be capable of some evolution.

    Another reason why the negative argumentation strategy can run us into a dead end. Let’s say that the designers made living organisms that could have evolved IC structures, but some of them didn’t. Finding out whether or not it is possible to evolve them in that case would not de facto establish that it wasn’t designed. Only by including the intentions and properties of the designers can we understand why certain organisms or structures within previously existing organisms were designed.
    My 2 cents.

  10. Joseph

    “Designed to evolve” is a real possibility for a design goal and we see it attempted with computer simulations. However, nothing you have said gets away from the fact that it reduces ID to a tautology.

    IDist: “Evolution by random chance is impossible, it is outside the universal probability bound.”

    Darwinist: “No it isn’t, see the probability of finding these RNA structures is quite good.”

    IDist: “If the outcomes are probable then it must have been designed to evolve.”

    It’s heads you win, tails they lose.

    BTW, will somebody please explain what is meant by, “RNAs of length n = 100 nucleotides, a sphere of r = 15 mutational steps contains with probability one a sequence for any common structure.”

  11. Jehu,
    Yes! In a sense it does end up being heads design wins, tails materialists lose. But so what! :) I’ve identified in the past that this problem for materialists does exist. Materialism is on the horns of a dilemma.

    My argument was that evolutionists (ie. the materialist ones) are in a dilemma because design is evident by the fact that it is impossible, and yet even if it were deterministically certain to occur, it would still exemplify the occurrence of a CSI impossibility, ie. that such chance natural laws would exist.

    To eliminate design, I think, you’d have to find a very indifferent probability between impossible and deterministically certain.

    If deterministic, Dembski’s design filter might not apply at some level of life, BUT it would at the least apply to the natural laws themselves. And it would apply to origin of life b/c it would require genetic front loading. EVEN still !! If a complex & specified cell could be shown to form spontaneously.. this would ALSO require intelligence…
    Don’t you agree ?

    From my personal opinion, as a Christian, I think this agrees with scripture where it reads that the evidence for the creator is CLEARLY seen. In other words.. either way you look at it life & the universe require the intelligence of a creator.

    JG

  12. M-W on Random:

    RANDOM, HAPHAZARD, CASUAL mean determined by accident rather than design.

    IDist: “Evolution by random chance is impossible, it is outside the universal probability bound.”

    I believe that is incorrect. A proper way of putting it would be:

    IDist: “Evolution by random chance is highly unlikely and puts the onus on the claimant because it is outside the universal probability bound.”

    Darwinist: “No it isn’t, see the probability of finding these RNA structures is quite good.”

    IDist: “Demonstration please.”

    Darwinist: “Can’t do that because everyone knows it would take eons of time.”

    IDists: “You just admitted your inference is outside of science. Thank you.”

  13. Well done, Joseph.

  14. Why wouldn’t the definition of random as an adjective be more appropriate in a scientific context?

    2 a: relating to, having, or being elements or events with definite probability of occurrence b: being or relating to a set or to an element of a set each of whose elements has equal probability of occurrence

  15. bfast said (03/10/2007 @ 9:55 pm) –
    >>>> “Designed to evolve” = the front-loading hypothesis. “front-loaded evolution” is “prescribed evolution”).

    Dogmatic anti-intellectual Darwinists like Judge You-know-who are satisfied that random mutation explains the changes that occur in evolution and try to suppress any contrary ideas as heresy. It is intelligent design that says that random mutation is not adequate to explain these changes. So which is the “science stopper,” ID or dogmatic Darwinism? On Panda’s Thumb, PvM tries to hocus-pocus that front-loaded evolution is consistent with Darwinism but not with ID, whereas the vice-versa is true. PvM, commenting on the opening post here, says,

    I have discussed these fascinating properties of RNA space and the topic of evolvability in many postings at PandasThumb. It’s good to come to realize that some IDers are actually reading scientific research, even though accepting scientific explanations completely undermines ID’s attempt to hide in ignorance.
    JohnnyB also gives me some hope that IDers, properly exposed to real science, will quickly reject Intelligent Design as scientifically vacuous.

    However, even if the mechanisms of front-loaded evolution actually exist or existed, the problems of co-evolution could still be a barrier to evolution. Where the co-dependent traits in both co-dependent species are harmful when the co-dependent traits in the other species are absent, then the co-dependent traits in both species would have to suddenly appear — often in large numbers of both species — at the exact same time and place. In commenting about this dilemma, John A. Davison said,

    The mutual morphological and physiological adaptations that characterize bee/flower relationships for example arose simultaneously as each form was reading the same prescribed blueprint setting up the relationship, a blueprint that had been established long before and was finally being read.

    I realize that this sounds crazy but I am convinced it is the only conceivable explanation. Convergent gradual evolution through natural selection is just another Darwinian fairy tale without a shred of evidence.
    – from http://im-from-missouri.blogsp.....6191630865

    JAD only said that “this sounds crazy” and proposed no mechanism by which these co-dependent relationships could arise simultaneously in the same place.

  16. Sorry, the opening of my comment, #14, misquoted bfast. My comment was screwed up by my use of the inequality signs as quote marks. Here is how my opening should have read:

    bfast said (03/10/2007 @ 9:55 pm) –
    “Designed to evolve” = the front-loading hypothesis.

    I agree (John A. Davison’s name for “front-loaded evolution” is “prescribed evolution”).

  17. I think the point being danced around here is, “design” is about as tautological as “random chance” is.

    If I write a book (Call it ‘The Dog Delusion’), the IDer can argue that this was an act of an intelligent agency – acts of consideration, forethought, planning, etc. The Darwinist can argue that the so-called agency was the product of random (yet predetermined!) forces – place of birth, life experience, genetics, etc. Akin to that Dawkins Delusion movie up on youtube.

    IDers are looking at the biological sciences and seeing more and more reason to believe that the systems therein indicate a grand intelligence (or intelligences, if you prefer) at work in their development. Darwinists only see yet more amazing things that random chance can produce. While good science and theory can be proposed by both sides, what constitutes ‘random and unguided’ and what constitutes ‘intentional and an act of agency’ is philosophy, and both are more or less on equal footing.

    And there’s the real reason Darwinists raise such a ruckus over ID: Science isn’t under attack. Their favored philosophy is. The science rests on observable, repeatable facts – it’s safe. The philosophy ultimately rests on belief and hope – it can be challenged easily.

    Which is why even Ken Miller couldn’t be stomached by PZ Myers for long. It doesn’t matter if you defend ‘orthodox’ darwinism and evolution in every way. If at the end of the day you can see intelligence behind and purpose within the sciences, you’re a threat to many ‘defenders of science’, because science isn’t really what many of them are defending.

  18. I agree, “front-loaded evolution” and Davison’s “prescribed evolution” are, as far as I can see, two terms for the same hypothesis.

    “PvM tries to hocus-pocus that front-loaded evolution is consistent with Darwinism but not with ID.” Now that’s a good one.

    Where the co-dependent traits in both co-dependent species are harmful when the co-dependent traits in the other species are absent, then the co-dependent traits in both species would have to suddenly appear — often in large numbers of both species — at the exact same time and place.

    I’m not convinced of this. I have not yet seen a co-evolutionary pair that could not have grown into their roles.

    I think of a wasp that I saw on a National Geographic special. It brings “food” for a fungus, and lives exclusively on the fungus. The only place the fungus exists is in the wasp’s nest. One could easily envision the wasp feeding a less environmentally selective fungus, and using it as part of the wasp’s food source. One could easily envision the fungus losing its ability to live in a generalized environment, becoming only able to survive in the wasp’s world. One could also envision the wasp gradually specializing its diet until all that it eats is the fungus. Now I know that I painted a just-so story; however, in the “no co-evolution could happen gradually” scenario, I believe that a just-so story carries reasonable weight of logical challenge.

    Could you please give a specific example of a co-evolutionary pair where a just-so story explaining its gradual development is not easy to come by?

  19. johnnyb,

    A couple questions.

    1. How accessible is the Wagner book? The chapter titles seem harmless enough but before I spend $45 for something, I would like to know how readable it is.

    The reason I ask is that your snippet “Consider a randomly chosen sequence that folds into a frequent structure etc.” is not something I understand, so are these types of concepts explained adequately in the book?

    The comment “currently working through” seems to imply it is a difficult read.

    2. Maybe you or someone else could explain what is meant by the excerpt you put in the post and its implications.

    Thank you.

  20. bfast said:

    ““Designed to evolve” = the front-loading hypothesis. There’s a lot to be said for the front-loading hypothesis, but I personally am more convinced of frequent acts of agency. Though I think that life is designed to withstand, even periodically benefit from, random accidents, I don’t believe that random accidents + the great cull engine in any way account for life’s diversity.”

    JGuy said:

    “My argument was that evolutionists (ie. the materialist ones) are in a dilemma because design is evident by the fact that it is impossible, and yet even if it were deterministically certain to occur, it would still exemplify the occurrence of a CSI impossibility, ie. that such chance natural laws would exist.”

    I think both these statements clarify very well the terms of the problem, which is a recurrent topic on this blog, and not always so clearly expressed. I’ll try to elaborate further a little bit.

    I think we should be aware of the equivalence (as far as I can understand) between the “designed to evolve” and the “front-loading” formulations. I will accept tentatively that both are one and the same thing, but if the supporters of that position don’t agree, I will be happy to understand better their point.
    So, for the moment, let’s refer to it as front-loading, which is the more common term. I have always had some problems with front-loading, but I can well accept that it is a position consistent with ID, but not in the same sense of the general “non front-loading” versions of ID.
    The difference, in my opinion, is as follows:
    1) If we accept the general ID arguments, as I do, we affirm that we are convinced, I would say beyond any reasonable doubt, that the laws of nature, at least as they are at present understood by physics, cannot account for the information observable in living beings. Design, an observable property of the outcome of intelligent agents, indeed can. So, design is the only available, acceptable, empirical explanation for what we daily observe.
    One point is very important in this formulation: as the fundamental argument here is that the laws of nature cannot explain biological information in a deterministic model, neither through necessity (that would require new laws, which have never been observed or conceived), nor through chance (that’s the main point of the Behe-Dembski work), an intelligent agent must have imparted information after the laws of nature were created, that is in “historical” times (in the wide sense of “after the big bang”). That can happen in various ways. The main possibilities are:
    a)special creation at special moments, with apparent violation of known laws of nature (the proper “creationist” hypothesis, no common descent in this form);
    b)special information imparting at special moments, but with common descent, with or without apparent violation of known laws of nature (we could call it “intermittent design with partial reutilization of both existing hardware and code”);
    c)information imparting in slow, continuous modality, with or without apparent violation of known laws of nature and with common descent(we could call it “continuous design with partial reutilization of both existing hardware and code”); in this case, information can be imparted at least in three different ways: intelligent controlled mutation, intelligent selection, or both.

    2) Let’s now discuss front-loading. If I understand it well, front-loading, in its various forms, postulates that the designer acted in planning the laws of nature, that is “before” the big bang, or at some other specific time (let’s call it “the front-loading time”), but not after that. In that sense, it is a deterministic position, which does not need any intelligent agent “after” big bang or front-loading time. So, it is nearer to the darwinist position, but I agree that still the front-loading hypothesis is compatible with ID, but only in the sense that such a front-loaded design could be in principle detectable, in other words a universe (or a starting front-loaded reality) planned to generate life can probably be proven so unlikely as to exclude “random generation”, and therefore presuppose design. But, in this sense, design detection is restricted to arguments of the same kind as the “fine-tuning” argument, and supporters of this view must anyway show how biological information can develop deterministically from the moment of front-loading (either the big bang or any other moment in time) on.

    So, to sum up, the general ID position is that a designer is needed each time significant new biological information (CSI, IC) is created, although the modalities of intervention could be various. The scientific support to this point of view comes from all the ID arguments, both negative (demonstration that no deterministic theory can explain new significant biological information, either by necessity, or chance, or both), and positive (demonstration that design, indeed, can). In this case, any kind of fine-tuning argument may remain true, but is not pertinent, or sufficient, to the discussion about biological information.
    On the other hand, any front-loading hypothesis can demonstrate design only at front-loading, that is by fine-tuning arguments of some kind, but is deterministic after the moment of front-loading. So, the usual ID arguments (CSI, IC) don’t apply here in their usual form, but should be transformed in some form of fine-tuning argument.

  21. I agree with jehu and jerry that the phrase:

    “For instance, for RNAs of length n = 100 nucleotides, a sphere of r = 15 mutational steps contains with probability one a sequence for any common structure. This implies that one has to search a vanishingly small fraction of sequence space…to find all common structures”

    is interesting, but rather obscure. I don’t understand what it means. If somebody can help, I will be grateful for that.

  22. Off topic:

    I don’t know if anyone else knows about this (especially DaveScot), but I found a full-length anti-global warming documentary (free) here:

    http://www.chalcedon.edu/blog/blog.php

  23. bFast said (03/11/2007 @ 4:25 pm) –

    Could you please give a specific example of a co-evolutionary pair where a just-so story explaining its gradual development is not easy to come by?

    I think that first it is necessary to recognize that co-evolution is fundamentally different from the kind of isolated evolution that is adaptation to the widespread fixed physical features of the environment — e.g., land, water, air, and climate. In the first of my five points in my Co-evolution Redux article on my blog, I say,

    (1) Unlike the kind of evolution which is adaptation to widespread fixed physical features of the environment, e.g., land, water, and air, in co-evolution there is often nothing to adapt to because the co-dependent trait is likely to be initially absent in the other organism.

    It is true that many cases of co-evolution may simply be adaptation to pre-existing traits of the other organism — for example, bees could have originally had senses of sight and smell and the flowers developed colors and scents to take advantage of those senses of the bees. But is it reasonable to assume that this adaptation to pre-existing traits occurred in every case of co-evolution? Some co-dependent traits are just so specialized that they cannot exist outside of the co-dependent relationship. For example, take “buzz pollination,” where the pollen cannot be carried by the wind because the pollen adheres so strongly to the flower that it takes the resonant vibrations of an insect’s wingbeats to shake it loose — see

    http://en.wikipedia.org/wiki/Buzz_pollination

    To answer your question more directly: Even in cases of co-evolution that are just incremental improvements of an existing co-dependent relationship, the appearance of a new co-dependent trait in one of the organisms offers no benefit in natural selection unless the corresponding co-dependent trait already exists in the other organism. For example, some flowers might have gained a leg up on their rivals by developing new scents, but this change would not have done these flowers any good unless bees already had the ability to sense those scents and were attracted to them. In some hypothetical cases of co-evolution, it is not possible for the mechanism of natural selection to operate.

    Even where co-evolution is possible, its problems can greatly slow it down, and time constraints are a problem even for isolated evolution.

    My blog has another post where co-evolution is discussed. There is much more to co-evolution than just “mutual evolutionary pressure.”

  24. gpuccio –

    I answered your questions in a (very extended) post, but it is not showing up here (the basic idea being that there is a distinction between the concepts of front-loading and platonic forms). Hopefully it simply got stuck in the moderator queue and is not forever in the bit bucket. If it doesn’t show up tomorrow, I’ll rewrite it in my own blog and link it from here.

  25. Re Gpuccio’s: rather obscure . . .

    You have a point. Here’s my take, from a digital code perspective:

    The key phrase,

    for RNAs of length n = 100 nucleotides, a sphere of r = 15 mutational steps contains with probability one a sequence for any common structure. This implies that one has to search a vanishingly small fraction of sequence space…to find all common structures

    . . . is a statement about what can be called configuration space.

    1] Namely, if we imagine a 100-letter RNA string, made up from the letters GCAU, then there are 4^100 possible strings from GGGG . . . . to UUUU… and all the possible mixes in between. [4^100 ~ 1.61 * 10^60; the number of atoms in our galaxy is probably of order 10^65 – 10^70]

    2] So far, that is just a matter of defining the relevant space. Now, start at a specified (functional) chain 100 letters long [NB: translates into ~33 amino acid residues, given the 3-letter codons], then allow up to 15 elements to mutate at random.

    3] Wagner is arguing, in effect, that with certainty [p = 1], the commonly encountered biofunctional structures are within 15 such random steps of that initial code. (That is, he is introducing a measure of the “distance” between configurations of interest.)

    4] The “sphere” reference is to an imaginary hyper-sphere, such that each configuration plots to a specific point in that imagined space.

    5] Since a 15-step radius sub-space is much smaller than the overall configuration space [the number of sequences within 15 steps of a 100-mer is roughly the sum over k ≤ 15 of C(100,k) * 3^k, about 4 * 10^24 — still a vanishing sliver of the ~1.6 * 10^60 total], and is compact within it, we can observe that it is going to be hard indeed to access the required cluster of target-rich subspaces from an arbitrary start-point in the overall space to make an initial functioning life-form, by random chemical searches in one form of prebiotic soup or another.

    6] This is of course, sparseness and specification relative to the overall space of possibilities. So, we are looking here at functionally specified complex information that is sparse relative to the overall space of possibilities, thence the Dembski type bound.

    7] That means that, in a world where DNA strings for real life forms are typically 500,000 to 3,000,000,000 or so bases in length, random processes are maximally unlikely to initially access biofunctional states, and the increments in functionality in say the Cambrian life revolution are also hard to explain, though of course Mr Wagner is not going there. [RNA is as a rule templated off DNA strings, in observed life forms.]
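    8] As a toy illustration of that sparseness (my own sketch, not from Wagner’s book — it uses short strings and an arbitrary “target” configuration so it runs quickly), blind sampling essentially never lands near a designated point in configuration space:

```python
import random

random.seed(0)
BASES = "GCAU"
n, r, trials = 20, 5, 100_000  # short strings keep the toy fast

def random_seq():
    # A uniformly random RNA string of length n
    return "".join(random.choice(BASES) for _ in range(n))

def hamming(a, b):
    # Number of positions at which two equal-length strings differ
    return sum(x != y for x, y in zip(a, b))

target = random_seq()  # stand-in for a "functional" configuration

# How often does a blind draw land within r mutational steps of the target?
hits = sum(hamming(random_seq(), target) <= r for _ in range(trials))
print(f"{hits} hits in {trials} blind draws")
```

    Even with a generous radius of 5 on strings of only 20 bases, hits are essentially nil; at n = 100 — let alone realistic genome lengths — the odds collapse by dozens more orders of magnitude.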

    Pause . . .

    GEM of TKI

  26. Continuing . . .

    Now, Wagner (it seems), as just noted, is not aiming down the ID road; he is trying to explain robustness in life forms within a generally Darwinian framework, e.g. as the sample chapter indicates:

    Living things are unimaginably complex, yet they have withstood a withering assault of harmful influences over several billion years. These influences include cataclysmic changes in the environment, as well as a constant barrage of internal mutations. And not only has life survived, it has thrived and radiated into millions of diverse species. Such resilience may be surprising, because complexity suggests fragility . . . . A biological system is robust if it continues to function in the face of perturbations . . . perturbations can be genetic, that is, mutations, or nongenetic, for example, environmental change . . . . ultimately robustness of only one organismal feature matters: fitness–the ability to survive and reproduce . . . it is necessary to analyze, on all levels of organization, the systems that constitute an organism, and that sustain its life. I define such systems loosely as assemblies of parts that carry out well-defined biological functions. Examples include DNA with its nucleotide parts, proteins with their amino acids, metabolic pathways and their enzymes, genetic networks and their genes, and developing organs or embryos with their interacting cells . . .

    He in effect argues that because there are many possible, fairly closely spaced forms for biofunctional macromolecules, they are robust against such variations. But then, we observe that that is what sets up the argument above, on sparseness in the wider configuration space.

    I think, too, that in highlighting that there are many possible variations on say biofunctional proteins, he apparently glides over the issue that in many key domains within the amino acid strings for enzymes, one random change can destabilise the whole system, especially through misfolding or loss of biochemical functionality. In short, it seems to me, based on what others have to say on biofunctionality and its sensitivity, e.g. Meyer, Axe etc, there is seemingly yet another side to the story on robustness, i.e. the sub-space in question has in it a sea of non-functional states, with islands of functionality. Cf. Meyer:

    For a functioning protein, its three-dimensional shape gives it a hand-in-glove fit with other molecules, enabling it to catalyze specific chemical reactions or to build specific structures within the cell. Because of its three dimensional specificity, one protein can usually no more substitute for another than one tool can substitute for another . . . the three-dimensional specificity derives in large part from the one-dimensional sequence specificity in the arrangement of the amino acids that form proteins. Even slight alterations in sequence often result in the loss of protein function . . . . To maintain viability, the cell must regulate its metabolism, pass materials back and forth across its membranes, destroy waste materials, and do many other specific tasks. Each of these functional requirements in turn necessitates specific molecular constituents, machines, or systems (usually made of proteins) to accomplish these tasks. Building these proteins with their specific three-dimensional shapes requires specific arrangements of nucleotide bases on the DNA molecule.

    Who is right on robustness vs sensitivity? [Or are we here seeing different pieces of an overall, complex picture, where both sides have a point?] Over to the molecular biologists for further clarification.

    GEM of TKI

  27. H’mm:

    Let me augment a bit.

    1] I think that Dr Wagner in effect gives us a size measure on the overall reasonable dimensions of islands of functionality.

    2] Meyer’s point tells us that such “islands” are not compact, i.e. they are clusters of perhaps partly connected islets of functionality, sitting in lagoons of non-functionality.

    3] In turn, it seems that the islet-clusters [and archipelagos] are sparse in the ocean as a whole.

    That means that it is hard to get to the islands to begin with from an arbitrary start-point, and that once you are on an island, a long blindfolded random jump in any direction may be far more likely to end in a splash than a safe landing.

    That would make micro-evolution possible, but would make macroevolution hard indeed to get to. It is also consistent with the issue that it is hard indeed to credibly get to the first living form.

    Any thoughts?

    GEM of TKI

  28. Kairosfocus,

    thank you for your contributions, which have helped me a lot to understand the point Wagner seems to express (although, obviously, knowing a little more about Wagner’s arguments and context would help not to misunderstand him).
    I agree with all that you say, and I have some further remarks:

    1) First of all, I notice that Wagner is speaking of RNA, so I think we are in the context of a discussion about OOL and the RNA world. Otherwise, I don’t understand why he is speaking of RNA and not DNA or proteins.
    Speaking of RNA world scenarios, which are, I think, the climax of ungoverned human imagination, there is a particular aspect that I can’t understand, and which is partially relevant to the discussion here: what do we mean, in RNA, by “function”? Indeed, RNA is the only known molecule which has two fundamental and well differentiated functions: the first direct, that is, the “enzyme-like” function of ribozymes, where the RNA molecule is an effector of something; the second indirect, where the RNA molecule is used to store or transmit information about protein sequences or else. I think the RNA world scenario is so popular just for that reason, because it is the only (un)reasonable way to bypass the “chicken-egg” dualism of any protein-DNA OOL scenario.
    But, as it often happens, solving one problem may cause new, bigger ones to appear, and this is just the case with the RNA world. Even if we don’t consider the basic impossibilities of the hypothesis itself, there is indeed a further “difficulty” which arises: how can we conceive that the first living beings developed their precious and complex functions (membranes, metabolism, reproduction, and so on) through ribozymes, transmitting the information through the RNA itself, and then, more or less suddenly (take as many millions of years as you want, we have some quantity in store…), shifted to a DNA-protein system (or even an intermediate RNA-protein system), where the effector function and the information function are completely separated, in two different molecules, and connected via a complex “language” (the DNA genetic code, including all the structures necessary for its transcription and translation)? Here we have a practical connection with the problem of configuration spaces and function, because, however we conceive these configuration spaces, there is no doubt that the configuration of functional spaces for RNA must be completely different from the configuration for proteins (after all, we are speaking of two completely different types of macromolecules). So I ask: how is it possible, or even conceivable, that all the information painfully accumulated to get function through RNA conformations could be “translated” into genetic information to get the same function through proteins, and at the same time coded through a genetic code which was in no way necessary in the RNA world scenario, and which has, in itself, no necessary connection with the functions it encodes?

    2) Coming back to configuration spaces and function, I wonder if Wagner’s statements about RNA configuration space are hypothetical, or derived from a theoretical argument, or derived from facts. That would make a great difference, because I am sometimes tired of discussing theories or statements which have, ultimately, no valid justification. Anyway, as far as I know about proteins, we have a lot of examples in nature of single amino-acid mutations which cause a more or less complete loss of function: I am obviously speaking of Mendelian diseases. So, it is obvious that, at least in proteins, a single change can often cause the loss of function. For proteins, it is important to understand if we are speaking of the general configuration space of the whole protein (usually hundreds of amino acids), or of the much smaller configuration space of the active site. I think an experimental work was published by Behe about the specificity and/or robustness of an enzymatic site in proteins, but I do not have the link here.

    3) Finally, I think that understanding the relationship between protein structure and function is one of the most important challenges today, and that only biophysics can help clarify the subject. One of the problems that I can imagine, in discussing possible functional subsets of a general configuration space, is that we should distinguish between the function of an effector, which can in some way be anticipated, and the function of an informational intermediate molecule, which differs according to the context (for instance, an enzymatic chain). Indeed, in proteins the effector function is always linked to a recognition function (and therefore, to an information value): an enzyme, for instance, can catalyze a chemical reaction with extreme efficiency because it “recognizes” the two substrates. But still, I think it is different if we consider some chemical step indispensable for life (let’s say, ATP formation), or some chemical step which is only an intermediate in an information chain (let’s say a membrane cytokine receptor). In the second case, in principle, any cytokine-receptor-intracellular transmitter chain could be good to transmit some specific information, provided that it is correctly “linked” to the final effector and outcome. In other words, in computer code we can choose any name for a variable, and the code will work just the same, provided the variable name is linked to the right memory location and procedures. So, my problem is: how can we anticipate all possible functional states, when in principle any configuration could have an informational value, if correctly recognized?
I think we could have here a good example of CSI based on pre-specification: a specific amino-acid sequence which is specified not because of any intrinsic property of the sequence, or of any particular biochemical function it can have, but only because it is “known in advance” by some other part of the code, and can therefore be “recognized” and carry the right information, like the name of a variable.
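The variable-name point above is easy to illustrate with a toy fragment of code (hypothetical names, of course): the token itself carries no function; what matters is that every reference is consistently linked to the same value.

```python
def total_v1(items):
    subtotal = sum(items)       # a meaningful name...
    return subtotal * 1.1

def total_v2(items):
    zq7 = sum(items)            # ...or an arbitrary token: behaviour is identical,
    return zq7 * 1.1            # because the linkage, not the name, carries the function

# Both versions compute the same result for the same input.
assert total_v1([10, 20]) == total_v2([10, 20])
print("identical results:", total_v1([10, 20]))
```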

  29. Hi GP:

    You have raised several excellent questions! (I would like to see the evolutionary materialist OOL advocates answer them adequately!)

    Let’s just say that once I see functionally specified complex information beyond the carrying capacity of 500 bits, i.e. the Dembski bound, I think that, absent overwhelming demonstration why such is improper, agency is the best explanation. Even more so when we see sophisticated nanotechnology information systems which use identifiable codes, not simple chemical bonds, e.g. the DNA-ribosome-RNA-enzyme system that synthesises proteins etc. That gets compounded when one proposes that life starts with RNA then shifts to DNA etc . . .
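For onlookers, the arithmetic behind the 500-bit figure can be sketched quickly (illustrative numbers only, not Dembski’s own derivation; the ~10^150 figure is the commonly cited universal probability bound):

```python
# Back-of-envelope on the 500-bit threshold (illustrative, not a derivation).
configs = 2 ** 500          # distinct states a 500-bit configuration can take
upb = 10 ** 150             # commonly cited universal probability bound

bases = 500 // 2            # at 2 bits per nucleotide, 500 bits ~ 250 bases of DNA/RNA

print(f"2^500 ~ {configs:.2e}")              # about 3.27e150
print(f"ratio to 10^150: {configs / upb:.2f}")
print(f"equivalent nucleic acid length: ~{bases} bases")
```

So a 500-bit specification already corresponds to more configurations than the estimated number of elementary events in the observed universe, and to only about 250 nucleotides of sequence.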

    As to the RNA world hypothesis itself, the man I call Honest Robert, i.e. Prof Shapiro, has just published a devastating analysis, here. [His own metabolism-first analysis IMHCO is also subject to a similar set of problems to the remarks in what I have bolded below, but has the modicum of empirical support that at least monomers can be synthesised and found in nature . . .]

    Here is an excerpt:

    RNA’s building blocks, nucleotides, are complex substances as organic molecules go. They each contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern. Many alternative ways exist for making those connections, yielding thousands of plausible nucleotides that could readily join in place of the standard ones but that are not represented in RNA. That number is itself dwarfed by the hundreds of thousands to millions of stable organic molecules of similar size that are not nucleotides . . . .

    A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life. (When larger carbon-containing molecules are produced, they tend to be insoluble, hydrogen-poor substances that organic chemists call tars.) I have observed a similar pattern in the results of many spark discharge experiments . . . . [but no RNA precursors form in such experiments]

    To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . .

    The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.

    That is where evolutionary materialist OOL scenarios stand today. Similar issues arise over any scenarios that try to produce the increments in biofunctional information and associated macromolecules for the diversity of life reflected in, say, the Cambrian revolution. Going from, say, 500 k to 1 M monomer DNAs, to 180 M, onward to 3-4 Bn, simply exhausts the probabilistic resources of the observed universe. Period.

    So, I find then excellent reason to be a design thinker.

    GEM of TKI

  30. GP, BTW:

    I would love to see the run-down on the link:

    I think an experimental work was published by Behe about the specifity and/or robustness of an enz[y]matic site in proteins, but I have not the link here.

    One of the ad hominem accusations I have seen against him is that he is “lazy” and has done little or no peer-reviewed publication in recent times. (This is part and parcel of the claim that ID is about public relations, not “real” science [original empirical research], I guess a new mantra now that it cannot honestly be said that there are no peer-reviewed ID publications. Somehow the censoring and harassing of pro-ID researchers never gets put into the picture when that one comes up. Cf Dembski’s thread on Sternberg. Or, if an ID supporter raises it, it is dismissed and the victim is blamed.)

    I am also interested in the finding you say you have seen, for its own value.

    GEM of TKI

  32. Hi Kairosfocus,

    “So, I find then excellent reason to be a design thinker.”

    Me too!

    Here is the link to Behe’s article:

    http://www.proteinscience.org/.....04802904v1

    The article was published in Protein Science and is very interesting. Unfortunately, only the abstract is available at this link, although I remember having read the whole article somewhere. Even if it is not exactly about the subject we were discussing (functional configuration spaces), it touches many related aspects.

    By the way, and speaking of intellectual persecution, have you ever checked the site of the Department of Biological Sciences, at Lehigh University, where Behe works?

    Please compare Behe’s very dignified disclaimer here:

    http://www.lehigh.edu/~inbios/faculty/behe.html

    with the shameful “claimer” of one of his colleagues here:

    http://www.lehigh.edu/~inbios/itzkowtz.html

    or with the following “politically correct” citations on the pages of other members of the staff:

    http://www.lehigh.edu/~inbios/.....asands.htm

    http://www.lehigh.edu/~inbios/faculty/barry.htm

    and, finally, with the “official” statement of the department here:

    http://www.lehigh.edu/~inbios/news/evolution.htm

    Evidently, a single man is causing much embarrassment to a lot of “respectable” scientists, just by thinking freely in the same institution! No further comment needed…

  33. Hi again GP:

    I found the article, in full form here. Downloaded and saved for personal, non-commercial academic reasons.

    Thanks.

    I think you are right to highlight the issues of PC science attitudes. (I would like for Mr Behe to post somewhere his full set of research and publications . . . including difficulties with getting past “the panel of peers.” I note the apparently sharp taper-off on publications post 2004, just the time the brouhaha over Sternberg came out. Cf current discussion here in UD.)

    GEM of TKI

  34. Sorry for my delay in posting. I have injured my knee and shoulder, both my kids are sick, work is busy, and I have a midterm exam and a midterm paper due. And no, I’m not making any of that up.

    Anyway, here is my discussion of front-loading and platonic forms.

  35. JB:

    Hope recovery sets in soonest!

    Your remarks are a useful summary; in brief, it looks to me like:

    1] front-loading [a la Mike Gene] is about having the quantum of information that allows later descent with modification [mostly through information loss/specialisation?], and that

    2] Forms [a la Plato] define “physical” constraints that force systems to go to certain forms. [E.g. Denton on protein folding. Perhaps, too, the European and Australian wolves, etc.?]

    3] Your own view on the relevance of the two is:

    Information requires freedom of choice for the agent, while platonic forms are specifically about excluding freedom of choice through time. That doesn’t mean there aren’t both mechanisms in play, only that a single mechanism cannot be simultaneously part of both . . . . there is a platonic-defined set of biological forms, but they are fundamentally unreachable without an infusion of information. Both processes are active, with platonic forms being the part that keeps system perturbations from becoming catastrophic, but the preloaded information is what helps adapt to new situations. [Onlookers, it would be wise to read the whole post at JB's blog.]

    In other words, you see here agency using contingency [information] and necessity. In some part, this is in anticipation of the effects of chance causing environmental perturbations, that would otherwise be destabilising.

    I add, based on the issue of feedback control, could there not also be a negative feedback system at work, in part through survival of the fittest, leading to stabilisation under stable environments, and a certain measure of adaptation under sufficiently strong environmental shifts? [This would include the founder principle and the valid part of punctuated equilibria. NB: It would also explain both the power of artificial breeding to cause emergence of diverse breeds, and the fact that there are limits to what breeding can do. This last is of course the point where IMHCO Darwin probably went too far in citing breeding as evidence for the plasticity of the species in the macroevolutionary, common descent sense.]

    Interesting.

    GEM of TKI

  36. Here is my two cents. It has been said by some biologists that ecologies are the most complex systems on the planet. Because a stable ecology requires complicated adjustments to survive it is necessary for the biological part of the ecology to change when necessary or in other words to adapt.

    This is a simple concept, and I do not think too controversial. So if I were an intelligent designer, what would I do? First, I would provide a means for each of the biological species to change somewhat as conditions change through chance or law. But second, and here is the anti-Darwinist part, the changes would be limited.

    As I look out my window, I see an ecology and according to Darwin’s Malthusian view, there is a struggle for existence among its members. And this scenario is repeated billions of times over the planet. But we never see a new form of any consequence emerging from this struggle anywhere. This is the plainest refutation of Darwin I know.

    So I propose there may be built in limitations on change. Otherwise why don’t we see smarter, faster, longer living, better seeing, hearing etc amongst the members of the ecology.

    So while organisms have built in means to evolve in the sense that allele frequencies can change over time they must have built in limitations even for gradual adaptations. Otherwise why no faster, longer living, better seeing etc progressions. It just does not happen in nature.

  37. So I propose there may be built in limitations on change. Otherwise why don’t we see smarter, faster, longer living, better seeing, hearing etc amongst the members of the ecology.

    Fleeming Jenkin (not Jenkins), a nineteenth-century engineer and critic of Darwin, pointed out a similar kind of limitation, based on the idea that any form (crystalline), as it moves away from its present form, cannot vary itself without limit; in fact, the number of permissible permutations (or mutations) becomes more and more limited. But, of course, no one paid any attention to Jenkin, apparently.

johnnyb:

    First of all, my best wishes for you and your kids.
    Thank you for your very helpful discussion on your site. I was going to post some considerations here, but I am leaving for a few days, and there is not enough time. I hope to post again on Monday.

  39. gpuccio:

    Please drop a note in my blog when you post, so I’ll know to come check.

  40. GP & JB:

    I’ll be watching out for it . . .

    Jerry & PaV:

    Adaptability within pre-programmed limits. Very interesting ideas. Any links on that old engineer? (Apart from a loopback to an older UD thread . . .)

    GEM of TKI

  41. Hi, johnnyb and kairosfocus:

    I just want to keep my promise and give some sequel to the discussion here.

    As I have already said, I found johnnyb’s summary about frontloading (on his name-linked site) very illuminating. So I would like to add some comments, completely accepting, as a starting point, his definitions and classification of the various positions. If I understand correctly, we could agree that, from a design point of view (we will here leave aside the Darwinist points of view, assuming that we already agree they are completely unsubstantiated and self-contradictory), there are at least three great lines of thought, as follows:

    1) Platonic forms. I cite from johnnyb: “Platonic forms is the idea that physics is set up so that there are only a small set of possible configurations that life could have. The reason that life keeps coming up with the same type of solutions over and over again is that these are the forms allowed by physics itself”.
    I think we have to make a distinction here. There is a weak form of this argument, which is in essence the same as the general fine tuning argument of physics: physical laws and constants are intelligently selected from an almost infinite configuration space, so that our ordered universe is possible. Any slight deviation from the specific set of values we observe would make the universe chaotic (no atoms, molecules, galaxies, and obviously no life). In this form, which I strongly support, it is evident that very important information has been selected by the designer in planning the whole universe, and that kind of design is absolutely “necessary” for life to emerge. But the point remains open about the “sufficiency” of that information for life. Life could still have emerged by completely deterministic mechanisms (once the correct premises were set before the big bang), like in the darwinian model. Or it could still need other information additions.
    I think we all can agree on this weak kind of “platonic forms” in the basic physical laws, and it still remains a very strong cosmological argument for the existence of God.
    But I understand that your definition of the “platonic forms” view is stronger than that. It implies that “physics only allows certain biological motifs. And thus, while there are untold numbers of protein sequences, there are only a small number of folds available to them”.
    I must say that I have difficulty accepting this stronger formulation. First of all, one should demonstrate that such a “restriction” of possible motifs and protein folds is a necessary consequence of basic physical laws, and I am not aware of any argument in that sense. Second, I think that the only purpose of such a formulation seems to be that of allowing a purely “natural” mechanism for the emergence of biological information, once the basic physical laws are fixed. But all the ID arguments, CSI and IC first of all, have demonstrated that such an emergence is impossible by chance, and is best explained by design. The only other possibility is necessity, but that would require “new” physical laws, of which we have presently no knowledge, or detailed mechanisms linking known physical laws to the above-said restrictions.
    Besides, I can’t see how any “platonic forms” explanation could account for OOL. Not only is there no known physical law which can account for the spontaneous generation of complex organic molecules from inorganic matter; just the contrary is true: known physical laws clearly demonstrate the impossibility of such an event. Indeed, all OOL scenarios, from Urey-Miller ponds to RNA worlds, are mere fiction, as has been well discussed elsewhere.
    And even if we admitted (but I don’t!) that only few protein folds are accessible, and that for some mysterious reason they are the ones which can bear function, one should still explain the higher informational levels, like the order and control of function, the procedure code (still unknown to everyone), the error management, the general plan for multicellular organisms, etc. It seems obvious to me that all these aspects cannot be explained in terms of mere protein folding, however platonic.
    So, while I perfectly agree that specific fine tuning is necessary for life to emerge, I believe that all the ID approach is evidence against a purely mechanistic explanation of biological information, even allowing for very intelligent initial choices.

    2) Frontloading. I cite from johnnyb: “ Front-loading is the hypothesis that at some point in the past (usually at the origin of life), organisms were given a rather large deposit of information. The history of life from that point onward has been primarily governed by that information, specializing into the different species we have today”.
    That’s an interesting point of view, but again I have some difficulties, although of a different nature.
    First of all, if I understand correctly, this theory assumes a specific act of “information imparting” (or creation, if we prefer) at OOL. On that I agree. I think that, however one considers what happened afterwards, OOL can only be explained by a very unusual event. Maybe natural laws were not violated (it is possible, in principle), but it is difficult to conceive OOL as a “gradual” event, because nothing we know or can conceive of is even near to a “precursor” of the simplest living beings. So I strongly believe that the only plausible scenario for OOL is that bacteria, or archaea, probably very similar or identical to those we know today, must have been “assembled” from non-organic matter according to a specific plan by a designer.
    But, always if I understand well, according to the frontloading hypothesis the designer not only provided these first living beings with the information for their existence, survival and reproduction, but also with the information to “evolve” to the more complex species, up to humans. In other words, once that information was frontloaded in the first living beings, the following “evolution” can be explained by physical laws and mechanistic events, obviously exploiting the initial information. I understand also that, according to some supporters of this theory, the initial “extra” information is no more present, having exhausted its role, or as johnnyb states: “ most people who hold to this view think that there is at least some of that original deposit left hanging around in “junk DNA””. Well, that seems consistent enough, but here are my objections:

    a) First of all, why? I mean, obviously one can postulate any model one likes, but usually a specific model tries to address some specific difficulty. Well, it seems to me that one of the main fights about “evolution” is that some (the Darwinists) are convinced that everything must be explained in terms of known physical laws and deterministic mechanisms, while others (IDists) believe that living beings (or at least the information in them) can best be explained by the intervention of a designer. Please note that the intervention of the designer need not (although it certainly can) violate physical laws, but it is absolutely necessary (according to ID) to explain the observable fact of the appearance of biological information. Then my problem is: what is the need for a third hypothesis which shares the difficulties of both the others? Because frontloading certainly postulates at least one intervention of the designer, at the beginning of life, and indeed such “intervention” would have to be “heavier” than is supposed by normal ID, having to explain not only the emergence of life, but also any future evolution of it. So, front-loading creates a problem for “naturalistic” thinkers as much as any other ID model. But, at the same time, it seems to postulate that the designer acted only once, and inside time, leaving all the rest to deterministic mechanisms. Again, I don’t understand: why? What is the advantage of thinking that way? What kind of facts are more easily explained that way? I can’t see any reason why a designer, who can impart information once, should not do so other times, or even continuously. And we know that the only “observable” facts which are brought up by Darwinists are in essence homologies, either morphologic or genomic, and we know that homologies are, at best, evidence of common descent (or of reutilization of the code), and are never evidence of a specific mechanism. So again, why?

    b) Second: how? It’s not enough to postulate something; you must also have a credible model for your hypothesis (unless you are a Darwinist, of course…). So, what is the model? Was the extra information stored in the primordial genome of the first bacteria? How? How much bigger, then, was that genome? How much extra information was necessary to “guide” evolution up to humans? It seems to me that here we are again in pure speculation, unsupported by any fact, but again maybe I am not aware of something.

    Moreover, if many think that part of the extra information survives in non-coding DNA, I see a very difficult contradiction here. I am the first to believe in the importance of non-coding DNA. With Mattick and others, I am convinced that much of the “missing code” which can explain procedures and regulations must be there. But Mattick has shown that non-coding DNA is the only “quantity” which constantly increases with the complexity of the species. Indeed, the ratio between non-coding DNA and total DNA is the best quantitative marker of complexity in living beings, being extremely low in prokaryotes, and going up to 98-99% in humans. If non-coding DNA were the repository of the initial extra information, I think we should observe the opposite.
    In other words, while frontloading is a possible model, bypassing at least some of the difficulties of naturalism, still I find it unnecessarily complicated. Besides, unless a specific and credible model of this “extra information” is provided, I believe that the general ID arguments against the spontaneous generation of information by natural mechanisms still apply.

    3) ID proper. This is obviously my favourite scenario. In this model, design is imparted to living beings by one or more designers, inside time, and during time, many times or continuously. In this model, again, it is not specifically important whether the imparting of information is implemented by an intelligent “manipulation” of known physical laws, as with human designers, or by violating them (miracle or creation acts), or by superior laws at present unknown to us. Or whether it is implemented continuously or in a “punctuated” way. Personally I have always been attracted by a continuous model, mainly because I believe (not a scientific statement, this one) that God acts in the world practically always. But I must admit that the OOL problem and the Cambrian explosion fossils strongly support a special intermittent implementation.

    Well, enough for now. I am posting this at UD, but I will leave a note on johhnyb’s site.

  42. Interesting

    GEM of TKI

  43. gpuccio

    Read what’s been written here regarding Directed Panspermia, which should answer many of your questions and objections. Basically the front loading hypothesis is consistent with Francis Crick’s postulation of directed panspermia. If you were a technological species wanting to spread your kind of life to a planet in another solar system, what practical ways are there of accomplishing that feat, given the limitations physics imposes on traversing interstellar divides? You’d need to restrict yourself to a very small payload in order to get it moving at a significant fraction of the speed of light, and you’d need a small variety of microscopic lifeforms that could “terraform” a planet into a suitable environment for the eventual expression of a rational technological species — one that could build a civilization and eventually continue the process of spreading life to other solar systems. Terraforming a planet takes a very long time. Laying down fossil fuel reserves to power an industrial civilization takes a very long time. But it can all be planned and executed from simple beginnings.

    On the DNA information cache, try reading what’s been written here about Amoeba dubia. In a nutshell, this is a single-celled organism that feeds on bacteria, is probably very, very ancient, and is living proof that organisms can thrive while carrying a genome 200 times the size of the human genome. Imagine how many different phyla could be encoded in a genome that size. And that’s just the current record holder for largest extant genome. We’ve only measured the genome size of a tiny fraction of all the different organisms that inhabit the earth today. Maybe one or several of them are waiting to be discovered as the seed organisms we could use to continue the cycle of life by sending them to a suitable young planet around another star. We’re getting close to the point where our telescopes can locate these types of planets, and we’ve already got one spacecraft (Voyager I) that has exited the solar system and is zipping through interstellar space to parts unknown even as we speak.
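    The “200 times” figure above is easy to sanity-check. As a back-of-envelope sketch (the specific base-pair estimates — roughly 670 billion bp for Amoeba dubia and 3.2 billion bp for the human genome — are commonly cited figures, not numbers taken from the comment itself):

    ```python
    # Rough check of the "200x" genome-size claim.
    # Both figures below are assumed, commonly cited estimates.
    human_genome_bp = 3.2e9      # human genome, ~3.2 billion base pairs
    amoeba_dubia_bp = 670e9      # reported Amoeba dubia estimate, ~670 billion bp

    # How many human-genome-equivalents fit in the Amoeba dubia genome?
    ratio = amoeba_dubia_bp / human_genome_bp
    print(f"Amoeba dubia genome is roughly {ratio:.0f}x the human genome")
    ```

    So the ratio comes out at roughly 200, consistent with the claim — though it’s worth noting these older genome-size estimates are approximate.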

    The beauty of this is that it follows the pattern of all life — finding fertile new places to reproduce and carry on — just on a grander scale in time and space. It’s also a testable hypothesis: there should be mechanisms for preserving unexpressed genomic information over geologic timespans (you’ll find an article in the first link where evidence of such a mechanism may have been discovered recently), and there should be, somewhere in some extant genome, a library of phylogenetic specifications or identifiable remnants of one (but I’m betting on the whole library being intact somewhere in some organism such as dubia, or perhaps scattered as a distributed database amongst a number of such organisms). Whole planetary ecologies reproducing on new planets, like a dandelion putting its seeds into the wind to expand its range, is an elegant concept. It leaves open the question of who and what got the ball rolling in the first place, but maybe that information is awaiting discovery too, as we get around to sequencing and understanding all the genomes in all the world. Our ability to sequence genomes is growing exponentially, so it should be just a matter of time as we chip away at the vast store of genomic information on this planet and learn what it all means.
