
Airplane magnetos, contingency designs, and reasons ID will prevail

Intelligent design will open doors to scientific exploration which Darwinism is too blind to perceive. The ID perspective allows us to find designed architectures within biology which are almost invisible to natural selection. Thus, the ID perspective is a far better framework for scientific investigation than the Darwinian perspective. What do I mean, and how will I justify my claim?

Let me illustrate my point with some anecdotes. I was piloting a small airplane in the spring of 2002 when it suffered a potentially serious systems failure in flight. In piston-powered aircraft, the electrical ignition system (called a magneto system) is life-critical. Aircraft engineers consider the magneto system so crucial that they design each engine with two redundant, independent magnetos. If one magneto fails, the other seamlessly takes over. In fact, these dual-redundant systems are so effective that a pilot may not even know that one of the magnetos failed in mid-flight until he’s back on the ground doing a routine inspection of his airplane!

Well, that’s what happened to me on my flight in 2002. My left magneto gave out and I continued flying on only the right magneto. There are no instruments on board to indicate if one of the magnetos fails; the failure is usually discovered after landing. The airplane flies just fine on one magneto as long as the other is working. That is by design.

How did I eventually realize I had a left magneto failure? After I landed, I took a break, then prepared to take off again. I went through a routine procedure to check out the airplane’s airworthiness.

I started the engine and followed several procedures on my checklist. I then got to the part of the checklist where the integrity of each magneto is tested separately: shut off one magneto and leave the other on.

“Engine 1800 RPM: check!”

“Right magneto: check!”

“Left magneto: Whoa! Holy smokes!”

The engine practically cut off during the left magneto check. Since there are no instruments to indicate a mid-flight magneto failure, I had been flying through the air blissfully unaware that the left magneto had failed. “Ignorance is bliss.” Ha! As I came to the realization that I had been flying on only one magneto, I had visions of what might have happened had the right magneto also failed: visions of me having to fly the airplane with a dead engine, and of me gliding the airplane to a safe landing in someone’s backyard…(ah, but I digress)….

What does this have to do with biology and Darwinism? One way Darwinists conclude something is evolutionary junk, a vestigial feature, or an otherwise useless biological artifact is to apply “knock-out” experiments on an organism. If a piece of the organism is knocked out, and the organism still functions well and is otherwise “fit”, then the knocked-out piece is deemed useless, an evolutionary leftover, junk, or even bad design.

What’s wrong with such logic, you ask? Well, allow me to clarify. Imagine applying this line of reasoning to the architecture of a magneto-fired airplane engine:

We knocked out the left magneto system on Airplane X and determined the airplane flies just as well without it. We knocked out the right magneto system on Airplane Y and determined the airplane flies just as well without it. We therefore conclude from these knockout experiments that neither the left magneto nor the right magneto has any functional significance, since the airplanes were clearly fit without them. Magnetos are unneeded vestigial artifacts, junk, and evidence of poor design, totally useless to the airplane. Furthermore, this is further evidence that airplanes are made by blind watchmakers.
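
To make the logical gap concrete, here is a toy simulation in Python (a sketch only; the component names and the two-magneto model are just an illustration of the reasoning above, not a claim about any real experiment):

    def engine_runs(components):
        # The engine runs as long as at least one magneto system is intact.
        return components["left_magneto"] or components["right_magneto"]

    full = {"left_magneto": True, "right_magneto": True}

    for part in full:
        knocked_out = dict(full, **{part: False})
        print(part, "knocked out -> engine runs:", engine_runs(knocked_out))

    both_out = {"left_magneto": False, "right_magneto": False}
    print("both knocked out -> engine runs:", engine_runs(both_out))
    # Each single knockout looks harmless; only the double knockout reveals the function.

Each single knockout reports “no effect,” yet the pair of magnetos is anything but functionless.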

Think I’m kidding, and evolutionary biologists don’t make these kinds of obviously bad inferences?

See:
Minimal genome should be twice the size, study shows

“Previous attempts to work out the minimal genome have relied on deleting individual genes in order to infer which genes are essential for maintaining life,” said Professor Laurence Hurst from the Department of Biology and Biochemistry at the University of Bath.

“This knock out approach misses the fact that there are alternative genetic routes, or pathways, to the production of the same cellular product.

When you knock out one gene, the genome can compensate by using an alternative gene.

But when you repeat the knock out experiment by deleting the alternative, the genome can revert to the original gene instead.

Using the knock-out approach you could infer that both genes are expendable from the genome because there appears to be no deleterious effect in both experiments.”

Knockout experiments have also been used to argue “junk DNA” is junk. This is outright bad science, but it persists because of Darwinists’ eagerness to close their eyes to design and paint various artifacts in biology as the product of a clumsy blind watchmaker rather than an intelligent designer.

The strategy of using several different means to achieve a particular goal, where each of the individual means is sufficient by itself, is used in many engineered systems to ensure that the goal will be achieved even if one or more of the means fail. For example, the space shuttle’s on-board inertial guidance system consists of five redundant computers!
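
As a rough illustration of why engineers do this, consider how quickly the chance of total failure shrinks as independent backups are added (a sketch with a made-up per-unit failure rate, not the shuttle’s actual numbers):

    p_fail_one = 0.01                 # assumed chance that a single unit fails during a mission
    for n in range(1, 6):
        p_all_fail = p_fail_one ** n  # independent units: all must fail to lose the function
        print(f"{n} redundant unit(s): chance of total loss = {p_all_fail:.0e}")

With five independent units at that assumed rate, the chance of losing the function entirely drops to one in ten billion.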

How does this relate to biology and intelligent design? Let me quote geneticist Michael Denton in his book Nature’s Destiny:

It now appears that a considerable number of genes, perhaps even the majority in higher organisms, are completely or at least partially redundant. One of the major pieces of evidence that this is the case has come from so-called gene knockout experiments, where a gene is effectively disabled in some way using genetic-engineering techniques so that it cannot play its normal role in the organism’s biology. A classic example of this came when a gene coding for a large complex protein known as Tenascin-C, which occurs in the extracellular matrix of all vertebrates, was knocked out in mice, without any obvious effect. As the author of a paper commenting on this surprising result cautions: “It would be premature to conclude that [the protein] has no important function …[as] it is conserved in every vertebrate species, which argues strongly for a fundamental role.” The protein product of the Zeste gene in the fruit fly Drosophila, which is a component of certain multi-protein complexes involved in transcribing regions of the DNA, can also be knocked out without any obvious effect on the very processes in which it is known to function.

The phenomenon of redundant genes is so widespread that it is already acknowledged to pose something of an evolutionary conundrum. Although in the words of the author of one recent article, “true genetic redundancy ought to be, in an evolutionary sense, impossible or at least unlikely,” partially redundant genes are common. As another authority comments in a recent review article: “Arguments over whether there can be true redundancy are moot for the experimentalist. The question is how the functions for partially redundant genes can be discovered given that partial redundancy is the rule.”

And it seems increasingly that it is not only individual genes that are redundant, but rather that the phenomenon may be all-pervasive in the development of higher organisms, existing at every level from individual genes to the most complex developmental processes. For example, individual nerve axons, like guided missiles or migrating birds, are guided to their targets by a number of different and individually redundant mechanisms and clues. The development of the female sexual organ, the vulva, in the nematode provides perhaps the most dramatic example to date of redundancy exploited as a fail-safe device at the very highest level. A detailed description of the mechanism of formation of the nematode vulva is beyond the scope of this chapter; suffice it to say that the organ is generated by means of two quite different developmental mechanisms, either of which is sufficient by itself to generate a perfect vulva.

It seems increasingly likely that redundancy will prove to be universally exploited in many key aspects of the development of higher organisms, for precisely the same reason it is utilized in many other areas–as a fail safe mechanism to ensure that developmental goals are achieved with what amounts to a virtually zero error rate.

Now, this phenomenon poses an additional challenge to the idea that organisms can be radically transformed as a result of a succession of small independent changes, as Darwinian theory supposes. For it means that if an advantageous change is to occur, in an organ system such as the nematode vulva, which is specified in two completely different ways, then this will of necessity require simultaneous changes in both blueprints. In other words, the greater the degree of redundancy, the greater the need for simultaneous mutation to effect evolutionary change and the more difficult it is to believe that evolutionary change could have been engineered without intelligent direction.

Denton describes what I call contingency designs. It should be obvious that contingency designs are exactly the kinds of designs that natural selection is hard pressed to create. How does one evolve a contingency design when the primary design functions just as well on its own? And if a creature suffers a failure mutation in a life-critical primary system, it will more likely be selectively eliminated before it can evolve a fully functioning backup system!

ID’s explanatory filter is therefore a potentially more effective tool for identifying designs which elude Darwinian-style tests for functionality (such as knockout experiments). The explanatory filter looks for possible functionality by identifying specified complexity in biological artifacts which may show no immediate effect on the organism when knocked out.

I will perhaps pursue this more in another post, but I point out that IBM may have unwittingly detected designs which would otherwise elude the fitness test. See:
Invasion of the IBM engineers

The ability of the Explanatory Filter to identify designs in biology which Darwinists would sooner perceive as an accident and which will elude “fitness tests” is another reason I believe ID will prevail as the proper scientific framework for investigating biology.

The Explanatory Filter may very well succeed in identifying places to look for design which may have otherwise been easily overlooked. I will post on this more, but in the meantime in case you’ve missed it, here is my essay on a related topic: How IDers can win the war


46 Responses to Airplane magnetos, contingency designs, and reasons ID will prevail

  1. Awesome article! An informative read.

  2. Sal, excellent post. I have been thinking along the same lines recently — i.e., that knockout experiments on individual components are poor ways to identify redundancy and/or compensation. Thanks for beating me to the punch and taking time to put down your thoughts.

    It would be great if you worked up your lengthy post into a brief essay posted somewhere the issue could see the light of day for a while, rather than being buried within a few days in the UD blog archives.

  3. The correct term for what you’re talking about is “genetic interaction”. The evolution of genetic interactions by gene and genome duplication is a contemporary focus of the study of cellular networks. There are real questions and challenges here that evolutionary theory has yet to address. But come on, the claim that scientists somehow don’t think of these things is silly.

  4. Chance and necessity in the evolution of minimal metabolic networks.

    Evolution by devolution. I wonder what the original E. coli devolved from.

  5. But Mr. Cordova,

    Even if a system is redundantly complex and thus could not have been produced directly, however, one cannot definitively rule out the possibility of an indirect, circuitous route, right?

  6. Scordova, there has been a lot of chatter on ISCID’s BrainStorms (“Can some aspect of Darwinism be falsified”) along the same lines. Dr. Peter Borger is suggesting, similar to you, that redundant genes create a significant problem for NDE. My understanding is that he is intending to publish his findings fairly soon.

    In the chatter over there is also a proposal that I have made: that certain kinds of error correction algorithms are not evolvable by NDE. Though we have not been able to confirm that such mechanisms exist, it does produce an interesting hypothesis, and an interesting channel to explore.

    Let me put a little meat on those bones. One of the easiest ways of making an error-correcting code is to keep three copies of the code. If an error happens in one of those copies, you can tell which copy is in error by “voting” — the correct value will be represented in two out of the three. Such an error correction system is a very logical reason to have redundant genes. Further, if there is no similar scheme, the redundant genes should no longer be protected by natural selection, and therefore they should mutate to mush within a very few million years.

    The problem with NDE developing such an error correction scheme, however, is that the scheme would need to be developed with one gene first. (Once it is established with one gene, having the system spread to others sounds believable.) The problem with developing such a scheme with one gene first is that there is painfully little for natural selection to latch on to. Once you have multiple copies of the same gene, either of which is able to get the job done, the likelihood of a deleterious mutation actually affecting an individual organism becomes enormously small. If the individual organism is not affected, how on earth can selection say “you’re not fit to live” when the only system that is not yet there is the system that says “let’s correct for any error”?

    Alas, this is a catch-22 situation. If specific gene error correction exists, it is unevolvable by NDE. If redundant genes exist without error correction, then they are not protected by natural selection, and should mutate to mush within a very few million years.
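
    For concreteness, here is a minimal Python sketch of the two-out-of-three “voting” idea described above (the sequences and the gene setup are purely hypothetical; this illustrates the principle, not any real error correction machinery in the cell):

        from collections import Counter

        def vote(copies):
            # Majority vote across redundant copies, position by position.
            # With three copies, any single-copy error is outvoted two to one.
            return "".join(Counter(bases).most_common(1)[0][0] for bases in zip(*copies))

        copy1 = "ATGGCCAAT"
        copy2 = "ATGGCTAAT"   # one copy carries an error at position 6
        copy3 = "ATGGCCAAT"

        assert vote([copy1, copy2, copy3]) == "ATGGCCAAT"

    The correct sequence can be recovered even though no single copy is trusted, which is exactly why such a scheme only pays off once all three copies (and the voting machinery) are already in place.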

  7. “The Explanatory Filter may very well succeed in identifying places to look for design which may have otherwise been easily overlooked.”

    Can you explain what you mean by ‘look for design’, isn’t that what the explanatory filter does?

  8. Hello Salvador,
    Why do you think life’s designer might have created redundant genetic pathways for the generation of a nematode vulva? Do you have any thoughts about what general criteria the designer might have used to determine which genetic functions ought to be redundant? If the designer was really good at what he does, would redundancy really be necessary?

    Thanks

  9. Many designs contain parts that seem to be useless at first glance, but their job is to make the design fail-safe. For example, take an electronic circuit board and remove some capacitors, and the board may still seem to work fine. The reason is that some capacitors are only used for power decoupling, and their job is to provide energy when a particular IC draws a high amount of instantaneous current. The decoupling capacitors guarantee that your device will not behave in an erratic way under certain conditions.
    “I removed Part X and it still works fine, therefore Part X is junk” is an argument from ignorance.

    Software is even more packed with apparently useless code called error handlers. The usual state of affairs is no errors so removing them isn’t noticed except in special circumstances. -ds

  10. Actually Gil, you can tell if a magneto fails in flight: RPM will drop for no reason. In addition to two magnetos there are two spark plugs in each cylinder, each fired by a separate magneto. Two spark plugs result in better combustion and more power. If one magneto fails then only one plug is firing in each cylinder. That’s why your pre-flight magneto test is to note an RPM drop when you switch from BOTH to LEFT or RIGHT. At least that’s how it worked in the Cessna 172s that I rented.

    I have a better story than a magneto failure. I rented a plane where the airspeed indicator was in MPH instead of KNOTS and I didn’t notice. When I was flying it I noticed that the airspeeds weren’t what I expected for rotation, stall, etcetera so I cut the difference in half between what my experience told me the airspeed was and what the airspeed indicator reading was and used the compromise to set up the plane for various flight modes. When I came back (it was a cross country solo while still in training) and told my instructor he didn’t know whether to be more impressed that I knew the aircraft well enough to know that the indicated airspeed was wrong or upset that I didn’t notice it was MPH instead of KNOTS as it was clearly labeled on the face of the instrument.

  11. “Actually Gil, you can tell if a magneto fails in flight. RPM will drop for no reason. In addition to two magnetos there are two spark plugs in each cylinder each fired by a separate magneto. Two spark plugs results in better combustion and more power. If one magneto fails then only one plug is firing in each cylinder. That’s why your pre-flight magneto test is to note an RPM drop when you switch from BOTH to LEFT or RIGHT. At least that’s how it worked in the Cessna 172’s that I rented.”

    Yes, Dave. But as I recall, that drop in RPM is only a few hundred RPM. The airplane will still fly on one magneto (one ignition system), even if performance is somewhat degraded.

  12. Would any airplane backup system work with Sal’s analogy? The emergency landing gear extension system on most jets is a dusty handle under a seldom-opened panel in the floor of the cockpit. It is more primitive than the regular hydraulic gear extension system and the landing gear works fine without it.

  13. To what extent are redundant systems in living organisms really redundant? In the airplane example, I think we are talking about two or more *identical* systems, one of which can take over if one or more of the others fail. Do such systems exist in nature? I think not. It seems more likely that the “redundant” systems overlap to a degree but are not really identical. One system might rely on a different basic nutrient than the other system. This might be hard to detect in an experimental setup where all necessary nutrients and then some are supplied by the experimenter. It might well be that one of the redundant systems is much more efficient under one set of environmental conditions, whereas the other system performs much better under a (perhaps slightly) different set of conditions. If so, it is easy to see how such a collection of “redundant” systems could be maintained by selection, if both sets of conditions occur sufficiently often in nature. Can anyone give an example of truly (i.e. identical) redundant systems in nature?

    Eh? Two kidneys, two lungs, two ears, two eyes, 32 teeth, 4 fingers… seems like a lot of redundancy. Granted at reduced performance if there’s a loss but that’s typical of redundant systems where the parts operate in tandem. -ds

  14. I’m pretty attuned to the performance of the aircraft. 300 RPM on a motor that turns 2400 is a lot when it’s at the top of the RPM range. Remember that lift increases roughly as the square of the foil speed, so 300 less RPM on the prop at close to full speed is a lot more loss of thrust than a 10% reduction in total RPM would make you think if you’re thinking linearly. Since you seldom do anything except climb out of the dead zone on takeoff at full RPM, you’ll still get enough power for cruise speed but you’ll need to increase throttle to get it. I’d certainly notice a degraded maximum rate of climb on takeoff. I might not notice a slightly different throttle position at cruise speed, but I’d like to think I’d notice a loss of power if the magneto failed while I had constant throttle applied in any climb or level flight.
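
    To put numbers on that square-law rule of thumb (a quick sketch using the figures above; real prop thrust depends on more than RPM):

        rpm_full, rpm_drop = 2400, 300
        linear_loss = rpm_drop / rpm_full                        # 12.5% reduction in RPM
        square_loss = 1 - ((rpm_full - rpm_drop) / rpm_full)**2  # ~23% reduction in lift/thrust
        print(f"RPM down {linear_loss:.1%}, lift down roughly {square_loss:.1%}")

    A 12.5% drop in RPM works out to roughly a 23% loss under the square law, which is why it is noticeable on climbout.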

  15. The examples you have been discussing of junk/not junk are fine, but the vast bulk of DNA people refer to as “junk” falls into a different category. It is placed in that category based largely on how it is observed to be preserved (or not preserved) across individuals in a population or between closely related species. What we call “junk” typically mutates with abandon. It’s rearranged, chopped up, and heterogeneous across individuals. The information contained therein decays rapidly. Every now and again a chunk of it is co-opted for something useful, but by and large this isn’t the case. Some of it’s of viral origin. Some of it is the result of the proliferation of endogenous viral-like genetic elements. Again, every now and then something is plucked out as useful, but this appears (so far) to be the rare exception. The genomes of most higher eukaryotes are shifting ephemeral sands with the occasional nugget of conserved information.

    This may be the manifestation of a complex design paradigm, but it is one so entirely alien and unfathomable to us that, even if we *knew* it was designed, that knowledge would be of little practical value. We began in biology by assuming everything we saw had a functional purpose. It’s been a long hard-fought road to come to terms with the notion that many things simply don’t. The “mistake” of claiming something is not functional when it is, does happen, but it’s a rare thing and one that can be remedied easily enough when additional info comes to light.

    The notion that a design paradigm would fill a “blindspot” in current biological thinking is, in my opinion, of far more rhetorical substance than actual substance. When you’re down on the front lines, there are ultimately very few systems that are deemed nonfunctional. Even if every one of those turns out to have been a mistake, it’s still a sliver of a fraction of biology. So it’s not so much a blindspot as a bad pixel. The only significant chunk deemed useless (with regards to our genome) – repetitive/viral DNA, etc. – can be pretty clearly seen to have, except for those rare circumstances mentioned above, no apparent function beyond making new copies of themselves. We can even find host molecular components whose job appears to be to hinder them from making still more copies. Not only are they useless; they also appear downright counterproductive. Strangely enough, the only folks in biology that still hold steadfast to the notion that everything is there for a functional “purpose” are your good friends the darwinian fundamentalists. How’s that for irony?

    A big problem I have with the notion of a lot of unused DNA is that 3 billion bases, even if every single base contained needed information, seems like an incredibly small storage space for the specifications of something as complex as a human being. Perhaps the rapidly mutating regions are simply softer data that changes more often by design. For instance, we really don’t have a clue as to how instinctive behaviors are stored. Having instincts that are very malleable from generation to generation so that inheritance of acquired knowledge can happen rapidly would have great survival value as it would allow rapid adaptation to a changing environment. Until we know how instincts are stored I hesitate to call anything junk because there is just seemingly too little storage capacity in the genome to begin with. -ds

  16. The complementary sequence of bases – every A matches with a T, every G matches with a C – allows a double strand of DNA to be regenerated from a single strand. When DNA is replicated the strands are separated and one DNA polymerase complex goes to work on each of the resulting strands, making them double again. If something happens to one of the strands – let’s say two thymines next to each other happen to stick together and make a thymine dimer that has to be cut out or excised – the resulting hole can be filled in because DNA polymerase detects the two adenines opposite the hole, and knows to fill it in with two thymines. So there’s redundancy right off the bat.

    Diploid organisms have an extra layer of redundancy: if both strands of DNA on one chromosome are damaged, (DNA damage from ultraviolet light, etc.), there is still a homologous chromosome with a hopefully identical, or at least similar, sequence. When it comes to DNA repair by recombination, in the event of a double-stranded break the two chromosomes are brought together and do some sort of horizontal action (real technical, I know…) that allows DNA polymerase to make new strands for the broken piece of DNA from the sequence on the other chromosome.
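
    A toy illustration of that strand-level redundancy (a sketch only; it ignores the actual enzymology and uses a made-up sequence):

        COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

        def repair_from_complement(damaged, intact_partner):
            # Rebuild missing bases ('?') on one strand from the intact complementary strand.
            return "".join(
                COMPLEMENT[p] if b == "?" else b
                for b, p in zip(damaged, intact_partner)
            )

        damaged_strand = "ATG??CGT"    # two excised bases, e.g. where a thymine dimer was cut out
        partner_strand = "TACAAGCA"    # the complementary strand, still intact
        print(repair_from_complement(damaged_strand, partner_strand))  # -> ATGTTCGT

    The two adenines opposite the hole dictate that it be filled with two thymines, just as described above.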

    When it comes to backups:

    http://blog.tmcnet.com/blog/to.....ntists.asp

  17. Farshad @9: “I removed Part X and it still works fine, therefore Part X is junk” is an argument from ignorance.

    Moreover, second-guessing of any aspect of any design without knowing the design’s specifications and constraints is a demonstration of ignorance and presumption. One must know what the goal was/is to be able to legitimately evaluate or criticize a design.

  18. “The only significant chunk deemed useless (with regards to our genome)–repetitive/viral DNA,etc–can be pretty clearly seen to have, except for those rare circumstances mentioned above, no apparent function beyond making new copies of themselves.”

    Why repetitive DNA is essential to genome function
    How repeated retroelements format genome function
    Beneficial Role of Human Endogenous Retroviruses: Facts and Hypotheses
    Comprehensive Analysis of Human Endogenous Retrovirus Transcriptional Activity in Human Tissues with a Retrovirus-Specific Microarray

    Repetitive DNA is one of the most fundamental pieces of the whole genome. Earlier we had an entry here on Uncommon Descent noting that some repetitive DNA sequences can be used as tuning knobs. Repetitive DNA is used all over the genome for numerous purposes, as noted in the papers above, to format the functioning of the genome.

    The fact that something may change often or not is no indication of how useful it is.

    As for viral elements, have you not considered that viruses make perfect packages for plug-ins? Bacteria use viruses all the time in order to trade DNA with each other. Why assume that such mechanisms are necessarily bad when you hit eukaryotes?

  19. Great_ape, you suggest that the biological community has a very good handle on the non-coding DNA, and that professional biology has not been baffled by redundant genes for a long time. I come at this as an outsider (non-biologist) who is sticking his hand up high in the air and asking: if professional biologists have such a good handle on it, why did Nature publish the article pointed to by Scordova in March 2006? It seems that some biologists who have been studying the minimal bacteria have not been quite as professionally advanced as you have suggested the field is. Are those guys just boneheads, or have you exaggerated the professionalism of the biological community just a bit?

  20. bFast,

    Alas, this is a catch-22 situation. If specific gene error correction exists, it is unevolvable by NDE. If redundant genes exist without error correction, then they are not protected by natural selection, and should mutate to mush within a very few million years.

    Right on. And might I add, the Biotic Message theory says that life is designed to resist blindwatchmaker explanations. What you point out highlights that life is architected such that when attempts are made to explain its origin in terms of blindwatchmaker processes, the explanations will be trapped in a catch-22 and prevented from succeeding.

    When some blindwatchmaker explanation is offered, one only need ponder the facts a little more carefully, and nature will make it evident that a blindwatchmaker was not the author of life. It’s been entertaining to watch biotic message theory play out in these internet discussions!

    That said, a major challenge is how scientists will be able to identify contingency designs since these are the very kinds of designs which elude knockout experiments.

    There are some rather classic examples of knockout experiments which initially failed to uncover function. The human appendix was viewed as “vestigial”. The same is true for the thymus and tonsils, which were also considered useless leftovers. In fact, the somewhat ghastly result is that many kids had their tonsils removed for no good reason! These early knock-out experiments were insufficient to uncover the function of the organs. (Darwinian evolution is not really taught in medical schools, and it should stay that way!)

    If special contexts are needed to trigger contingency designs, is there a way we might identify these components and their roles without actually having to induce such special circumstances?

    I believe so, and the Explanatory Filter might be the key. I hope to post my thoughts on this shortly. But in the interim, I simply point to the work of IBM in bioinformatics. They’ve detected a non-random pattern in the genome. It is not explainable by random processes, and it has not yet been demonstrated to have function. That’s right, they are forecasting a functional role even before the role is known. Good for them!

    I hope to post on these topics further.

    Salvador

  21. (An aside for the aviation enthusiasts. At 1800 RPM during a run-up check in a Cessna 172, the engine can drop as little as 25 RPM when one magneto is shut off. The manual says the drop should not exceed 125 RPM.

    During a 2200 RPM cruise or a low-RPM descent, when the pilot is seeing many changes in airplane configuration, a magneto system failure could be “lost in the noise.” What apparently happened was that the spark plugs tied to the left magneto system got badly fouled, and I mean badly: the engine was cutting off during the check. It absolutely turned my stomach. The physical magneto did not fail so much as the magneto system (a phrase by which I mean to include the plugs tied to it).

    It remains a mystery why the plugs tied to the right magneto were not comparably fouled. I can only thank Providence for that fact. To remedy the situation, I leaned out the engine and ran up the RPMs to generate maximum heat in an attempt to burn off the junk on the plugs tied to the left magneto. I was skeptical that superheating the engine would solve the problem because, the way the engine was dying, I thought for sure it was something other than spark plug fouling. Anyway, after a few minutes of this procedure, the left magneto system was operational, but I flew with the uncomfortable thought that only a few hours of flight could hose one’s spark plugs that badly!

    OK, maybe after this thread is done talking about biology, we can use the rest of the thread to share our flying stories. :-)

    )

    Tip temperature of the plug in the right range prevents fouling. Too hot causes pre-ignition and too cold causes fouling. It’s possible someone replaced the left plugs with a colder plug type. Or the previous pilot might have had the mixture too rich on the ground at low RPM for a prolonged period and the left plugs were just the first to foul. I remember now the RPM drop wasn’t a few hundred. I thought that sounded too high as that’s a huge drop in thrust. You should notice it on climbout though. You better notice everything on climbout as that’s the most dangerous period of time. There’s nothing more useless than altitude above you and runway behind you. -ds

  22. (johnnyb, Thank you for the references on “junk DNA”. )

    For the benefit of the readers, I provide this little gem which shows how widespread the “junk DNA” viewpoint is:

    Plagiarized Errors and Molecular Genetics: Another argument in the evolution-creation controversy

    Although the high content of “junk DNA” was initially surprising when it was discovered, our current understanding of the mechanisms of genome expansion (duplication and insertion) and the apparent lack of significant selective pressure to minimize genome size combine to make the accumulation of useless sequences in our DNA seem inevitable.

    Or how about this Wikipedia entry (almost certainly written by a Darwinist):

    Junk DNA

    In molecular biology, “junk” DNA is a collective label for the portions of the DNA sequence of a chromosome or a genome for which no function has yet been identified. About 98.5% of the human genome has been designated as “junk”, including most sequences within introns and most intergenic DNA. While much of this sequence is probably an evolutionary artifact that serves no present-day purpose,

    DNA is classified as junk simply because scientists are ignorant of its possible function. This is an example of a predisposition to discourage the exploration of something that might have serious scientific value, simply because the Darwinian anti-design bias is pervasive in certain circles.

    John Sanford points out in Genetic Entropy that one reason the “junk” prejudice has been sustained is that if the complexity of biological artifacts can be concealed and swept under the carpet, it’s a little easier (relatively speaking) to sell the idea that biology is not designed and that blind mechanisms can therefore create it.

    I don’t mean to imply any sort of conspiracy is at work, but I do assert that this whole business of “knock-out” experiments, and the attitude of “it’s junk unless proven otherwise” adopted in order to affirm foregone conclusions, has been a hindrance to scientific discovery.

    I’m not averse to the thought that there may be failed and decaying systems in biological genomes, and that therefore junk exists to some extent. What I object to are the premature conclusions being drawn from naive approaches such as knockout experiments, and the attitude of “until you demonstrate something has function or selective value, I’ll presume it’s functionless junk.”

    Here is a non-ID website with a better perspective:

    http://www.noncodingdna.com

    Many people believe that the vast majority of the human genome is functionless. This part of the genome is generally referred to as noncoding DNA, but has also been labeled junk DNA. There is mounting evidence, however, to suggest that these genetic sequences are biologically important.

    A recent paper – the reason for the design of this site – suggests that the amount of noncoding DNA (or junk DNA) per genome is a more accurate indicator of biological complexity than either gene number or genome size. It is therefore highly likely that these sequences are functional, and the idea of “junk” DNA may need to be trashed.

    Indeed, the very thing that could be evidence of complexity is the very thing the anti-ID community of biologists insisted was junk!

    Salvador

  23. People might call it junk, but that doesn’t mean they just ignore it. Biologists have been looking for possible functions of junk DNA since before the term was coined; we didn’t really understand much about the functions of noncoding DNA until fairly recently.

  24. great_ape: “The ‘mistake’ of claiming something is not functional when it is, does happen, but it’s a rare thing and one that can be remedied easily enough when additional info comes to light. The notion that a design paradigm would fill a “blindspot” in current biological thinking is, in my opinion, of far more rhetorical substance than actual substance. When you’re down on the front lines, there’s ultimately very few systems that are deemed nonfunctional.”

    Genes that do not code for proteins have long been considered to be “junk.” However, they may play an important but not yet understood role in the regulation of other genes. The explanation of the role of the noncoding DNA may provide the solution to some of the open questions about the structure of the genotype. There are several different kinds of noncoding genetic material, including introns, pseudogenes, and highly repetitive DNA (Li 1997). At least some noncoding DNA definitely has a function: introns keep exons separate. What is particularly difficult to understand is the great amount of noncoding DNA. According to some estimates, 95 percent of the human DNA is “junk.” A Darwinian (sic) finds it difficult to believe that selection would not have been able to get rid of it if it was indeed totally useless. After all, the production of this DNA is expensive. — Ernst Mayr, (2001), p. 109.

    What I find hard to believe is the sudden change of tune (indeed, reversal of opinion) of Darwinists regarding “junk DNA”:

    Imagine a world in which actual hands from another galaxy supplemented the “hidden hand” of natural selection. Imagine that natural selection on this planet was aided and abetted over the eons by visitors: tinkering, farsighted, reason-representing organism designers… Now, would their handiwork be detectable by any imaginable analysis by biologists today?

    If we found that some organisms came with service manuals attached, this would be a dead giveaway. Most of the DNA in any genome is unexpressed — often called “junk DNA” — and NovaGene, a biotechnology company in Houston, has found a use for it. They have adopted the policy of “DNA branding”: writing the nearest codon rendering of their company trademark in the junk DNA of their products. According to the standard abbreviations for the amino-acid specifiers, asparagine, glutamine, valine, alanine, glycine, glutamic acid, asparagine, glutamic acid – NQVAGENE (reported in Scientific American, June 1986, pp. 70-71). This suggests a new exercise in “radical translation” (Quine 1960) for philosophers: how, in principle or in practice, could we confirm or disconfirm the hypothesis that trademarks — or service manuals or other messages — were discernable in the junk DNA of any species? The presence of functionless DNA in the genome is no longer regarded as a puzzle. Dawkins’ (1976) selfish-gene theory predicts it, and elaborations on the idea of “selfish DNA” were simultaneously developed by Doolittle and Sapienza (1980) and Orgel and Crick (1980) (see Dawkins 1982, ch. 9, for details). That doesn’t show that junk DNA couldn’t have a more dramatic function, however, and hence it could have a meaning after all. Our imagined intergalactic interlopers could as readily have exapted the junk DNA for their own purposes as the NovaGene engineers exapted it for theirs. — Daniel C. Dennett, Darwin’s Dangerous Idea (1995), p. 316.

    —————
    Adding to my comment @17: As is very obvious from this thread, evaluating or criticising a design without even understanding its function is also a demonstration of ignorance and presumption.

    So what you’re saying is there are comments in the code. Including copyrights. Interesting. Don’t forget about an edit history too. -ds

  25. fifthmonarchyman:

    Hey Guys,

    I am finding this subject fascinating. Every 7th-grade science student has wondered why we have two lungs and two kidneys but only one spleen. Off the top of my head I can’t think of a compelling reason for this phenomenon from an RM/NS point of view. (Maybe someone could post one here.)
    I would think that NS would work to eliminate all redundancy, and that any occurrence of it would be evidence of its impotence.

    From an ID perspective, however, we could calculate the benefit/cost of redundant systems and organs using an equation similar to the Drake equation, to see the likelihood of a redundant system being truly “junk” in an organism. All we need to do is find the value of a few variables.

    FP = Failure Potential. What are the odds of a particular system/organ failing in the course of a normal life span of an organism? For complex systems like the heart or lungs this number would be higher than for organs like the larynx.

    I = Importance. How important to the overall function of an organism is the function of a given system? A heart is a more critical organ than a gallbladder; likewise, certain stretches of DNA are more critical than others.

    C = Cost. How much in the way of resources does the building and maintenance of the system in question require? A brain would require many more resources than a thyroid gland, for example.

    T = Transferability. How hard is it to transfer the function of one system to its redundant counterpart? It would be easy to transfer the function of one kidney to another, but imagine the difficulty in transferring the function of one brain to another.

    ID = Integration difficulty. How complicated would it be to maintain two functioning systems at the same time? The potential for arrhythmia between two beating hearts would be huge, for example.

    We can give each of these variables a value from 0 to 1, and if (FP + I) – (C + T + ID) > 1 then a given system/organ should be redundantly backed up in an organism; if on the other hand (FP + I) – (C + T + ID)

  26. 26
    fifthmonarchyman

    Hey

    For some reason the last part of my post did not come up (probably my fault); anyway, here it is:

    Peace

    We can give each of these variables a value from 0 to 1, and if (FP + I) – (C + T + ID) > 1 then a given system/organ should be redundantly backed up in an organism; if on the other hand (FP + I) – (C + T + ID)
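
    Taking the proposed variables at face value, a minimal sketch of the score (purely illustrative; the 0-to-1 values below are made up, and only the stated “greater than 1” criterion is implemented since the rest of the sentence was cut off):

        def redundancy_score(fp, i, c, t, idiff):
            # FP = failure potential, I = importance, C = cost,
            # T = transferability difficulty, ID = integration difficulty (each scored 0..1).
            return (fp + i) - (c + t + idiff)

        # Hypothetical values for a kidney-like organ: fairly likely to fail, very important,
        # moderately costly, easy to hand its function to a twin, easy to run two in parallel.
        score = redundancy_score(fp=0.7, i=0.9, c=0.3, t=0.1, idiff=0.1)
        print(round(score, 2), "-> expect a redundant backup" if score > 1 else "-> below the threshold")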

  27. There’s a reason KeithS is blacklisted here. He’s loud and stupid. Loud I can tolerate. At ATBC he blithers on about how any real pilot would have immediately turned around and landed suspecting a faulty airspeed indicator.

    Here’s a clue for you Keith. Pilots flying VFR don’t rely on flight instruments. No instruments at all. If you can’t fly safely without instruments on a clear day in uncontrolled airspace you don’t belong behind the yoke.

    Takeoff is not accomplished by staring at the airspeed indicator waiting for rotation speed you dolt. You trim the elevator for takeoff, accelerate at full throttle, and let the plane lift itself off the ground without yoke pressure from the pilot. You never hear the stall horn on takeoff. With enough experience in the aircraft and runway, and knowing the headwind component, you know about where on the runway the plane will lift off if everything is normal. On climbout you have several instruments as well as your senses to tell you what’s going on. You have a rate of climb indicator and an artificial horizon. If everything is running normal your rate of climb at full throttle and pitch will be at a certain reading. Backups are looking out the window and listening to your engine. Everything was normal on climbout except again the airspeed was higher than expected. I didn’t reduce the speed of the aircraft since everything else was fine and doing anything at higher than normal speed as long as one isn’t going so fast as to rip the wings off isn’t dangerous.

    On the flight in question everything was normal except the airspeed indicator was reading high. No biggie. It’s only an aid and the aircraft can be flown without it. The aircraft can be flown without any flight instruments and I’d be more concerned with a malfunctioning tachometer or oil temp gauge than any flight instruments.

    As for the airspeed indicator being clearly labeled MPH – it was but it was in very small print and one tends not to notice things that one never looks for. I had no idea that MPH calibrated airspeed indicators were ever found in standard rental planes. Now I know. I could just as easily blame my instructor for never telling me to check for it since he’d been working out of that small airport with its single rental/training outfit for a few years. Surely he knew that a single C-172 among the half dozen or so of them had an MPH airspeed indicator.

  28. J, thank you for the Dennett quote! I intend to pursue that one a bit more in another post.

    j wrote:

    Adding to my comment @17: As is very obvious from this thread, evaluating or criticising a design without even understanding its function is also a demonstration of ignorance and presumption

    Absolutely!

    In addition to the back and forth between pro-IDers and anti-IDers there is a bit of a turf war going on between the quants (mathematicians, engineers, physicists, chemists, etc.) and traditional biologists. The quants are trying to point out where they feel the biologists are mistaken (and even incompetent), and naturally the biologists are inclined to tell the quants to buzz off. Something of the cultural disdain between the various disciplines is evident. For example: Barrow to Dawkins: You’re not really a scientist

    and even Jerry Coyne observes:

    In science’s pecking order, evolutionary biology lurks somewhere near the bottom, far closer to phrenology than to physics

    Indeed, I suspect our friends in evolutionary biology, though lauded publicly as essential pioneers in science, sense that other scientific disciplines privately perceive them as being at the bottom of the pecking order [with physicists being at the top. :=) ]

    So let me add a little more fuel to the fire and come out and say it: biology needs the mindset of the quants and the professional designers (engineer types) to help straighten the field out. Thankfully this change has already begun:

    The Rise of Systems Biology

    “Biology is making an historic transition from being a descriptive science to being an engineering science,” says Regis Kelly, director of the Institute for Quantitative Biomedical Research (QB3). “This transition is putting us on the edge of one of the most exciting times in the history of biological science,…”

    Yeppers, the quants will hopefully start a revolution in biology. This has troubling implications however for the Darwinian community if the Salem Hypothesis is true. :-)

    Ah, but I digress again. Regarding the criticism of design, I offer another personal anecdote.

    The reading of compact discs involves a huge number of read/write errors (call them mutations if you will) designed into the system, which are then corrected via Reed-Solomon coding. One would be inclined to ask why not make more reliable read/write processes so error correction is not needed, and why deliberately design a system with a high error rate. The answer is that if one’s teleological goal is compactness of storage, then according to Shannon’s theorem this is the optimal way to store data: “allow numerous errors and then correct afterward”.
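
    Reed-Solomon itself is too involved to sketch in a comment, but the same “tolerate errors, then correct them” principle can be illustrated with the much simpler Hamming(7,4) code (a toy sketch, not the actual CD scheme):

        def hamming74_encode(d):
            # d: four data bits [d1, d2, d3, d4]; returns a 7-bit codeword.
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4            # parity over positions 1, 3, 5, 7
            p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
            p3 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_decode(c):
            # c: 7-bit codeword, possibly with a single flipped bit.
            c = list(c)
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = 4 * s3 + 2 * s2 + s1   # binary position of the error (0 = none)
            if syndrome:
                c[syndrome - 1] ^= 1          # flip the erroneous bit back
            return [c[2], c[4], c[5], c[6]]   # recover d1..d4

        data = [1, 0, 1, 1]
        code = hamming74_encode(data)
        code[5] ^= 1                          # simulate a read error
        assert hamming74_decode(code) == data # the error is corrected transparently

    Three extra parity bits buy the ability to absorb any single error per block, which is the same trade that Shannon-style coding makes on a much grander scale on a CD.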

    The uninitiated, however, upon looking at this method of information storage, would be inclined to criticize the designers as incompetent. I have heard biologists say exactly that: “a competent designer would not have made DNA copy mechanisms which require error correction; he would have made a copy process which got it right on the first pass.” I shook my head in disgust, and I then proceeded to set them straight on principles of information science.

    Regarding “imperfect copies” like pseudogenes and the like: in addition to considerations of when “imperfect copies” are desirable based on Shannon, we have in information science the concepts of lossy compression and software revision control systems. Further, an imperfect copy may be a vital piece of instrumentation (gee, is it possible that an imperfect copy is actually a counting register? Could it be telomeres are not really junk DNA?). I point the readers to what I wrote regarding those “imperfectly copied” telomeres and Geron Corporation: How IDers can win the war.

    I predict the metaphors I offered, such as “lossy compression”, “software revision control”, and “counting registers”, will be found in biology. “Imperfect copies” are not necessarily imperfect after all!

    Finding these metaphors expressed in physical artifacts is what the EF is very good at, by the way….

  29. “Yeppers, the quants will hopefully start a revolution in biology. This has troubling implications however for the Darwinian community if the Salem Hypothesis is true.” Considering how many engineers are always working in systems biology groups, helping biologists understand how evolution works in the context of networks, I don’t see this happening very soon. A great deal of what I now understand about evolution I have learnt from engineers, physicists and mathematicians over the past few years.

  30. I thank all the contributors to this thread. I will be going out of town until next week. Let me try to answer some questions before I depart:

    Michael Tuite asked:

    Hello Salvador,
    Why do you think life’s designer might have created redundant genetic pathways for the generation of a nematode vulva? Do you have any thoughts about what general criteria the designer might have used to determine which genetic functions ought to be redundant? If the designer was really good at what he does, would redundancy really be necessary?

    Thanks

    Redundancy is a sign of good engineering under the constraints of known physical law.

    The general criteria could possibly be gleaned by comparative methods. We are finding very similar organizing principles in radically different biological architectures. For example, radically different animals have very close specifications in terms of energy efficiency in certain dimensions (unfortunately I lost the reference). Regarding redundancy, I would not be surprised to see it related in terms of Shannon’s information capacity limit someday. We are already finding that biology is maximally optimized for compact information storage.

    Chris Highland asked:

    Can you explain what you mean by ‘look for design’, isn’t that what the explanatory filter does?

    Thank you for visiting our weblog. What I meant is that it’s easy to infer something has design if a knockout experiment causes serious systems failure. What is harder to determine is whether an artifact is a contingency design, because the circumstances which trigger its function may be quite difficult to uncover (which is the thesis of this thread). And it is even harder if the design is to serve as a user manual for human investigation!

    Furthermore, if we see non-random relationships in biology that have no selective value but trigger the EF, then those should be regions of interest.

    The telomeres were considered junk with only marginal utility beyond protecting the ends of the chromosomes. What selective value is there in keeping long telomeres and then shortening them when copies are made? In fact, cancer cells, with their already short telomeres, have the highest selective advantage in the body.

    However, many biologists noticed a correlation between telomere length and the “age” of a cell (closeness to senescence). It turns out the telomere is a nice little bit of instrumentation, and it made reverse engineering possible. Geron was able to immortalize the human cell as a result, and we now have hope of curing many medical issues.

    But this development came about not from detecting selective advantages but from noticing correlations among seemingly unrelated issues: telomere length, telomerase, and the longevity of the cell. Geron pulled off a spectacular piece of reverse engineering worthy of the computer hackers’ hall of fame. They reset the age counter of the cell.
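
    The “age counter” metaphor can be made concrete with a toy model (illustrative only; real telomere lengths, shortening rates, and senescence triggers are far messier than this, and the numbers below are invented):

        class Cell:
            def __init__(self, telomere_length=100, senescence_threshold=20):
                self.telomere_length = telomere_length
                self.senescence_threshold = senescence_threshold

            def divide(self):
                if self.telomere_length <= self.senescence_threshold:
                    return False              # the countdown has run out: senescence
                self.telomere_length -= 1     # each division shortens the telomere
                return True

            def apply_telomerase(self, amount=50):
                self.telomere_length += amount  # resetting the counter, roughly what Geron exploited

        cell = Cell()
        divisions = 0
        while cell.divide():
            divisions += 1
        print("divisions before senescence:", divisions)   # 80 with these toy numbers

    Read this way, the shortening telomere is less “imperfect copying” and more a register that counts down the cell’s remaining divisions.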

    Biology could be rich with roadmaps, user manuals, and software documentation to help human investigation. I can’t tell you how difficult it would have been for Geron to immortalize a cell had they not found a subtle correlation between changes in telomere length and longevity.

    Knockout experiments to uncover function are like taking machine code and hacking out pieces to uncover function. It would be much more productive to reverse engineer the machine code, or better yet, to find an equivalent source code residing in the biological system. It would be like having *.exe files stored with the *.c files in a system, if you know what I mean.

    These are exactly the kinds of designs the EF can sniff out and which would elude knockout explorations. What use is there in leaving a *.c file along with the *.exe file? Well, the *.c file serves a backup role and works as a user manual. Further, this idea makes sense if the designer had the user in mind.

    It is a daring hypothesis that the Designer had us in mind and optimized the universe and life for scientific discovery, but that is one of the principles at the heart of ID. Finding these kinds of designs in biology, the kind with little or even negative immediate selective advantage but which optimize scientific discovery, could win the war for IDers.

    Geron did not use the EF or CSI per se, but they gave a strong hint to the rest of industry: “junk DNA” can provide a road map and user manual to life.

    Salvador

    PS
    Michael,

    There is always the lingering question of a supposedly imperfect universe; that is a good question, and there are good answers. In brief, bad design can be good design; it depends on the goals of the design. Would a playwright create all the characters in a drama to be perfect? Why should we expect the designer of the universe to do so for the drama of universal history?

  31. Chris –

    It depends on what you mean by “evolution”. If evolution is merely the change in organisms from generation to generation, then I count myself an evolutionist to the highest degree! There’s nothing more interesting in biology than the way organisms adapt to their environments, and then transmit those adaptations to their offspring.

    If, on the other hand, you mean by “evolution” any or all of (a) everything descends from a small set of unicellular common ancestors, (b) there are no final causes in biology, (c) happenstance changes are the substrate of beneficial change in a large search space, and possibly other atelic notions that have become dogma in biology (the claim that these systems can come into existence on their own), then I am anti-evolution.

    As I’ve pointed out before, I think that your own positions may be closer to ID than you think, and you have simply been sold on the mantra that “ID isn’t science” and have taken that as a basic fact, rather than investigated what it is that ID says and how closely it matches your own thoughts.

    No one that I have ever met has had any problems with the experimental results of evolutionary biology (and I’m a fairly fundamentalist Creationist who hangs out with some people even more fundamentalist than myself). The issue is this — where is all of this information to guide change coming from? It is evident to many that it must necessarily come from higher-order designs and plans, not self-built from lower-order designs or non-designs that happen on fortuitous variations. Such a thing is the equivalent of a perpetual motion machine for information.

  32. Raevmo: Can anyone give an example of truly (i.e. identical) redundant systems in nature?

    Brains.

    Salvador: DNA is classified as junk simply because scientists are ignorant of its possible function.

    It’s my understanding that DNA is classified as junk for theoretical reasons, not observational reasons. Can anyone else comment on this?

    According to some estimates, 95 percent of the human DNA is “junk.”

    Does anyone know the source for Mayr’s figures? Who made these estimates, and what were they based upon?

  33. The presence of functionless DNA in the genome is no longer regarded as a puzzle. Dawkins’ (1976) selfish-gene theory predicts it, and elaborations on the idea of “selfish DNA” were simultaneously developed by Doolittle and Sapienza (1980) and Orgel and Crick (1980) (see Dawkins 1982, ch. 9, for details).

  34. I made a stupid comment. There clearly are many redundant systems, even at the DNA level: e.g. multiple copies of rRNA genes etc. But I am not convinced yet that this redundancy cannot be explained from a functional point of view (i.e. NS). Here’s some anecdotal evidence: an uncle of mine lost one kidney to the horns of a bull. Yet he lived because he had a second one (but sadly he later died from cancer in the remaining kidney). True story.

    Quite a bit of junk DNA will of course turn out to be functional too (it’s a one-way street), but roughly 50% of human DNA supposedly consists of the remnants of transposable elements (TEs), “selfish” DNA (see Mung’s comment) that spreads by inserting copies of itself all over the genome. Quite functional (i.e. NS) from the viewpoint of the TE itself, but bad for the organism. That stuff is worse than junk from the individual’s point of view, but of course not from the TE’s point of view.

  35. “roughly 50% of human DNA supposedly consists of the remnants of transposable elements (TEs), “selfish” DNA (see Mung’s comment) that spreads by inserting copies of itself all over the genome.”

    The “selfish” DNA hypothesis is a nice story, but like most such stories, it simply isn’t consistent with the facts. Transposable elements are used by the genome for many purposes, including function modulation and whole-genome restructuring.

    A good paper against the “selfish” DNA hypothesis:

    On the Roles of Repetitive DNA Elements in the Context of a Unified Genomic–Epigenetic System

    A good paper on how transposable elements are used by cells in very beneficial ways:

    Transposable elements as the key to a 21st century view of evolution

    Basically, transposable elements are tied to the circuitry of the cell’s genetic engineering system, and are used to help in the manufacture of novel genetic variation.

    Transposons are often activated in times of stress, and help the genome re-engineer itself to recover from the stress. Transposons contain entire packets of function that can be quickly mobilized and distributed to the locations in the genome where they are needed.

    The view of the genome that you are espousing (and that is being taught in schools) is both 15 years out of date and was based on arguments from ignorance when it originated. In fact, the modern view of transposons as “controlling elements” is the one expressed by the scientist who discovered them, Barbara McClintock. IIRC, they wouldn’t even let her publish her findings because they were so at odds with the view of biology at the time. After her work was finally recognized, transposons were relabelled “selfish DNA” to avoid the obviously telic implications of a functionally structured genome. That label unfortunately persists despite mounds of evidence to the contrary.

  36. johnnyb: I don’t think the view that transposons are “selfish” elements is as out of date as you suggest. The Sternberg paper you refer to seems to be mostly ignored by other scientists (5 citations in 4 years). Here’s the abstract from a paper in Trends in Genetics (Vinogradov 2003, 19:609-614) which cites the Sternberg review:

    “Notwithstanding an average evolutionary increase of genome size in the higher plants due to activity of transposable elements, threatened plant species (those that are now on the brink of extinction) are shown here to have on average larger genomes than their more secure relatives, which indicates that redundant DNA in the plant genome might increase the likelihood of extinction. The effect is (at least partially) independent of the duration of the plant’s life cycle. Polyploidy is found not to be associated with the increased risk of extinction. These data agree with the hypothesis of ‘selfish’ DNA and indicate an antagonism between different selection levels, thereby supporting the concept of hierarchical selection.”

  37. “As I’ve pointed out before, I think that your own positions may be closer to ID than you think, and you have simply been sold on the mantra that “ID isn’t science””

    Not really. I read most of Darwin’s Black Box and a lot of Discovery Institute articles before I’d even heard of the NCSE etc. My problem is similar to David Heddle’s; he says:

    “I will accept ID as science when I read something like this:
    A scientist at (some respected research university) has been awarded a grant to do experiment X. ID predicts the result of the experiment will be Y. Non-ID predicts the result will be Z.
    And don’t tell me this cannot happen because the secular scientific community would never allow it. I was a practicing scientist before I was a believer, and we never had any secret meetings where we discussed our true agenda of destroying Christianity in the guise of science.

    Predictions such as We will never discover an evolutionary pathway for (whatever) or We will never detect a parallel universe are interesting and important, but they are not examples of predictability arising from a full-fledged scientific theory.”

    “Such a thing is the equivalent of a perpetual motion machine for information.”

    I’d appreciate any links people can give me where the relationship between evolution, biology and information theory is explained. Preferably not one where people calculate the probability of entire proteins forming spontaneously out of random combinations of amino acids, etc. I’m particularly interested in how things like changes in regulation are covered, and how small changes in the genotype that cause large changes in the phenotype are treated in the context of information. Also, something that deals with natural selection’s branching/pruning would be very interesting.

    “The view of the genome that you are espousing (and that is being taught in schools) is both 15 years out of date and was based on arguments from ignorance when it originated.”

    As I have said before, what is taught in school is always out of date, and that is certainly a state of affairs that needs remedying. I’m not sure what it has to do with ID, though; it certainly isn’t a scheme to stop people questioning evolution, as if what scientists know now somehow weakens the theory. Personally I would have loved to have learned about evo-devo, plasticity, epigenetics etc., but it will probably be a long time before these things are taught in high school, which is a shame.

    “Predictions such as We will never discover an evolutionary pathway for (whatever) or We will never detect a parallel universe are interesting and important, but they are not examples of predictability arising from a full-fledged scientific theory.”

    Bull! The classic prediction of Darwinian evolution, the failure of which would falsify it, is that human and dinosaur fossils will never be found together in the same strata. In fact this prediction was the basis of a famous hoax in which some scientists carefully staged just such a fossil discovery to tease the YECs. -ds

  38. “A scientist at (some respected research university) has been awarded a grant to do experiment X. ID predicts the result of the experiment will be Y. Non-ID predicts the result will be Z.”

    Do Centrioles Generate a Polar Ejection Force?
    N and P elements in V(D)J recombination have the purpose of structurally and functionally stabilizing proteins made from bags of pieces (last abstract in the proceedings, titled Metaprogramming and Genomics).

    I don’t know if the former has a grant, and I know personally that the second one does not, but ID is making predictions.

    Now, out of curiosity, if you apply your same reasoning to evolution, does the same thing happen? Do you have questions which say “the evolutionary prediction is X and the non-evolutionary prediction is Y”? I’m talking about _before_ the experiment is run, not afterwards.

    “I’d appreciate any links people can give me where the relationship between evolution, biology and information theory is explained.”

    It’s a large subject, but I would start with the following papers:

    Biological Function and the Genetic Code are Interdependent
    Chance and Necessity Do Not Explain the Origin of Life
    The Origin of Life on Earth and Shannon’s Theory of Communication (I haven’t read it, but I’ve also heard good things about Information Theory, Evolution, and The Origin of Life)
    Three subsets of sequence complexity and their relevance to biopolymeric information
    Searching Large Spaces
    Evolutionary Computation: A Perpetual Motion Machine for Design Information? (lay article)
    Darwinism vs. Teleology in Genomic Change (my own lay article — the part you are probably interested in starts with “The Nature of Computational Systems and Programs”)

    “Preferably not one where people calculate the probability of entire proteins forming spontaneously out of random combinations of amino acids, etc. I’m particularly interested in how things like changes in regulation are covered, and how small changes in the genotype that cause large changes in the phenotype are treated in the context of information. Also, something that deals with natural selection’s branching/pruning would be very interesting.”

    While I don’t _remember_ those articles using spontaneous protein generation as the explicit model to refute, I know they all have other interesting focuses. Small changes in genotype causing large changes in phenotype are covered in my lay article, parts of which I may try to publish as papers. As for branching/pruning, I would just say that it’s not hugely relevant.

    In light of Dembski’s “Searching Large Spaces”, you should read Chance Favors the Prepared Genome, especially this: “A genome’s ability to grow and to explore new organizational structures would be severely constrained, if its options were limited to simple point mutation…most organisms tolerate only relatively low levels of point mutation in a generation. Instead they have evolved mechanisms that generate multiple sequence changes in a single step, allowing them to bypass unselected neutral, and negatively selected, sequences that may lie on point mutation pathways between the current sequence and a more optimal sequence. Indeed, where genomic sequences have been available to provide a window into the evolution of a new gene, the series of steps revealed has been complex.”

    Also less relevant, but still somewhat on topic, see my own discussion of irreducible complexity.
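
    As one concrete, uncontroversial anchor for the information question above, here is a minimal sketch of the most basic quantity involved, the Shannon information content (surprisal) of a sequence under a background model. The sequence and the uniform background are assumed purely for illustration, and this by itself says nothing about the functional information that the papers listed above debate.

        # Minimal sketch: Shannon surprisal of a short DNA sequence under an
        # assumed uniform background of the four bases (the sequence is made up).
        import math

        sequence = "ATGGCCATTGTAATGGGCCGC"   # hypothetical 21-base sequence
        p_base = 0.25                        # assumed per-base background probability

        # Surprisal: -log2 of the sequence's probability under the background model.
        bits = -len(sequence) * math.log2(p_base)
        print(f"{len(sequence)} bases -> {bits:.1f} bits under a uniform background")

        # Note: this measures improbability under the background model, not
        # biological function; relating the two is exactly what is in dispute.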

  39. There are subtleties and corrections to my original posting that should be considered. I thank Dr. Ricardo Acevedo for his constructive criticisms and suggestions:

    Please visit:

    http://newtonsbinomium.blogspo.....urdum.html

  40. [...] Natural Selection does not trade in the currency of design (ala Allen Orr). I have also argued here why contingency designs are almost invisible to natural selection. The ability to regenerate major organs is an example of a contingency design. [...]

  41. [...] can excel at. Recall what I said about functional systems with little or no selective advantage (here). Well, here is another case where functional information could exist with little or no reason for [...]

  42. [...] 2. Contingencies for failed designs: Airplane magnetos, contingency designs, and reasons ID will prevail [...]

  44. [...] Airplane Magnetos [...]

  45. […] here is a paper that vindicated something I said in 2006 regarding spare parts: Loss of Genetic Redundancy in Reductive Genome Evolution. It basically says, functions not visible […]
