
Larry Moran Responds Regarding Directed Mutations

In a previous post, I took Dr. Larry Moran to task over whether or not mutations, as they occur in nature, are random. Dr. Moran has responded, so I thought I’d give readers a summary of his response.

Dr. Moran seems to have at least two different things in mind when he says that evolution is unguided. That’s not a problem per se, but it is always good to be precise about which concept of unguidedness we are discussing.

  1. The appearance of Homo sapiens was not an intended part of the evolutionary process.
  2. Mutations are random with respect to their ultimate usefulness.

In my post, I was primarily concerned with #2, as I will be in this one.

Presumably, he has several different ways of knowing whether or not mutations are random. However, he lists only one: “Comparing the sequences of homologous genes in different species.” Unfortunately, he didn’t say what it is about the sequences of homologous genes in different species that led him to conclude that mutations are random. Maybe he will help us out with that one later.

Unfortunately, however, he misses the point of my post entirely. Dr. Moran said that creationists (meaning me) misunderstand his meaning of randomness. However, as we will see, I am using exactly the same definition of randomness that Moran is. But first, let’s see what Dr. Moran has to say:

Creationists also like to argue that mutations are not truly random. They point out that there are mutational hotspots in the genome and there’s a bias in favor of some mutations over other (e.g. transitions are more common than transversions). In most genomes, mutations are more common at sites where C is methylated.

All this is true and the results were discovered by scientists, not creationists. It’s why scientists try to avoid saying that mutations are random; instead they say that mutations are random with respect to their ultimate usefulness. Sometimes we slip up for simplicity, as when I said in my previous posting that mutations are “essentially random,” although I added, “Let’s not get into quibbling about the meaning of ‘random.’”

What Moran is attempting to do is paint my argument as saying, “because mutations don’t follow a uniform random distribution, the random mutation hypothesis is false.” But that was never my argument at all. I was arguing specifically against what Moran says is the true definition of evolutionary randomness – “random with respect to their ultimate usefulness.”

In other words, it is not interesting to me that the mutations that occur when an antigen invades are restricted to a specific region of the genome. What is interesting is that the specific region of the genome to which mutations are restricted when an antigen arrives is the PRECISE region of the genome the organism needs for its utility. I hope you can see the difference between those two ideas. In the former case, we are simply talking about a non-uniform distribution. That is uninteresting. What is interesting is the specific way it is non-uniform: it is biased precisely in the direction of contributing to the ultimate utility of the organism!

Moran goes on to give some details as to the mechanistic basis of this targeting, which are of course interesting. He misses one important one – an intron targets the precise location of the mutations, so moving the intron will change the position of the mutation region and cause improper mutations to occur. But that is really beside the point.

The big question is: are mutations random with respect to their ultimate usefulness? In this particular case, the answer is a resounding no. I would argue that the answer continues to be no for an increasing number of mutational mechanisms. Keep in mind that no experiments are required for an evolutionist to proclaim that a given mutation is random – this is simply assumed to be the case. But it can sometimes take ten or more expensive experiments to demonstrate a mutability mechanism such as this one. Therefore, since evolutionary biologists are not predisposed to think that mutations are directed, and since it costs a great deal of time and money to prove that a mutation is associated with a purposive mechanism, it is not surprising that it often takes many years after the discovery of a mutation to learn about the mutational mechanism behind it. But more and more, that is what we are learning.

UPDATE – If anyone is interested in a fuller discussion of randomness and its intersection with design theory, you might be interested in a previous paper of mine on the subject.


18 Responses to Larry Moran Responds Regarding Directed Mutations

  1. JohnnyB, please forgive me for the length of the following response to Dr. Moran, but I feel his unsubstantiated claims deserve a thorough response. Though I posted this from Dr. Shapiro on your original link yesterday, JohnnyB, I think it is worth repeating since it is so important to the main topic under discussion with Dr. Moran. Dr. Shapiro has clearly pointed out that mutations (alterations to the genome) are, in the vast majority of cases, found to be not truly random in the Darwinian sense of randomness. Indeed, Dr. Shapiro has listed many mutational mechanisms that appear to be, after many ‘expensive experiments’, ‘directed’;

    Revisiting the Central Dogma in the 21st Century – James A. Shapiro – 2009
    Excerpt (Page 12): Underlying the central dogma and conventional views of genome evolution was the idea that the genome is a stable structure that changes rarely and accidentally by chemical fluctuations (106) or replication errors. This view has had to change with the realization that maintenance of genome stability is an active cellular function and the discovery of numerous dedicated biochemical systems for restructuring DNA molecules.(107–110) Genetic change is almost always the result of cellular action on the genome. These natural processes are analogous to human genetic engineering,,, (Page 14) Genome change arises as a consequence of natural genetic engineering, not from accidents. Replication errors and DNA damage are subject to cell surveillance and correction. When DNA damage correction does produce novel genetic structures, natural genetic engineering functions, such as mutator polymerases and nonhomologous end-joining complexes, are involved. Realizing that DNA change is a biochemical process means that it is subject to regulation like other cellular activities. Thus, we expect to see genome change occurring in response to different stimuli (Table 1) and operating nonrandomly throughout the genome, guided by various types of intermolecular contacts (Table 1 of Ref. 112).
    http://shapiro.bsd.uchicago.ed.....0Dogma.pdf

    Also of interest from the preceding paper, on page 22, is a simplified list of the ‘epigenetic’ information flow in the cell that directly contradicts what was expected from the central dogma (genetic reductionism / modern synthesis model) of neo-Darwinism.

    A few comments from the ‘non-Darwinian’ evolutionist James A. Shapiro, PhD (Genetics), on the entire notion of ‘random’:

    Shapiro on Random Mutation:
    “What I ask others interested in evolution to give up is the notion of random accidental mutation.”
    http://www.huffingtonpost.com/.....11144.html
    -Comment section
    “Establishing that teleological questions are critical will itself take a considerable effort because we need to overcome the long-held but purely philosophical (and illogical) assertion that functional creativity can result from random changes.”
    http://www.huffingtonpost.com/.....99059.html

    And although, due to the extreme level of complexity being dealt with in the genome...

    Multidimensional Genome – Dr. Robert Carter – video
    http://www.metacafe.com/watch/8905048/

    The Extreme Complexity Of Genes – Dr. Raymond G. Bohlin – video (notes in description)
    http://www.metacafe.com/watch/8593991/

    ...it is hard to know exactly what these ‘non-random’ epigenetic alterations to the genome are doing, it is nonetheless fairly easy to see that the alterations to the genome being made by these sophisticated molecular machines of the cell are ‘non-random’, since, first, the adaptations produced by these changes are much more rapid than would be expected if the process were a truly random Darwinian scenario:

    Biological variation is independent of need – Cornelius Hunter – PhD. BioPhysics
    Excerpt: One hint that biology would not cooperate with Darwin’s theory came from the many examples of rapidly adapting populations. What evolutionists thought would require thousands or millions of years has been observed in laboratories and in the field, in an evolutionary blink of an eye. For instance, lizards placed in a new environment responded rapidly, developing new head morphology and digestive tract structure. [5] As one writer reported, “Italian wall lizards introduced to a tiny island off the coast of Croatia are evolving in ways that would normally take millions of years to play out, new research shows.” [6] Likewise mussels introduced to a new environment were found to evolve “in an evolutionary nanosecond compared to the thousands of years previously assumed.” [7] Such examples of adaptation are not new, and one evolutionist concluded that “evolution can occur much more rapidly than we previously thought. Rapid evolution is pervasive, and the list of examples is growing.” [8]
    The problem is that evolutionary mechanisms are not supposed to work this fast. Clearly these adaptations were induced by environmental change, and the changes appear to be addressing the need rather than independent of the need. If the changes were random with respect to the environmental pressure, then a much longer time period would be needed to evolve such adaptations.
    http://www.darwinspredictions......_variation

    Flax: More Falsifications of Evolution and the Real Warfare Thesis – Cornelius Hunter – 2011
    Excerpt: The latest paper deals with flax plants which, when grown under stressful conditions, modify their genome. The genomic changes help the plant to thrive under the new conditions, and the changes are passed on to the progeny. The flax plant’s genomic changes are not just a lucky strike—the same precise additions, in the same precise location, occur when the experiment is repeated. For the changes are “the result of a targeted, highly specific, complex insertion event.”
    http://darwins-god.blogspot.co.....ution.html

    The Rapid Origin of Domesticated Chicken – Cornelius Hunter – March 2012
    Excerpt: The research finds that epigenetic mechanisms may be the cause of the rapid origin of domesticated chickens brought about by breeding, and that these epigenetic changes are reliably and stably inherited, resulting in lasting change in a population.
    http://darwins-god.blogspot.co.....icken.html

    Antibiotic Resistance Is Prevalent in an Isolated Cave (4 million year old) Microbiome – April 2012
    Excerpt: ‘Antibiotic resistance is manifested through a number of different mechanisms including target alteration, control of drug influx and efflux, and through highly efficient enzyme-mediated inactivation. Resistance can emerge relatively quickly in the case of some mutations in target genes and there is evidence that antibiotics themselves can promote such mutations [43], [44], [45], [46]; however, resistance to most antibiotics occurs through the aegis of extremely efficient enzymes, efflux proteins and other transport systems that often are highly specialized towards specific antibiotic molecules.’
    http://www.plosone.org/article.....ne.0034953

  2. Related note:

    The Mysterious Epigenome – What lies beyond DNA – video
    http://www.youtube.com/watch?v=RpXs8uShFMo

    and second, besides the fact that epigenetic changes to the genome wrought by sophisticated molecular machinery are severely antithetical to the entire Darwinian notion of ‘random’ mutations, there is the fact that extensive, and ‘expensive’, studies that have sought to change organisms by purely random mutations have been an abject failure for the neo-Darwinists:

    Many of these researchers also raise the question (among others), why — even after inducing literally billions of induced mutations and (further) chromosome rearrangements — all the important mutation breeding programs have come to an end in the Western World instead of eliciting a revolution in plant breeding, either by successive rounds of selective “micromutations” (cumulative selection in the sense of the modern synthesis), or by “larger mutations” … and why the law of recurrent variation is endlessly corroborated by the almost infinite repetition of the spectra of mutant phenotypes in each and any new extensive mutagenesis experiment (as predicted) instead of regularly producing a range of new systematic species…
    (Wolf-Ekkehard Lönnig, “Mutagenesis in Physalis pubescens L. ssp. floridana: Some Further Research on Dollo’s Law and the Law of Recurrent Variation,” Floriculture and Ornamental Biotechnology Vol. 4 (Special Issue 1): 1-21 (December 2010).)
    http://www.evolutionnews.org/2.....42191.html

    Where’s the substantiating evidence for neo-Darwinism?
    https://docs.google.com/document/d/1q-PBeQELzT4pkgxB2ZOxGxwv6ynOixfzqzsFlCJ9jrw/edit

    Response to John Wise – October 2010
    Excerpt: But there are solid empirical grounds for arguing that changes in DNA alone cannot produce new organs or body plans. A technique called “saturation mutagenesis”1,2 has been used to produce every possible developmental mutation in fruit flies (Drosophila melanogaster),3,4,5 roundworms (Caenorhabditis elegans),6,7 and zebrafish (Danio rerio),8,9,10 and the same technique is now being applied to mice (Mus musculus).11,12 None of the evidence from these and numerous other studies of developmental mutations supports the neo-Darwinian dogma that DNA mutations can lead to new organs or body plans–because none of the observed developmental mutations benefit the organism.
    http://www.evolutionnews.org/2.....38811.html

    One insurmountable problem for neo-Darwinism in this entire notion of ‘randomness’ is that the rarity of functional proteins is found to be far beyond the reach of random searches:

    Stephen Meyer – Functional Proteins And Information For Body Plans – video
    http://www.metacafe.com/watch/4050681

    Moreover, ‘bottom up’ random mutations address the problem of explaining the origin of species from the entirely wrong conceptual level. Dr. Stephen Meyer comments at the end of the preceding video:

    ‘Now one more problem as far as the generation of information. It turns out that you don’t only need information to build genes and proteins, it turns out to build Body-Plans you need higher levels of information; Higher order assembly instructions. DNA codes for the building of proteins, but proteins must be arranged into distinctive circuitry to form distinctive cell types. Cell types have to be arranged into tissues. Tissues have to be arranged into organs. Organs and tissues must be specifically arranged to generate whole new Body-Plans, distinctive arrangements of those body parts. We now know that DNA alone is not responsible for those higher orders of organization. DNA codes for proteins, but by itself it does not insure that proteins, cell types, tissues, organs, will all be arranged in the body. And what that means is that the Body-Plan morphogenesis, as it is called, depends upon information that is not encoded on DNA. Which means you can mutate DNA indefinitely. 80 million years, 100 million years, til the cows come home. It doesn’t matter, because in the best case you are just going to find a new protein some place out there in that vast combinatorial sequence space. You are not, by mutating DNA alone, going to generate higher order structures that are necessary to building a body plan. So what we can conclude from that is that the neo-Darwinian mechanism is grossly inadequate to explain the origin of information necessary to build new genes and proteins, and it is also grossly inadequate to explain the origination of novel biological form.’ – Stephen Meyer – (excerpt taken from Meyer/Sternberg vs. Shermer/Prothero debate – 2009)

    How we could create life: The key to existence will be found not in primordial sludge, but in the nanotechnology of the living cell – Paul Davies – 2002
    Excerpt: Instead, the living cell is best thought of as a supercomputer – an information processing and replicating system of astonishing complexity. DNA is not a special life-giving molecule, but a genetic databank that transmits its information using a mathematical code. Most of the workings of the cell are best described, not in terms of material stuff – hardware – but as information, or software. Trying to make life by mixing chemicals in a test tube is like soldering switches and wires in an attempt to produce Windows 98. It won’t work because it addresses the problem at the wrong conceptual level. – Paul Davies
    http://www.guardian.co.uk/educ.....ucation.uk

    Glycan Carbohydrate Molecules – A Whole New Level Of Scarcely Understood Information on The Surface of Cells
    https://docs.google.com/document/d/1bO5txsOPde3BEPjOqcUNjL0mllfEc894LkDY5YFpJCA/edit

    Cortical Inheritance: The Crushing Critique Against Genetic Reductionism – Arthur Jones – video
    http://www.metacafe.com/watch/4187488

    Epigenetics and the “Piano” Metaphor – January 2012
    Excerpt: And this is only the construction of proteins we’re talking about. It leaves out of the picture entirely the higher-level components — tissues, organs, the whole body plan that draws all the lower-level stuff together into a coherent, functioning form. What we should really be talking about is not a lone piano but a vast orchestra under the directing guidance of an unknown conductor fulfilling an artistic vision, organizing and transcending the music of the assembly of individual players.
    http://www.evolutionnews.org/2.....54731.html

  3. In fact, bottom up ‘random’ mutations, besides being an abject failure at producing any new body plan information (producing any new species whatsoever), are found to produce negative epistasis when combined together:

    Mutations : when benefits level off – June 2011 – (Lenski’s e-coli after 50,000 generations)
    Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
    http://www2.cnrs.fr/en/1867.htm?theme1=7
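    To make the meaning of ‘negative epistasis’ concrete, here is a minimal Python sketch. The fitness numbers in it are purely hypothetical illustrations of mine – none of these values come from the Lenski study – they simply show what it means for a combined benefit to fall short of the sum of individual benefits:

```python
# Hypothetical illustrative numbers only – NOT values from the Lenski study.
# They simply demonstrate what "negative epistasis" means numerically.
individual_benefits = [0.10, 0.08, 0.07, 0.05, 0.04]  # fitness gain of each mutation alone

# If the five mutations acted independently, their combined benefit
# would simply add up:
additive_expectation = sum(individual_benefits)  # approx. 0.34

# Negative epistasis: the benefit measured with all five mutations
# present together is less than that additive expectation (again, a
# hypothetical value):
observed_combined = 0.25

epistasis = observed_combined - additive_expectation
print(epistasis < 0)  # True – diminishing returns when mutations are combined
```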

    Bottom up ‘random’ mutations produce negative epistasis when they are combined in a genome because of what is termed ‘the polyconstraint of polyfunctionality’ in the genome (John Sanford, Genetic Entropy, 2005):

    Poly-Functional Complexity equals Poly-Constrained Complexity
    https://docs.google.com/document/d/1xkW4C7uOE8s98tNx2mzMKmALeV8-348FZNnZmSWY5H8/edit

    Scientists Map All Mammalian Gene Interactions – August 2010
    Excerpt: Mammals, including humans, have roughly 20,000 different genes.,,, They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome.
    http://www.sciencedaily.com/re.....142044.htm

    Moreover, results from studies in population genetics agree that Darwinism is ill-equipped to handle the extreme ‘top down’ poly-functionality being found in genomes:

    The next evolutionary synthesis: from Lamarck and Darwin to genomic variation and systems biology
    Excerpt: If more than about three genes (nature unspecified) underpin a phenotype, the mathematics of population genetics, while qualitatively analyzable, requires too many unknown parameters to make quantitatively testable predictions [6]. The inadequacy of this approach is demonstrated by illustrations of the molecular pathways that generates traits [7]: the network underpinning something as simple as growth may have forty or fifty participating proteins whose production involves perhaps twice as many DNA sequences, if one includes enhancers, splice variants etc. Theoretical genetics simply cannot handle this level of complexity, let alone analyse the effects of mutation..
    http://www.biosignaling.com/co.....X-9-30.pdf

    The Fate of Darwinism: Evolution After the Modern Synthesis – David J. Depew and Bruce H. Weber – 2011
    Excerpt: We trace the history of the Modern Evolutionary Synthesis, and of genetic Darwinism generally, with a view to showing why, even in its current versions, it can no longer serve as a general framework for evolutionary theory. The main reason is empirical. Genetical Darwinism cannot accommodate the role of development (and of genes in development) in many evolutionary processes.,,,
    http://www.springerlink.com/co.....03g3t7002/

    Moreover, for multi-cellular creatures the limit on fixing a truly random beneficial mutation is found to be much more severe than it is for single-celled creatures:

    Experimental Evolution in Fruit Flies (35 years of trying to force fruit flies to evolve in the laboratory fails, spectacularly) – October 2010
    Excerpt: “Despite decades of sustained selection in relatively small, sexually reproducing laboratory populations, selection did not lead to the fixation of newly arising unconditionally advantageous alleles.,,, “This research really upends the dominant paradigm about how species evolve,” said ecology and evolutionary biology professor Anthony Long, the primary investigator.
    http://www.arn.org/blogs/index.....ruit_flies

    More from Ann Gauger on why humans didn’t happen the way Darwin said – July 2012
    Excerpt: Getting a feature that requires six neutral mutations is the limit of what bacteria can produce. For primates (e.g., monkeys, apes and humans) the limit is much more severe. Because of much smaller effective population sizes (an estimated ten thousand for humans instead of a billion for bacteria) and longer generation times (fifteen to twenty years per generation for humans vs. a thousand generations per year for bacteria), it would take a very long time for even a single beneficial mutation to appear and become fixed in a human population.
    You don’t have to take my word for it. In 2007, Durrett and Schmidt estimated in the journal Genetics that for a single mutation to occur in a nucleotide-binding site and be fixed in a primate lineage would require a waiting time of six million years. The same authors later estimated it would take 216 million years for the binding site to acquire two mutations, if the first mutation was neutral in its effect.
    Facing Facts
    But six million years is the entire time allotted for the transition from our last common ancestor with chimps to us according to the standard evolutionary timescale. Two hundred and sixteen million years takes us back to the Triassic, when the very first mammals appeared. One or two mutations simply aren’t sufficient to produce the necessary changes— sixteen anatomical features—in the time available. At most, a new binding site might affect the regulation of one or two genes.
    http://www.uncommondescent.com.....rwin-said/

    If all of the preceding were not bad enough for the Darwinian notion of ‘bottom up’ random mutations being the ‘creative engine’ that produces all life on earth, it is found that there are multiple overlapping layers of error correction in the cell that prevent truly random mutations from happening in the first place:

    Repair mechanisms in DNA include:
    A proofreading system that catches almost all errors
    A mismatch repair system to back up the proofreading system
    Photoreactivation (light repair)
    Removal of methyl or ethyl groups by O6 – methylguanine methyltransferase
    Base excision repair
    Nucleotide excision repair
    Double-strand DNA break repair
    Recombination repair
    Error-prone bypass
    http://www.newgeology.us/presentation32.html

    The Evolutionary Dynamics of Digital and Nucleotide Codes: A Mutation Protection Perspective – February 2011
    Excerpt: “Unbounded random change of nucleotide codes through the accumulation of irreparable, advantageous, code-expanding, inheritable mutations at the level of individual nucleotides, as proposed by evolutionary theory, requires the mutation protection at the level of the individual nucleotides and at the higher levels of the code to be switched off or at least to dysfunction. Dysfunctioning mutation protection, however, is the origin of cancer and hereditary diseases, which reduce the capacity to live and to reproduce. Our mutation protection perspective of the evolutionary dynamics of digital and nucleotide codes thus reveals the presence of a paradox in evolutionary theory between the necessity and the disadvantage of dysfunctioning mutation protection. This mutation protection paradox, which is closely related with the paradox between evolvability and mutational robustness, needs further investigation.”
    http://www.arn.org/blogs/index....._contradic

  4. Contradiction in evolutionary theory – video – (The contradiction between extensive DNA repair mechanisms and the necessity of ‘random mutations/errors’ for Darwinian evolution)
    http://www.youtube.com/watch?v=dzh6Ct5cg1o

    The Darwinism contradiction of repair systems
    Excerpt: The bottom line is that repair mechanisms are incompatible with Darwinism in principle. Since sophisticated repair mechanisms do exist in the cell after all, then the thing to discard in the dilemma to avoid the contradiction necessarily is the Darwinist dogma.
    http://www.uncommondescent.com.....r-systems/

    In fact the ribosome, which makes the myriad of different, yet specific, types of proteins found in life, is also found to be severely intolerant of any random errors occurring in the proteins it makes.

    The Ribosome: Perfectionist Protein-maker Trashes Errors
    Excerpt: The enzyme machine that translates a cell’s DNA code into the proteins of life is nothing if not an editorial perfectionist…the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products… To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is “shocking” and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis.
    http://www.sciencedaily.com/re.....134529.htm

    Which raises the question: how is the evolution of new life forms supposed to occur ‘randomly’ if random errors are prevented from occurring in proteins in the first place?

    As well, the ‘errors/mutations’ that naturally occur in protein sequences are found to be ‘designed errors’:

    Cells Defend Themselves from Viruses, Bacteria With Armor of Protein Errors – Nov. 2009
    Excerpt: These “regulated errors” comprise a novel non-genetic mechanism by which cells can rapidly make important proteins more resistant to attack when stressed,
    http://www.sciencedaily.com/re.....134701.htm

    On top of that, the optimal 1 in 10^70 genetic code (Hubert Yockey) appears to be designed, among other optimal design features, to protect against ‘random’ changes to amino acids:

    The Finely Tuned Genetic Code – Jonathan M. – November 2011
    Excerpt: The total number of possible RNA triplets amounts to 64 different codons. Of those, 61 specify amino acids, with the remaining three (UAG, UAA and UGA) serving as stop codons, which halt the process of protein synthesis. Because there are only twenty different amino acids, some of the codons are redundant. This means that several codons can code for the same amino acid. The cellular pathways and mechanisms that make this 64-to-20 mapping possible is a marvel of molecular logic. It’s enough to make any engineer to drool. But the signs of design extend well beyond the sheer engineering brilliance of the cellular translation apparatus. In this article, I will show several layers of design ingenuity exhibited by this masterpiece of nanotechnology.
    How Is the Genetic Code Finely Tuned?
    As previously stated, the genetic code is degenerate. This means that multiple codons will often signify the same amino acid. This degeneracy is largely caused by variation in the third position, which is recognized by the nucleotide at the 5′ end of the anticodon (the so-called “wobble” position). The wobble hypothesis states that nucleotides that are present in this position can make interactions that aren’t permitted in the other positions (though it still leaves some interactions that aren’t allowed).
    But this arrangement is far from arbitrary. Indeed, the genetic code found in nature is exquisitely tuned to protect the cell from the detrimental effects of substitution mutations. The system is so brilliantly set up that codons differing by only a single base either specify the same amino acid, or an amino acid that is a member of a related chemical group. In other words, the structure of the genetic code is set up to mitigate the effects of errors that might be incorporated during translation (which can occur when a codon is translated by an almost-complementary anti-codon).
    http://www.evolutionnews.org/2.....52611.html

    Synonymous Codons: Another Gene Expression Regulation Mechanism – September 2010
    Excerpt: There are 64 possible triplet codons in the DNA code, but only 20 amino acids they produce. As one can see, some amino acids can be coded by up to six “synonyms” of triplet codons: e.g., the codes AGA, AGG, CGA, CGC, CGG, and CGU will all yield arginine when translated by the ribosome. If the same amino acid results, what difference could the synonymous codons make? The researchers found that alternate spellings might affect the timing of translation in the ribosome tunnel, and slight delays could influence how the polypeptide begins its folding. This, in turn, might affect what chemical tags get put onto the polypeptide in the post-translational process. In the case of actin, the protein that forms transport highways for muscle and other things, the researchers found that synonymous codons produced very different functional roles for the “isoform” proteins that resulted in non-muscle cells,,, In their conclusion, they repeated, “Whatever the exact mechanism, the discovery of Zhang et al. that synonymous codon changes can so profoundly change the role of a protein adds a new level of complexity to how we interpret the genetic code.”,,,
    http://www.creationsafaris.com.....#20100919a
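    As a concrete illustration of the degeneracy both excerpts describe, here is a minimal Python sketch of a small fragment of the standard codon table. The codon-to-amino-acid assignments are standard textbook biochemistry; the `synonymous` helper function is merely my own illustrative addition, not something from the quoted articles:

```python
# A fragment of the standard genetic code (standard biochemistry),
# illustrating degeneracy: several "synonymous" codons per amino acid.
CODON_TABLE = {
    # six synonyms for arginine, as noted in the excerpt above
    "AGA": "Arg", "AGG": "Arg", "CGA": "Arg",
    "CGC": "Arg", "CGG": "Arg", "CGU": "Arg",
    # third-position "wobble" pairs
    "GAU": "Asp", "GAC": "Asp",
    "GAA": "Glu", "GAG": "Glu",
    "AAU": "Asn", "AAC": "Asn",
    # the three stop codons
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def synonymous(codon_a, codon_b):
    """True if the two codons specify the same amino acid."""
    return CODON_TABLE[codon_a] == CODON_TABLE[codon_b]

# A single third-position substitution (GAU -> GAC) is silent:
print(synonymous("GAU", "GAC"))  # True
# A single first-position substitution (GAU -> AAU) changes Asp to Asn,
# a chemically related amino acid – the "mitigation" the article describes:
print(synonymous("GAU", "AAU"))  # False
```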

    Such 1 in 10^70 optimality is surely interesting, since the genetic code is found to be the element of life most resistant to evolutionary pressures:

    Collective evolution and the genetic code – 2006:
    Excerpt: The genetic code could well be optimized to a greater extent than anything else in biology and yet is generally regarded as the biological element least capable of evolving.
    http://www.pnas.org/content/103/28/10696.full

    Shannon Information – Channel Capacity – Perry Marshall – video
    http://www.metacafe.com/watch/5457552/

    “Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible”
    Donald E. Johnson – Bioinformatics: The Information in Life

    Moreover, proteins themselves, after they have been synthesized and have made it through the gauntlet of the ribosome's error detection, have now been shown to have a ‘Cruise Control’ mechanism, which works to ‘self-correct’ the integrity of the protein structure against any random mutations imposed on them.

    Proteins with cruise control provide new perspective:
    “A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order.”
    http://www.princeton.edu/main/...../60/95O56/

    In fact ‘non-local’, beyond space and time, quantum information is found to reside along the entirety of the protein structure:

    Coherent Intrachain energy migration at room temperature – Elisabetta Collini & Gregory Scholes – University of Toronto – Science, 323, (2009), pp. 369-73
    Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state.

  5. In fact, since quantum entanglement/information falsified reductive materialism/local realism in the first place (Einstein, Bohr, John Bell, Alain Aspect, Anton Zeilinger), finding quantum entanglement/information to be ‘protein specific’ is absolutely shattering to any rational hope that materialists had in their reductive materialistic framework, since a ‘transcendent’, ‘non-local’, beyond space and time, cause must be supplied which is specific to each unique protein structure. Reductive materialism, which is the basis of neo-Darwinian thought, is simply at a complete loss to supply such a ‘non-local’ transcendent cause.

    Falsification Of Neo-Darwinism by ‘non-local’ (beyond space and time) Quantum Entanglement/Information:
    https://docs.google.com/document/d/1p8AQgqFqiRQwyaF8t1_CKTPQ9duN8FHU9-pV4oBDOVs/edit?hl=en_US

    The following article supplies more supporting evidence that the specific amino acid sequence of a protein is found to be ‘context dependent’, thus further undermining the reductive materialistic foundation of neo-Darwinism;

    Why Proteins Aren’t Easily Recombined, Part 2 – Ann Gauger May 17, 2012
    Excerpt: In other words, even if only 10% of non-matching residues were changed, the resulting hybrid enzyme no longer functioned. Why? Because the substitution of different amino acids into the existing protein structure destabilized the fold, even though those same amino acids worked well in another context. Thus, each protein’s amino acid sequence works as a whole to help generate a proper stable fold, in a context-dependent fashion.
    http://www.evolutionnews.org/2.....59771.html

    Further note: if one traces down the ultimate source of entropic randomness in the universe, which Darwinists insist is the creative engine that gave rise to all life on earth, one comes away with some fairly disturbing implications:

    Black Holes: The neo-Darwinists’ ultimate ‘god of randomness’ which (according to them) can create all life in the universe
    https://docs.google.com/document/d/1fxhJEGNeEQ_sn4ngQWmeBt1YuyOs8AQcUrzBRo7wISw/edit?hl=en_US

    The Darwinian Illusion Of Randomness
    https://docs.google.com/document/d/163-vOs0saDScrr1vIdZ6Q4DKif6AWpVkVL3G6jXzJTQ/edit

    As to Dr. Moran’s primary claim that:

    1. The appearance of homo sapiens was not an intended part of the evolutionary process.

    I truly would like to know how Dr. Moran intends to empirically substantiate that metaphysical claim, for, first, he has no empirical evidence whatsoever demonstrating that unguided Darwinian evolution has generated any functional complexity in life, and second, there are, in fact, quite a few lines of evidence that strongly suggest that man was ‘intended’ in this universe:

    Predictions of Materialism compared to Predictions of Theism within the scientific method:
    Excerpt: 16. Materialism predicted animal speciation should happen on a somewhat constant basis on earth. Theism predicted man was the last species created on earth – Man himself is the last generally accepted major fossil form to have suddenly appeared in the fossil record. -
    http://docs.google.com/Doc?doc....._5fwz42dg9

    Anthropic Principle: A Precise Plan for Humanity By Hugh Ross
    Excerpt: Brandon Carter, the British mathematician who coined the term “anthropic principle” (1974), noted the strange inequity of a universe that spends about 15 billion years “preparing” for the existence of a creature that has the potential to survive no more than 10 million years (optimistically).,, Carter and (later) astrophysicists John Barrow and Frank Tipler demonstrated that the inequality exists for virtually any conceivable intelligent species under any conceivable life-support conditions. Roughly 15 billion years represents a minimum preparation time for advanced life: 11 billion toward formation of a stable planetary system, one with the right chemical and physical conditions for primitive life, and four billion more years toward preparation of a planet within that system, one richly layered with the biodeposits necessary for civilized intelligent life. Even this long time and convergence of “just right” conditions reflect miraculous efficiency.
    Moreover the physical and biological conditions necessary to support an intelligent civilized species do not last indefinitely. They are subject to continuous change: the Sun continues to brighten, Earth’s rotation period lengthens, Earth’s plate tectonic activity declines, and Earth’s atmospheric composition varies. In just 10 million years or less, Earth will lose its ability to sustain human life. In fact, this estimate of the human habitability time window may be grossly optimistic. In all likelihood, a nearby supernova eruption, a climatic perturbation, a social or environmental upheaval, or the genetic accumulation of negative mutations will doom the species to extinction sometime sooner than twenty thousand years from now.
    http://christiangodblog.blogsp.....chive.html

    Hugh Ross – The Anthropic Principle and Anthropic Inequality – video
    http://www.metacafe.com/w/8494065

    The Creation of Minerals:
    Excerpt: Thanks to the way life was introduced on Earth, the early 250 mineral species have exploded to the present 4,300 known mineral species. And because of this abundance, humans possessed all the necessary mineral resources to easily launch and sustain global, high-technology civilization.
    http://www.reasons.org/The-Creation-of-Minerals

    “Today there are about 4,400 known minerals – more than two-thirds of which came into being only because of the way life changed the planet. Some of them were created exclusively by living organisms” – Bob Hazen – Smithsonian – Oct. 2010, pg. 54

    To put it mildly, this minimization of poisonous elements, and ‘explosion’ of useful minerals, is strong evidence for Intelligently Designed terra-forming of the earth that ‘just so happens’ to be of great benefit to modern man.

    The following site is very interesting in supporting the Theistic contention that humans were intended:

    The Scale of The Universe – Part 2 – interactive graph (recently updated in 2012 with cool features)
    http://htwins.net/scale2/scale.....olor=white

  6. The preceding interactive graph points out that the smallest scale visible to the human eye (as well as a human egg) is at 10^-4 meters, which ‘just so happens’ to be directly in the exponential center of all possible sizes of our physical reality (not just ‘nearly’ in the exponential center!). That is, 10^-4 meters sits, exponentially, right in the middle between 10^-35 meters, the Planck length, which is the smallest possible unit of length, and 10^27 meters, the diameter of the observable universe, which is the largest possible unit of length since space-time was created in the Big Bang. This is very interesting, for, as far as I can tell, the limits of human vision (as well as the size of the human egg) could, theoretically, have fallen at very different positions than directly in the exponential middle of reality;
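For what it's worth, the ‘exponential center’ here is just a geometric mean, and it is easy to check with the two figures cited above (10^-35 m and 10^27 m):

```python
import math

# Figures cited in the comment above (both in meters).
planck_length = 1e-35      # smallest possible unit of length
universe_diameter = 1e27   # diameter of the observable universe

# The "exponential center" is the geometric mean: the scale whose
# base-10 exponent is the average of the two endpoint exponents.
mid_exponent = (math.log10(planck_length) + math.log10(universe_diameter)) / 2
geometric_mean = math.sqrt(planck_length * universe_diameter)

print(round(mid_exponent))  # -4, i.e. 10^-4 m
print(geometric_mean)       # ~1e-04 m
```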

    It is interesting that ‘human observation’ would be found to be in the exact exponential center of all possible sizes in the universe, because ‘conscious observation’ in quantum mechanics is exactly what satisfactorily resolves the geometric centrality we find for ourselves in the universe:

    Centrality of Each Individual Observer In The Universe and Christ’s Very Credible Reconciliation Of General Relativity and Quantum Mechanics
    Excerpt: I find it extremely interesting, and strange, that quantum mechanics tells us that instantaneous quantum wave collapse to its ‘uncertain’ 3-D state is centered on each individual conscious observer in the universe, whereas, 4-D space-time cosmology (General Relativity) tells us each 3-D point in the universe is central to the expansion of the universe. These findings of modern science are pretty much exactly what we would expect to see if this universe were indeed created, and sustained, from a higher dimension by an omniscient, omnipotent, omnipresent, eternal Being who knows everything that is happening everywhere in the universe at the same time. These findings certainly seem to go to the very heart of the age-old question asked of many parents by their children, “How can God hear everybody’s prayers at the same time?”,,, i.e. Why should the expansion of the universe, or the quantum wave collapse of the entire universe, even care that you or I, or anyone else, should exist? Only Theism offers a rational explanation as to why you or I, or anyone else, should have such undeserved significance in such a vast universe:

    Psalm 33:13-15
    The LORD looks from heaven; He sees all the sons of men. From the place of His dwelling He looks on all the inhabitants of the earth; He fashions their hearts individually; He considers all their works.
    https://docs.google.com/document/d/17SDgYPHPcrl1XX39EXhaQzk7M0zmANKdYIetpZ-WB5Y/edit?hl=en_US

    The expansion of every 3D point in the universe, and the quantum wave collapse of the entire universe to each point of conscious observation in the universe, is obviously a very interesting congruence in science between the very large (relativity) and the very small (quantum mechanics). A congruence that physicists and mathematicians seem to be having an extremely difficult time ‘unifying’ into a ‘theory of everything’ (Einstein, Penrose, Hawking, etc.).

    Yet the unification, into a ‘theory of everything’, between what is in essence the ‘infinite Theistic world of Quantum Mechanics’ and the ‘finite Materialistic world of the space-time of General Relativity’ seems to be directly related to what Jesus apparently joined together with His resurrection, i.e. related to the unification of infinite God with finite man.

    The Center Of The Universe Is Life – General Relativity, Quantum Mechanics, Entropy and The Shroud Of Turin – video
    http://vimeo.com/34084462

    Thus, however strong Dr. Moran may feel the empirical evidence is that completely unguided random mutation produced all life on earth, or however strongly he may feel that humans were ‘not intended’, the fact of the matter is that I can find no evidence of truly random mutations doing anything other than degrading preexisting functional information. Moreover, I can find several lines of evidence strongly suggesting that man was indeed intended in this universe, thus directly countering his primary claim that humans were the unintended product of evolution! I understand that Dr. Moran often resorts to name-calling when confronted with evidence (or perhaps when someone insults him), but my hope is that he will rise above such shallow things, honestly address the crushing evidence against his position, and truthfully admit that these issues have no resolution within his seemingly a priori preferred materialistic paradigm.

    music and verse:

    10,000 Reasons (Bless the Lord) – Matt Redman (Worship with lyrics)
    http://www.youtube.com/watch?v=DXDGE_lRI0E

    Psalm 145:3
    Great is the LORD, and greatly to be praised; and his greatness is unsearchable.

  7. Note to Larry Moran: according to the theory of evolution, mutations are random, period, meaning they are just chance events: happenstance, unplanned, unpredictable, unguided.

  8. @Bibliography77

    On behalf of my poor scroll wheel:

    I find your sources quite useful, but as a frequent reader of this blog I’ve seen many of them before. Again, perhaps you should start a wiki, so that they’re available in a centralized and categorized location for everyone, instead of strewn randomly across various blogs.

    If I’m not mistaken, ID currently has no such wiki?

  9. Joe, you may have seen them before, but I have not. I appreciate tremendously born-again’s contribution.

  10. critical rationalist

    johnnyb: The big question is, are mutations random with respect to their ultimate usefulness? In this particular case, the answer is a resounding no.

    It’s unclear how the answer being “no” *in this particular case* is actually a problem, as evolutionary theory does not suggest biological organisms are not well adapted for a particular purpose.

    Biological adaptations represent transformations of matter. These transformations occur when the requisite knowledge of how to perform those particular transformations is present. As such, the question is: “How was the knowledge used to perform these adaptations, which is currently found in the genome, created?”

    What do I mean by knowledge? I’m referring to Popper’s definition, which is independent of human belief. And, as you pointed out, a specific intron in the genome determines exactly where the mutation takes place, so the knowledge of which antibody gene to mutate exists in the genome itself.

    How does ID explain how this knowledge was created?

  11. I think the example you cite is a very interesting one: the mutations that relieve the starvation stress arise at high rates only when the mutation would diminish the stress.

    But I don’t think it has to be either/or, Lamarckian or Darwinian. The truth is probably closer to the middle. As Koonin once noted:

    A close examination of a variety of widespread processes that contribute to the generation of genomic variation shows that evolution does not rely entirely on stochastic mutation. Instead, generation of variation is often controlled via elaborate molecular machinery that instigates adaptive responses to environmental challenges of various degrees of specificity.

    Thus, genome evolution appears to span the entire spectrum of scenarios, from the purely Darwinian, based on random variation, to bona fide Lamarckian where a specific mechanism of response to a cue is fixed in an evolving population through a distinct modification of the genome. In a broad sense, all these routes of genomic variation reflect the interaction between the evolving population and the environment in which the active role belongs either to selection alone (pure Darwinian scenario) or to directed variation that itself may become the target of selection (Lamarckian scenario).

    http://www.biology-direct.com/content/4/1/42

  12. critical rationalist -

    Excellent points, if I followed them all. First of all, you say, “How does ID explain how this knowledge was created?” This is a fine question, but I think that sometimes it is asked with an incorrect idea of how it should be answered. ID does not propose a “mechanism” (which I will define shortly) to create the knowledge. It assumes that the process is non-mechanistic. But that doesn’t mean that it is unexplainable. In ID, logical relations can be promoted to causes and not just be after-the-fact observations. For example, when I write computer code, if someone asks me why I factored (i.e. structured) my code one way and not another, the *explanation* for this is the logical structures that it sets up. Now, one could also mention the biochemistry involved in me typing it out, but that isn’t really the explanation for how the code is factored the way it is.

    So, I don’t think we should shy away from logical explanations, or even teleological explanations. In addition, we can mix the two somewhat if we allow ourselves to use mathematics which is incomputable (such as Alan Turing’s oracle formalism) [going back to the definition of "mechanism" -- my definition is mechanism = calculable]. This is a fairly standard approach in mathematics and computer science, but it has not been popular in the natural sciences. I think that bridging these ideas would be a powerful means of moving ID forward.

    If you’re interested in this sort of thing, I took a stab at integrating cognitive psychology with incomputable formalisms here.

    So, in short, I don’t think that ID provides a specific mechanism, but it does provide us a framework for understanding it, and for analyzing proposed mechanisms (to see how such mechanisms would be analyzed, see this paper). I hope that as ID matures, we can continue to pursue these angles.

  13. Starbuck -

    I don’t disagree with you per se. However, I think that the overwhelming weight of evolution, to the extent that it occurs, is on the design/Lamarckian side. Certainly stochasticity and selection both operate, but I think their operation is dwarfed by the operation of teleological principles.

    I should also point out that “selection” is often synthetic instead of natural. In other words, the organism’s feedback often dictates selection just as much as the actual fitness of the resulting states. In many cases, “selection against” is carried out, not by the cell being unfit in its environment, but by the regulated action of apoptosis. In such cases, the selection is teleological rather than ateleological, as required by the Darwinian paradigm.

  14. critical rationalist

    johnnyb,

    I’m still unclear how the observations in this specific case are actually a problem for evolutionary theory.

    Perhaps you can start out by explaining how knowledge is created, then point out how evolutionary theory doesn’t fit that explanation?

    “I’m still unclear how the observations in this specific case are actually a problem for evolutionary theory”

    It is a problem for a specific claim of evolutionary theory – that mutations are haphazard regarding the fitness of the organism or population. As we can see in this example, they clearly aren’t. They are goal-directed.

    I also use this as one piece of an argument that evolution, as far as it actually works, tends to be based on teleological mutations utilizing existing information.

    But my main point is the simple fact that there is abundant evidence for the existence, and perhaps prevalence, of mutations which are not random in the evolutionary sense.

  16. … but the mutations are random with respect to fitness, and they can’t contribute to evolution.

  17. Mutations can be random wrt fitness and still be directed.

  18. critical rationalist

    johnnyb: It is a problem for a specific claim of evolutionary theory – that mutations are haphazard regarding the fitness of the organism or population. As we can see in this example, they clearly aren’t. They are goal-directed.

    Again, evolutionary theory does not suggest that organisms are *not* adapted for a purpose.

    Specifically, you haven’t explained how *this particular case* is actually a problem for the underlying explanation behind evolutionary theory. Rather, you seem to be assuming that predictions of scientific theories can be evaluated in isolation from the underlying explanation they are based on, as if predictions were merely prophecies to either be observed or not observed.

    Nor do you seem to suggest an intelligent agent is directly intervening to choose which antibody gene to mutate based on which pathogen is present. In contrast, unlike biological organisms, cars do not contain the knowledge of how to build themselves. The knowledge of how to build cars exists in us. And this knowledge exists in an explanatory form, rather than as a non-explanatory, useful rule of thumb.

    Darwinism indicates mutations are random in that they do not take into account the specific problem to be solved. Natural selection then discards gene variants that are less capable of causing themselves to be passed on in future generations. As such, evolutionary theory doesn’t build organisms directly but creates the knowledge of how to build biological adaptations through an error-correcting process. This would also include creating the knowledge of which antibody gene to mutate.
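The scheme just described, variation that is blind to the problem plus selective error correction, can be illustrated with a toy Python sketch; the target string and parameters are purely hypothetical stand-ins for environmental fitness demands:

```python
import random

random.seed(0)  # reproducible toy run

TARGET = "GATTACA"   # hypothetical "environment" the population must fit
ALPHABET = "ACGT"

def fitness(genome):
    # Selection only scores outcomes; it never tells the mutation
    # step which site to change or what to change it to.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Mutation is blind to the problem: every site is equally likely
    # to change, whether or not it already matches the target.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else base
        for base in genome
    )

population = ["AAAAAAA"] * 50
for generation in range(200):
    population = [mutate(g) for g in population]
    population.sort(key=fitness, reverse=True)
    population = population[:25] * 2   # error correction: keep the better half
    if fitness(population[0]) == len(TARGET):
        break

print(generation, population[0])   # typically converges to the target quickly
```

Even though no individual mutation "knows" the target, the mutate-then-discard loop accumulates the fit sequence; that accumulation is what I mean by knowledge created through error correction.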

    What explanation does ID present for the knowledge of which antibody gene to mutate? How was it created? Or perhaps you’re suggesting ID does not provide one?

    For example, before a magic trick can be performed, the explanation of how to perform that trick must be known to the magician who invented it (and passed down to subsequent magicians who perform it). The origin of that knowledge is the origin of the magic trick. In the absence of said knowledge, it would not be a magic trick but actually magic, as the knowledge of how to bring about the desired outcome would have been spontaneously generated.

    The same can be said with regard to the knowledge of which antibody gene to mutate, which is located in the genome. The origin of that knowledge is the origin of antibody genes for the purpose of fighting pathogens.

    johnnyb: I also use this as one piece of an argument that evolution, as far as it actually works, tends to be based on teleological mutations utilizing existing information.

    Again, what is the origin of this information?

    Second, isn’t the appearance of design that which is to be explained, rather than the explanation? It’s as if you think design is an immutable primitive that cannot be explained.

Leave a Reply