
Ann Gauger on watching Ayala’s no Adam or Eve analysis crumble …

… in light of later research.

In “On Population Genetics Estimates” (Biologic Institute, August 3, 2012), Ann Gauger explains,

In his review of our book Science and Human Origins, Paul McBride wonders why I have not engaged the broader population genetics literature on human origins, but instead chose to focus on a single paper from 1995 by Francisco Ayala.

As I stated in the book, I chose that paper because in my opinion it presented the most difficult challenge to a very small bottleneck in our history as a species. If Ayala was right, and we shared thirty-two allelic lineages with chimps, then there was no way for a bottleneck as small as two individuals to have occurred. That kind of evidence, if substantiated, would have been conclusive. That’s why I found it so fascinating as I watched his analysis crumble in the light of later research.

I was very aware that others besides Ayala have investigated human origins, using other methods and data. I chose not to address those studies directly in the book because I wanted to focus on the intriguing problem of HLA-DRB1’s patchwork phylogenetic history. I did allude to them in discussing problems with retrospective analyses, however. The fact that I had not addressed those alternate estimates is one reason why I never claimed to have proved the existence of a two-person bottleneck, but rather questioned the rush to judgment against such a bottleneck on the part of others.

So now, let’s consider how much these other methods add to the discussion.

See also: Ann Gauger sets record straight on Wistar II

New York Times report on human evolution controversy vindicates book Science and Human Origins


60 Responses to Ann Gauger on watching Ayala’s no Adam or Eve analysis crumble …

  1. God that’s stupid.

    Why doesn’t Gauger run some simulations of a population expanding from Ne = 2 to Ne = 10,000 in ~10,000 years and see what level of polymorphism you’d expect in the resulting population, given the observed mutation rate?
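A minimal sketch of the kind of forward-time simulation being suggested, in Python with NumPy. All parameter values here (exponential growth over ~500 generations, i.e. 10,000 years at 20 years per generation, and the per-gamete mutation rate `mu`) are illustrative assumptions, not measured values:

```python
import numpy as np

def simulate_bottleneck(n0=2, n_final=10_000, generations=500,
                        mu=0.01, seed=1):
    """Forward-time, infinite-sites Wright-Fisher sketch.

    n0, n_final -- assumed diploid population sizes (bottleneck -> today)
    generations -- assumed ~10,000 years at ~20 years per generation
    mu          -- assumed per-gamete mutation rate at the region of interest
    Returns the number of sites still segregating (polymorphic) at the end.
    """
    rng = np.random.default_rng(seed)
    # exponential growth schedule from n0 up to n_final
    sizes = np.round(n0 * (n_final / n0) **
                     (np.arange(1, generations + 1) / generations)).astype(int)
    counts = np.array([], dtype=int)   # derived-allele copies per site
    n = n0
    for n_next in sizes:
        if counts.size:
            # each variant is binomially resampled into the 2*n_next gametes
            counts = rng.binomial(2 * n_next, counts / (2 * n))
            counts = counts[(counts > 0) & (counts < 2 * n_next)]
        # new mutations enter as singletons, ~Poisson(2*N*mu) per generation
        counts = np.concatenate(
            [counts, np.ones(rng.poisson(2 * n_next * mu), dtype=int)])
        n = n_next
    return counts.size

print(simulate_bottleneck())
```

Sweeping `mu` and comparing the resulting count of segregating sites against observed polymorphism levels is the comparison the comment has in mind.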

  2. Poor McBride, outclassed and outgunned. Give it up son, you’re not doing yourself or science any favours.

  3. I’m not sure Gauger is doing herself any favours. First she publishes a book in which she criticises a 15-year-old paper and ignores the more recent literature. Then, when this is pointed out to her, she’s reduced to arm-waving:

    But more worrying to me are the hidden assumptions in evolutionary models. Population genetics is a theory-laden subject, based entirely on neo-Darwinian assumptions. These assumptions, combined with over-simplifications required by current model building and/or mathematical analysis, can lead to erroneous claims about past genetic history.

    Because of these difficulties, in my opinion it is an open question whether present genetic diversity provides sufficient information from which to draw conclusions about ancient populations. Determining events in deep human history may be beyond the reach of population genetics methods.

    Yes, of course population genetic models are simplifications, so yes they might be wrong. But if Gauger’s attempts at criticism are to carry any weight, she should do the work to show that what she has identified as problems actually make a difference.

  4. Coresa

    Poor McBride, outclassed and outgunned. Give it up son, you’re not doing yourself or science any favours.

    I’m sorry, perhaps you can elaborate. What part of this do you think is so particularly devastating?

  5. As to identifying problems in neo-Darwinian population genetics models:

    Here’s one problem that is ignored:

    John Sanford on (Genetic Entropy) – Down, Not Up – 2-4-2012 (at Loma Linda University) – video
    http://www.youtube.com/watch?v=PHsu94HQrL0

    Notes from John Sanford’s preceding video:

    * 3 new mutations every time a cell divides in your body
    * The average cell of a 15-year-old has up to 6,000 mutations
    * The average cell of a 60-year-old has 40,000 mutations
    Reproductive cells are ‘designed’ so that, early on in development, they are ‘set aside’ and thus do not accumulate mutations as the rest of the cells of our bodies do. Regardless of this protective barrier against the accumulation of slightly detrimental mutations, still we find that…
    * 60-175 mutations are passed on to each new generation.

    It is also extremely interesting to note that the principle of Genetic Entropy, a principle which stands in direct opposition to the primary claim of neo-Darwinian evolution, lends itself quite well to mathematical analysis by computer simulation:

    Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load:
    Excerpt: We apply a biologically realistic forward-time population genetics program to study human mutation accumulation under a wide range of circumstances… Our numerical simulations consistently show that deleterious mutations accumulate linearly across a large portion of the relevant parameter space.
    http://bioinformatics.cau.edu......aproof.pdf
    MENDEL’S ACCOUNTANT: J. SANFORD†, J. BAUMGARDNER‡, W. BREWER§, P. GIBSON¶, AND W. REMINE

    Here is a short sweet overview of Mendel’s Accountant:

    When macro-evolution takes a final, it gets an “F” – Using Numerical Simulation to Test the Validity of Neo-Darwinian Theory (Mendel’s Accountant)
    Excerpt of Conclusion: This (computer) program (Mendel’s Accountant) is a powerful teaching and research tool. It reveals that all of the traditional theoretical problems (in population genetics) that have been raised about evolutionary genetic theory are in fact very real and are empirically verifiable in a scientifically rigorous manner. As a consequence, evolutionary genetic theory now has no theoretical support—it is an indefensible scientific model. Rigorous analysis of evolutionary genetic theory consistently indicates that the entire enterprise is actually bankrupt.
    http://radaractive.blogspot.co.....ution.html

    Here is a nice simple overview of the mutation ‘problem’

    Human evolution or extinction – discussion on acceptable mutation rate per generation (with clips from Dr. John Sanford) – video
    http://www.youtube.com/watch?v=aC_NyFZG7pM

    Interestingly, besides such an unacceptably high mutation rate, several other insurmountable ‘problems’ from population genetics have actually been known about for quite a while now.

    Haldane’s Dilemma
    Excerpt: Haldane was the first to recognize there was a cost to selection which limited what it realistically could be expected to do. He did not fully realize that his thinking would create major problems for evolutionary theory. He calculated that in man it would take 6 million years to fix just 1,000 mutations (assuming 20 years per generation)… Man and chimp differ by at least 150 million nucleotides representing at least 40 million hypothetical mutations (Britten, 2002). So if man evolved from a chimp-like creature, then during that process there were at least 20 million mutations fixed within the human lineage (40 million divided by 2), yet natural selection could only have selected for 1,000 of those. All the rest would have had to be fixed by random drift – creating millions of nearly-neutral deleterious mutations. This would not just have made us inferior to our chimp-like ancestors – it surely would have killed us. Since Haldane’s dilemma there have been a number of efforts to sweep the problem under the rug, but the problem is still exactly the same. ReMine (1993, 2005) has extensively reviewed the problem, and has analyzed it using an entirely different mathematical formulation – but has obtained identical results.
    John Sanford PhD. – “Genetic Entropy and The Mystery of the Genome” – pg. 159-160
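The arithmetic in the quoted passage can be checked directly; this quick sketch uses only the quote’s own figures, not independent estimates:

```python
# All figures below are the quoted passage's own assumptions;
# this merely checks that its arithmetic is internally consistent.
years_available = 6_000_000     # quoted time span for the human lineage
years_per_gen = 20              # quoted human generation time
fixations = 1_000               # Haldane's limit for selection in that span

generations = years_available // years_per_gen
print(generations)                       # 300000 generations available
print(generations // fixations)          # one selected fixation per 300 generations

total_diffs = 40_000_000        # quoted human-chimp mutation count
per_lineage = total_diffs // 2
print(per_lineage)                       # 20000000 fixed on the human side
print(per_lineage // fixations)          # so selection covers only 1 in 20000
```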

    Kimura’s Quandary
    Excerpt: Kimura realized that Haldane was correct… He developed his neutral theory in response to this overwhelming evolutionary problem. Paradoxically, his theory led him to believe that most mutations are unselectable, and therefore… most ‘evolution’ must be independent of selection! Because he was totally committed to the primary axiom (neo-Darwinism), Kimura apparently never considered that his cost arguments could most rationally be used to argue against the Axiom’s (neo-Darwinism’s) very validity.
    John Sanford PhD. – “Genetic Entropy and The Mystery of the Genome” – pg. 161 – 162

    Majestic Ascent: Berlinski on Darwin on Trial – David Berlinski – November 2011
    Excerpt: The publication in 1983 of Motoo Kimura’s The Neutral Theory of Molecular Evolution consolidated ideas that Kimura had introduced in the late 1960s. On the molecular level, evolution is entirely stochastic, and if it proceeds at all, it proceeds by drift along a leaves-and-current model. Kimura’s theories left the emergence of complex biological structures an enigma, but they played an important role in the local economy of belief. They allowed biologists to affirm that they welcomed responsible criticism. “A critique of neo-Darwinism,” the Dutch biologist Gert Korthof boasted, “can be incorporated into neo-Darwinism if there is evidence and a good theory, which contributes to the progress of science.”
    By this standard, if the Archangel Gabriel were to accept personal responsibility for the Cambrian explosion, his views would be widely described as neo-Darwinian.
    http://www.evolutionnews.org/2.....53171.html

    At the 2:45 minute mark of the following video, the mathematical roots of the junk DNA argument that is still used by Darwinists are traced through Haldane’s, Kimura’s, and Ohno’s work in population genetics in the 1950s, ’60s and ’70s:

    What Is The Genome? It’s Not Junk! – Dr. Robert Carter – video – (Notes in video description)
    http://www.metacafe.com/w/8905583

    Further criticism of the ‘neutral theory’:

    Here is a Completely Different Way of Doing Science – Cornelius Hunter PhD. – April 2012
    Excerpt: But how then could evolution proceed if mutations were just neutral? The idea was that neutral mutations would accrue until finally an earthquake, comet, volcano or some such would cause a major environmental shift which suddenly could make use of all those neutral mutations. Suddenly, those old mutations went from goat-to-hero, providing just the designs that were needed to cope with the new environmental challenge. It was another example of the incredible serendipity that evolutionists call upon.
    Too good to be true? Not for evolutionists. The neutral theory became quite popular in the literature. The idea that mutations were not brimming with cool innovations but were mostly bad or at best neutral, for some, went from an anathema to orthodoxy. And the idea that those neutral mutations would later magically provide the needed innovations became another evolutionary just-so story, told with conviction as though it was a scientific finding.
    Another problem with the theory of neutral molecular evolution is that it made even more obvious the awkward question of where these genes came from in the first place.
    http://darwins-god.blogspot.co.....ay-of.html

    Thou Shalt Not Put Evolutionary Theory to a Test – Douglas Axe – July 18, 2012
    Excerpt: “For example, McBride criticizes me for not mentioning genetic drift in my discussion of human origins, apparently without realizing that the result of Durrett and Schmidt rules drift out. Each and every specific genetic change needed to produce humans from apes would have to have conferred a significant selective advantage in order for humans to have appeared in the available time (i.e. the mutations cannot be ‘neutral’). Any aspect of the transition that requires two or more mutations to act in combination in order to increase fitness would take way too long (>100 million years).
    My challenge to McBride, and everyone else who believes the evolutionary story of human origins, is not to provide the list of mutations that did the trick, but rather a list of mutations that can do it. Otherwise they’re in the position of insisting that something is a scientific fact without having the faintest idea how it even could be.” Doug Axe PhD.
    http://www.evolutionnews.org/2.....62351.html

    Michael Behe on the theory of constructive neutral evolution – February 2012
    Excerpt: I don’t mean to be unkind, but I think that the idea seems reasonable only to the extent that it is vague and undeveloped; when examined critically it quickly loses plausibility. The first thing to note about the paper is that it contains absolutely no calculations to support the feasibility of the model. This is inexcusable. – Michael Behe
    http://www.uncommondescent.com.....evolution/

  6. Further notes:

    Genetic Entropy vs. Evolution – The Stark Reality Darwinists Don’t want to face – video
    http://vimeo.com/24870022

    Further notes on the ‘problems’ with population genetics:

    Oxford University Admits Darwinism’s Shaky Math Foundation – May 2011
    Excerpt: However, mathematical population geneticists mainly deny that natural selection leads to optimization of any useful kind. This fifty-year old schism is intellectually damaging in itself, and has prevented improvements in our concept of what fitness is. – On a 2011 Job Description for a Mathematician, at Oxford, to ‘fix’ the persistent mathematical problems with neo-Darwinism within two years.
    http://www.evolutionnews.org/2.....46351.html

    Population Genetics and Adam & Eve? – The answer you get depends on several unknown a-priori assumptions – Dr. Ann Gauger, Pt. 2 – podcast
    http://intelligentdesign.podom.....9_04-07_00

    The next evolutionary synthesis: from Lamarck and Darwin to genomic variation and systems biology
    Excerpt: If more than about three genes (nature unspecified) underpin a phenotype, the mathematics of population genetics, while qualitatively analyzable, requires too many unknown parameters to make quantitatively testable predictions [6]. The inadequacy of this approach is demonstrated by illustrations of the molecular pathways that generate traits [7]: the network underpinning something as simple as growth may have forty or fifty participating proteins whose production involves perhaps twice as many DNA sequences, if one includes enhancers, splice variants etc. Theoretical genetics simply cannot handle this level of complexity, let alone analyse the effects of mutation.
    http://www.biosignaling.com/co.....X-9-30.pdf

    DNA Degeneration: Top Population Geneticists agree neo-Darwinism is not supported by the data – John Sanford
    http://www.youtube.com/watch?v=tYEkqwOXE5U

    Here is a paper which, though technical, shows that the modern population genetic evidence we now have actually supports Adam and Eve. Moreover, the evidence it presents from the latest genetic research is completely inexplicable to neo-Darwinism; i.e. neo-Darwinism, once again, completely falls apart upon rigid scrutiny. (And although I don’t agree with the extreme 6,000-year Young Earth model used as a starting presumption in the paper for deriving the graphs, the model can nonetheless be amended quite comfortably to a longer time period. In fact, the longer the time period in the model, the more acute the ‘problem’ becomes for neo-Darwinism, and I personally think a longer period provides a much more ‘comfortable’ fit to the overall body of evidence.)

    The Non-Mythical Adam and Eve! – Refuting errors by Francis Collins and BioLogos
    http://creation.com/historical-adam-biologos

    CMI has an excellent video of the preceding paper by Dr. Carter that makes the technical aspects of population genetics much easier to understand:

    The Non Mythical Adam and Eve (Dr Robert Carter) – video
    http://www.youtube.com/watch?v=8ftwf0owpzQ

  7. Is it reasonable to assume that they do not make a difference? I’m the layman’s layman, but it doesn’t seem at all reasonable to me, in view of the extant, farcical evolutionary paradigm, i.e. simply on the basis of Gauger’s first sentence you quote above.

    And combined with that modelling problem, which looks like the animist ‘scientists’ equivalent of the extraordinarily defective ‘modelling’ of the economists, asking her to ‘do the math’, so to speak, seems insulting. What she has identified as problems sound very much as if they are empirical givens, rather than the anecdotal givens of evolutionists.

    The animists need to get back to science, properly so-called, i.e ID.

  8. OK – I have made a brief response to Ann Gauger’s post.

  9. bornagain77,

    you are certainly a person of lengthy study, speaking on subjects of religion, science, philosophy, mathematics, music, video. Are you a member of a religious order, like a priest or monk, or a scientist, a professor at a university, maybe of the Academia Pontificia de Ciencias? However that may be, it is very generous of you to share your knowledge. Thank you.

    sergio

  10. paulmc can you please address why the fatal detrimental mutation problem is not a problem for neo-Darwinism?

    The following study surveys four decades of experimental work, and solidly backs up the conclusion that genetic entropy (as cited previously) is an insurmountable problem for neo-Darwinism:

    “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
    Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades… The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent… I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. (That is, a net ‘fitness gain’ within a ‘stressed’ environment; i.e. remove the stress from the environment and the parent strain is always more ‘fit’.)
    http://behe.uncommondescent.co.....evolution/

    Michael Behe talks about the preceding paper on this podcast:

    Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time – December 2010
    http://intelligentdesign.podom.....3_46-08_00

    Where’s the substantiating evidence for neo-Darwinism?
    https://docs.google.com/document/d/1q-PBeQELzT4pkgxB2ZOxGxwv6ynOixfzqzsFlCJ9jrw/edit

    Moreover, Genetic Entropy is in harmony with the second law of thermodynamics, whereas, despite what Darwinists vehemently protest to the contrary, entropy is in severe disharmony with the primary claim of neo-Darwinism, namely that purely material processes can generate highly integrated functional complexity that man can only dream of imitating in computer programs:

    Are You Looking for the Simplest and Clearest Argument for Intelligent Design? – Granville Sewell (2nd Law) – video
    http://www.evolutionnews.org/2.....56711.html

    Here is a defence of Dr. Sewell’s 2nd Law argument:

    Physicist Rob Sheldon offers some thoughts on Sal Cordova vs. Granville Sewell on 2nd Law Thermo – July 2012
    Excerpt: The Equivalence: Boltzmann’s famous equation (engraved on his tombstone), S = k ln W, is merely an exchange-rate conversion. If W is lira, and S is dollars, then k ln() is the conversion of the one to the other, which is empirically determined. Boltzmann’s constant “k” is a semi-empirical conversion number that made Gibbs’ “stat mech” definition work with the earlier “thermo” definition of Lord Kelvin and co.
    Despite this being something as simple as a conversion factor, you must realize how important it was to connect these two. When Einstein connected mass to energy with E = (c^2)m, we could then talk about mass-energy conservation, atom bombs and baby universes, whereas before Einstein they were totally different quantities.
    Likewise, by connecting the two things, thermodynamics and statistical mechanics, the hard rules derived from thermo can now be applied to the statistics of counting permutations.
    This is where Granville derives the potency of his argument, since a living organism certainly shows unusual permutations of the atoms, and thus has stat mech entropy that, via Boltzmann, must obey the 2nd law. If life violates this, then it must not be lawfully possible for evolution to happen (without an input of work or information).
    The one remaining problem is how to calculate it precisely (how to calculate the entropy precisely).
    note: (And because it is extremely difficult to calculate entropy precisely for living cells, this is exactly where Darwinists try to claim evolution does not violate the second law. Yet regardless of the games Darwinists play because of this lack of mathematical precision, for all intents and purposes, as far as we can ascertain, for evolution to occur would indeed violate the ‘iron clad’ second law of thermodynamics!)
    http://www.uncommondescent.com.....aw-thermo/

  11. BA77 – purifying selection usually does the trick. And indeed we expect things to evolve to extinction when their populations are sufficiently marginal (low effective population size) that purifying selection is very weak.

  12. paulmc,

    purifying selection usually does the trick. And indeed we expect things to evolve to extinction

    The problem with invoking selection is that ‘slightly’ detrimental mutations fall far below the power of selection to see before they accumulate. Selection is simply blind at that level, and so it lacks the power to select away the ‘slightly’ detrimental mutations that accumulate in genomes before the damage is already done.

    * 3 new mutations every time a cell divides in your body
    * The average cell of a 15-year-old has up to 6,000 mutations
    * The average cell of a 60-year-old has 40,000 mutations
    Reproductive cells are ‘designed’ so that, early on in development, they are ‘set aside’ and thus do not accumulate mutations as the rest of the cells of our bodies do. Regardless of this protective barrier against the accumulation of slightly detrimental mutations, still we find that…
    * 60-175 mutations are passed on to each new generation.
    - per John Sanford PhD.
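The claim that selection cannot ‘see’ very small fitness effects can be made quantitative with Kimura’s classic fixation-probability formula. A minimal sketch, assuming an effective population size of 10,000 and semidominant mutations (both assumptions are for illustration only):

```python
import math

def p_fix(s, n_e):
    """Kimura's fixation probability for a new semidominant mutation with
    selection coefficient s in a diploid population of effective size n_e
    (the mutation starts as a single copy, frequency 1/(2*n_e))."""
    if s == 0:
        return 1 / (2 * n_e)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * n_e * s))

n_e = 10_000                # assumed human effective population size
neutral = p_fix(0, n_e)     # fixation probability of a truly neutral allele
for s in (-1e-3, -1e-4, -1e-5, -1e-6):
    ratio = p_fix(s, n_e) / neutral
    print(f"s = {s:+.0e}: fixes at {ratio:.3f} times the neutral rate")
```

On these assumptions, mutations with |s| much smaller than 1/(4*Ne) fix at nearly the neutral rate, while those with |s| around 1e-4 or larger are almost always eliminated; which regime dominates in real genomes is exactly what is in dispute here.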

    Genetic Entropy – Dr. John Sanford – Evolution vs. Reality – video (Notes in description)
    http://vimeo.com/35088933

    Evolution Vs Genetic Entropy – Andy McIntosh – video
    http://www.metacafe.com/watch/4028086

    Here is a nice simple overview of the ‘problem’ that slightly detrimental mutations present for population genetics, a problem that selection simply cannot overcome:

    Human evolution or extinction – discussion on acceptable mutation rate per generation (with clips from Dr. John Sanford) – video
    http://www.youtube.com/watch?v=aC_NyFZG7pM

    Moreover paulmc, you don’t even have evidence that any rare beneficial mutations, when added together, can overcome negative epistasis:

    Mutations : when benefits level off – June 2011 – (Lenski’s e-coli after 50,000 generations)
    Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
    http://www2.cnrs.fr/en/1867.htm?theme1=7

    Moreover paulmc, you can’t even empirically demonstrate the fixation of a single beneficial mutation in a multicellular creature:

    Experimental Evolution in Fruit Flies (35 years of trying to force fruit flies to evolve in the laboratory fails, spectacularly) – October 2010
    Excerpt: “Despite decades of sustained selection in relatively small, sexually reproducing laboratory populations, selection did not lead to the fixation of newly arising unconditionally advantageous alleles.,,, “This research really upends the dominant paradigm about how species evolve,” said ecology and evolutionary biology professor Anthony Long, the primary investigator.
    http://www.arn.org/blogs/index.....ruit_flies

    paulmc, despite what you may prefer to think, this problem simply can’t be ‘selectively swept under the rug’. The ‘problem’ runs much deeper than selection can handle!

    Further note:

    More from Ann Gauger on why humans didn’t happen the way Darwin said – July 2012
    Excerpt: Each of these new features probably required multiple mutations. Getting a feature that requires six neutral mutations is the limit of what bacteria can produce. For primates (e.g., monkeys, apes and humans) the limit is much more severe. Because of much smaller effective population sizes (an estimated ten thousand for humans instead of a billion for bacteria) and longer generation times (fifteen to twenty years per generation for humans vs. a thousand generations per year for bacteria), it would take a very long time for even a single beneficial mutation to appear and become fixed in a human population.
    You don’t have to take my word for it. In 2007, Durrett and Schmidt estimated in the journal Genetics that for a single mutation to occur in a nucleotide-binding site and be fixed in a primate lineage would require a waiting time of six million years. The same authors later estimated it would take 216 million years for the binding site to acquire two mutations, if the first mutation was neutral in its effect.
    Facing Facts
    But six million years is the entire time allotted for the transition from our last common ancestor with chimps to us according to the standard evolutionary timescale. Two hundred and sixteen million years takes us back to the Triassic, when the very first mammals appeared. One or two mutations simply aren’t sufficient to produce the necessary changes— sixteen anatomical features—in the time available. At most, a new binding site might affect the regulation of one or two genes.
    http://www.uncommondescent.com.....rwin-said/
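For context, the textbook back-of-envelope version of such waiting-time estimates runs as follows. This is not Durrett and Schmidt’s actual binding-site calculation, and every parameter value here is an illustrative assumption:

```python
# Textbook back-of-envelope waiting times for one *specific* point
# mutation; a rough sketch, NOT Durrett and Schmidt's model.
n_e = 10_000        # assumed effective population size
mu = 1e-8           # assumed per-site, per-generation mutation rate
gen_years = 20      # assumed human generation time in years

# Neutral case: 2*N*mu new copies arise per generation, each fixing with
# probability 1/(2*N), so substitutions at the site occur at rate mu.
wait_neutral_years = (1 / mu) * gen_years
print(wait_neutral_years)       # 2e9 years for one specific neutral change

# Selected case: fixation probability is ~2s, so the rate is 2*N*mu * 2*s.
s = 0.01                        # assumed selective advantage
wait_selected_years = gen_years / (2 * n_e * mu * 2 * s)
print(wait_selected_years)      # ~5e6 years even with strong selection
```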

    And combined with that modelling problem, which looks like the animist ‘scientists’ equivalent of the extraordinarily defective ‘modelling’ of the economists, asking her to ‘do the math’, so to speak, seems insulting. What she has identified as problems sound very much as if they are empirical givens, rather than the anecdotal givens of evolutionists.

    The animists need to get back to science, properly so-called, i.e ID.

    Gauger hasn’t even shown there is a problem – she’s claiming that population genetic models are wrong, but doesn’t show that this makes a difference. Indeed she doesn’t even say what is wrong.

    What’s so insulting about asking her to do the maths? She’s criticising mathematical models, so what’s so bad about asking her to back her criticisms up? It’s what we animists do – we find evidence for our positions. Why is it insulting to ask her to be held to the same standard when discussing science?

  14. BA77 – most of the mutations that occur in humans are selectively neutral – this is why mutational load has not caused our extinction.

  15. paulmc you state,

    most of the mutations that occur in humans are selectively neutral – this is why mutational load has not caused our extinction.

    Selectively neutral? Yes! (Selection certainly can’t see them.) Functionally neutral? No! (Functionally, the vast majority of them do indeed have a negative impact on the information content inherent in the genome.) I.e. if nature is allowed to run her full course, without outside intervention, humans, as well as all other life on earth, are headed for extinction! The evidence for the detrimental nature of mutations in humans is overwhelming, for scientists have already cited over 100,000 mutational disorders.

    Inside the Human Genome: A Case for Non-Intelligent Design – Pg. 57 By John C. Avise
    Excerpt: “Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens.”

    I went to the mutation database website cited by John Avise and found:

    HGMD®: Now celebrating our 100,000 mutation milestone!
    http://www.hgmd.org/

    The primary reason why mutations are, the vast majority of the time, slightly detrimental is that the human genome is severely ‘polyconstrained’ against change by undirected ‘random’ mutations, owing to what is termed ‘polyfunctionality’:

    Scientists Map All Mammalian Gene Interactions – August 2010
    Excerpt: Mammals, including humans, have roughly 20,000 different genes… They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome.
    http://www.sciencedaily.com/re.....142044.htm

    Poly-Functional Complexity equals Poly-Constrained Complexity
    https://docs.google.com/document/d/1xkW4C7uOE8s98tNx2mzMKmALeV8-348FZNnZmSWY5H8/edit

    What Is The Genome? It’s Certainly Not Junk! – Dr. Robert Carter – video – (Notes in video description)
    http://www.metacafe.com/w/8905583

    paulmc, you simply are not even in the right ballpark of reason in trying to explain such massively integrated complexity found in life in a ‘bottom up’, chance and necessity (RM & NS), fashion!

    Systems biology: Untangling the protein web – July 2009
    Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. “Combine all this and you can start to think that maybe some of the information flow can be captured,” he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. “The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent,” he says. “The simple pathway models are a gross oversimplification of what is actually happening.”
    http://www.nature.com/nature/j.....0415a.html

    As well, paulmc, there are now found to be higher levels of information in life, above the ‘simple’ sequences in DNA, which it is simply impossible for neo-Darwinism (RM & NS) to explain:

    ‘Now one more problem as far as the generation of information. It turns out that you don’t only need information to build genes and proteins, it turns out to build Body-Plans you need higher levels of information; Higher order assembly instructions. DNA codes for the building of proteins, but proteins must be arranged into distinctive circuitry to form distinctive cell types. Cell types have to be arranged into tissues. Tissues have to be arranged into organs. Organs and tissues must be specifically arranged to generate whole new Body-Plans, distinctive arrangements of those body parts. We now know that DNA alone is not responsible for those higher orders of organization. DNA codes for proteins, but by itself it does not insure that proteins, cell types, tissues, organs, will all be arranged in the body. And what that means is that the Body-Plan morphogenesis, as it is called, depends upon information that is not encoded on DNA. Which means you can mutate DNA indefinitely. 80 million years, 100 million years, til the cows come home. It doesn’t matter, because in the best case you are just going to find a new protein some place out there in that vast combinatorial sequence space. You are not, by mutating DNA alone, going to generate higher order structures that are necessary to building a body plan. So what we can conclude from that is that the neo-Darwinian mechanism is grossly inadequate to explain the origin of information necessary to build new genes and proteins, and it is also grossly inadequate to explain the origination of novel biological form.’ – Stephen Meyer – (excerpt taken from Meyer/Sternberg vs. Shermer/Prothero debate – 2009)

    Epigenetics and the “Piano” Metaphor – January 2012
    Excerpt: And this is only the construction of proteins we’re talking about. It leaves out of the picture entirely the higher-level components — tissues, organs, the whole body plan that draws all the lower-level stuff together into a coherent, functioning form. What we should really be talking about is not a lone piano but a vast orchestra under the directing guidance of an unknown conductor fulfilling an artistic vision, organizing and transcending the music of the assembly of individual players.
    http://www.evolutionnews.org/2.....54731.html

    An Electric Face: A Rendering Worth a Thousand Falsifications – September 2011
    Excerpt: The video suggests that bioelectric signals presage the morphological development of the face. It also, in an instant, gives a peek at the phenomenal processes at work in biology. As the lead researcher said, “It’s a jaw dropper.”
    http://darwins-god.blogspot.co.....usand.html

    The (Electric) Face of a Frog – video
    http://www.youtube.com/watch?v=ndFe5CaDTlI

    Hopefully you can now see that the problem is far greater than you at first realized paulmc!

    Music:

    Nickelback – Savin’ Me – music
    http://www.youtube.com/watch?v=jPc-o-4Nsbk

  16. Nope, there is no evidence that the majority of mutations are slightly deleterious. A point mutation to most intron and intergenic sequences will not have fitness effects. That you’re trying to link this to gene interactions shows you don’t understand the evidence.

    You are simply firing random information from the famous BA77 blunderbuss.

  17. paulmc, you claim without citation:

    Nope, there is no evidence that the majority of mutations are slightly deleterious.

    I’ve already cited the mutation database that lists over 100,000 different disease-causing mutations identified to date in Homo sapiens, yet you just dogmatically claim that mutations have no slightly deleterious effects. Well, regardless of what you wish to be true for your preferred hypothesis of neo-Darwinism, the fact is that there is much evidence supporting the position that the vast majority of mutations, in coding AND in non-coding regions, are slightly deleterious:

    Unexpectedly small effects of mutations in bacteria bring new perspectives – November 2010
    Excerpt:,,, using extremely sensitive growth measurements, doctoral candidate Peter Lind showed that most mutations reduced the rate of growth of bacteria by only 0.500 percent. No mutations completely disabled the function of the proteins, and very few had no impact at all. Even more surprising was the fact that mutations that do not change the protein sequence had negative effects similar to those of mutations that led to substitution of amino acids. A possible explanation is that most mutations may have their negative effect by altering mRNA structure, not proteins, as is commonly assumed.
    http://www.physorg.com/news/20.....teria.html
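    Whether a “slight” fitness cost like the 0.5 percent above is visible to natural selection at all depends on population size. A minimal sketch of the standard nearly-neutral rule of thumb (a textbook result, not taken from the study quoted; the population sizes are illustrative):

```python
# Nearly-neutral rule of thumb: a deleterious mutation with selection
# coefficient s behaves almost like a neutral one when s < 1/(2*Ne),
# where Ne is the effective population size, because genetic drift
# then overpowers selection.

def effectively_neutral(s, ne):
    """True if selection of strength s is overpowered by drift at size ne."""
    return s < 1.0 / (2.0 * ne)

s = 0.005  # a 0.5% growth-rate cost, like the bacterial study above

print(effectively_neutral(s, ne=10_000))  # large population: selection can act
print(effectively_neutral(s, ne=50))      # tiny population: drift dominates
```

    The same arithmetic is why very mildly deleterious mutations (far below 0.5 percent) can escape selection entirely and accumulate.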

    “Moreover, there are strong theoretical reasons for believing there are no truly neutral nucleotide positions. By its very existence, a nucleotide position takes up space, affects spacing between other sites, and affects such things as regional nucleotide composition, DNA folding, and nucleosome building. If a nucleotide carries absolutely no (useful) information, it is, by definition, slightly deleterious, as it slows cell replication and wastes energy… Therefore, there is no way to change any given site without some biological effect, no matter how subtle.”
    - John Sanford – Genetic Entropy and The Mystery of The Genome – pg. 21 – Inventor of the ‘Gene Gun’

    Shoddy Engineering or Intelligent Design? Case of the Mouse’s Eye – April 2009
    Excerpt: — The (entire) nuclear genome is thus transformed into an optical device that is designed to assist in the capturing of photons. This chromatin-based convex (focusing) lens is so well constructed that it still works when lattices of rod cells are made to be disordered. Normal cell nuclei actually scatter light. — So the next time someone tells you that it “strains credulity” to think that more than a few pieces of “junk DNA” could be functional in the cell – remind them of the rod cell nuclei of the humble mouse.
    http://www.evolutionnews.org/2......html#more

    Arriving At Intelligence Through The Corridors Of Reason (Part II) – April 2010
    Excerpt: ,,, since junk DNA would put an unnecessary energetic burden on cells during the process of replication, it stands to reason that it would more likely be eliminated through selective pressures.
    http://www.uncommondescent.com.....n-part-ii/

    Astonishing DNA complexity demolishes neo-Darwinism – Alex Williams:
    Excerpt: DNA information is overlapping-multi-layered and multi-dimensional; it reads both backwards and forwards; and the ‘junk’ is far more functional than the protein code, so there is no fossilized history of evolution… All the vast amount of meta-information in the human genome only has meaning when applied to the problem of using the human genome to make, maintain and reproduce human beings. Genic regions are transcribed on average in five different overlapping and interleaved ways, while UTRs are transcribed on average in seven different overlapping and interleaved ways. Since there are about 33 times as many bases in UTRs as in genic regions, that makes the ‘junk’ about 50 times more active than the genes.
    http://creation.com/images/pdf.....11-117.pdf

    The information in DNA is simply beyond anything man has ever seen or built:

    The data compression of some stretches of human DNA is estimated to be up to 12 codes thick (12 different ways of DNA transcription) (Trifonov, 1989). (This is well beyond the complexity of any computer code ever written by man). John Sanford – Genetic Entropy

    3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell – Oct. 2009
    Excerpt: the information density in the nucleus is trillions of times higher than on a computer chip — while avoiding the knots and tangles that might interfere with the cell’s ability to read its own genome. Moreover, the DNA can easily unfold and refold during gene activation, gene repression, and cell replication.
    http://www.sciencedaily.com/re.....142957.htm

    Moreover, the complexity and fidelity of DNA replication are beyond human capacity to engineer, and almost beyond human comprehension:

    Quantum Dots Spotlight DNA-Repair Proteins in Motion – March 2010
    Excerpt: “How this system works is an important unanswered question in this field,” he said. “It has to be able to identify very small mistakes in a 3-dimensional morass of gene strands. It’s akin to spotting potholes on every street all over the country and getting them fixed before the next rush hour.” Dr. Bennett Van Houten – of note: A bacterium has about 40 team members on its pothole crew. That allows its entire genome to be scanned for errors in 20 minutes, the typical doubling time. These smart machines can apparently also interact with other damage control teams if they cannot fix the problem on the spot.
    http://www.sciencedaily.com/re.....123522.htm

    ‘How good would each typist have to be, in order to match the DNA’s performance? The answer is almost too ludicrous to express. For what it is worth, every typist would have to have an error rate of about one in a trillion; that is, he would have to be accurate enough to make only a single error in typing the Bible 250,000 times at a stretch. A good secretary in real life has an error rate of about one per page. This is about a billion times the error rate of the histone H4 gene. A line of real life secretaries (without a correcting reference) would degrade the text to 99 percent of its original by the 20th member of the line of 20 billion. By the 10,000th member of the line less than 1 percent would survive. The point near total degradation would be reached before 99.9995% of the typists had even seen it.’
    Richard Dawkins – The blind watchmaker – Page 123-124
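    The typist figures in this passage can be reproduced from a one-line fidelity model; the ~2,000-character page length is an assumption chosen to give roughly one error per page:

```python
# Serial copying: each copyist independently introduces errors at rate
# e per character, so after n copies a given character is still correct
# with probability (1 - e)**n.

e = 1 / 2000  # ~one error per 2,000-character page (assumed page size)

def fraction_intact(n, e=e):
    """Expected fraction of characters still correct after n serial copies."""
    return (1 - e) ** n

print(f"{fraction_intact(20):.2%}")      # ~99% survives 20 copyists
print(f"{fraction_intact(10_000):.2%}")  # under 1% survives 10,000 copyists
```

    Both quoted figures (99 percent at the 20th copyist, under 1 percent at the 10,000th) fall out of this simple model.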

  18. Moreover, there are multiple layers of error correction to DNA:

    Although evolution depends on ‘mutations/errors’ to DNA to be plausible, there are multiple layers of error correction in the cell that protect against any “random changes” to DNA happening in the first place:

    The Evolutionary Dynamics of Digital and Nucleotide Codes: A Mutation Protection Perspective – February 2011
    Excerpt: “Unbounded random change of nucleotide codes through the accumulation of irreparable, advantageous, code-expanding, inheritable mutations at the level of individual nucleotides, as proposed by evolutionary theory, requires the mutation protection at the level of the individual nucleotides and at the higher levels of the code to be switched off or at least to dysfunction. Dysfunctioning mutation protection, however, is the origin of cancer and hereditary diseases, which reduce the capacity to live and to reproduce. Our mutation protection perspective of the evolutionary dynamics of digital and nucleotide codes thus reveals the presence of a paradox in evolutionary theory between the necessity and the disadvantage of dysfunctioning mutation protection. This mutation protection paradox, which is closely related with the paradox between evolvability and mutational robustness, needs further investigation.”
    http://www.arn.org/blogs/index....._contradic

    Contradiction in evolutionary theory – video – (The contradiction between extensive DNA repair mechanisms and the necessity of ‘random mutations/errors’ for Darwinian evolution)
    http://www.youtube.com/watch?v=dzh6Ct5cg1o

    The Darwinism contradiction of repair systems
    Excerpt: The bottom line is that repair mechanisms are incompatible with Darwinism in principle. Since sophisticated repair mechanisms do exist in the cell after all, then the thing to discard in the dilemma to avoid the contradiction necessarily is the Darwinist dogma.
    http://www.uncommondescent.com.....r-systems/

    Repair mechanisms in DNA include:

    A proofreading system that catches almost all errors
    A mismatch repair system to back up the proofreading system
    Photoreactivation (light repair)
    Removal of methyl or ethyl groups by O6 – methylguanine methyltransferase
    Base excision repair
    Nucleotide excision repair
    Double-strand DNA break repair
    Recombination repair
    Error-prone bypass
    http://www.newgeology.us/presentation32.html
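    These layers act roughly multiplicatively on the replication error rate. The factors below are textbook order-of-magnitude values for E. coli replication (not taken from the page linked above), included only to show how stacking independent correction layers yields the observed ~10^-10 per-base fidelity:

```python
# Approximate per-base error rates for E. coli DNA replication
# (textbook order-of-magnitude figures):
base_misincorporation  = 1e-5  # raw polymerase misincorporation rate
proofreading_factor    = 1e-2  # improvement from 3'->5' exonuclease proofreading
mismatch_repair_factor = 1e-3  # improvement from post-replication mismatch repair

# The layers act on successive survivors of the previous layer,
# so their factors multiply:
final_error_rate = base_misincorporation * proofreading_factor * mismatch_repair_factor
print(final_error_rate)  # ~1e-10 errors per base per replication
```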

    Thus paulmc, you claim that most purely random mutations are not slightly deleterious, yet we have several lines of very suggestive evidence indicating functionality for the entire genome, and on top of that we have multiple layers of error correction preventing purely random mutations from ever happening in the first place. Your position that a very large portion of mutations are not slightly deleterious is not realistic, nor is your position that a large portion of the genome is junk.

    Moreover, I remind you that Behe’s paper on reductive evolution shows that the overwhelming tendency of evolutionary processes is ‘reductive evolution’, meaning, among other things, that evolutionary processes will always tend to discard ‘dead weight’ (such as junk DNA), or break something, in order to confer a survival advantage well before they can ever reasonably be expected to build any functional complexity.

    Moreover, thermodynamics gives us another very strong reason to believe that ‘functional, i.e. useful, information’ is required for EVERY molecule of the cell to stay as far out of thermodynamic equilibrium as it is:

    Also of interest is the information content that is derived in a cell when working from a thermodynamic perspective:

    “a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong

    “The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”
    Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894

    of note: The 10^12-bit figure for a bacterium is derived from entropic considerations, which, due to the tightly integrated relationship between information and entropy, is considered the most accurate measure of the transcendent quantum information/entanglement constraining a ‘simple’ life form to be so far out of thermodynamic equilibrium.

    “Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…” Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]

    For calculations, from the thermodynamic perspective, please see the following site:

    Molecular Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley
    Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures.
    http://www.astroscu.unam.mx/~a.....ecular.htm
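    The Setlow-Pollard conversion can be checked directly with H = S/(k ln 2). Note that the arithmetic only yields the stated 4 x 10^12 bits if Linschitz’s entropy figure is read with a negative exponent (9.3 x 10^-12 cal/deg); with the positive exponent as transcribed above, the result would be about 24 orders of magnitude larger. A sketch assuming the negative exponent:

```python
import math

# H = S / (k * ln 2): convert thermodynamic entropy S (J/K) to bits.
k = 1.380649e-23        # Boltzmann constant, J/K
S = 9.3e-12 * 4.2       # Linschitz's figure in cal/deg, converted to J/K
                        # (assuming the exponent is -12, not +12)

H_bits = S / (k * math.log(2))
print(f"{H_bits:.2e}")  # ~4e12 bits, matching the quoted result
```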

    In fact, it has now been shown that it is ‘information’ that constrains the cell to be so far out of thermodynamic equilibrium:

    Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH
    Excerpt: This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
    http://journals.witpress.com/paperinfo.asp?pid=420

    Does DNA Have Telepathic Properties?-A Galaxy Insight – 2009
    Excerpt: The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible.
    http://www.dailygalaxy.com/my_.....ave-t.html

    Quantum Information/Entanglement In DNA – Elisabeth Rieper – short video
    http://www.metacafe.com/watch/5936605/

    Quantum entanglement between the electron clouds of nucleic acids in DNA – Elisabeth Rieper, Janet Anders and Vlatko Vedral – February 2011
    http://arxiv.org/PS_cache/arxi.....4053v2.pdf

    etc., etc.
    paulmc, you simply have no basis in reality for your belief that undirected random mutations are not ‘slightly’ harmful!

  19. I’ve already cited the mutation database that lists over 100,000 different disease causing mutations that are identified to date in Homo-sapiens, where you just dogmatically claim that there are no slightly deleterious effects to mutations.

    Yeah. The existence of genetic diseases does not prove that all mutations are slightly deleterious.

    If you read what I wrote above, I am well aware that there are slightly deleterious mutations, and that inefficient selection does result in some accumulating and can result in species extinction. However, the *majority* of mutations that occur are neutral. We know this because we exist. If there was an unbearable mutational load we would be extinct.
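    The load reasoning here can be made quantitative with the classical Haldane-Muller result: at mutation-selection balance the equilibrium load depends only on the deleterious mutation rate U per genome per generation, L = 1 - e^(-U), assuming multiplicative fitness effects. A minimal sketch with illustrative U values (not figures from either commenter):

```python
import math

def mutation_load(U):
    """Haldane-Muller equilibrium mutation load for a deleterious
    mutation rate U per genome per generation, assuming multiplicative
    fitness effects: L = 1 - exp(-U)."""
    return 1.0 - math.exp(-U)

for U in (0.1, 1.0, 60.0):
    # load = fraction by which mean fitness falls below the
    # mutation-free optimum at equilibrium
    print(f"U = {U}: load = {mutation_load(U):.4f}")
```

    The point of the calculation: if all ~60 new mutations per person were deleterious (U = 60), the load would be essentially 1 and the population could not persist; so it is the deleterious fraction of mutations, not the raw count, that the argument turns on.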

  20. paulmc, you state:

    However, the *majority* of mutations that occur are neutral. We know this because we exist. If there was an unbearable mutational load we would be extinct.

    That is called ‘begging the question’ paulmc.

  21. notes:

    A. L. Hughes’s New Non-Darwinian Mechanism of Adaption Was Discovered and Published in Detail by an ID Geneticist 25 Years Ago – Wolf-Ekkehard Lönnig – December 2011
    Excerpt: The original species had a greater genetic potential to adapt to all possible environments. In the course of time this broad capacity for adaptation has been steadily reduced in the respective habitats by the accumulation of slightly deleterious alleles (as well as total losses of genetic functions redundant for a habitat), with the exception, of course, of that part which was necessary for coping with a species’ particular environment….By mutative reduction of the genetic potential, modifications became “heritable”. — As strange as it may at first sound, however, this has nothing to do with the inheritance of acquired characteristics. For the characteristics were not acquired evolutionarily, but existed from the very beginning due to the greater adaptability. In many species only the genetic functions necessary for coping with the corresponding environment have been preserved from this adaptability potential. The “remainder” has been lost by mutations (accumulation of slightly disadvantageous alleles) — in the formation of secondary species.
    http://www.evolutionnews.org/2.....53881.html

    Evolutionists Are Losing Ground Badly: Both Pattern and Process Contradict the Aging Theory – Cornelius Hunter
    Excerpt: Contradictory patterns in biology include the abrupt appearance of so many forms and the diversity explosions followed by a winnowing of diversity in the fossil record. It looks more like the inverse of an evolutionary tree with bursts of new species which then die off over time.
    http://darwins-god.blogspot.co.....badly.html

    Alvin Plantinga: Divine Action – video
    http://www.youtube.com/watch?v=N5DPneR-Rtc

    Does Science Show That Miracles Can’t Happen? (Alvin Plantinga) – video
    http://www.youtube.com/watch?v=gcvSSQGYIu8

    Predictions of Materialism compared to Predictions of Theism within the scientific method:
    http://docs.google.com/Doc?doc....._5fwz42dg9

    Music and Verse:

    Creation Calls — are you listening? Music by Brian Doerksen
    http://www.youtube.com/watch?v=LwGvfdtI2c0

    Isaiah 6:3
    And they were calling to one another: “Holy, holy, holy is the LORD Almighty; the whole earth is full of his glory.”

  22. bornagain77,

    you resemble an intellectual prizefighter/matador: jabs sharp, footing like a butterfly, all the while guiding the Darwin bull toward the final estocada.

    sergio

  23. Sure. Begging the question. To clarify, you think that we sustain 100-odd slightly deleterious mutations in each individual, because you assume each nucleotide is functional, something you imagine is possible because you don’t believe in evolutionary timescales. Perhaps you can clarify how a point mutation in a broken transposon affects fitness? How about a deletion in the middle of an intron? Compensatory back mutations?

    p.s. don’t try and cite Austin Hughes to support your case – that looks silly. The guy is a neutralist.

  24. If you read what I wrote above, I am well aware that there are slightly deleterious mutations, and that inefficient selection does result in some accumulating and can result in species extinction. However, the *majority* of mutations that occur are neutral. We know this because we exist. If there was an unbearable mutational load we would be extinct.

    In his book Sanford argues that mutational load means that YECs are right, so you are making assumptions that aren’t accepted by everyone here.

  25. paulmc you think:

    you think that we sustain 100-odd slightly deleterious mutations in each individual, because you assume each nucleotide is functional, something you imagine is possible…

    No paulmc, I KNOW ‘we sustain 60-odd slightly deleterious mutations in each individual’ because this has been directly measured, not because of any false imagination I may have had beforehand; i.e., I believe this because of what the evidence says, not in spite of what the evidence says, as neo-Darwinists always seem to do with their unfounded a priori belief in huge amounts of junk DNA:

    We Are All Mutants: First Direct Whole-Genome Measure of Human Mutation Predicts 60 New Mutations in Each of Us – June 2011
    http://www.sciencedaily.com/re.....012758.htm
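    The ~60 figure itself is simple arithmetic: the measured per-base germline mutation rate times the diploid genome size (the rate and genome size below are approximate textbook values). The calculation alone says nothing about what fraction of those mutations have fitness effects, which is the point actually in dispute here:

```python
mu = 1.0e-8                 # approx. human germline mutation rate
                            # per base per generation (textbook value)
diploid_genome = 2 * 3.2e9  # two copies of a ~3.2 Gb haploid genome

expected_new_mutations = mu * diploid_genome
print(round(expected_new_mutations))  # ~64 de novo mutations per individual
```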

    Sanford’s pro-ID thesis supported by PNAS paper, read it and weep, literally – September 2010
    Excerpt: Unfortunately, it has become increasingly clear that most of the mutation load is associated with mutations with very small effects distributed at unpredictable locations over the entire genome, rendering the prospects for long-term management of the human gene pool by genetic counseling highly unlikely for all but perhaps a few hundred key loci underlying debilitating monogenic genetic disorders (such as those focused on in the present study).
    http://www.uncommondescent.com.....literally/

    Further notes:

    The Frailty of the Darwinian Hypothesis
    “The net effect of genetic drift in such (vertebrate) populations is “to encourage the fixation of mildly deleterious mutations and discourage the promotion of beneficial mutations,”
    http://www.evolutionnews.org/2......html#more

    High Frequency of Cryptic Deleterious Mutations in Caenorhabditis elegans ( Esther K. Davies, Andrew D. Peters, Peter D. Keightley)
    “In fitness assays, only about 4 percent of the deleterious mutations fixed in each line were detectable. The remaining 96 percent, though cryptic, are significant for mutation load…the presence of a large class of mildly deleterious mutations can never be ruled out.”
    http://www.sciencemag.org/cgi/...../5434/1748

    Why are we still alive? – LAURENCE LOEWE – Institute of Evolutionary Biology, School of Biological Sciences, University of Edinburgh, – 2006
    Excerpt: In the last few years evolution@home has accumulated over 100 years of computing time in its quest for a better understanding of the consequences of mutations that are slightly harmful and therefore might not be removed from populations by natural selection… Results show that this may be less than 20 million years, resulting in a genomic decay paradox…
    http://www.evolutionary-resear.....till-alive

    As well, the slow accumulation of ‘slightly detrimental mutations’ in humans, that is ‘slightly detrimental mutations’ which are far below the power of natural selection to remove from our genomes, is revealed by this following clear example:

    “When first cousins marry, their children have a reduction of life expectancy of nearly 10 years. Why is this? It is because inbreeding exposes the genetic mistakes within the genome (slightly detrimental recessive mutations) that have not yet had time to “come to the surface”. Inbreeding is like a sneak preview, or foreshadowing, of where we are going to be genetically as a whole as a species in the future. The reduced life expectancy of inbred children reflects the overall aging of the genome that has accumulated thus far, and reveals the hidden reservoir of genetic damage that has been accumulating in our genomes.”
    Sanford; Genetic Entropy; page 147
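    The quantitative core of the first-cousin effect Sanford describes is the inbreeding coefficient F = 1/16: the probability that a child of first cousins carries two copies of an allele identical by descent, which is what exposes recessive deleterious mutations. A sketch of Wright’s standard path-counting formula (the pedigree encoding is illustrative):

```python
def inbreeding_coefficient(paths):
    """Wright's path-counting formula for non-inbred common ancestors:
    each path is (n1, n2), the generations from each parent up to one
    common ancestor; F = sum over paths of (1/2)**(n1 + n2 + 1)."""
    return sum(0.5 ** (n1 + n2 + 1) for n1, n2 in paths)

# Child of first cousins: two common ancestors (the shared grandparents),
# each 2 generations above each parent, giving two paths of (2, 2).
F = inbreeding_coefficient([(2, 2), (2, 2)])
print(F)  # 0.0625 = 1/16
```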

    Children of incest – Journal of Pediatrics
    Abstract: Twenty-nine children of brother-sister or father-daughter matings were studied. Twenty-one were ascertained because of the history of incest, eight because of signs or symptoms in the child. In the first group of 21 children, 12 had abnormalities, which were severe in nine (43%). In one of these the disorder was autosomal recessive. All eight of the group referred with signs or symptoms had abnormalities, three from recessive disorders. The high empiric risk for severe problems in the children of such close consanguineous matings should be borne in mind, as most of these infants are relinquished for adoption.
    http://www.jpeds.com/article/S.....8/abstract

    Inbreeding is also a very big problem that must be carefully guarded against in animal husbandry when selecting for desired inherent traits, and inbreeding is also observed to be a major problem for some natural populations:

    Inbreeding – Pros and cons
    Excerpt: The ultimate result of continued inbreeding is terminal lack of vigor and probable extinction as the gene pool contracts, fertility decreases, abnormalities increase and mortality rates rise.
    http://www.dogbreedinfo.com/inbreeding.htm

  26. BA77,

    Those studies don’t show that the ~60 mutations are deleterious. The Lynch paper actually describes studies on the likely mutation spectrum. But note that paper is entirely about the potential results of relaxed selection in modern societies…

  27. BA77:

    No paulmc, I KNOW ‘we sustain 60-odd slightly deleterious mutations in each individual’ because this has been directly measured not because of any false imagination I may have had beforehand

    You know nothing of the sort. Obviously the mutations occur, but you know nothing of their fitness consequences.
    Again:

    Perhaps you can clarify how a point mutation in a broken transposon affects fitness? How about a deletion in the middle of an intron? Compensatory back mutations?

  28. paulmc you falsely believe:

    because you don’t believe in evolutionary timescales

    No, I believe in an approximately 14-billion-year-old universe. Moreover, despite what evolutionists may imagine to be true with their magic wand of long periods of time, the fact is that the passing of ‘temporal’ time is fundamentally connected to the second law of thermodynamics; i.e., if you saw a film of a cup of coffee spontaneously reassembling itself from thousands of shattered pieces on a floor, you would automatically believe that the film was running backwards, rather than ever believing that ‘temporal’ time had reversed and that the second law, entropy, had been broken (although nothing strictly within the second law prevents cups from reassembling themselves).

    Notes:

    The fine-tuning of one particular parameter, the ‘original phase-space volume’ of the universe, required such precision that the “Creator’s aim must have been to an accuracy of 1 part in 10^10^123”. This number is gargantuan: if it were written out in its entirety, 1 with 10^123 zeros to the right, it could not be written on a piece of paper the size of the entire visible universe, even if a digit were written on each sub-atomic particle in the universe, since the universe only has 10^80 sub-atomic particles in it.

    Roger Penrose discusses initial entropy of the universe. – video
    http://www.youtube.com/watch?v=WhGdVMBk6Zo

    The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose
    Excerpt: “The time-asymmetry is fundamentally connected with the Second Law of Thermodynamics: indeed, the extraordinarily special nature of the Big Bang (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the “source” of the Second Law (Entropy).”
    http://www.pul.it/irafs/CD%20I.....enrose.pdf

    How special was the big bang? – Roger Penrose
    Excerpt: This now tells us how precise the Creator’s aim must have been: namely to an accuracy of one part in 10^10^123.
    (from the Emperor’s New Mind, Penrose, pp 339-345 – 1989)

    Roger Penrose – How Special Was The Big Bang?
    “But why was the big bang so precisely organized, whereas the big crunch (or the singularities in black holes) would be expected to be totally chaotic? It would appear that this question can be phrased in terms of the behaviour of the WEYL part of the space-time curvature at space-time singularities. What we appear to find is that there is a constraint WEYL = 0 (or something very like this) at initial space-time singularities-but not at final singularities-and this seems to be what confines the Creator’s choice to this very tiny region of phase space.”

    Further notes:

    Evolution is a Fact, Just Like Gravity is a Fact! UhOh!
    Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged.
    http://www.uncommondescent.com.....fact-uhoh/

    Entropy of the Universe – Hugh Ross – May 2010
    Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe’s entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated.
    http://www.reasons.org/entropy-universe

    “Gain in entropy always means loss of information, and nothing more.”
    Gilbert Newton Lewis – Eminent Chemist

    Blackholes – The neo-Darwinian ‘god of entropic randomness’ which can create all things (at least according to them)
    https://docs.google.com/document/d/1fxhJEGNeEQ_sn4ngQWmeBt1YuyOs8AQcUrzBRo7wISw/edit?hl=en_US

    somewhat related notes:

    Atheistic neo-Darwinists claim that, given enough time, the improbable becomes probable; i.e., evolution, no matter how improbable, becomes certain if you allow enough time. To counter such simplistic faith in the power of time to work miracles, here are a few notes to the contrary:

    William Lane Craig – If Human Evolution Did Occur It Was A Miracle – video
    http://www.youtube.com/watch?v=GUxm8dXLRpA

    In Barrow and Tipler’s book The Anthropic Cosmological Principle, they list ten steps necessary in the course of human evolution, each of which is so improbable that, if left to happen by chance alone, the sun would have ceased to be a main-sequence star and would have incinerated the earth before it occurred. They estimate that the odds of the evolution (by chance) of the human genome are somewhere between (4^-180)^110,000 and (4^-360)^110,000. Therefore, if evolution did occur, it literally would have been a miracle and evidence for the existence of God.
    William Lane Craig

    A review of The Edge of Evolution: The Search for the Limits of Darwinism
    Excerpt: The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have “invented” little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155).
    http://creation.com/review-mic.....-evolution

    Waiting Longer for Two Mutations – Michael J. Behe
    Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that ‘for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years’ (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless “using their model” gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model.
    http://www.discovery.org/a/9461
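    The waiting-time arithmetic quoted above can be checked directly; a minimal sketch of just the multiplication in the excerpt (no new data, only the numbers Behe states):

```python
# Behe's quoted figure: "100 million times 10 million years".
hundred_million = 10**8
ten_million_years = 10**7

wait_years = hundred_million * ten_million_years
print(wait_years)   # 10**15 years, i.e. one quadrillion, as the excerpt says

# Implied organisms per year if 10^20 organisms are needed over that span:
organisms_needed = 10**20
print(organisms_needed // wait_years)   # 100000 organisms per year
```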

    The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway – Ann K. Gauger and Douglas D. Axe – April 2011
    Excerpt: We infer from the mutants examined that successful functional conversion would in this case require seven or more nucleotide substitutions. But evolutionary innovations requiring that many changes would be extraordinarily rare, becoming probable only on timescales much longer than the age of life on earth.
    http://bio-complexity.org/ojs/.....O-C.2011.1

  29. When Theory and Experiment Collide — April 16th, 2011 by Douglas Axe
    Excerpt: Based on our experimental observations and on calculations we made using a published population model [3], we estimated that Darwin’s mechanism would need a truly staggering amount of time—a trillion trillion years or more—to accomplish the seemingly subtle change in enzyme function that we studied.
    http://biologicinstitute.org/2.....t-collide/

    Book Review – Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009.
    Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren’t chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome.
    So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it’s a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.
    http://www.fourmilab.ch/docume.....k_726.html
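    The review’s “about 500 bits” figure can be reproduced as an order-of-magnitude estimate. A sketch using the usual round numbers (the particle count, universe age, and Planck time below are my assumed inputs, not figures taken from the review):

```python
import math

particles = 1e80        # rough count of elementary particles in the observable universe
age_seconds = 4.3e17    # ~13.7 billion years expressed in seconds
planck_time = 5.4e-44   # seconds

# total "random search" events if every particle acts once per Planck time
trials = particles * (age_seconds / planck_time)
bits = math.log2(trials)
print(round(bits))      # ~468 bits, of the order of the ~500 bits quoted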

    Stephen Meyer – Functional Proteins And Information For Body Plans – video
    http://www.metacafe.com/watch/4050681

    Dr. Stephen Meyer comments at the end of the preceding video,,,

    ‘Now one more problem as far as the generation of information. It turns out that you don’t only need information to build genes and proteins, it turns out to build Body-Plans you need higher levels of information; Higher order assembly instructions. DNA codes for the building of proteins, but proteins must be arranged into distinctive circuitry to form distinctive cell types. Cell types have to be arranged into tissues. Tissues have to be arranged into organs. Organs and tissues must be specifically arranged to generate whole new Body-Plans, distinctive arrangements of those body parts. We now know that DNA alone is not responsible for those higher orders of organization. DNA codes for proteins, but by itself it does not insure that proteins, cell types, tissues, organs, will all be arranged in the body. And what that means is that the Body-Plan morphogenesis, as it is called, depends upon information that is not encoded on DNA. Which means you can mutate DNA indefinitely. 80 million years, 100 million years, til the cows come home. It doesn’t matter, because in the best case you are just going to find a new protein some place out there in that vast combinatorial sequence space. You are not, by mutating DNA alone, going to generate higher order structures that are necessary to building a body plan. So what we can conclude from that is that the neo-Darwinian mechanism is grossly inadequate to explain the origin of information necessary to build new genes and proteins, and it is also grossly inadequate to explain the origination of novel biological form.’ – Stephen Meyer – (excerpt taken from Meyer/Sternberg vs. Shermer/Prothero debate – 2009)

    Dr. Hugh Ross – Origin Of Life Paradox – video
    http://www.metacafe.com/watch/4012696

    Archaean Microfossils and the Implications for Intelligent Design – August 2011
    Excerpt: This dramatically limits the amount of time, and thus the probabilistic resources, available to those who wish to invoke purely unguided and purposeless material processes to explain the origin of life.
    http://www.evolutionnews.org/2.....49921.html

    Without enzyme, biological reaction essential to life takes 2.3 billion years: UNC study:
    In 1995, Wolfenden reported that without a particular enzyme, a biological transformation he deemed “absolutely essential” in creating the building blocks of DNA and RNA would take 78 million years. “Now we’ve found a reaction that – again, in the absence of an enzyme – is almost 30 times slower than that,” Wolfenden said. “Its half-life – the time it takes for half the substance to be consumed – is 2.3 billion years, about half the age of the Earth. Enzymes can make that reaction happen in milliseconds.”
    http://www.med.unc.edu/www/new.....-unc-study

    “Phosphatase speeds up reactions vital for cell signalling by 10^21 times. Allows essential reactions to take place in a hundredth of a second; without it, it would take a trillion years!” Jonathan Sarfati
    http://www.pnas.org/content/100/10/5607.abstract
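    The two figures in the Sarfati quote are mutually consistent; a quick sketch of the arithmetic (taking “a hundredth of a second” and the 10^21-fold enhancement at face value):

```python
catalysed_seconds = 0.01       # "a hundredth of a second" with the enzyme
rate_enhancement = 1e21        # quoted phosphatase speed-up

uncatalysed_seconds = catalysed_seconds * rate_enhancement
seconds_per_year = 3.156e7
years = uncatalysed_seconds / seconds_per_year
print(f"{years:.1e}")          # ~3e11 years, of the order of the quoted trillion
```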

    Not only do we not have enough time for Darwinian evolution; as massive as the universe is, we don’t even have a big enough universe for it:

    Abiogenic Origin of Life: A Theory in Crisis – Arthur V. Chadwick, Ph.D.
    Excerpt: The synthesis of proteins and nucleic acids from small molecule precursors represents one of the most difficult challenges to the model of prebiological evolution. There are many different problems confronted by any proposal. Polymerization is a reaction in which water is a product. Thus it will only be favored in the absence of water. The presence of precursors in an ocean of water favors depolymerization of any molecules that might be formed. Careful experiments done in an aqueous solution with very high concentrations of amino acids demonstrate the impossibility of significant polymerization in this environment. A thermodynamic analysis of a mixture of protein and amino acids in an ocean containing a 1 molar solution of each amino acid (100,000,000 times higher concentration than we inferred to be present in the prebiological ocean) indicates the concentration of a protein containing just 100 peptide bonds (101 amino acids) at equilibrium would be 10^-338 molar. Just to make this number meaningful, our universe may have a volume somewhere in the neighborhood of 10^85 liters. At 10^-338 molar, we would need an ocean with a volume equal to 10^229 universes (100, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000) just to find a single molecule of any protein with 100 peptide bonds. So we must look elsewhere for a mechanism to produce polymers. It will not happen in the ocean.
    http://origins.swau.edu/papers.....fault.html

    Music:

    The Rolling Stones – Time Is On My Side
    http://www.youtube.com/watch?v=rIE2GAqnFGw

  30. For a third time:

    Perhaps you can clarify how a point mutation in a broken transposon affects fitness? How about a deletion in the middle of an intron? Compensatory back mutations?

  31. paulmc, you mention several mutation scenarios for DNA, applied to ‘presupposed’ completely functionless sequences, yet it is interesting to note that ‘simple’ DNA sequences are in many respects now found to be the ‘bottom rung of the ladder’ in the information hierarchy of the cell. A ‘scratch the surface’ overview of the information hierarchy is here:

    Multidimensional Genome – Dr. Robert Carter – video
    http://www.metacafe.com/watch/8905048/

    The Extreme Complexity Of Genes – Dr. Raymond G. Bohlin
    http://www.metacafe.com/watch/8593991/

    But perhaps of more interest, at least for me, is that energy itself is now found to be communicating information in the cell

    Cellular Communication through Light
    Excerpt: Information transfer is a life principle. On a cellular level we generally assume that molecules are carriers of information, yet there is evidence for non-molecular information transfer due to endogenous coherent light. This light is ultra-weak, is emitted by many organisms, including humans and is conventionally described as biophoton emission.
    http://www.plosone.org/article.....ne.0005086

    An Electric Face: A Rendering Worth a Thousand Falsifications – September 2011
    Excerpt: The video suggests that bioelectric signals presage the morphological development of the face. It also, in an instant, gives a peek at the phenomenal processes at work in biology. As the lead researcher said, “It’s a jaw dropper.”
    http://darwins-god.blogspot.co.....usand.html

    The (Electric) Face of a Frog – video
    http://www.youtube.com/watch?v=ndFe5CaDTlI

    Not in the Genes: Embryonic Electric Fields – Jonathan Wells – December 2011
    Excerpt: although the molecular components of individual sodium-potassium channels may be encoded in DNA sequences, the three-dimensional arrangement of those channels — which determines the form of the endogenous electric field — constitutes an independent source of information in the developing embryo.
    http://www.evolutionnews.org/2.....54071.html

    Biophotons – The Light In Our Cells – Marco Bischof – March 2005
    Excerpt page 2: The Coherence of Biophotons: ,,, Biophotons consist of light with a high degree of order, in other words, biological laser light. Such light is very quiet and shows an extremely stable intensity, without the fluctuations normally observed in light. Because of their stable field strength, its waves can superimpose, and by virtue of this, interference effects become possible that do not occur in ordinary light. Because of the high degree of order, the biological laser light is able to generate and keep order and to transmit information in the organism.
    http://www.international-light.....hotons.pdf

  32. The reason it is so interesting for me to learn that energy itself, and not just molecules, is communicating massive amounts of information in the cell is that energy, per Einstein, has shown itself to be of a ‘higher dimensional’ nature than mass:

    That is, time as we understand it would come to a complete stop at the speed of light. To grasp the whole ‘time coming to a complete stop at the speed of light’ concept a little more easily, imagine moving away from the face of a clock at the speed of light. Would not the hands on the clock stay stationary as you moved away from the face of the clock at the speed of light? Moving away from the face of a clock at the speed of light happens to be the same ‘thought experiment’ that gave Einstein his breakthrough insight into e=mc^2.

    Albert Einstein – Special Relativity – Insight Into Eternity – ‘thought experiment’ video
    http://www.metacafe.com/w/6545941/
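    The ‘time stops at light speed’ intuition above corresponds to the standard Lorentz time-dilation factor diverging as v approaches c; a minimal sketch (standard special relativity, not specific to the linked video):

```python
import math

def gamma(v_over_c):
    """Lorentz factor: elapsed traveller time = observer time / gamma."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

for beta in (0.5, 0.9, 0.99, 0.9999):
    print(beta, round(gamma(beta), 2))
# gamma grows without bound as v/c -> 1: a moving clock's ticks slow toward zero
```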

    As well, please note the optical effect, at the 3:22 minute mark of the following video, in which the 3-Dimensional world ‘folds and collapses’ into a tunnel shape around the direction of travel as a ‘hypothetical’ observer moves toward the ‘higher dimension’ of the speed of light: (Of note: This following video was made by two Australian University Physics Professors with a supercomputer.)

    Approaching The Speed Of Light – Optical Effects – video
    http://www.metacafe.com/watch/5733303/

    Here is the interactive website, with a link to the relativistic math at the bottom of the page, related to the preceding video:

    Seeing Relativity
    http://www.anu.edu.au/Physics/Searle/

    Moreover, there is an even higher quality of information found in the cell than even the ‘higher dimensionality’ of energy:

    Quantum Information/Entanglement In DNA – Elisabeth Rieper – short video
    http://www.metacafe.com/watch/5936605/

    Light and Quantum Entanglement Both Reflect Some Characteristics Of God – video
    http://www.metacafe.com/watch/4102182

    As to how this relates to the hierarchy of information in the cell:

    Materialism had postulated for centuries that everything reduced to, or emerged from material atoms, yet the correct structure of reality is now found by science to be as follows:

    1. material particles (mass) normally reduce to energy (e=mc^2)
    2. energy and mass both reduce to information (quantum teleportation)
    3. information reduces to consciousness (geometric centrality of conscious observation in universe dictates that consciousness must precede quantum wave collapse to its single bit state)

  33. paulmc:

    Nope, there is no evidence that the majority of mutations are slightly deleterious. A point mutation to most intron and intergenic sequences will not have fitness effects. That you’re trying to link this to gene interactions shows you don’t understand the evidence.

    Well, paul, there’s no evidence that any amount of genetic change can transform a knuckle-walker/quadruped into an upright biped.

    So the question would be why do you ignore that? THAT says that YOU do NOT understand the evidence.

    Where are all the transforming mutations, paul? The safe money says they do not exist and never have.

  34. Why doesn’t Gauger run some simulations of a population expanding from Ne = 2 to Ne = 10,000 in ~10,000 years and check what level of polymorphism you’d expect in the resulting population, given the observed mutation rate?

    God that is retarded. If Gauger is right then the observed mutation rates do not apply as what we observe now is not what was originally designed, duh.

    But yes I am sure someone could write a genetic algorithm that could easily produce the diversity observed in a few thousand generations.
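    For what it is worth, the simulation proposed in comment 34 is straightforward to sketch. Below is a minimal Wright-Fisher model with infinite-alleles mutation; the mutation rate, locus count, growth schedule, and scaled-down population sizes are illustrative assumptions, not values from this thread (scale n_end and generations up to approximate the Ne = 2 to 10,000 scenario):

```python
import random

def wright_fisher(n_start=2, n_end=500, generations=50,
                  mu=1e-4, loci=30, seed=1):
    """Track polymorphism at independent loci while a diploid
    population grows from n_start to n_end individuals."""
    random.seed(seed)
    # one pool of 2N gene copies per locus; start monomorphic (the bottleneck)
    pools = [[0] * (2 * n_start) for _ in range(loci)]
    next_allele = 1
    for g in range(generations):
        n = min(n_end, n_start * 2 ** g)   # doubling growth, capped at n_end
        for i, pool in enumerate(pools):
            new_pool = []
            for _ in range(2 * n):
                allele = random.choice(pool)   # drift: resample parent copies
                if random.random() < mu:       # infinite-alleles mutation
                    allele = next_allele
                    next_allele += 1
                new_pool.append(allele)
            pools[i] = new_pool
    # fraction of loci still segregating more than one allele
    return sum(len(set(pool)) > 1 for pool in pools) / loci

print(wright_fisher())
```

    The result is the fraction of simulated loci that end up polymorphic; raising mu or the number of generations raises it, which is exactly the trade-off the comment asks about.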

  35. A Gene:

    Yes, of course population genetic models are simplifications, so yes they might be wrong. But if Gauger’s attempts at criticism are to carry any weight, she should do the work to show that what she has identified as problems actually make a difference.

    Yet no one has ever done any work to demonstrate that mutations can accumulate in such a way as to transform a knuckle-walker/quadruped into an upright biped. As a matter of fact, the “theory” of evolution is devoid of such work.

  36. No Matter What Type Of Selection, Mutations Deteriorate Genetic Information – article and video
    http://www.uncommondescent.com.....ns-weasel/

  37. Joe, if Gauger wants to, for no reason other than protecting her pet hypothesis, invoke large changes in mutation rate over time, she’s welcome to. Run the simulation with different values of the mutation rate µ – see how big it has to be to get to observed levels of polymorphism.

    She won’t, of course…

    BA,

    Some mutations must fall in, say, broken transposons. How are they likely to “deteriorate genetic information”? In fact, if a mutation A -> G deteriorates genetic information then, obviously, the mutation G -> A restores it, right? So some mutations create genetic information?

    random mutations A -> G (or whatever ‘random’ change) ALWAYS deteriorate information, whereas DNA repair mechanisms that correct those ‘random’ changes, compensatory (calculated) changes to alleles, and/or epigenetic regulatory actions on DNA sequences (Shapiro), etc., are NOT truly random mutations as is required per your theoretical basis in materialistic neo-Darwinism!

    Notes on the fruitless search for the all so elusive random mutation that actually would be beneficial as to building complex functional information:

    Mutations : when benefits level off – June 2011 – (Lenski’s e-coli after 50,000 generations)
    Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
    http://www2.cnrs.fr/en/1867.htm?theme1=7

    “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
    Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.(that is a net ‘fitness gain’ within a ‘stressed’ environment i.e. remove the stress from the environment and the parent strain is always more ‘fit’)
    http://behe.uncommondescent.co.....evolution/

    The GS (genetic selection) Principle – David L. Abel – 2009
    Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.”
    http://www.bioscience.org/2009.....lltext.htm

    Experimental Evolution in Fruit Flies – October 2010
    Excerpt: “This research really upends the dominant paradigm about how species evolve.”,,, as stated in regard to the 35-year experimental failure to fix a single beneficial mutation within fruit flies.
    http://www.arn.org/blogs/index.....ruit_flies

    “I have seen estimates of the incidence of the ratio of deleterious-to-beneficial mutations which range from one in one thousand up to one in one million. The best estimates seem to be one in one million (Gerrish and Lenski, 1998). The actual rate of beneficial mutations is so extremely low as to thwart any actual measurement (Bataillon, 2000, Elena et al, 1998). Therefore, I cannot …accurately represent how rare such beneficial mutations really are.” (J.C. Sanford; Genetic Entropy page 24) – 2005

    Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? (Thomas Bataillon)
    Abstract (excerpt): It is argued that, although most if not all mutations detected in mutation accumulation experiments are deleterious, the question of the rate of favourable mutations (and their effects) is still a matter for debate.
    http://www.nature.com/hdy/jour.....7270a.html

    “But in all the reading I’ve done in the life-sciences literature, I’ve never found a mutation that added information… All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not increase it.”
    Lee Spetner – Ph.D. Physics – MIT – Not By Chance

    John Sanford writes in “Genetic Entropy & the Mystery of the Genome”: “Bergman (2004) has studied the topic of beneficial mutations. Among other things, he did a simple literature search via Biological Abstracts and Medline. He found 453,732 ‘mutation’ hits, but among these only 186 mentioned the word ‘beneficial’ (about 4 in 10,000). When those 186 references were reviewed, almost all the presumed ‘beneficial mutations’ were only beneficial in a very narrow sense–but each mutation consistently involved loss of function changes–hence loss of information. While it is almost universally accepted that beneficial (information creating) mutations must occur, this belief seems to be based upon uncritical acceptance of RM/NS, rather than upon any actual evidence.” (pp. 26-27)
    http://www.trueorigin.org/evomyth01.asp

    “The neo-Darwinians would like us to believe that large evolutionary changes can result from a series of small events if there are enough of them. But if these events all lose information they can’t be the steps in the kind of evolution the neo-Darwin theory is supposed to explain, no matter how many mutations there are. Whoever thinks macroevolution can be made by mutations that lose information is like the merchant who lost a little money on every sale but thought he could make it up on volume.”
    Lee Spetner (Ph.D. Physics – MIT – Not By Chance)

  39.

    Joe,

    “genetic algorithm” = ? Something from computer science, yes?

    sergio

  40.

    bornagain77,

    how do you stay focused on the different statements of others while responding to one person? I get distracted too; I have run out of pen ink twice taking notes!

    sergio

  41. BA,

    Are you actually saying all random mutations destroy information? And that, say, once an A -> G mutation has occurred, the back mutation G -> A either (a) couldn’t happen by random mutation or (b) would also decrease information?

    In either case, I should like an explanation (ideally with blockquotes…) as to why you think this.

  42. …that should read “without blockquotes” – just an explanation for why you believe this to be true, thanks.

  43. Because, as pointed out yesterday, completely random changes to the genetic text written in the DNA are, for one thing, random changes to the lowest level of information in the information hierarchy of the cell:

    Multidimensional Genome – Dr. Robert Carter – video
    http://www.metacafe.com/watch/8905048/

    The Extreme Complexity Of Genes – Dr. Raymond G. Bohlin
    http://www.metacafe.com/watch/8593991/

    On top of that amazing fact, as Dr. Bohlin pointed out in his talk, unlike the ‘one dimensional’ computer code written in our computers, in which we certainly wouldn’t expect unguided random changes to confer any benefit, we are dealing with ‘one dimensional’ coding that is ‘overlapping’ which makes the problem much more severe for the ‘bottom up’ neo-Darwinists who are dogmatically committed to their materialistic worldview:

    Astonishing DNA complexity update
    Excerpt: (ENCODE revealed) The untranslated regions (now called UTRs, rather than ‘junk’) are far more important than the translated regions (the genes), as measured by the number of DNA bases appearing in RNA transcripts. Genic regions are transcribed on average in five different overlapping and interleaved ways, while UTRs are transcribed on average in seven different overlapping and interleaved ways. Since there are about 33 times as many bases in UTRs than in genic regions, that makes the ‘junk’ about 50 times more active than the genes.
    http://creation.com/astonishin.....ity-update

    ‘It’s becoming extremely problematic to explain how the genome could arise and how these multiple levels of overlapping information could arise, since our best computer programmers can’t even conceive of overlapping codes. The genome dwarfs all of the computer information technology that man has developed. So I think that it is very problematic to imagine how you can achieve that through random changes in a code.,,, More and more it looks like top down design and not just bottom up chance discovery of making complex systems.’ – Dr. John Sanford – quote taken from this talk:
    http://www.youtube.com/watch?v.....ature=plcp

    Dual-Coding Genes in Mammalian Genomes – 2007
    Abstract: Coding of multiple proteins by overlapping reading frames is not a feature one would associate with eukaryotic genes. Indeed, codependency between codons of overlapping protein-coding regions imposes a unique set of evolutionary constraints, making it a costly arrangement. Yet in cases of tightly coexpressed interacting proteins, dual coding may be advantageous. Here we show that although dual coding is nearly impossible by chance, a number of human transcripts contain overlapping coding regions. Using newly developed statistical techniques, we identified 40 candidate genes with evolutionarily conserved overlapping coding regions. Because our approach is conservative, we expect mammals to possess more dual-coding genes. Our results emphasize that the skepticism surrounding eukaryotic dual coding is unwarranted: rather than being artifacts, overlapping reading frames are often hallmarks of fascinating biology.
    http://www.ploscompbiol.org/ar.....bi.0030091
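    The constraint described in the abstract can be illustrated with a toy example: in an overlap region, a single point mutation alters codons in two reading frames at once (the sequence below is made up for illustration, not taken from the paper):

```python
def codons(seq, frame):
    """Split a DNA string into codons starting at the given frame offset."""
    return [seq[i:i+3] for i in range(frame, len(seq) - 2, 3)]

seq = "ATGGCACGTTGA"
print(codons(seq, 0))   # frame 0: ['ATG', 'GCA', 'CGT', 'TGA']
print(codons(seq, 1))   # frame 1: ['TGG', 'CAC', 'GTT']

# one point mutation (position 4: C -> T) hits a codon in each frame
mut = seq[:4] + "T" + seq[5:]
changed_f0 = sum(a != b for a, b in zip(codons(seq, 0), codons(mut, 0)))
changed_f1 = sum(a != b for a, b in zip(codons(seq, 1), codons(mut, 1)))
print(changed_f0, changed_f1)   # 1 1 -- both overlapping frames are affected
```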

    etc.. etc..

    John Sanford, a leading expert in Genetics, co-inventor of ‘the gene-gun’, comments on some of the stunning poly-functional complexity found in the genome here:

    “There is abundant evidence that most DNA sequences are poly-functional, and therefore are poly-constrained. This fact has been extensively demonstrated by Trifonov (1989). For example, most human coding sequences encode for two different RNAs, read in opposite directions i.e. Both DNA strands are transcribed ( Yelin et al., 2003). Some sequences encode for different proteins depending on where translation is initiated and where the reading frame begins (i.e. read-through proteins). Some sequences encode for different proteins based upon alternate mRNA splicing. Some sequences serve simultaneously for protein-encoding and also serve as internal transcriptional promoters. Some sequences encode for both a protein coding, and a protein-binding region. Alu elements and origins-of-replication can be found within functional promoters and within exons. Basically all DNA sequences are constrained by isochore requirements (regional GC content), “word” content (species-specific profiles of di-, tri-, and tetra-nucleotide frequencies), and nucleosome binding sites (i.e. All DNA must condense). Selective condensation is clearly implicated in gene regulation, and selective nucleosome binding is controlled by specific DNA sequence patterns – which must permeate the entire genome. Lastly, probably all sequences do what they do, even as they also affect general spacing and DNA-folding/architecture – which is clearly sequence dependent. To explain the incredible amount of information which must somehow be packed into the genome (given that extreme complexity of life), we really have to assume that there are even higher levels of organization and information encrypted within the genome. For example, there is another whole level of organization at the epigenetic level (Gibbs 2003). There also appears to be extensive sequence dependent three-dimensional organization within chromosomes and the whole nucleus (Manuelides, 1990; Gardiner, 1995; Flam, 1994). 
    Trifonov (1989) has shown that probably all DNA sequences in the genome encrypt multiple “codes” (up to 12 codes).
    Dr. John Sanford; Genetic Entropy 2005

    The reason why this introduces ‘polyconstraint’ is that if we were to actually get a beneficial effect from a ‘random mutation’ in an overlapping ‘polyfunctional’ genome then we would actually be encountering something akin to this illustration found on page 141 of the book Genetic Entropy by Dr. Sanford.

    S A T O R
    A R E P O
    T E N E T
    O P E R A
    R O T A S

    https://docs.google.com/document/d/1xkW4C7uOE8s98tNx2mzMKmALeV8-348FZNnZmSWY5H8/edit

    Which is translated:
    THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS.

    This ancient puzzle, which dates back to at least 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter we may get a new meaning for a single reading read any one way, as in Dawkins’ weasel program, but we will consistently destroy the other three readings of the message with the new mutation (save for the center spot).
    This is what is meant when Dr. Sanford says a poly-functional genome is poly-constrained to any random mutations. The evidence clearly indicates ‘top-down’ design. And this severe polyfunctionality of the genome is certainly not a situation in which we should expect random changes/mutations to confer benefit, which is indeed what is found after exhaustive search:
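    The four-way symmetry described above, and the way a single ‘mutation’ destroys it, can be checked mechanically; a small sketch using the square as quoted:

```python
SQUARE = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

def readings(rows):
    """The four ways of reading the square, each as one 25-letter string."""
    flat = "".join(rows)                                   # rows, left to right
    cols = "".join("".join(r[c] for r in rows) for c in range(5))  # columns
    return [flat, flat[::-1], cols, cols[::-1]]            # plus both reversals

print(len(set(readings(SQUARE))))        # 1 -- all four readings agree

# "mutate" the top-left letter: the readings no longer all agree
mutant = ["XATOR"] + SQUARE[1:]
print(len(set(readings(mutant))))        # > 1 -- the symmetry is destroyed
```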

    Response to John Wise – October 2010
    Excerpt: But there are solid empirical grounds for arguing that changes in DNA alone cannot produce new organs or body plans. A technique called “saturation mutagenesis”1,2 has been used to produce every possible developmental mutation in fruit flies (Drosophila melanogaster),3,4,5 roundworms (Caenorhabditis elegans),6,7 and zebrafish (Danio rerio),8,9,10 and the same technique is now being applied to mice (Mus musculus).11,12 None of the evidence from these and numerous other studies of developmental mutations supports the neo-Darwinian dogma that DNA mutations can lead to new organs or body plans–because none of the observed developmental mutations benefit the organism.
    http://www.evolutionnews.org/2.....38811.html

    Neo-Darwinists, with their requirement of ‘bottom-up’ random mutations, are simply not in the right conceptual field to begin to understand this astonishing level of interwoven complexity:

    How we could create life: The key to existence will be found not in primordial sludge, but in the nanotechnology of the living cell – Paul Davies – 2002
    Excerpt: Instead, the living cell is best thought of as a supercomputer – an information processing and replicating system of astonishing complexity. DNA is not a special life-giving molecule, but a genetic databank that transmits its information using a mathematical code. Most of the workings of the cell are best described, not in terms of material stuff – hardware – but as information, or software. Trying to make life by mixing chemicals in a test tube is like soldering switches and wires in an attempt to produce Windows 98. It won’t work because it addresses the problem at the wrong conceptual level. – Paul Davies
    http://www.guardian.co.uk/educ.....ucation.uk

    Also of interest, besides overlapping coding, is that the integrated coding between the DNA, RNA, and proteins of the cell appears to be ingeniously programmed along the very stringent guidelines laid out by Landauer’s principle for ‘reversible computation’, in order to achieve amazing energy efficiency. The energy efficiency possible with reversible computation has been known about since Rolf Landauer laid out the principles for such programming decades ago, but, as far as I know, due to the extreme level of complexity involved in achieving such ingenious ‘reversible coding’, it has yet to be accomplished in any meaningful way in our own computer programs, even to this day:

    Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon – Charles H. Bennett
    Excerpt: Of course, in practice, almost all data processing is done on macroscopic apparatus, dissipating macroscopic amounts of energy far in excess of what would be required by Landauer’s principle. Nevertheless, some stages of biomolecular information processing, such as transcription of DNA to RNA, appear to be accomplished by chemical reactions that are reversible not only in principle but in practice.,,,,
    http://www.hep.princeton.edu/~.....501_03.pdf
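    For a sense of the energy scale Landauer’s principle sets, the bound itself is simple to compute: erasing one bit dissipates at least k_B · T · ln 2 joules. A minimal Python sketch, assuming body temperature (roughly 310 K):

```python
import math

# Landauer's principle: erasing one bit costs at least k_B * T * ln(2) joules.
k_B = 1.380649e-23                  # Boltzmann constant, J/K
T_body = 310.0                      # approximate human body temperature, K (assumed)

landauer_limit = k_B * T_body * math.log(2)   # minimum joules per bit erased
```

    The result, roughly 3 × 10⁻²¹ joules per bit, is far below what conventional irreversible logic dissipates per switching operation.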

    Perhaps computer programmers should study this more closely so as to learn how to ‘design’ better programs? Oh wait, that is already being done, at least by one group funded by Bill Gates. Bill Gates, recognizing this far superior coding, has now funded research in this area:

    Welcome to CoSBi – (Computational and Systems Biology)
    Excerpt: Biological systems are the most parallel systems ever studied and we hope to use our better understanding of how living systems handle information to design new computational paradigms, programming languages and software development environments. The net result would be the design and implementation of better applications firmly grounded on new computational, massively parallel paradigms in many different areas.

  44. No. I asked a very simple question; can you please answer it without the spam?

  45. wd400, regardless of what you think of the ‘spam’ I linked, it is precisely the reason why I think ‘random’, bottom-up, unguided mutations are wholly inadequate for creating new complex functional information in the genome, and why I believe truly random mutations will ALWAYS tend to degrade the complex functional information that is already in the genome. Of course, you have every right to disagree and hold that the chance and necessity processes of nature are up to the task (we do still live in America, where that right is guaranteed). In that case, you can easily prove your point by generating functional information above that which is already present in the genome:

    Michael Behe on Falsifying Intelligent Design – video
    http://www.youtube.com/watch?v=N8jXXJN4o_A

    Three subsets of sequence complexity and their relevance to biopolymeric information – Abel, Trevors
    Excerpt: Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC).,,, Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,,

    Testable hypotheses about FSC

    What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses:

    Null hypothesis #1
    Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.

    Null hypothesis #2
    Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function.

    Null hypothesis #3
    Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function.

    Null hypothesis #4
    Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time.

    We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified.
    http://www.tbiomed.com/content/2/1/29

    Kirk Durston – Functional Information In Biopolymers – video
    http://www.youtube.com/watch?v=QMEjF9ZH0x8

  46. wd400:

    Joe, if Gauger wants to, for no reason other than protecting her pet hypothesis, invoke large changes in mutation rate over time she’s welcome to.

    Well heck, your position invokes magical mystery mutations for no other reason than to protect your worthless position.

    And perhaps she doesn’t need to adjust the mutation rates, just the mutational target, as we appear to get more than enough mutations per birth.

  47. Joe – what on earth are you talking about?

  48. Here is an excellent article, just up on ENV, that is related to the overarching topic at hand:

    “Complexity Brake” Defies Evolution – August 2012
    Excerpt: In a recent Perspective piece called “Modular Biological Complexity” in Science, Christof Koch (Allen Institute for Brain Science, Seattle; Division of Biology, Caltech) explained why we won’t be simulating brains on computers any time soon:

    “Although such predictions excite the imagination, they are not based on a sound assessment of the complexity of living systems. Such systems are characterized by large numbers of highly heterogeneous components, be they genes, proteins, or cells. These components interact causally in myriad ways across a very large spectrum of space-time, from nanometers to meters and from microseconds to years. A complete understanding of these systems demands that a large fraction of these interactions be experimentally or computationally probed. This is very difficult.”

    Physicists can use statistics to describe a homogeneous system like an ideal gas, because one can assume all the member particles interact the same. Not so with life. When describing heterogeneous systems each with a myriad of possible interactions, the number of discrete interactions grows faster than exponentially. Koch showed how Bell’s number (the number of ways a system can be partitioned) requires a comparable number of measurements to exhaustively describe a system. Even if human computational ability were to rise exponentially into the future (somewhat like Moore’s law for computers), there is no hope for describing the human “interactome” — the set of all interactions in life.

    This is bad news. Consider a neuronal synapse — the presynaptic terminal has an estimated 1000 distinct proteins. Fully analyzing their possible interactions would take about 2000 years. Or consider the task of fully characterizing the visual cortex of the mouse — about 2 million neurons. Under the extreme assumption that the neurons in these systems can all interact with each other, analyzing the various combinations will take about 10 million years, even though it is assumed that the underlying technology speeds up by an order of magnitude each year.

    Even with shortcuts like averaging, “any possible technological advance is overwhelmed by the relentless growth of interactions among all components of the system,” Koch said. “It is not feasible to understand evolved organisms by exhaustively cataloging all interactions in a comprehensive, bottom-up manner.” He described the concept of the Complexity Brake:

    Allen and Greaves recently introduced the metaphor of a “complexity brake” for the observation that fields as diverse as neuroscience and cancer biology have proven resistant to facile predictions about imminent practical applications. Improved technologies for observing and probing biological systems has only led to discoveries of further levels of complexity that need to be dealt with. This process has not yet run its course. We are far away from understanding cell biology, genomes, or brains, and turning this understanding into practical knowledge.

    Why can’t we use the same principles that describe technological systems? Koch explained that in an airplane or computer, the parts are “purposefully built in such a manner to limit the interactions among the parts to a small number.” The limited interactome of human-designed systems avoids the complexity brake. “None of this is true for nervous systems.”,,,

    to read more click here:
    http://www.evolutionnews.org/2.....62961.html
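    Koch’s appeal to Bell’s number (the number of ways a set of n interacting components can be partitioned) is easy to illustrate numerically. A short Python sketch using the standard Bell-triangle recurrence (my illustration, not from the article):

```python
def bell(n):
    """Compute the nth Bell number via the Bell triangle."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]              # each row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)  # each entry adds its left neighbor and the one above
        row = new
    return row[0]

# Bell numbers grow faster than exponentially
growth = [bell(n) for n in range(1, 11)]
```

    Already at n = 10 the Bell number (115,975) dwarfs 2^10 = 1,024, and the gap widens without bound as n grows.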

  49. wd400- I was responding to your post. Apparently you can’t follow along

  50. No, really, what are you talking about? What’s a “mutational target”? What does it have to do with mutations per generation? Where have I invoked “mystery” mutations?

  51. A mutational target would be the genes in which the polymorphs are found. IOW, mutations are not random; they would have had specific targets.

    And the entire theory of evolution invokes magical mystery mutations-

    Anyone involved in a debate about evolution has come to realize that the theory of evolution and universal common descent rely heavily on magical mystery mutations.

    I say that because those mutations can change an invertebrate to a vertebrate and no one knows how or why. Those mutations can change a fish into a land animal and then a land animal into an aquatic one- again without anyone knowing how or why.

    These magical mystery mutations operate when/ where no one can observe them. They cannot be studied which means no testing and no verification.

    We are told we just have to accept the “fact” that universal common descent occurred, even though the same data for UCD can be used for alternative scenarios, such as common design or convergence.

    By relying on these magical mystery mutations evolutionitwits are admitting their scenario is a fairy tale and doesn’t belong in a science classroom.

  52. Yeah, so none of that has anything to do with what I’m talking about.

    Why do you think Gauger hasn’t just run the sims to see what sort of mutation rate you’d need to explain observed amount of polymorphism?
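    For what such a simulation might look like, here is a toy haploid Wright-Fisher forward simulation under an infinite-sites model. Every parameter (founding size of 2, growth cap, per-genome mutation rate) is hypothetical and chosen only so the sketch runs quickly; it is not calibrated to human data:

```python
import random

def wf_polymorphic_sites(n0=2, n_cap=500, generations=200,
                         mu_per_genome=0.1, seed=1):
    """Toy haploid Wright-Fisher forward simulation (infinite-sites model).
    Starts from a founding population of n0 genomes, doubles each
    generation up to n_cap, adds at most one new unique mutation per
    offspring, and returns the number of sites still polymorphic at
    the end (i.e. carried by some but not all genomes)."""
    rng = random.Random(seed)
    pop = [frozenset() for _ in range(n0)]   # each genome = set of mutation ids
    next_id = 0
    for _ in range(generations):
        size = min(len(pop) * 2, n_cap)      # simple exponential growth to a cap
        new_pop = []
        for _ in range(size):
            genome = set(rng.choice(pop))    # inherit a random parent's mutations
            if rng.random() < mu_per_genome: # maybe add one new unique mutation
                genome.add(next_id)
                next_id += 1
            new_pop.append(frozenset(genome))
        pop = new_pop
    counts = {}
    for genome in pop:
        for site in genome:
            counts[site] = counts.get(site, 0) + 1
    return sum(1 for c in counts.values() if c < len(pop))  # still segregating
```

    Sweeping mu_per_genome over a range and comparing the returned count of segregating sites against an observed figure is the basic shape of the exercise being proposed here.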

  53. “observed amount of polymorphism”

    can you please point to an example of ‘observed amount of polymorphism’ from random mutations that needs to be explained?

    Response to John Wise – October 2010
    Excerpt: But there are solid empirical grounds for arguing that changes in DNA alone cannot produce new organs or body plans. A technique called “saturation mutagenesis”1,2 has been used to produce every possible developmental mutation in fruit flies (Drosophila melanogaster),3,4,5 roundworms (Caenorhabditis elegans),6,7 and zebrafish (Danio rerio),8,9,10 and the same technique is now being applied to mice (Mus musculus).11,12 None of the evidence from these and numerous other studies of developmental mutations supports the neo-Darwinian dogma that DNA mutations can lead to new organs or body plans–because none of the observed developmental mutations benefit the organism.

  54. i.e. what you imagine to have happened in the past does not count as ‘observed’ within empirical science:

    of note:

    “We have all seen the canonical parade of apes, each one becoming more human. We know that, as a depiction of evolution, this line-up is tosh (i.e. nonsense). Yet we cling to it. Ideas of what human evolution ought to have been like still colour our debates.”
    Henry Gee, editor of Nature (478, 6 October 2011, page 34, doi:10.1038/478034a),

  55. wd400- why don’t YOU run a simulation that would support your position? I know why, because everyone would then see what a fraud your position is.

    What sort of mutation rate do you think is necessary to get the observed polymorphs starting with a founding population of 2? Please show your work.

  56. can you please point to an example of ‘observed amount of polymorphism’ from random mutations that needs to be explained?

    -> HapMap

    -> 1000 genomes

    Ann Gauger mentions both in the blog post that this post is about.

  57. A Gene- no one has observed any amount of polymorphs via random mutations. No one knows if they have observed any random mutations.

  58. A Gene: ‘observe’ is not the same word as ‘infer’. After all, I can infer that the moon is made of green cheese all day long by cherry-picking select evidence that supports my position and ignoring all disconfirming evidence that points to a contrary conclusion. But that is the beauty of empirical science: you must actually produce repeatable, observational evidence in order to support your claim that you have observed body-plan morphogenesis. And that, sir, you simply do not have:

    “Whatever we may try to do within a given species, we soon reach limits which we cannot break through. A wall exists on every side of each species. That wall is the DNA coding, which permits wide variety within it (within the gene pool, or the genotype of a species)-but no exit through that wall. Darwin’s gradualism is bounded by internal constraints, beyond which selection is useless.”
    R. Milner, Encyclopedia of Evolution (1990)

    “Despite a close watch, we have witnessed no new species emerge in the wild in recorded history. Also, most remarkably, we have seen no new animal species emerge in domestic breeding. That includes no new species of fruitflies in hundreds of millions of generations in fruitfly studies, where both soft and harsh pressures have been deliberately applied to the fly populations to induce speciation. And in computer life, where the term “species” does not yet have meaning, we see no cascading emergence of entirely new kinds of variety beyond an initial burst. In the wild, in breeding, and in artificial life, we see the emergence of variation. But by the absence of greater change, we also clearly see that the limits of variation appear to be narrowly bounded, and often bounded within species.”
    Kevin Kelly from his book, “Out of Control”
    http://www.uncommondescent.com.....ent-392638

    etc.. etc..

    In fact, Dr. Stephen Meyer’s next book is going to be on the sheer impossibility of neo-Darwinian processes explaining the origination of ‘Body-Plan information’:

    Here is a sneak peek at his forthcoming book:

    Dr. Stephen Meyer: Why Are We Still Debating Darwin? pt. 2 – podcast
    http://intelligentdesign.podom.....6_22-07_00

    Of related interest, there is a whole other level of information on a cell’s surface that is scarcely even beginning to be understood (which would seem to be important if one were to claim to understand body-plan morphogenesis):

    Glycan Carbohydrate Molecules – A Whole New Level Of Scarcely Understood Information on The Surface of Cells

    Glycan carbohydrate molecules are very complex molecules found primarily on a cell’s surface. They are very important for cell-surface functions, such as immunity responses, and show a “remarkably discontinuous distribution across evolutionary lineages”:

    Glycans: Where Are They and What Do They Do? – short video
    http://www.youtube.com/watch?v=BgZ61TxnxKo

    New tools developed to unveil mystery of the ‘glycome’ – June 10, 2012
    Excerpt: One of the Least Understood Domains of Biology: The “glycome”—the full set of sugar molecules in living things and even viruses—has been one of the least understood domains of biology. While the glycome encodes key information that regulates things such as cell trafficking events and cell signaling, this information has been relatively difficult to “decode.” Unlike proteins, which are relatively straightforward translations of genetic information, functional sugars have no clear counterparts or “templates” in the genome. Their building blocks are simple, diet-derived sugar molecules, and their builders are a set of about 250 enzymes known broadly as glycosyltransferases.,,,
    http://phys.org/news/2012-06-t.....ycome.html

    Glycans rival DNA and proteins in terms of complexity;

    Glycans: What Makes Them So Special? – The Complexity Of Glycans – short video
    http://www.youtube.com/watch?v=WXez_OyNBQA

    Yet Glycans, despite their complexity and importance to cell function, are found, like DNA and Proteins, to be ‘rather uncooperative’ with neo-Darwinian evolution;

    This Non Scientific Claim Regularly Appears in Evolutionary Peer Reviewed Papers – Cornelius Hunter – April 2012
    Excerpt: Indeed these polysaccharides, or glycans, would become rather uncooperative with evolution. As one recent paper explained, glycans show “remarkably discontinuous distribution across evolutionary lineages,” for they “occur in a discontinuous and puzzling distribution across evolutionary lineages.” This dizzying array of glycans can be (i) specific to a particular lineage, (ii) similar in very distant lineages, (iii) and conspicuously absent from very restricted taxa only. In other words, the evidence is not what evolution expected.
    Here is how another paper described early glycan findings:
    There is also no clear explanation for the extreme complexity and diversity of glycans that can be found on a given glycoconjugate or cell type. Based on the limited information available about the scope and distribution of this diversity among taxonomic groups, it is difficult to see clear trends or patterns consistent with different evolutionary lineages. It appears that closely related species may not necessarily share close similarities in their glycan diversity, and that more derived species may have simpler as well as more complex structures. Intraspecies diversity can also be quite extensive, often without obvious functional relevance.
    http://darwins-god.blogspot.co.....larly.html

    As well, it seems clear that Glycans, being on the cell’s surface, would, besides immunity responses, be very important for explaining the exact positioning of cells in a multicellular organism (body-plan morphogenesis). In fact, experiments have been done rearranging parts of a cell’s surface in which the ‘rearrangement’ carried forward even though the DNA sequence remained exactly the same:

    Cortical Inheritance: The Crushing Critique Against Genetic Reductionism – Arthur Jones – video
    http://www.metacafe.com/watch/4187488

    So it seems clear that this ‘cell surface information’ represents a whole new level of information that is not reducible to DNA (the central dogma (modern synthesis) of neo-Darwinism), and yet it is clearly very important information to understand if one were to try to explain body-plan morphogenesis coherently.

  59. You must actually produce repeatable, observational evidence in order to support your claim that you have observed body-plan morphogenesis.

    I never made that claim anyway. It’s not even a straw man.

    “No. I asked a very simple question; can you please answer it without the spam?” – WD400

    Hilarious! I was just going to say, ‘Steady, there, bornagain. You know that when you blind them with REAL science from that encyclopaedic brain of yours, they bawl out that you’re just spamming!!!’

    ‘What’s he saying, Mummy? I can’t understand ‘im. It’s just nonsense!’

Leave a Reply