
The Darwinist and the computer programmer

Today the available computing power is enormous, and software technologies are sophisticated and powerful. Given this fortunate state of technological advance in informatics, phenomena and processes in many fields are successfully simulated on computers. Airplane pilots and astronauts routinely learn their jobs in dedicated simulators, and complex processes, such as weather and atomic explosions, are simulated on computers.

Question: why hasn’t Darwinian unguided evolution been computer simulated yet? I wonder why evolutionists haven’t simulated it, so as to prove to us that Darwinism works. Since, as is known, experiments on evolution in vitro have failed, perhaps experiments in silico would work. Why don’t evolutionists show us, in a computer, the development of new biological complexity by simulating random mutations and selection on self-reproducing digital organisms?

Here I attempt my answer; you are free to provide your own. I will do it in the form of an imaginary dialogue. Suppose a Darwinist meets a computer programmer and asks him to develop a program simulating Darwinian evolution.

Programmer (P): “What’s your problem? I can program whatever you want. What we need is a detailed description of the phenomenon and a correct model of the process.”

Darwinist (D): “I would like to simulate biological evolution, the process by which one species transforms into another, by means of random mutations and natural selection”.

P: “Well, I think first off we need a model of an organism and its development, or something like that”.

D: “We have a genotype (containing the heritable information, the genome, the DNA) and its product, the phenotype”.

P: “I read that DNA is a long sequence of four symbols. We could model it as a long string of characters. Strings of characters, and operations on them, are easily manipulated by computers. Just an idea.”

D: “Good, it is indeed unguided variations on DNA that drive evolution.”

P: “Ok, if you want, after modeling the genome we can perform on the DNA character strings any unguided variation: permutations, substitutions, translations, insertions, deletions, imports, exports, pattern scrambling, whatever you like. We have very good pseudo-random number generators to simulate these operations”.

D: “Cool. Indeed it is those unintelligent variations that produce the transformations of the phenotypes, which is what is called ‘evolution’”.

P: “Hmm… wait, just a question. There is one thing not perfectly clear to me. To write the instructions that output the phenotype from the genotype, I also need a complete model of the phenotype and a detailed description of how it arises from the genotype. You see, the computer wants everything in the form of sequences of 0s and 1s; it is not enough to send it generic commands”.

D: “The genotype determines the genes, and in turn the genes are recipes for proteins. Organisms are basically made of proteins.”

P: “Organisms are made of proteins, like buildings are made of bricks, aren’t they? It seems to me that these definitions are an extremely simplistic and reductive way of considering organisms and buildings. Neither is a simple “container” of proteins/bricks, like potatoes in a bag. It seems to me that the process of construction from proteins to organisms is entirely missing (whereas it is perfectly known in the case of bricks and buildings)”.

D: “To be honest, I don’t know in detail how the phenotype comes from the genotype… actually, no one on earth does.”

P: “Really? You know, in my damn job one has to perfectly specify all instructions and data in a formal language that doesn’t allow equivocation. It is somewhat mathematical. If you are unable to perfectly specify the phenotypic model and the process driving the construction of the phenotype from the genotype, I cannot program the simulation of evolution for you. What we would eventually obtain would be less than a toy and would have no explanatory value compared to the biological reality (by the way, I assure you that, by contrast, all computer games are serious works, where everything is perfectly specified and programmed, at the bit and pixel level, believe me)… Sorry… I don’t want to be indiscreet, but how can Darwinists claim with such certainty that variations in a process produce certain results if they know little of the models and nothing of the process involved in the first place?”

D: _no-answer_

The above short dialogue between the Darwinist and the programmer shows us one thing. There are two worlds: the world of informatics, where all instructions and data must be perfectly specified and must pass checks, otherwise the business doesn’t work; and the world of just-so stories, where statements may be equivocal and even inconsistent and have to pass no check. Evolutionism belongs to the latter kind of world. As the programmer politely noted, evolutionism presumes to claim that variations on a process produce specific results when the process itself is unknown and unspecified. In other words, why, to put it à la Sermonti, does the genome of a fly give rise to a fly and not a horse? If they cannot answer that basic question, how can they claim that unguided variations on genomes produced even the 500 million past and living species?

This fundamental incoherence and simplism can “work” in Darwin’s world, but it stops at the outset in the logical world of informatics. This is one of the reasons why a convincing and complete computer simulation of Darwinian evolution has not been performed so far, much as Darwinians would like to have one.
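As an aside, the programmer is right that the string operations themselves are the easy part. Here is a minimal Python sketch of such unguided variations on a DNA-like string (a toy for illustration only, with no claim of biological realism):

```python
import random

DNA_ALPHABET = "ACGT"

def substitute(genome: str, rng: random.Random) -> str:
    """Replace one randomly chosen base with a random base."""
    i = rng.randrange(len(genome))
    return genome[:i] + rng.choice(DNA_ALPHABET) + genome[i + 1:]

def insert(genome: str, rng: random.Random) -> str:
    """Insert a random base at a random position."""
    i = rng.randrange(len(genome) + 1)
    return genome[:i] + rng.choice(DNA_ALPHABET) + genome[i:]

def delete(genome: str, rng: random.Random) -> str:
    """Delete a randomly chosen base (never empties the genome)."""
    if len(genome) <= 1:
        return genome
    i = rng.randrange(len(genome))
    return genome[:i] + genome[i + 1:]

def mutate(genome: str, rng: random.Random, n: int = 1) -> str:
    """Apply n randomly chosen unguided variations."""
    for _ in range(n):
        genome = rng.choice([substitute, insert, delete])(genome, rng)
    return genome

rng = random.Random(42)
print(mutate("ACGTACGTACGT", rng, n=3))
```

The hard part, as the dialogue shows, comes after this step: specifying the function that maps such strings to phenotypes, which is exactly what no one can supply.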

P.S. Thanks to Mung for the suggestion about the topic of this post.


64 Responses to The Darwinist and the computer programmer

  1. Some notes on trying (and failing) to model organisms realistically with computers:

    The Humpty-Dumpty Effect: A Revolutionary Paper with Far-Reaching Implications – Paul Nelson – October 23, 2012
    Excerpt: Put simply, the Levinthal paradox states that when one calculates the number of possible topological (rotational) configurations for the amino acids in even a small (say, 100 residue) unfolded protein, random search could never find the final folded conformation of that same protein during the lifetime of the physical universe.
    http://www.evolutionnews.org/2.....65521.html

    So Much For Random Searches – PaV – September 2011
    Excerpt: There’s an article in Discover Magazine about how gamers have been able to solve a problem in HIV research in only three weeks (!) that had remained outside of researcher’s powerful computer tools for years. This, until now, unsolvable problem gets solved because: “They used a wide range of strategies, they could pick the best places to begin, and they were better at long-term planning. Human intuition trumped mechanical number-crunching.” Here’s what intelligent agents were able to do within the search space of possible solutions:,,, “until now, scientists have only been able to discern the structure of the two halves together. They have spent more than ten years trying to solve structure of a single isolated half, without any success. The Foldit players had no such problems. They came up with several answers, one of which was almost close to perfect. In a few days, Khatib had refined their solution to deduce the protein’s final structure, and he has already spotted features that could make attractive targets for new drugs.” Thus,,
    Random search by powerful computer: 10 years and No Success
    Intelligent Agents guiding powerful computing: 3 weeks and Success.
    http://www.uncommondescent.com.....-searches/

    To Model the Simplest Microbe in the World, You Need 128 Computers – July 2012
    Excerpt: Mycoplasma genitalium has one of the smallest genomes of any free-living organism in the world, clocking in at a mere 525 genes. That’s a fraction of the size of even another bacterium like E. coli, which has 4,288 genes.,,,
    The bioengineers, led by Stanford’s Markus Covert, succeeded in modeling the bacterium, and published their work last week in the journal Cell. What’s fascinating is how much horsepower they needed to partially simulate this simple organism. It took a cluster of 128 computers running for 9 to 10 hours to actually generate the data on the 25 categories of molecules that are involved in the cell’s lifecycle processes.,,,
    ,,the depth and breadth of cellular complexity has turned out to be nearly unbelievable, and difficult to manage, even given Moore’s Law. The M. genitalium model required 28 subsystems to be individually modeled and integrated, and many critics of the work have been complaining on Twitter that’s only a fraction of what will eventually be required to consider the simulation realistic.,,,
    http://www.theatlantic.com/tec.....rs/260198/

    “Complexity Brake” Defies Evolution – August 2012
    Excerpt: “This is bad news. Consider a neuronal synapse — the presynaptic terminal has an estimated 1000 distinct proteins. Fully analyzing their possible interactions would take about 2000 years. Or consider the task of fully characterizing the visual cortex of the mouse — about 2 million neurons. Under the extreme assumption that the neurons in these systems can all interact with each other, analyzing the various combinations will take about 10 million years…, even though it is assumed that the underlying technology speeds up by an order of magnitude each year.”,,,
    Even with shortcuts like averaging, “any possible technological advance is overwhelmed by the relentless growth of interactions among all components of the system,” Koch said. “It is not feasible to understand,, organisms by exhaustively cataloging all interactions in a comprehensive, bottom-up manner.” He described the concept of the Complexity Brake:,,,
    to read more go here:
    http://www.evolutionnews.org/2.....62961.html

    Related notes:

    Stephen Meyer – Functional Proteins And Information For Body Plans – video
    http://www.metacafe.com/watch/4050681

    Dr. Stephen Meyer comments at the end of the preceding video,,,

    ‘Now one more problem as far as the generation of information. It turns out that you don’t only need information to build genes and proteins, it turns out to build Body-Plans you need higher levels of information; Higher order assembly instructions. DNA codes for the building of proteins, but proteins must be arranged into distinctive circuitry to form distinctive cell types. Cell types have to be arranged into tissues. Tissues have to be arranged into organs. Organs and tissues must be specifically arranged to generate whole new Body-Plans, distinctive arrangements of those body parts. We now know that DNA alone is not responsible for those higher orders of organization. DNA codes for proteins, but by itself it does not insure that proteins, cell types, tissues, organs, will all be arranged in the body. And what that means is that the Body-Plan morphogenesis, as it is called, depends upon information that is not encoded on DNA. Which means you can mutate DNA indefinitely. 80 million years, 100 million years, til the cows come home. It doesn’t matter, because in the best case you are just going to find a new protein some place out there in that vast combinatorial sequence space. You are not, by mutating DNA alone, going to generate higher order structures that are necessary to building a body plan. So what we can conclude from that is that the neo-Darwinian mechanism is grossly inadequate to explain the origin of information necessary to build new genes and proteins, and it is also grossly inadequate to explain the origination of novel biological form.’ -
    Stephen Meyer – (excerpt taken from Meyer/Sternberg vs. Shermer/Prothero debate – 2009)

    HOW BIOLOGISTS LOST SIGHT OF THE MEANING OF LIFE — AND ARE NOW STARING IT IN THE FACE – Stephen L. Talbott – May 2012
    Excerpt: “If you think air traffic controllers have a tough job guiding planes into major airports or across a crowded continental airspace, consider the challenge facing a human cell trying to position its proteins”. A given cell, he notes, may make more than 10,000 different proteins, and typically contains more than a billion protein molecules at any one time. “Somehow a cell must get all its proteins to their correct destinations — and equally important, keep these molecules out of the wrong places”. And further: “It’s almost as if every mRNA [an intermediate between a gene and a corresponding protein] coming out of the nucleus knows where it’s going” (Travis 2011),,,
    Further, the billion protein molecules in a cell are virtually all capable of interacting with each other to one degree or another; they are subject to getting misfolded or “all balled up with one another”; they are critically modified through the attachment or detachment of molecular subunits, often in rapid order and with immediate implications for changing function; they can wind up inside large-capacity “transport vehicles” headed in any number of directions; they can be sidetracked by diverse processes of degradation and recycling… and so on without end. Yet the coherence of the whole is maintained.
    The question is indeed, then, “How does the organism meaningfully dispose of all its molecules, getting them to the right places and into the right interactions?”
    The same sort of question can be asked of cells, for example in the growing embryo, where literal streams of cells are flowing to their appointed places, differentiating themselves into different types as they go, and adjusting themselves to all sorts of unpredictable perturbations — even to the degree of responding appropriately when a lab technician excises a clump of them from one location in a young embryo and puts them in another, where they may proceed to adapt themselves in an entirely different and proper way to the new environment. It is hard to quibble with the immediate impression that form (which is more idea-like than thing-like) is primary, and the material particulars subsidiary.
    Two systems biologists, one from the Max Delbrück Center for Molecular Medicine in Germany and one from Harvard Medical School, frame one part of the problem this way:
    “The human body is formed by trillions of individual cells. These cells work together with remarkable precision, first forming an adult organism out of a single fertilized egg, and then keeping the organism alive and functional for decades. To achieve this precision, one would assume that each individual cell reacts in a reliable, reproducible way to a given input, faithfully executing the required task. However, a growing number of studies investigating cellular processes on the level of single cells revealed large heterogeneity even among genetically identical cells of the same cell type. (Loewer and Lahav 2011)”,,,
    And then we hear that all this meaningful activity is, somehow, meaningless or a product of meaninglessness. This, I believe, is the real issue troubling the majority of the American populace when they are asked about their belief in evolution. They see one thing and then are told, more or less directly, that they are really seeing its denial. Yet no one has ever explained to them how you get meaning from meaninglessness — a difficult enough task once you realize that we cannot articulate any knowledge of the world at all except in the language of meaning.,,,
    http://www.netfuture.org/2012/May1012_184.html#2

  2. “It is not feasible to understand,, organisms by exhaustively cataloging all interactions in a comprehensive, bottom-up manner.”

Which would be like trying to understand a battleship by modeling the interactions of its molecules. One can only hope to “understand” such entities by hypothesizing the designed purpose of the macro-feature and reverse engineering it. Nothing useful can be gleaned from the materialist approach; only the assumption that the macro-feature was designed and purposefully engineered offers a worthwhile, actionable investigatory pathway.

To get an algorithm that translates genotype to phenotype, you would need to model biochemistry; that, to me, seems impossible to model in a computer program.

    Why not build a semi-complex replicating program that copies itself, and place it in a virtual environment where it has access to program bits, bytes or whatever…and competes with others. Perhaps, even include in the replicator program, a 3D representation.

    It seems to me, that this would at least test the creative power of RM + NS.

My prediction is that the code will end up smaller than the original replicator… not a replicator with more novel and more complex survival features (physical traits or behaviors).
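    A bare-bones version of such a harness can be sketched in a few lines of Python. Every choice in it (mutation rate, carrying capacity, and above all the fitness function, here an arbitrary placeholder) is an assumption supplied by the programmer, which is exactly where outside information tends to leak in:

```python
import random

def replicate(genome, rng, mut_rate=0.01):
    """Copy a genome (a list of 0/1 bits) with per-bit flips and a rare
    insertion or deletion, so genome length itself can evolve."""
    child = [b ^ 1 if rng.random() < mut_rate else b for b in genome]
    if rng.random() < mut_rate:  # rare length change
        if rng.random() < 0.5 and len(child) > 1:
            del child[rng.randrange(len(child))]
        else:
            child.insert(rng.randrange(len(child) + 1), rng.randrange(2))
    return child

def step(population, rng, capacity=200, fitness=len):
    """One generation: every organism replicates once, then truncation
    selection keeps the fittest up to the carrying capacity.
    NOTE: fitness=len is an arbitrary placeholder; choosing a fitness
    function that does not smuggle in the programmer's goals is the
    open problem this thread is about."""
    offspring = population + [replicate(g, rng) for g in population]
    offspring.sort(key=fitness, reverse=True)
    return offspring[:capacity]

rng = random.Random(0)
population = [[rng.randrange(2) for _ in range(32)] for _ in range(50)]
for _ in range(100):
    population = step(population, rng)
print(len(population))
```

    By itself this tests nothing; it is only the scaffolding on which such a test would have to be built.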

  4. p.s. perhaps not impossible, but biochemistry seems would be far too computationally intensive to be practical.

Darwinists have tried and tried. Dr. Dawkins thought he had a great “METHINKS IT IS LIKE A WEASEL” program and even sold it to unwitting followers for $10! When nothing worked, they declared ‘Evolution has no goal’ and are still complaining that probability is being misused by IDists. When there is no goal, no process, no system to follow, there can be no model. The closest thing that can model aimless evolution is a stochastic process, but what do you model when there is no aim?
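    For reference, the WEASEL program is easy to reconstruct (Dawkins never published his source code, so the parameters below are guesses). Note that its fitness function measures closeness to a fixed target phrase, a built-in goal, which is just the point made above:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Fitness = number of characters matching the fixed target phrase.
    return sum(a == b for a, b in zip(s, TARGET))

def weasel(pop_size=100, mut_rate=0.05, seed=1):
    """Run the cumulative-selection search; return generations needed."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Each offspring copies the parent with per-character mutation.
        offspring = [
            "".join(c if rng.random() > mut_rate else rng.choice(ALPHABET)
                    for c in parent)
            for _ in range(pop_size)
        ]
        parent = max(offspring, key=score)  # select the closest to the goal
        generations += 1
    return generations

print(weasel())
```

    Without the TARGET string, the selection step has nothing to select toward, which is why the program demonstrates cumulative selection only once a goal has been written in.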

  6. JGuy @3:

    Why not build a semi-complex replicating program that copies itself, and place it in a virtual environment where it has access to program bits, bytes or whatever…and competes with others. . . .

    It seems to me, that this would at least test the creative power of RM + NS.

Yeah, this is what Darwinists have claimed to do with evolutionary algorithms like Avida. Unfortunately, the devil is in the details, and when you have a very easily-achievable result, with the digital “organism” being carefully led up the back side of Mount Improbable, it is not particularly surprising that you get some directional change, which is touted by the Wizard of Oz as confirmation of the theory. The problem with things like Avida is that they don’t simulate anything in the real world, so we can have digital organisms “mutating” and “developing” all we want and it teaches us precisely nothing about whether evolution would work in real biology.

    The only way to model evolution is to have a very good handle on what is involved. And no-one has anything even approaching a solid idea as to what is required to turn creature A into creature B.

    Furthermore, what is to be simulated is even questionable. For example, no-one knows whether fiddling with DNA is even in principle capable of forming a new creature (apart from minor allele traits between members of the same species). So even if such an event were simulated in silico (which we know wouldn’t work, but let’s assume for purposes of discussion that it did), it still would not confirm that it is relevant to actual organisms in the real world.

The difference between modeling evolution and, say, the flight simulator training niwrad refers to is that in the latter case we have a very good sense of the factors involved and how they interact with each other (aerodynamics, thrust, weight ratios, wind speed, vectors, and so on); we have precise mathematical calculations and well-defined parameters.

    We have nothing even approaching this in evolutionary theory. As of 2013 the idea continues to consist of little more than vague generalizations and hypothetical assertions. There is no comprehensive list of parameters; not even close. There are no well-defined equations that state that if x occurs, y will be the outcome. All we have is a blanket assertion, void of all relevant details, that if something occurs then something else will result.

    —–

    Now, having said all that, I do agree that there is great value in using computer models to deal with very specific aspects of biological interactions. But unfortunately it is practically impossible, given our current state of knowledge and technology, to adequately model biological systems. And even if the simulation didn’t work, the Darwinist would simply say “Well, all that shows is that it didn’t happen with this particular set of parameters. It must have happened some other way.”

I suppose developmental biology, molecular biology, biochemistry, physiology and ecology should all be discarded, since the processes underlying these sciences can’t be simulated at the same level of detail you require?

BTW, whatever happened to that blogger here whose every post was an ode to how awesome the physics simulation he used was?

  8. wd400

    Dr. David Berlinski: Accounting for Variations – video
    http://www.youtube.com/watch?v=aW2GkDkimkE

    “The computer is not going to generate anything realistic if it uses Darwinian mechanisms.”
    David Berlinski

No, developmental biology, molecular biology, biochemistry, physiology and ecology shouldn’t be discarded even if not computer simulated, insofar as they provide descriptions of facts, sound data and, eventually, sensible hypotheses related to those facts/data.

The case of Darwinism is entirely different, because it is only a hypothesis, not a fact. Worse yet, Darwinism is an absurd and contradictory hypothesis, contrary to all principles and all evidence.

  10. So, your position is that a science that explains the way changes in genotype and environment manifest themselves in phenotype (developmental biology and quant. genetics) but can’t model that process is fine. Likewise, a science that explains the way organisms interact with each other and abiotic parts of their environment but can’t model that process in detail (ecology) is fine too.

But in order to build a theory that includes the results of developmental biology, quant. genetics and ecology, we need to model those processes down to the individual atoms? And you call “Darwinism” absurd and contradictory?

  11. wd400 claims that Darwinism is a,,,

    a science that explains the way changes in genotype and environment manifest themselves in phenotype

    Yet the actual fact of the matter is that,,

    With a Startling Candor, Oxford Scientist Admits a Gaping Hole in Evolutionary Theory – November 2011
    Excerpt: As of now, we have no good theory of how to read [genetic] networks, how to model them mathematically or how one network meshes with another; worse, we have no obvious experimental lines of investigation for studying these areas. There is a great deal for systems biology to do in order to produce a full explanation of how genotypes generate phenotypes,,,
    http://www.evolutionnews.org/2.....52821.html

    Not Junk After All—Conclusion – August 29, 2013
    Excerpt: Many scientists have pointed out that the relationship between the genome and the organism — the genotype-phenotype mapping — cannot be reduced to a genetic program encoded in DNA sequences. Atlan and Koppel wrote in 1990 that advances in artificial intelligence showed that cellular operations are not controlled by a linear sequence of instructions in DNA but by a “distributed multilayer network” [150]. According to Denton and his co-workers, protein folding appears to involve formal causes that transcend material mechanisms [151], and according to Sternberg this is even more evident at higher levels of the genotype-phenotype mapping [152].
    http://www.uncommondescent.com.....onclusion/

    The next evolutionary synthesis: Jonathan BL Bard (2011)
    Excerpt: We now know that there are at least 50 possible functions that DNA sequences can fulfill [8], that the networks for traits require many proteins and that they allow for considerable redundancy [9]. The reality is that the evolutionary synthesis says nothing about any of this; for all its claim of being grounded in DNA and mutation, it is actually a theory based on phenotypic traits. This is not to say that the evolutionary synthesis is wrong, but that it is inadequate – it is really only half a theory!
    http://www.biosignaling.com/co.....X-9-30.pdf

    The Fairyland of Evolutionary Modeling – May 7, 2013
    Excerpt: Salazar-Ciudad and Marín-Riera have shown that not only are suboptimal dead ends an evolutionary possibility, but they are also exceedingly likely to occur in real, developmentally complex structures when fitness is determined by the exact form of the phenotype.
    http://www.evolutionnews.org/2.....71901.html

    Response to John Wise – October 2010
    Excerpt: A technique called “saturation mutagenesis”1,2 has been used to produce every possible developmental mutation in fruit flies (Drosophila melanogaster),3,4,5 roundworms (Caenorhabditis elegans),6,7 and zebrafish (Danio rerio),8,9,10 and the same technique is now being applied to mice (Mus musculus).11,12 None of the evidence from these and numerous other studies of developmental mutations supports the neo-Darwinian dogma that DNA mutations can lead to new organs or body plans–because none of the observed developmental mutations benefit the organism.
    http://www.evolutionnews.org/2.....38811.html

  12. wd400 claims that Darwinism is a,,,

    Actually, I didn’t.

  13. And wd400, since Darwinism doesn’t, and IMHO can’t possibly, explain how ‘changes in genotype and environment manifest themselves in phenotype’, you support Darwinism why exactly?

  14. Try and think things through…

    For the record

    I’m not a Darwinist.

    Evolutionary biology doesn’t explain how changes in genotype and environment manifest themselves in phenotype (that’s the domain of developmental biology and quantitative genetics).

    It does require that some genetic changes alter the phenotype of their carriers

    It is obviously true that some genetic changes alter the phenotype of their carriers.

  15. “I’m not a Darwinist.”

Really??? Well, blow me over with a feather. All of a sudden I’ve lost all interest in anything else in this thread, please do tell??

There’s no great revelation in that statement. Like many evolutionary biologists, I tend to emphasize non-Darwinian mechanisms (drift, sub-functionalisation etc.) because there are large amounts of data that purely Darwinian evolution can’t explain (notably, the preponderance of junk DNA in many eukaryote genomes, a comment which I’m sure will set you off on another round of link spam…)

  17. “the preponderance of junk DNA in many eukaryote genomes”

LOL, yep, you’re a Darwinist alright! You may deny it to save face, but only a Darwinist would ever claim that!

  18. Against my better judgement, one last comment.

Try and create a Darwinian (i.e. selection-focused) explanation for junk DNA.

  19. One problem with the OP is that it creates a straw-man version of neo-Darwinism.

    In neo-Darwinism, how one gets from genotype to phenotype is irrelevant.

    The Changing Role of the Embryo in Evolutionary Thought: Roots of Evo-Devo

  20. wd400:

    Try and create a Darwinian (i.e. selection-focused) explanation for junk DNA..

    Easy. It’s not under selection, that’s what allows it to accumulate.

    If it doesn’t accumulate, it’s evolution. If it does accumulate, it’s evolution. Ain’t modern evolutionary theory grand!

Think of the programming language as the genotype and the program itself as the phenotype. How one gets from the genotype to the phenotype is termed development.

    In programming, changes to the programming language may or may not have an effect on a program. e.g., for compiled languages, the program may or may not require re-compilation.

    Consider a theory of how programs change over time. Imagine such a theory that focuses only on the programs and the programming languages. That’s neo-Darwinism.

  22. Easy. It’s not under selection, that’s what allows it to accumulate.

Right… so that’s non-Darwinian, because it’s something that happens without selection. It’s also an idea that allows us to make predictions about what future data will look like. We know, for instance, that selection is stronger when effective population sizes are larger. In this way, you might predict that, all else being equal, organisms with large effective population sizes will have smaller (less junk-ridden) genomes…

  23. Mung @ 19

    Are you primarily referring to or considering the statement in the OP: “If they cannot answer that basic question, how can they claim that unguided variations on genomes produced even the 500 million past and living species?”

  24. Another problem with the OP is that it attempts to turn a strength of programming and simulation (abstraction) into a weakness in evolutionary theory. There’s no justification for this. It’s like saying we can’t take every element of an organism’s ecology and put it into a computer therefore evolution is false. It just doesn’t follow.

    That said, I appreciate the questions raised in the OP, but I suggest that they need some refinement.

It still seems to me there should be some way to test the most basic idea of NS + RM developing new information in the form of novel complex functions. I don’t think real biochemistry needs to be simulated to disprove Darwinism. But one of the most important features of the simulation is that it must be sterile of the programmer’s intelligence, i.e. it must keep outside information from leaking into the simulation.

  26. wd400

    You seem to imply that non-functional DNA is useless junk.

If so, is it not possible that ‘junk DNA’ is being stored in a dormant state for a reason? Perhaps, like a savings account, it is being kept in reserve in order to preserve and assure greater evolutionary potential.

    Since we now know that organisms can manipulate their DNA arrangements to adapt, etc., (a la James Shapiro), it seems quite premature to jump to the conclusion that nonfunctional DNA is the accumulation of waste.

  27. Littlejohn,

It’s always possible to make a post-hoc justification to rescue a favored hypothesis – but (James Shapiro notwithstanding) there is no evidence for this, and it’s very hard to imagine how something like this could possibly work.

  28. Scientists go deeper into DNA (Video report) (Junk No More) – Sept. 2012
    http://bcove.me/26vjjl5a

    Quote from preceding video:
    “It’s just been an incredible surprise for me. You say, ‘I bet it’s going to be complicated’, and then you are faced with it and you are like ‘My God, that is mind blowing.’”
    Ewan Birney – senior scientist – ENCODE

    ENCODE: Encyclopedia Of DNA Elements – video
    http://www.youtube.com/watch?v=Y3V2thsJ1Wc

    Quote from preceding video:
    “It’s very hard to get over the density of information (in the genome),,, The data says its like a jungle of stuff out there. There are things we thought we understood and yet it is much, much, more complex. And then (there are) places of the genome we thought were completely silent and (yet) they’re (now found to be) teeming with life, teeming with things going on. We still really don’t understand that.”
    Ewan Birney – senior scientist – ENCODE

    (ENCODE) An integrated encyclopedia of DNA elements in the human genome – September 2012
    Excerpt: The human genome encodes the blueprint of life, but the function of the vast majority of its nearly three billion bases is unknown. The Encyclopedia of DNA Elements (ENCODE) project has systematically mapped regions of transcription, transcription factor association, chromatin structure and histone modification. These data enabled us to assign biochemical functions for 80% of the genome, in particular outside of the well-studied protein-coding regions. Many discovered candidate regulatory elements are physically associated with one another and with expressed genes, providing new insights into the mechanisms of gene regulation.
    http://www.nature.com/nature/j.....11247.html

    Junk No More: ENCODE Project Nature Paper Finds “Biochemical Functions for 80% of the Genome” – Casey Luskin – September 5, 2012
    Excerpt: The Discover Magazine article further explains that the rest of the 20% of the genome is likely to have function as well:
    “And what’s in the remaining 20 percent? Possibly not junk either, according to Ewan Birney, the project’s Lead Analysis Coordinator and self-described “cat-herder-in-chief”. He explains that ENCODE only (!) looked at 147 types of cells, and the human body has a few thousand. A given part of the genome might control a gene in one cell type, but not others. If every cell is included, functions may emerge for the phantom proportion. “It’s likely that 80 percent will go to 100 percent,” says Birney. “We don’t really have any large chunks of redundant DNA. This metaphor of junk isn’t that useful.””
    We will have more to say about this blockbuster paper from ENCODE researchers in coming days, but for now, let’s simply observe that it provides a stunning vindication of the prediction of intelligent design that the genome will turn out to have mass functionality for so-called “junk” DNA. ENCODE researchers use words like “surprising” or “unprecedented.” They talk about how “human DNA is a lot more active than we expected.” But under an intelligent design paradigm, none of this is surprising. In fact, it is exactly what ID predicted.
    http://www.evolutionnews.org/2.....64001.html

    ENCODE: The Encyclopedia of DNA Elements (Interviews with members of the ENCODE Project) – video
    http://www.youtube.com/watch?v=PsV_sEDSE2o
    Quotes from preceding video:
    “Very little of our genomes are junk. 80% of our genome is engaged in at least one biochemical activity. For a large fraction of our genome, not now 5%, but 80% of the genome, we can (now) say that we know that it does something.”
    “This metaphor about Junk DNA has become very entrenched. It has been entrenched publicly and entrenched scientifically. And ENCODE totally challenges that. We just don’t have big, blank, boring, bits of the genome. All the genome is alive at some level.”
    “There are about 2000 DNA binding proteins in the genome. We looked at about 100 of those, 115 of those, so there is a long way to go yet, there is a lot more to study.”

    Here is a recent paper (July 2013) that defends the Sept. 2012 ENCODE findings, of pervasive functionality across the genome, from Darwinian attempts to discredit the findings:

    The extent of functionality in the human genome – John S Mattick and Marcel E Dinger – July 2013
    Excerpt of abstract: Finally, we suggest that resistance to these (ENCODE) findings is further motivated in some quarters by the use of the dubious concept of junk DNA as evidence against intelligent design.
    http://link.springer.com/artic.....ltext.html

    What Is The Genome? It’s Certainly Not Junk! – Dr. Robert Carter – video – (Notes in video description)
    http://www.metacafe.com/w/8905583

    Multidimensional Genome – Dr. Robert Carter – video (Notes in video description)
    http://www.metacafe.com/w/8905048

    Bits of Mystery DNA, Far From ‘Junk,’ Play Crucial Role – September 2012
    Excerpt: The system, though, is stunningly complex, with many redundancies. Just the idea of so many switches was almost incomprehensible, Dr. Bernstein said.
    There also is a sort of DNA wiring system that is almost inconceivably intricate.
    “It is like opening a wiring closet and seeing a hairball of wires,” said Mark Gerstein, an Encode researcher from Yale. “We tried to unravel this hairball and make it interpretable.”
    There is another sort of hairball as well: the complex three-dimensional structure of DNA. Human DNA is such a long strand — about 10 feet of DNA stuffed into a microscopic nucleus of a cell — that it fits only because it is tightly wound and coiled around itself. When they looked at the three-dimensional structure — the hairball — Encode researchers discovered that small segments of dark-matter DNA are often quite close to genes they control. In the past, when they analyzed only the uncoiled length of DNA, those controlling regions appeared to be far from the genes they affect.
    http://www.nytimes.com/2012/09.....wanted=all

    DNA – Replication, Wrapping & Mitosis – video
    https://vimeo.com/33882804

    The only place Junk DNA really exists is in the imagination of neo-Darwinists!


  30. Actually, Darwinism would not be that difficult to simulate in a computer program, but no Darwinist will ever do it because it will show what they don’t want to see – mutations kill.

    To simulate Darwinism, you would simply write a program that has all of the elements of the simplest life form possible – self-contained code to replicate and metabolise, code to interpret that code, code to execute its own replication/interpretation code, etc. Then place it in a virtual machine where that code competes against other copies of that same code for the resources needed to continue existing. Then you would allow purely random modifications of the code itself during replication, permitting every type of mutation Darwinists believe exists – deletions, modifications, duplications, etc. No direction allowed – neither the code nor the VM may contain anything that arbitrarily picks “winning” or “losing” code, beyond the code’s ability to keep competing for resources. And there must not be any restrictions on the types of mutations that can occur – any change to any section of the code must be allowed.

    Then set it loose and see what happens. Any programmer knows what will happen when you allow mutations (aka errors) randomly to occur in code. The code breaks.

    Of course, a realistic simulation would be much more stringent. You’d have to create a vm with resources, and then randomly inject bits and bytes and wait for a piece of self-replicating code to magically appear. Yeah, right.
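    A minimal sketch of this thought experiment, in Python. Everything here – the toy “organism”, its viability test, and the single-bit mutation operator – is an illustrative assumption, not a real replicator or virtual machine:

```python
import random

# Toy "digital organism": a tiny piece of source code whose only job
# is to copy a genome. (A hypothetical stand-in, not a real replicator.)
SRC = "def replicate(genome):\n    return bytes(genome)\n"

def is_viable(src: str) -> bool:
    """The organism 'survives' if its code still compiles, runs,
    and still copies a genome correctly."""
    ns = {}
    try:
        exec(compile(src, "<organism>", "exec"), ns)
        return ns["replicate"](b"ACGT") == b"ACGT"
    except Exception:
        return False

def mutate(src: str) -> str:
    """Flip one random bit anywhere in the code -- no direction,
    and no protection of 'important' regions, as required above."""
    raw = bytearray(src, "latin-1")
    pos = random.randrange(len(raw))
    raw[pos] ^= 1 << random.randrange(8)
    return raw.decode("latin-1")

random.seed(0)
trials = 300
survivors = sum(is_viable(mutate(SRC)) for _ in range(trials))
print(f"{survivors}/{trials} single-bit mutants still replicate")
```

    On a typical run few, if any, single-bit mutants remain viable – which is exactly the point being made: undirected edits to functional code overwhelmingly break it.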

  31. (This is a great thread because it targets the biggest vulnerability of the theory of evolution, in my opinion.)

    drc466 @29, you hit the nail on the head. The very thing that supposedly drives innovation in Darwinian evolution is what kills it dead before it gets a chance to do anything.

    It never ceases to amaze me how some of the most brilliant people on earth actually believe in this cr@p. It’s either a case of mass stupidity or mass cowardice, or both. Worse, the stupidity is blatant and in your face. The same can be said about materialism.

  32. wd400 makes a valid point that organisms are subject to various mechanisms that do not depend directly on the classical RM+NS mechanism (drift, for example). And I agree that there are good mathematical models that can be brought to bear relating to population genetics.

    However, RM+NS is still considered to be the primary avenue of biological change. More importantly, regardless of whether something results from, say, drift, the original source of the biological novelty is still allegedly what essentially amounts to a random event.

    So, yes, the NS part of the equation is problematic because it may not function perfectly to preserve or to discard. But the much worse problem is the RM part. Does it really have the capacity to create what we see around us?

    It doesn’t matter whether we are relying on natural selection, neutral mutations, genetic drift, sexual selection or otherwise to preserve something. The real question for evolutionists is: What is your evidence that these random changes can do all this work of creating?

    That is what needs to be modeled. It can’t be modeled in even semi-comprehensive detail, because too many particulars are still unknown. But I do agree it can be modeled perhaps in a simple fashion (such as that suggested by commenters above). And it is found utterly wanting.

  33. wd400 #27

    If I am not mistaken, the immune systems of mammals and other animals use programmed DNA rearrangements to produce antibodies. This evidence might help you imagine how intrinsic genetic manipulation is likely exploited by other bio-systems.

    More than that, how many other organelles, cells, tissues, organs, and body plan structures and/or components, would you consider to be composed of large volumes of waste, and why should we expect the genome to break the pattern of precise optimization of resources that we seem to find at every other level of organization?

    Just imagine junk DNA as packets of evolutionary potential, just waiting to be activated or utilized when the time is right.

  34. Just imagine junk DNA as packets of evolutionary potential, just waiting to be activated or utilized when the time is right.

    … and accruing mutations (which are universally bad news according to many IDists…) while they wait.

  35. Mung #19

    One problem with the OP is that it creates a straw-man version of neo-Darwinism. In neo-Darwinism, how one gets from genotype to phenotype is irrelevant.

    Does Darwinism claim to be the cause of the construction of all organisms, or not? (If it doesn’t, we can all go home.) To whatever claims to be the cause of the construction, the construction is not irrelevant.

    If I claimed to be a builder and a client wanting a building asked me how I build, I could not answer “the construction is irrelevant”.

  36. Mung #19

    Another problem with the OP is that it attempts to turn a strength of programming and simulation (abstraction) into a weakness in evolutionary theory. There’s no justification for this. It’s like saying we can’t take every element of an organism’s ecology and put it into a computer therefore evolution is false. It just doesn’t follow.

    The OP doesn’t claim that evolution is false because, and only because, it has not been computer simulated. Evolution is false for countless other reasons. A computer simulation would simply add one more. The OP simply asks “why Darwinian unguided evolution hasn’t yet been computer simulated?”, and it has received many interesting answers. Among them I particularly like the following from drc466 #30:

    Actually, Darwinism would not be that difficult to simulate in a computer program, but no Darwinist will ever do it because it will show what they don’t want to see – mutations kill.

  37. Calling all Darwinists, where is your best population genetics simulation? – September 12, 2013
    Excerpt: So Darwinists, what is your software, and what are your results? I’d think if evolutionary theory is so scientific, it shouldn’t be the creationists making these simulations, but evolutionary biologists! So what is your software, what are your figures, and what are your parameters. And please don’t cite Nunney, who claims to have solved Haldane’s dilemma but refuses to let his software and assumptions and procedures be scrutinized in the public domain. At least Hey was more forthright, but unfortunately Hey’s software affirmed the results of Mendel’s accountant.
    http://www.uncommondescent.com.....imulation/

    Using Numerical Simulation to Test the Validity of Neo-Darwinian Theory – 2008
    Abstract: Evolutionary genetic theory has a series of apparent “fatal flaws” which are well known to population geneticists, but which have not been effectively communicated to other scientists or the public. These fatal flaws have been recognized by leaders in the field for many decades—based upon logic and mathematical formulations. However population geneticists have generally been very reluctant to openly acknowledge these theoretical problems, and a cloud of confusion has come to surround each issue.
    Numerical simulation provides a definitive tool for empirically testing the reality of these fatal flaws and can resolve the confusion. The program Mendel’s Accountant (Mendel) was developed for this purpose, and it is the first biologically-realistic forward-time population genetics numerical simulation program. This new program is a powerful research and teaching tool. When any reasonable set of biological parameters are used, Mendel provides overwhelming empirical evidence that all of the “fatal flaws” inherent in evolutionary genetic theory are real. This leaves evolutionary genetic theory effectively falsified—with a degree of certainty which should satisfy any reasonable and open-minded person.
    http://www.icr.org/i/pdf/techn.....Theory.pdf

    Using Numerical Simulation to Better Understand Fixation Rates, and Establishment of a New Principle – “Haldane’s Ratchet” – Christopher L. Rupe and John C. Sanford – 2013
    Excerpt: We then perform large-scale experiments to examine the feasibility of the ape-to-man scenario over a six million year period. We analyze neutral and beneficial fixations separately (realistic rates of deleterious mutations could not be studied in deep time due to extinction). Using realistic parameter settings we only observe a few hundred selection-induced beneficial fixations after 300,000 generations (6 million years). Even when using highly optimal parameter settings (i.e., favorable for fixation of beneficials), we only see a few thousand selection-induced fixations. This is significant because the ape-to-man scenario requires tens of millions of selective nucleotide substitutions in the human lineage.
    Our empirically-determined rates of beneficial fixation are in general agreement with the fixation rate estimates derived by Haldane and ReMine using their mathematical analyses. We have therefore independently demonstrated that the findings of Haldane and ReMine are for the most part correct, and that the fundamental evolutionary problem historically known as “Haldane’s Dilemma” is very real.
    Previous analyses have focused exclusively on beneficial mutations. When deleterious mutations were included in our simulations, using a realistic ratio of beneficial to deleterious mutation rate, deleterious fixations vastly outnumbered beneficial fixations. Because of this, the net effect of mutation fixation should clearly create a ratchet-type mechanism which should cause continuous loss of information and decline in the size of the functional genome. We name this phenomenon “Haldane’s Ratchet”.
    http://creationicc.org/more.php?pk=46
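    For readers unfamiliar with what a “forward-time population genetics simulation” actually involves, here is a deliberately minimal Wright-Fisher sketch in Python – nothing remotely like Mendel’s Accountant in scope, with every parameter value purely illustrative. It tracks whether a single new beneficial allele fixes in a small population:

```python
import random

def beneficial_allele_fixes(N=100, s=0.05, seed=None):
    """Follow one new beneficial allele (initial count 1) in a
    Wright-Fisher population of N haploids until it fixes (count == N)
    or is lost (count == 0). s is the selection coefficient."""
    rng = random.Random(seed)
    count = 1
    while 0 < count < N:
        # each offspring inherits the beneficial allele with a
        # probability weighted by its 1+s fitness advantage
        p = count * (1 + s) / (count * (1 + s) + (N - count))
        count = sum(rng.random() < p for _ in range(N))
    return count == N

runs = 1000
fixed = sum(beneficial_allele_fixes(seed=i) for i in range(runs))
# classical approximation: P(fixation) is roughly 2s for small s,
# so even beneficial mutations are usually lost to drift
print(f"fixed in {fixed}/{runs} runs (2s would predict ~{int(2 * 0.05 * runs)})")
```

    Even with a real selective advantage, the allele is usually lost to drift – the classical approximation puts its fixation probability near 2s – which is why fixation rates, not mutation rates, are the bottleneck these simulations probe.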

    Here is a short sweet overview of Mendel’s Accountant:

    When macro-evolution takes a final, it gets an “F” – Using Numerical Simulation to Test the Validity of Neo-Darwinian Theory (Mendel’s Accountant)
    Excerpt of Conclusion: This (computer) program (Mendel’s Accountant) is a powerful teaching and research tool. It reveals that all of the traditional theoretical problems that have been raised about evolutionary genetic theory are in fact very real and are empirically verifiable in a scientifically rigorous manner. As a consequence, evolutionary genetic theory now has no theoretical support—it is an indefensible scientific model. Rigorous analysis of evolutionary genetic theory consistently indicates that the entire enterprise is actually bankrupt.
    http://radaractive.blogspot.co.....ution.html

    A bit more detail on the history of the junk DNA argument, and how it was born out of evolutionary thought, is here:

    Functionless Junk DNA Predictions By Leading Evolutionists
    http://docs.google.com/View?id=dc8z67wz_24c5f7czgm

    As to ‘drift’:

    Thou Shalt Not Put Evolutionary Theory to a Test – Douglas Axe – July 18, 2012
    Excerpt: “For example, McBride criticizes me for not mentioning genetic drift in my discussion of human origins, apparently without realizing that the result of Durrett and Schmidt rules drift out. Each and every specific genetic change needed to produce humans from apes would have to have conferred a significant selective advantage in order for humans to have appeared in the available time (i.e. the mutations cannot be ‘neutral’). Any aspect of the transition that requires two or more mutations to act in combination in order to increase fitness would take way too long (>100 million years).
    My challenge to McBride, and everyone else who believes the evolutionary story of human origins, is not to provide the list of mutations that did the trick, but rather a list of mutations that can do it. Otherwise they’re in the position of insisting that something is a scientific fact without having the faintest idea how it even could be.” Doug Axe PhD.
    http://www.evolutionnews.org/2.....62351.html

    Michael Behe on the theory of constructive neutral evolution – February 2012
    Excerpt: I don’t mean to be unkind, but I think that the idea seems reasonable only to the extent that it is vague and undeveloped; when examined critically it quickly loses plausibility. The first thing to note about the paper is that it contains absolutely no calculations to support the feasibility of the model. This is inexcusable. – Michael Behe
    http://www.uncommondescent.com.....evolution/

  38. corrected link:

    Using Numerical Simulation to Better Understand Fixation Rates, and Establishment of a New Principle – “Haldane’s Ratchet” – Christopher L. Rupe and John C. Sanford – 2013
    http://www.creationicc.org/abstract.php?pk=293

    After reading through this thread, I was bothered by one curious detail. Even supposing the Programmer could create the simulation, there is still the problem that the simulation requires a designer to run it.

    The program would have to have specified rules, such as which new strings of code qualify as “living” and functional. This also raises the issue of the initial organism. Is it pre-designed, or must we expect it to emerge from the simulation? If it is expected to emerge, at what point can one distinguish an output representing the inorganic from one representing the organic?

    If a method is used, such as introducing new packets of information (to represent, perhaps, early atmospheric changes, etc.), and results are seen, then we are still only showing that observation, with a carefully controlled “randomization”, led to the result.

    It seems silly that NDEs entertain the simulation idea. Any simulation would require a designer creating an environment favorable to life, since living things cannot emerge from chaos.

  40. Why not build a semi-complex replicating program that copies itself, and place it in a virtual environment where it has access to program bits, bytes or whatever…and competes with others. Perhaps, even include in the replicator program, a 3D representation.

    It seems to me, that this would at least test the creative power of RM + NS.

    My prediction is that the code will end up smaller than the original replicator… not a replicator with more novel and more complex survival features (physical traits or behaviors).

    A random search will eventually find something if given enough time and allowed to run indefinitely.

    But if extinction is modeled, any reasonable computer simulation will show what a dead end RM + NS is – because a few bad mutations will cripple the self-replication, ending the experiment.

    If there is no possibility of failure, the simulation tells us nothing about real world results, where failures are known to be common (extinct species).
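    A hedged sketch of that prediction in Python. Every number here – the genome size, the mutation rate, the damage-to-repair ratio, the viability cutoff – is an illustrative assumption, not measured biology:

```python
import random

# Toy model of the prediction above: each replicator carries G sites
# that must stay functional for it to copy itself, and we assume a
# mutational "hit" is far more likely to break a working site than
# to repair a broken one.
def run(G=50, pop=100, mut_rate=0.02, repair_ratio=0.001,
        max_gens=500, seed=1):
    rng = random.Random(seed)
    population = [0] * pop              # broken-site count per replicator
    sizes = []
    for _ in range(max_gens):
        offspring = []
        for broken in population:
            if broken >= G // 2:        # too damaged to self-replicate
                continue
            for _ in range(2):          # two copies per viable parent
                b = broken
                for _ in range(G):      # each site may mutate on copying
                    if rng.random() < mut_rate:
                        if b > 0 and rng.random() < repair_ratio:
                            b -= 1      # rare back-mutation
                        else:
                            b += 1      # common damage
                offspring.append(b)
        rng.shuffle(offspring)          # finite resources: keep `pop`
        population = offspring[:pop]
        sizes.append(len(population))
        if not population:
            break                       # extinction
    return sizes

sizes = run()
print("generations survived:", len(sizes))
```

    With a deleterious bias and nothing to select on below the viability cutoff, the damage count ratchets upward and the run ends in extinction – the dead end described above.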

  41. Here is a brief proposal to Simulate Evolution using Programming Artifacts.

    A. Let’s consider a well performing Chess Program (CP) – that let’s say usually wins chess games against human chess masters.

    B. Let’s make relatively easy modifications to the CP so that two instances of the Chess Program can play against each other until one wins or a draw is declared.

    C. Let’s consider a Population of Chess Programs (PCP) where initially all CPs in the Population are identical copies of the same Chess Program under discussion. Each Copy of the CP has a unique Identity and an individual “evolution life” that will be described farther down.

    D. Let’s create a Chess Program Fight and Survival (CPFS) programmed Framework (F) by which:


    a. each individual CP: CP(i) can play a chess game with another individual CP: CP(k), selected randomly by the Framework F;

    b. the result of a game increases the Loss Count (LC) recorded for the losing CP.

    c. In case of a draw the loss count stays unchanged for the two CPs.

    d. After a defined Dying Threshold (DT) of losses (let’s say 20 losses) is recorded for a CP, that CP “dies” and exits the Chess Program Fight and Survival – after its “life”, “game losses” and “demise” are carefully recorded by the Framework for that particular (individual) CP.

    E. The “evolution” is represented by “random mutations” in a particular CP.


    a. In this context it is proposed that a single “random mutation” consists in changing the value of N consecutive bits of the executable binary of the CP to a random value (RV) of also N consecutive bits starting from a Randomly selected Offset (O) counted from the beginning of the CP executable binary.

    b. The Framework (F) will “inject” a pre-determined number of such “random mutations” (let’s say 10) in each individual CP after every (let’s say) 5 games.

    c. In case one or more “random mutations” make an individual CP non-responsive (i.e. it does not respond in the time granted for a chess move), the Framework F will record a loss for that individual CP.

    d. Similarly, if an “evolved” individual CP is not even able to start a chess game (or to provide the expected response at the start of a chess game), the Framework (F) records a loss for that individual CP (and might even declare it “dead” even if the “Dying Threshold” of losses was not reached by that CP).

    F. The Chess Program Fight and Survival (CPFS) competition will go on until only 0.01% of the original population remains “alive” (having avoided death by consistently beating other “less/more evolved” individuals).

    G. Half of the Population of Chess Programs (PCP) will not be subjected to “random mutations” and will preserve unaltered their original executable binary code during the whole Chess Program Fight and Survival (CPFS) competition.

    H. Hypothesis A: If Darwinian Evolution is true (and works), then it is expected that NONE of the CPs that were spared the “evolution” (i.e. were not subjected to random mutations) will be among the surviving 0.01% of the population of CPs.

    I. Hypothesis B: If Darwinian Evolution is false (i.e. does not work) then it is expected that:


    a. All CPs in the surviving 0.01% population are from the “non-mutated” population

    b. More so: it is expected that by the time the original PCP has halved during the competition, the large majority of surviving individual CPs will be from the “non-mutated” population.

    NOTES:

    • A lot of Variations on this theme can be imagined and played out.

    • Although this is not a simulation at the level of the biological, chemical and organizational details of actual organisms (who could dream of such a simulation? it is not possible), I contend that it captures and emulates quite realistically what “evolution” is expected to achieve:

    • It affects, through random mutations, a very complex system that is known to function from the start with precision and effectiveness (in winning chess games). Similarly, it is legitimately assumed that the evolution of biological life started only on a “self-replicating”, high-precision machinery substrate.

    • The mutations are random and there is no “design” involved in “guiding the evolution” in any way (all values are random).

    • There is a fairly representative competition and a fair fight for survival – and there are expectations as legitimate as in Darwinian Evolution that the “most evolved” CPs will win the Chess Program Fight and Survival (CPFS).
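    The framework above can be sketched in miniature. Since a real chess engine cannot be included here, the sketch below substitutes a trivial one-line “scorer” for the Chess Program, and simplifies rule E.b to one bit-flip per game played; everything else (random pairing, loss counts, the dying threshold, the protected half of the population) follows the proposal. All names and parameter values are illustrative:

```python
import random

rng = random.Random(42)

# Hypothetical stand-in for a chess engine "binary": a tiny scorer
# whose source bytes we mutate, per the proposal.
CP_SRC = b"def move_score(position):\n    return sum(position) % 64\n"

def load(binary):
    ns = {}
    exec(compile(binary.decode("latin-1"), "<cp>", "exec"), ns)
    return ns["move_score"]

def flip_bit(binary):
    raw = bytearray(binary)
    pos = rng.randrange(len(raw))
    raw[pos] ^= 1 << rng.randrange(8)
    return bytes(raw)

class CP:
    def __init__(self, cp_id, mutable):
        self.id, self.mutable = cp_id, mutable
        self.binary, self.losses = CP_SRC, 0

def play(a, b):
    """Returns the loser, or None for a draw. A CP whose mutated
    binary no longer runs forfeits (rules E.c / E.d above)."""
    position = [rng.randrange(64) for _ in range(16)]
    scores = []
    for cp in (a, b):
        try:
            scores.append(int(load(cp.binary)(position)))
        except Exception:
            return cp                      # "non-responsive"
    if scores[0] == scores[1]:
        return None
    return a if scores[0] < scores[1] else b

POP, DT = 100, 20                          # population, dying threshold
pool = [CP(i, mutable=(i < POP // 2)) for i in range(POP)]  # rule G
for _ in range(20000):                     # bounded run, for illustration
    if len(pool) < 2:
        break
    a, b = rng.sample(pool, 2)
    loser = play(a, b)
    if loser is not None:
        loser.losses += 1
        if loser.losses >= DT:             # rule D: the CP "dies"
            pool.remove(loser)
    for cp in (a, b):                      # rule E.b, simplified
        if cp.mutable and cp in pool:
            cp.binary = flip_bit(cp.binary)

print(f"alive: {len(pool)}, from the mutated half: "
      f"{sum(cp.mutable for cp in pool)}")
```

    In runs of this toy, the mutated half tends to go non-responsive and die out while the protected half survives on draws – the outcome Hypothesis B anticipates.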

  42. InVivoVeritas

    Interesting idea. However, consider that binary executables, such as those we find on our computers, are usually very fragile under random variations. In practice just a few random bit mutations crash the code and, depending on the program and the operating system, could even halt the computer. So you can bet that the outcome of your CPFS simulation would be Hypothesis B: Darwinian Evolution is false.

    Luckily the biological codes are more robust than… Windows and Office, from this point of view. But this of course doesn’t mean that random variations on them can create new organization, as neo-Darwinism claims.

  43. Niwrad,

    Maybe a variation of his proposal written in a scripting language would work, so there would be no O/S crashes… and/or mutations not at the bit level but at the byte level (well, that might still crash easily)… or at the expression level.

    That is, mutate by substituting in valid random expressions – forcing valid lexical structure (e.g. syntax, coding rules, semantics). It may still crash, but not the system.

    So, instead of bits being your primitive mutations, you move up a level. It would still be impossible for it to evolve new beneficial logical functions, I think (not to be confused with defining a function, which in programming can be one line of code).
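    That expression-level idea can be sketched with Python’s `ast` module (Python 3.9+ for `ast.unparse`); the sample function and the operator-swap mutation are illustrative. Because mutation happens on the syntax tree, every mutant is guaranteed to re-parse – only its meaning can change:

```python
import ast
import random

rng = random.Random(7)

SRC = "def score(x, y):\n    return 3 * x + y - 1\n"

class SwapOneOperator(ast.NodeTransformer):
    """Expression-level mutation as suggested above: replace one
    binary operator with a randomly chosen one (possibly the same),
    so the mutant always has valid syntax."""
    OPS = (ast.Add, ast.Sub, ast.Mult)

    def __init__(self):
        self.done = False

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if not self.done:
            node.op = rng.choice(self.OPS)()   # swap in a valid operator
            self.done = True
        return node

def mutate(src):
    tree = SwapOneOperator().visit(ast.parse(src))
    return ast.unparse(tree)                   # Python 3.9+

mutant = mutate(SRC)
ns = {}
exec(mutant, ns)       # guaranteed-valid syntax: it always compiles
print(mutant.strip().splitlines()[-1], "->", ns["score"](2, 5))
```

    This keeps the program from crashing the system, exactly as suggested – but note that syntactic validity says nothing about whether the mutant computes anything useful.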

  44. Niwrad,

    Thanks for your interesting topic and for your comments to my entry.

    The question is: why should we not think that a living organism (maybe orders of magnitude more complex than our Chess Program) is also sensitive to, and mostly negatively affected by, random mutations?

    We know for sure that a biological organism functions efficiently and precisely, and is a very complex composition of interacting parts that, together, as a system of sub-systems, metabolizes successfully, replicates successfully, etc. Why should a random mutation of any of its sub-systems not (most probably) negatively affect “the order” and “the working plan” that it uses (no matter how this “working plan” came to be)?

    I propose that my simulation model is quite adequate from this point of view.

    It’s quite probable that a few (or repeated) mutations of the Chess Program binary will crash the Program, but not the Operating System, which, if well designed, should be isolated from application crashes or failures. The computer itself should not crash either.

    It is true that we can speculate that biological organisms are more resilient in defending/protecting against random mutations; I speculate this is because they may have very complex defensive mechanisms.

    The fundamental questions for the proposed Evolution Simulator are:

    Q1. Does it simulate random mutations reasonably well?

    Q2. Does it simulate natural selection (and the fight for survival) reasonably well?

    Q3. Does it minimally emulate an Irreducibly Complex System (ICS, in Behe’s sense), which we know a living organism really is? Our Chess Program definitely is an ICS.

    Q4. Does it provide “tuning knobs” to allow playing various “Simulation Scenarios”?
    Yes, by changing various parameters: the length in bits of a random mutation; the number of mutations before each game (or set of games); the number of losses before the Framework declares an individual Chess Program “dead”; etc.
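    For what it’s worth, Q4’s “tuning knobs” could be collected in a single scenario structure. The sketch below is purely illustrative (all names and defaults are invented here): it shows the knobs listed above plus a bit-level mutator in the spirit of the proposal.

```python
import random
from dataclasses import dataclass

# Hypothetical "tuning knobs" for the proposed simulator; names and
# defaults are illustrative only, not taken from any real framework.
@dataclass
class SimulationScenario:
    mutation_length_bits: int = 1   # bits flipped per mutation event
    mutations_per_round: int = 5    # mutations applied before each set of games
    losses_until_dead: int = 3      # losses before an individual is declared "dead"
    population_size: int = 16

def mutate(binary: bytearray, scenario: SimulationScenario, rng) -> bytearray:
    """Flip `mutation_length_bits` random bits in the program image."""
    for _ in range(scenario.mutation_length_bits):
        i = rng.randrange(len(binary) * 8)
        binary[i // 8] ^= 1 << (i % 8)
    return binary

rng = random.Random(0)
image = bytearray(b"\x00" * 64)   # stand-in for a compiled chess engine
mutate(image, SimulationScenario(), rng)
```

    Running different “Simulation Scenarios” would then just mean constructing different `SimulationScenario` instances and comparing outcomes.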

  45. Jguy at #43

    I thought about the level at which the Random Mutations should be “injected”:

    a. At the Programming Language Level (Perl, C, Java, etc.). This is a non-trivial problem because the “Mutator” may need to become “Programming Language Aware” and replace/modify one or a group of language statements with another group that, although it may not make sense from the point of view of what it needs to accomplish, must still allow:

    1. A successful compilation of the “mutated” Chess Program (CP)

    2. A Successful Build of the CP.
    3. A Successful Execution (Start) of the CP.

    b. At the Executable Binary level. This is much simpler to accomplish, and it still preserves a reasonable analogy with “random mutations” in the DNA of a cell/organism.

    I think that the Proposed Simulator, far from being perfect, is still a Reasonable Approach that can be defended, as I tried to do in my comment above at #44.

    I believe also that this Proposal, as it is, and because it carries a Strong Analogy with the simulated target in essential aspects, may provide us with a good “projection” and understanding of the Enormity (and logical Impossibility) of the task that Darwinian Evolution claims to be able to achieve.
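    A toy version of option (a)’s three-step filter (compile, build, start) might look like the sketch below; `mutate_source` is a deliberately trivial stand-in for a real “Programming Language Aware” mutator, and Python’s built-in `compile` stands in for the whole compile-and-build pipeline.

```python
import random

def mutate_source(src, rng):
    """Toy mutator: swap '+' and '-' at one random occurrence."""
    ops = [i for i, c in enumerate(src) if c in "+-"]
    if not ops:
        return src
    i = rng.choice(ops)
    return src[:i] + ("-" if src[i] == "+" else "+") + src[i + 1:]

def next_viable_mutant(src, rng, max_tries=100):
    """Keep mutating until the result compiles (steps 1-3 in miniature)."""
    for _ in range(max_tries):
        mutant = mutate_source(src, rng)
        try:
            compile(mutant, "<mutant>", "exec")  # stands in for compile+build
            return mutant
        except SyntaxError:
            continue
    return None  # no viable mutant found within the budget

rng = random.Random(2)
viable = next_viable_mutant("score = wins + draws - losses", rng)
```

    Note that filtering for compilable mutants only guarantees the program *starts*; whether it still plays sensible chess is an entirely separate question.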


  46. b. At the Executable Binary level. This is much simpler to accomplish – and still preserve a reasonable analogy with “random mutations” in the DNA of a cell/organism.

    If I’m not mistaken, about half of the amino acid positions in proteins can be substituted with another amino acid, especially one with similar chemical traits (e.g. hydrophobic or hydrophilic, and/or whatever other generic property)… of course, that only applies to coding regions of DNA.

    So, assuming that is the general rule, would you be able to change 50% of the bits in compiled code without crashing the program?

    I’m not sure how important that is to be representative. I have not thought a lot about it.

  47. p.s. I don’t think the entire program needs to be modified. You could simply consider a set of methods and logic rules that act as building blocks… enough primitives to build almost any logical process. For example, if I link an AND function and an OR function, it will not error out. You just get useless output… that is, if it isn’t helping, in this case, to beat other chess programs.
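    That point can be shown directly: randomly wiring AND/OR/XOR primitives together always produces something that runs, just not necessarily anything useful. A minimal sketch (the gate set and the `random_circuit` helper are illustrative only):

```python
import random

# Two-input boolean primitives acting as "building blocks".
PRIMITIVES = [lambda a, b: a and b,   # AND
              lambda a, b: a or b,    # OR
              lambda a, b: a != b]    # XOR

def random_circuit(depth, rng):
    """Compose random two-input gates into one function of two inputs."""
    gate = rng.choice(PRIMITIVES)
    if depth == 0:
        return gate
    left = random_circuit(depth - 1, rng)
    right = random_circuit(depth - 1, rng)
    return lambda a, b: gate(left(a, b), right(a, b))

rng = random.Random(3)
f = random_circuit(3, rng)
# Never raises an error for any boolean input -- but whether the output
# is *useful* (e.g. helps beat rival chess programs) is another matter.
outputs = [f(a, b) for a in (False, True) for b in (False, True)]
```

    Well-formedness comes for free at this level; usefulness is the part that selection would still have to find.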

  48. pps. To illustrate better… in such a chess program experiment, it doesn’t seem you really need to modify the skeleton of the chess program; rather, you just need to modify the function(s) that evaluate how the program calculates the value of possible positions.

  49. ppps. And I would not think you want to make it so that it just tunes existing functions… trial and error can find settings that are finer tuned… you need to allow it to look for novel functions (i.e. new complex information).

  50. InVivoVeritas et al,

    The problem with using computer software as an analogue for living organisms is that software is algorithmic, i.e., it is a sequential chain of instructions. Break any link in the chain and the entire program crashes. Living organisms, by contrast, are non-algorithmic parallel computing systems and, as such, are very fault tolerant. A malfunction in one or even many components will rarely cause a catastrophic failure. They tend to degrade gracefully.

    Living system simulations must take this into account, in my opinion.

  51. The problem with using computer software as an analogue for living organisms is that software is algorithmic, i.e., it is a sequential chain of instructions. Break any link in the chain and the entire program crashes. Living organisms, by contrast, are non-algorithmic parallel computing systems and, as such, are very fault tolerant. A malfunction in one or even many components will rarely cause a catastrophic failure. They tend to degrade gracefully.

    Parallel computing is more difficult than serial computing. The lack of fault tolerance in serial computing is a conscious trade-off of cost vs. reliability rather than a penalty of serial computing. Typical computing environments aren’t hazardous to computers.

    Satellites and spacecraft are some areas where there is hardware/software hardening to guarantee functionality despite adverse environments.

    The value of comparing computer software to living organisms is not that they’re close equivalents; it’s that the known-human-design is much simpler than life and provides a floor for the minimum amount of “work” needed to accomplish what the more complex design does.

    While some of the graceful degradation observed in life may be a function of the molecules (“harmless” amino acid substitutions), a substantial part is from system “design” which is a function of the information encoded in the system, and not of the molecular properties of the materials. (ex: DNA checking/repairing molecules)

  52. SirHamster:

    The value of comparing computer software to living organisms is not that they’re close equivalents; it’s that the known-human-design is much simpler than life and provides a floor for the minimum amount of “work” needed to accomplish what the more complex design does.

    Well said.

  53. Mapou @ 50

    I don’t think it matters. You’re really only looking for hopefully beneficial mutations, adding them together, and seeing whether there is such a detectable step-wise path to higher complexity and function, or whether there is not. Perhaps that’s too simplistic, but that’s how it seems to me.

    Another maybe: if the system is more resilient with multiple threads running (which I think is a point that could still be debated), wouldn’t this mean the selection aspect of the process would be even more unlikely to identify a beneficial effect when the organism is compared to rivals?

  54. Off topic but just had a quick question: Is symbiogenesis, epigenetics, and saltationism part of the Modern evolutionary synthesis?

  55. JGuy @53:

    Another maybe: if the system is more resilient with multiple threads running (which I think is a point that could still be debated), wouldn’t this mean the selection aspect of the process would be even more unlikely to identify a beneficial effect when the organism is compared to rivals?

    This is an interesting point. There is clearly parallel computing going on,* both inside cells and between cells. And yes, that allows for some robustness (one cell dies, for example, and the whole organism doesn’t come to a screeching halt).

    But it does make it even more challenging to (a) get a mutation captured across the board in those individual places where it is needed/relevant, and (b) get things integrated across the whole.

    One thing we can say with near certainty, based on our experience with technology, is that increasing levels of functional, coordinated complexity make it harder, not easier, for any given change to work seamlessly across the whole. The whole point of modularity is to try to deal with the escalating cascade of complexity that would otherwise obtain.

    —–

    * Parallel computing in the sense of various instruction sets being executed at multiple locations simultaneously clearly occurs pervasively throughout the organism.

    Parallel computing in the sense of multithreading is, I believe, still more of an open question. Arguably, one could say that a form of simple multithreading occurs when, for example, multiple transcriptions from a large DNA section are occurring simultaneously. One might also argue that the creation of multiple polypeptide strands from a single mRNA transcript is a form of multithreading.

    Nevertheless, I don’t know if we could call these true multithreading events or if there even is true multithreading occurring with molecular machines. That would be a remarkable achievement if it does happen!

    (Incidentally, we need to distinguish between true multithreading and the existence of protocol hierarchies, such as when the cellular reproduction mechanism initiates a temporary shutdown of transcription activity so that the DNA can be faithfully copied. The latter is more of a break-in override protocol, than true multithreading.)

  56. Jguy at #46

    If I’m not mistaken, about half of the amino acid positions in proteins can be substituted with another amino acid, especially one with similar chemical traits (e.g. hydrophobic or hydrophilic, and/or whatever other generic property)… of course, that only applies to coding regions of DNA.

    So, assuming that is the general rule, would you be able to change 50% of the bits in compiled code without crashing the program?

    I’m not sure how important that is to be representative. I have not thought a lot about it.

    Several points here:

    * I am not a biologist, but there is a chance that changing those amino acid positions, even with similar ones, may still have negative side effects, possibly far removed from the place and time of the change. It is hard to be sure of anything in biology, except that most probably things are as they are for a very good (at least initial) reason.

    * When talking about the Chess Program (CP) binary executable, we should assume that this binary contains not only the executable code proper but also the CP database of moves and known strategies, plus any other configuration and metadata information (the structure of the chess board, the description of valid moves for each piece, etc.). It is known that a key element of the success of Chess Programs is, among other things, an extensive “chess knowledge” database.

    * Now, when a “random mutation” is injected, it may land in the “database space” of the CP binary or in its “configuration space”. This means that some (many) such random mutations may not degrade the Program directly (or make it immediately crash). If the random change modifies a “chess move” in the database that is seldom used (or not used at all in the sequence of games played by that particular mutated Program), this may imply a “graceful degradation” of that program, quite similar to what you mentioned. If the ratio of Program Space to Database Space in the Binary is (let’s say) 1/4, then 80% of mutations may not be immediately pernicious.
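    The arithmetic behind that last ratio is simple: with a 1:4 code-to-data split, 80% of uniform random bit-flips land outside the executable code proper. The sizes below are illustrative only.

```python
# Back-of-envelope: if code and data share one binary image, the chance a
# uniform random bit-flip lands in the data region is just that region's
# share of the image.  Sizes are invented for illustration.
code_bytes = 200_000    # executable code proper
data_bytes = 800_000    # opening book, configuration, metadata
total = code_bytes + data_bytes

p_hits_data = data_bytes / total
print(p_hits_data)  # 0.8 -- such mutations may only degrade play "gracefully"
```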

  57. Mapou at #50

    The problem with using computer software as an analogue for living organisms is that software is algorithmic, i.e., it is a sequential chain of instructions. Break any link in the chain and the entire program crashes. Living organisms, by contrast, are non-algorithmic parallel computing systems and, as such, are very fault tolerant. A malfunction in one or even many components will rarely cause a catastrophic failure. They tend to degrade gracefully.

    Living system simulations must take this into account, in my opinion.

    You are right to say that there are very significant differences between the computer programs (computing systems) and the biological organisms. I have a few comments though on this thought.

    * I am sure that there are many “parallel” (micro) resources available in biological systems, such that a partial failure can be masked by unaffected resources.

    * At the macro scale there are still real “heart failures”, “kidney failures” or strokes.

    * The “qualitative similarities” between the Simulator (Programming Artifacts in this Proposal) and the Simulated are:

    – both are Irreducibly Complex Systems (made of a large number of interacting components or sub-systems that are precisely coordinated)

    – it is logically similar for the two that any (or at least many) “mutations” in a perfect, harmoniously and finely tuned system may affect (compromise?) that nice working cooperation between parts.

    – my previous comment at #56 identified certain mutations that can induce also graceful degradations.

    These qualitative similarities between the Simulator and the Simulated may convey a reasonable level of realism to the Simulation.

  58. InVivoVeritas:

    . . . there is a chance that changing those amino acid positions – with similar ones – may still have negative side effects – possibly far removed from the place and time of change.

    Well said. I’ve raised this point in the past as well, and I think it is worth remembering, at least in the back of our mind.

    Most of the time I’m willing to grant for purposes of discussion and for assessing probabilities that many substitutions will be neutral.

    However, the fact remains that we do not know if all or even most of these allegedly neutral substitutions are indeed neutral.

    There is a whole host of downline operations that could, potentially, be affected by the substitution. The translation process itself often involves multiple snips and concatenations, and error-correction mechanisms. So a substitution in a nucleotide base may end up being neutral, not because it was initially neutral, but because the translation process picked up the change and corrected it.

    More intriguing would be if the translation process picked up the change and acted differently as a result — a different cut, a different concatenation, etc.

    Furthermore, if amino acids can come in different forms (we’re barely starting to grasp some of the quantum effects in the cell) or be affected by added tags, then there could be other downline effects. For example, the protein folding process is not, contrary to frequent assertions, simply an automatic process; it is a moderated process, with its own protocols and error-detection and correction mechanisms. Do we know whether there are any changes in the folding process, the completed fold, or post-folding error detection and destruction with particular nucleotide substitutions?

    Additionally, there may be stretches of DNA that are involved in multiple processes and/or with multiple reading frames. In those cases, we can’t assume that the mutations would be 100% neutral.

    Anyway, just throwing some possible ideas out for consideration. I do agree the genetic code seems relatively robust to perturbation, and it might indeed be the case that many nucleotide substitutions are 100% neutral and invisible to the organism. But it is perhaps too soon and our knowledge too limited to allow for such a confident early assertion.

  59. This may mean that some (many) of such random mutations may not degrade the Program directly (or make it immediately crash).

    This is precisely the limitation of knockout experiments.

    Furthermore, even with catastrophic changes, the change will not appear catastrophic until the particular routine is called. This is extremely common with complex technologies, and we see it all the time with our computers, our cars, and so on.

    Finally, there is the issue of redundancy. If we knock out two of the gyroscopes on the Hubble Telescope and it still works, does it mean those two gyroscopes served no purpose? Of course not.

    There are lots of ways that a particular mutation can be harmful, but the harm can lie dormant or be hidden for a time. Indeed, it is quite possible that a reasonable fraction of the allegedly neutral mutations could turn out to be “quietly harmful” rather than purely neutral.

  60. I suppose the degree of harm caused by a mutation depends on where it happens in the genetic code. It is certain that the genome is organized hierarchically and that most DNA sequences are used for regulatory (control) purposes. A mutation in a regulatory sequence high in the hierarchy is likely to have severely deleterious, if not fatal, consequences.

    It’s a good thing that error-correcting mechanisms are in place; otherwise no living organism would survive. This is an insurmountable problem for Darwinists, because the evolutionary process depends on random mutations, but, if given free rein, truly random mutations would quickly destroy the organism.

    Let us assume, for the sake of argument, that the genome drives the construction of the organism, i.e. the genome is a set of assembly instructions for embryo development. (Personally I believe that is reductive; in my opinion there are many unknown levels of organizational direction acting upon the genome.)

    Usually instructions contain symbolic links. An error in a symbolic link can be more devastating than a direct error at the level of the material result. Example: suppose an embryonic instruction reads “at time X add Y cardiac cells to zone Z of the heart”, and a mutation causes an error in the last word, changing “heart” to “brain”. Then Y cardiac cells would go into the brain, where they would likely act as a cancer.
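    The heart/brain example can be made concrete with a trivial sketch (the instruction fields below are purely illustrative): a single-symbol mutation does not slightly perturb the result; it redirects the whole operation.

```python
# A symbolic "embryonic instruction" as a simple record; field names
# are invented here for illustration only.
instruction = {"time": "X", "cells": "cardiac", "count": "Y", "target": "heart"}

# One mutated symbol in the instruction...
mutated = dict(instruction, target="brain")

# ...and the material outcome is not "a slightly different heart" but
# cardiac cells delivered to the wrong organ entirely.
```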

    What I mean is that to reason in terms of instructions doesn’t reduce the danger of mutations/errors. Indeed the contrary. Mutations/errors in the instructions are even more dangerous than direct errors in the final molecules.

    Bottom line: Darwinism from an informatics point of view is even more absurd than thought from other perspectives.

  62. equate65:

    Off topic but just had a quick question: Is symbiogenesis, epigenetics, and saltationism part of the Modern evolutionary synthesis?

    Not really off topic at all. We’re wondering what it would take to simulate evolution in a computer. Whether it’s even possible. And certainly we’d want to ask if these need to be taken into consideration in any simulation.

    But to answer your question, no. To neo-Darwinism, aka the Modern Synthesis, development is a black box. It’s a theory about how genes spread through populations, not a theory about how phenotypes are derived from genotypes.

  63. Chastising scientists for refusing to simulate evolution would be like chastising engineers for refusing to design an electric car. Anyone with half a brain can verify in a few minutes that the premise is untrue. Scientists have been simulating evolution for decades.

    One of the advantages of simulations is the capacity to explore the behavior of models too complex to solve mathematically. I think some of the most interesting models are ones that use RNA folding. For instance, see The Ascent of the Abundant: How Mutational Networks Constrain Evolution by Cowperthwaite, et al. Unlike traditional models in population & evolutionary genetics that ignore development, models based on RNA folding implement a folding algorithm. The RNA is encoded by a gene (which could be RNA or DNA), its folded structure is computed, and then its fitness is computed based on some function of the folded structure. Add in mutation, recombination, etc., and you have the basis of a sophisticated simulation.
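    For readers who want the flavor of such genotype-to-phenotype models without the real folding algorithm, here is a deliberately crude toy: the “fold” below just counts complementary mirror pairs, nothing like a true RNA secondary-structure computation, and the mutation-selection loop keeps any mutant whose score does not decrease.

```python
import random

BASES = "ACGU"
PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C")}

def fold_score(rna):
    """Crude phenotype: how many positions could pair with their mirror."""
    n = len(rna) // 2
    return sum((rna[i], rna[-1 - i]) in PAIRS for i in range(n))

def mutate(rna, rng):
    """Substitute one random position with a different base."""
    i = rng.randrange(len(rna))
    return rna[:i] + rng.choice(BASES.replace(rna[i], "")) + rna[i + 1:]

rng = random.Random(4)
genotype = "".join(rng.choice(BASES) for _ in range(20))
best = fold_score(genotype)
for _ in range(500):                    # mutation + selection
    mutant = mutate(genotype, rng)
    if fold_score(mutant) >= best:      # keep neutral-or-better mutants
        genotype, best = mutant, fold_score(mutant)
```

    The real models cited above differ in essentially every detail (actual folding, populations, recombination), but the genotype → computed phenotype → fitness pipeline has this same shape.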

    Results from such models are not simply reiterating old Darwinian ideas about selection. The title of the paper cited above invokes a concept of abundance in state-space, similar to an argument made earlier by Stuart Kauffman. This is not fundamentally a Darwinian argument.

    By the way, Behe is mistaken to suggest that there has been no analysis of constructive neutral evolution. Apparently he just read a newsy piece by Lukes, et al without bothering to read the original piece by Stoltzfus, 1999, in which “constructive neutral evolution” was first proposed. This included simulations of an evolutionary model that later (due to an independent proposal by Force, Lynch, et al) became known as the DDC (duplication-degeneration-complementation) or “neutral subfunctionalization” model. Today you can’t read any of the gene duplication literature without coming across this model. The original papers in 1999 & 2000 have received thousands of citations in the scientific literature.

  64. arlin #63

    When asking “why Darwinian unguided evolution hasn’t been yet computer simulated?” obviously I meant “computer simulated realistically and successfully”.

    I know that “scientists have been simulating evolution for decades”. But these simulations have been either not realistic or not successful or both.
