Uncommon Descent Serving The Intelligent Design Community

Behe’s “Multiple mutations needed for E. coli”



An interesting paper has just appeared in the Proceedings of the National Academy of Sciences, “Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli”. (1) It is the “inaugural article” of Richard Lenski, who was recently elected to the National Academy. Lenski, of course, is well known for conducting the longest, most detailed “lab evolution” experiment in history, growing the bacterium E. coli continuously for about twenty years in his Michigan State lab. For the fast-growing bug, that’s over 40,000 generations!

I discuss Lenski’s fascinating work in Chapter 7 of The Edge of Evolution, pointing out that all of the beneficial mutations identified from the studies so far seem to have been degradative ones, where functioning genes are knocked out or rendered less active. So random mutation much more easily breaks genes than builds them, even when it helps an organism to survive. That’s a very important point. A process which breaks genes so easily is not one that is going to build up complex coherent molecular systems of many proteins, which fill the cell.

In his new paper Lenski reports that, after 30,000 generations, one of his lines of cells has developed the ability to utilize citrate as a food source in the presence of oxygen. (E. coli in the wild can’t do that.) Now, wild E. coli already has a number of enzymes that normally use citrate and can digest it (it’s not some exotic chemical the bacterium has never seen before). However, the wild bacterium lacks an enzyme called a “citrate permease” which can transport citrate from outside the cell through the cell’s membrane into its interior. So all the bacterium needed to do to use citrate was to find a way to get it into the cell. The rest of the machinery for its metabolism was already there. As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.” (1)

Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn’t tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski’s mutant will turn out to be either this gene or another of the bacterium’s citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn’t yet tracked down the mutation.)

The major point Lenski emphasizes in the paper is the historical contingency of the new ability. It took trillions of cells and 30,000 generations to develop it, and only one of a dozen lines of cells did so. What’s more, Lenski carefully went back to cells from the same line he had frozen away after evolving for fewer generations and showed that, for the most part, only cells that had evolved at least 20,000 generations could give rise to the citrate-using mutation. From this he deduced that a previous, lucky mutation had arisen in the one line, a mutation which was needed before a second mutation could give rise to the new ability. The other lines of cells hadn’t acquired the first, necessary, lucky, “potentiating” (1) mutation, so they couldn’t go on to develop the second mutation that allows citrate use. Lenski argues this supports the view of the late Stephen Jay Gould that evolution is quirky and full of contingency. Chance mutations can push the path of evolution one way or another, and if the “tape of life” on earth were re-wound, it’s very likely evolution would take a completely different path than it has.

I think the results fit a lot more easily into the viewpoint of The Edge of Evolution. One of the major points of the book was that if only one mutation is needed to confer some ability, then Darwinian evolution has little problem finding it. But if more than one is needed, the probability of getting all the right ones grows exponentially worse. “If two mutations have to occur before there is a net beneficial effect — if an intermediate state is harmful, or less fit than the starting state — then there is already a big evolutionary problem.” (4) And what if more than two are needed? The task quickly gets out of reach of random mutation.
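To make the arithmetic behind that claim concrete, here is a small illustrative sketch (it is not from the book or the paper). The per-site mutation rate and population size are assumed round numbers, and the model assumes the intermediate steps confer no selectable benefit, which is precisely the scenario under discussion; the function names are mine, not anyone's published method.

```python
# Illustrative sketch only: how the odds of one specific combination of k point
# mutations scale when no intermediate step is selectable. The rate and
# population below are assumed round figures, not measured values.

def combined_probability(per_site_rate: float, k: int) -> float:
    """Probability that one cell picks up k specific point mutations in a single
    replication, treating them as independent (the 'multiply the odds' step)."""
    return per_site_rate ** k


def expected_generations(per_site_rate: float, k: int, population: float) -> float:
    """Rough expected number of generations before any cell in a population of
    the given size hits the full k-mutation combination, under the same model."""
    hits_per_generation = combined_probability(per_site_rate, k) * population
    return max(1.0, 1.0 / hits_per_generation)


if __name__ == "__main__":
    mu = 1e-9   # assumed per-site mutation rate per replication (illustrative)
    pop = 1e9   # assumed number of cells in the culture (illustrative)
    for k in (1, 2, 3):
        p = combined_probability(mu, k)
        gens = expected_generations(mu, k, pop)
        print(f"{k} coordinated mutation(s): p = {p:.0e} per cell, "
              f"~{gens:.0e} generations expected")
```

With these assumed numbers a single specified mutation is expected almost immediately, two coordinated mutations push the wait toward a billion generations, and three are effectively out of reach, which is the exponential worsening the paragraph above describes.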

To get a feel for the clumsy ineffectiveness of random mutation and selection, consider that the workers in Lenski’s lab had routinely been growing E. coli all these years in a soup that contained a small amount of the sugar glucose (which they digest easily), plus about ten times as much citrate. Like so many cellular versions of Tantalus, for tens of thousands of generations trillions of cells were bathed in a solution with an abundance of food — citrate — that was just beyond their reach, outside the cell. Instead of using the unreachable food, however, the cells were condemned to starve after metabolizing the tiny bit of glucose in the medium — until an improbable series of mutations apparently occurred. As Lenski and co-workers observe: (1)

“Such a low rate suggests that the final mutation to Cit+ is not a point mutation but instead involves some rarer class of mutation or perhaps multiple mutations. The possibility of multiple mutations is especially relevant, given our evidence that the emergence of Cit+ colonies on MC plates involved events both during the growth of cultures before plating and during prolonged incubation on the plates.”

In The Edge of Evolution I had argued that the extreme rarity of the development of chloroquine resistance in malaria was likely the result of the need for several mutations to occur before the trait appeared. Even though the evolutionary literature contains discussions of multiple mutations (5), Darwinian reviewers drew back in horror, acted as if I had blasphemed, and argued desperately that a series of single beneficial mutations certainly could do the trick. Now here we have Richard Lenski affirming that the evolution of some pretty simple cellular features likely requires multiple mutations.

If the development of many of the features of the cell required multiple mutations during the course of evolution, then the cell is beyond Darwinian explanation. I show in The Edge of Evolution that it is very reasonable to conclude they did.

References

1. Blount, Z.D., Borland, C.Z., and Lenski, R.E. 2008. Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli. Proc. Natl. Acad. Sci. U. S. A 105:7899-7906.

2. Hall, B.G. 1982. Chromosomal mutation for citrate utilization by Escherichia coli K-12. J. Bacteriol. 151:269-273.

3. Pos, K.M., Dimroth, P., and Bott, M. 1998. The Escherichia coli citrate carrier CitT: a member of a novel eubacterial transporter family related to the 2-oxoglutarate/malate translocator from spinach chloroplasts. J. Bacteriol. 180:4160-4165.

4. Behe, M.J. 2007. The Edge of Evolution: the search for the limits of Darwinism. Free Press: New York, p. 106.

5. Orr, H.A. 2003. A minimum on the mean number of steps taken in adaptive walks. J. Theor. Biol. 220:241-247.

Comments
#54 ba77 Thanks for the contribution. I strongly think that future discoveries about the "non-junk-ness" of DNA will more and more vindicate ID. Only, I'm not completely convinced by the direct correlation you've drawn between the huge information needed to characterize a human body (or any high-level animal, for that matter) and the compression of the DNA code. Certainly DNA is highly polyfunctional and embeds many nested coding levels, but it doesn't have to encode all, or even most, of the information content a body requires. After all, this is a common concept in any design. For example, consider the design plan of a highly complex artifact, e.g. a bridge. The design plan at the higher level contains only the major information about the bridge's position, structure and composition. The work plan, instead, has to contain much more information, to enable field engineers to implement the bridge by controlling the work of a large number of workers and machines. But in any case the overall design information needed to actually implement the bridge need not equal the huge raw information that would be required to correctly characterize the position and composition of every atom or molecule of the bridge. This is only to say that we cannot strictly speak of a direct compression of the raw information of the human body into the DNA code.
kairos
June 11, 2008, 02:08 AM PDT

The fact that quantum events defy time and space in the first place is another strong indication that our reality's ultimate basis is founded in a "higher dimension", and thus another strong confirmation of a primary Theistic postulation. To get back to the dominion of information over this reality: we can only determine that information is in fact dominant over, and primary to, energy/matter when we entangle particles, yet there is no reason to presuppose that every energy/matter particle, whether entangled or not, does not have a dominant/primary information signature at its basis. That is, it is naturally reasonable to presume that the foundational and primary information of every energy/matter particle in the universe exists continuously in the primary transcendent realm, i.e. it is never reasonable to assume the information does not exist, since it is foundational and primary to the material realm in the first place. Thus, since it is reasonable to presume the information of every particle exists prior to the existence of the particle, it is reasonable to presume that the infinite mind of God has knowledge of every material/energy particle prior to its existence, no matter how random it is. To presuppose there is no infinite mind of God is to presuppose no overarching structure, i.e. it is to presuppose chaos as the foundation of reality.
bornagain77
June 10, 2008, 07:12 PM PDT

ba77: "And finally, Quantum Teleportation experiments by Dr Zeilinger, Spooky action and beyond http://www.signandsight.com/features/614.html actually proves the transcendence and dominion of “information” over the material/energy realm and makes God’s omniscient (all knowing) and omnipotent (all powerful) characteristics plausible with how our reality is actually constructed. ---------- Interviewer:"I'd like to come to the second freedom: the freedom of nature. You said that for example the velocity or the location of a particle are only determined at the moment of the measurement, and entirely at random." Zeilinger:"I maintain: it is so random that not even God knows the answer."JunkyardTornado
June 10, 2008, 06:19 PM PDT

The very next question of the interviewer addresses this point and I had not read it before my above post. I just read Zeilinger's comment on free will and responded. Here is the interviewer's remark: I'd like to come back to these freedoms. First, if you assumed there were no freedom of the will – and there are said to be people who take this position – then you could do away with all the craziness of quantum mechanics in one go. JunkyardTornado
June 10, 2008, 05:56 PM PDT

ba77: I am reading the Anton Zeilinger paper now and note the following: "That's right. I call that the two freedoms: first the freedom of the experimenter in choosing the measuring equipment - that depends on my freedom of will; and then the freedom of nature in giving me the answer it pleases." Zeilinger has a default assumption of free will to begin with, not based at all on the conclusions of any of his research. This is significant because my understanding is that a huge amount of the paradox and mysticism surrounding quantum phenomena immediately disappears if you do not assume free will. IOW there are hyperdeterministic interpretations of quantum theory that do not entail "observers" influencing events by the power of their free will. But I'm in over my head as well.
JunkyardTornado
June 10, 2008, 05:54 PM PDT

The statement should read: That is to say, quantum teleportation establishes beyond any reasonable doubt that "transcendent information" does not arise from energy/matter, as materialism presupposes, but in fact "transcendent information" is completely dominant over energy/matter (material) itself and is therefore, by force of overwhelming logic, foundational and primary to the energy/matter it dominates.
bornagain77
June 10, 2008, 05:41 PM PDT

Junkyard, Sidenote: The overwhelming stability of bacteria through thousands upon thousands of generations, strongly indicates that there is no part of the genome (Junk DNA) for evolution to play with,,i.e. Junk DNA is ruled out from straightforward test for genome flexibility! Junkyard: I have to admit that your response in 59 seems a bit vague to me. But I'll try to take this last point: you stated: And my point is, while man and the universe are the “handiwork” of God in a certain sense, God does not have literal hands, and the question is, was there a physical intermediary of laws and the universe that in effect served as Gods hands and can also serve quite completely as a proximal explanation for man’s existence. And if not, why on earth does the physical universe exist. Just How did God Almighty implement the universe, design? Though completely amateur in my effort, I will lay out what I have so far: There are foundational Theistic claims for the characteristics of Almighty God. These characteristics are; Omnipotent Omnipresent Transcendent Eternal and Omniscient His Eternal characteristic has basic plausible empirical confirmation in special relativity with time, as we know it, coming to a complete stop at the speed of light. Thus since all foundation sub-atomic matter was constructed with energy at the Big Bang, this indicates that all matter arose from some eternal "timeless" dimension of energy. The fact that the basic universal laws are precisely the same everywhere we look in the universe and have held exceedingly unchanged over the entire age of the universe (save for the proposed inflation ) gives tentative indication that the universal constants are indeed independent and transcendent of any proposed material basis and also gives tentative confirmation for the omnipresent and transcendent characteristics of God. And finally, Quantum Teleportation experiments by Dr Zeilinger, Spooky action and beyond http://www.signandsight.com/features/614.html actually proves the transcendence and dominion of "information" over the material/energy realm and makes God's omniscient (all knowing) and omnipotent (all powerful) characteristics plausible with how our reality is actually constructed. That is to say, quantum teleportation establishes beyond any reasonable doubt that "transcendent information" does not arise from energy/matter, as materialism presupposes, but in fact "transcendent information" is indeed completely te of energy/matter (material) itself and is therefore, by force of logic, foundational and primary to the energy/matter it tes. Thus you have all the basic postulated Theistic Characteristics of Almighty God tentatively to strongly confirmed by the current empirical evidences of physics. In fact Dr. Zeilinger goes so far as to quote scripture to explain informations foundational role to our reality: http://www.metanexus.net/Magazine/ArticleDetail/tabid/68/id/5896/Default.aspx excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: "In the beginning was the Word."bornagain77
June 10, 2008, 05:04 PM PDT

ba77:As well, the overwhelming “slightly detrimental” nature of all observed mutations to DNA in the laboratory has been thoroughly established by Dr. J.C. Sanford, in his book “Genetic Entropy”... Mysterious extinctions which are not part of any known major natural catastrophes in the history of the earth. I would like to point out that since the laws of physics have been clearly proven to have remained stable throughout the history of the universe, then, there is no compelling reason to suspect the naturally occurring mutations to DNA have change significantly from their present rate for any prolonged period of time. Thus the “genetic meltdown theory” is surprisingly strong as the solution to the fairly “constant rate” of mysterious extinctions of higher life-forms in the fossil record." This part of your discussion seemed to be compelling, I'll have to admit.JunkyardTornado
June 10, 2008, 04:08 PM PDT

BornAgain77: "a good teleportation machine must be able to put every atomic molecule back in precisely its proper place. That much information, Braunstein calculated, would require a billion trillion desktop computer hard drives, or a bundle of CD-ROM disks that would take up more space than the moon. ... The capacity of the DNA molecule to store information is so efficient that all the information needed to specify an organism as complex as man weighs less than a few thousand-millionths of a gram. The information needed to specify the design of all species of organisms that have ever existed on earth (a number estimated to be one billion) could easily fit into a teaspoon with plenty of room left to spare for every book that has ever been written on the face of earth. Obviously, I am just barely touching the surface of the complexity that is apparent in the DNA of man. Yet even from this superficial examination, we find truly golden nuggets of astonishing evidence that we are indeed the handiwork of Almighty God. Psalm 139:14 I will praise You, for I am fearfully and wonderfully made;" -------------- (Psa 19:1-8) The heavens declare the glory of God; the skies proclaim the work of his hands. Day after day they pour forth speech; night after night they display knowledge. There is no speech or language where their voice is not heard. Their voice goes out into all the earth, their words to the ends of the world. In the heavens he has pitched a tent for the sun, which is like a bridegroom coming forth from his pavilion, like a champion rejoicing to run his course. It rises at one end of the heavens and makes its circuit to the other; nothing is hidden from its heat. The law of the LORD is perfect, reviving the soul. The statutes of the LORD are trustworthy, making wise the simple.The precepts of the LORD are right, giving joy to the heart. The commands of the LORD are radiant, giving light to the eyes. The above passage is significant to me for the following: Intelligent Design has historically been stated in reference to the two following observations (among others)- That law is not sufficient to generate CSI, and there are not enough particle interactions in the universe to account for the emergence of CSI. So in short, the supposed impotence of law and the heavens are the two pillars of ID thought. The above passage talks about the marvels of the heavens and then seemingly abruptly and with no apparent connection starts talking about the Law and saying that it even gives light to the eyes. In your paper ba77, you talked about how there are multiple levels of meaning in the human genome. Well, there are multiple levels of meaning in the Bible as well. The Old Testament juridicial law may be in view in some general sense in the above passage. It is ironic to me however, in light of ID's denigration of A) the law and B) the heavens, that those two same concepts would be exalted in the way that they are in the above passage in direct connection to one another and also in connection to man. And my point is, while man and the universe are the "handiwork" of God in a certain sense, God does not have literal hands, and the question is, was there a physical intermediary of laws and the universe that in effect served as Gods hands and can also serve quite completely as a proximal explanation for man's existence. And if not, why on earth does the physical universe exist.JunkyardTornado
June 10, 2008, 03:36 PM PDT

bornagain: you write, "This is a quote from a Science Daily article about the landmark study." But what follows does not contain quotation marks, block quotes or any references that can be checked. In this section you say, "The ENCODE consortium’s major findings include the discovery that the majority of DNA in the human genome is transcribed into functional molecules, called RNA, and that these transcripts extensively overlap one another. This broad pattern of transcription challenges the long-standing view that the human genome consists of a relatively small set of discrete genes, along with a vast amount of so-called junk DNA that is not biologically active. The new data indicate the genome contains very little unused sequences and, in fact, is a complex, interwoven network" Was this all a quote from the Science Daily article?JunkyardTornado
June 10, 2008, 02:38 PM PDT

Junkyard, Post 54 is especially for you buddy.
bornagain77
June 10, 2008, 01:50 PM PDT

kairos: thank you for the comments. When I wrote "make everything infinitely simpler", I was referring just to the probability count, which does indeed change dramatically in that scenario, because you no longer have to multiply the single probabilities. That's a point which is often misunderstood, but really, if you allow for totally efficient selection and expansion of each mutation, we are no longer in the realm of randomness. We are rather in the realm of intelligent selection, like in the "Methinks it is like a weasel" example, and we do know that, under those conditions, probabilities, although not high, are empirically affordable. I don't think that's any form of concession to Darwinism: the fact remains that such a kind of selection is possible only if you know in advance the information to be selected (like Dawkins in the Shakespeare example). And the deconstruction of complex information into single-bit variations with continuous function increase is obviously impossible. And even if one example existed where it is possible (and I really think it does not exist), how can anyone conceive that it should be possible for all complex information? That would be, indeed, a weird new law of nature, or of logic, of which nobody has ever had any hint! Just think: we have tens of thousands of different proteins even in a single mammal, most of them very different one from the other, with different 3D structures and lots of different domains, and specific active sites. And all that variety of information should be "linked" by efficient single-bit pathways with ordered function increase? I think the kindest name for that scenario is "bullshit"... When I wrote "although not necessarily easy", I was thinking exactly of Haldane's dilemma, which you very appropriately cite, and of all the other improbabilities and impossibilities which would make it really "hard" to guarantee effective selection and expansion, even if the "pathways", which don't exist, did really exist.
gpuccio
June 10, 2008, 01:48 PM PDT

"bornagain77: "Junkyard, I read and reread your post in 44, and it seems to me that you are trying to argue for large unused sequences in a genome. I think you are well aware that this is an evolutionary/materialistic presupposition and as well I think you know this is not the way the cutting edge evidence is going. But that is OK, If you want to continue to hold onto to your “vast swaths” of the DNA belief for - currently non-functional but awaiting future assignment be my guest. Myself I will await further work along the ENCODE lines and see the Theistic ID position validated in stunning fashion for its postulation of a loss of CSI for each divergence of a sub-species from a parent species" So the cutting edge evidence is indicating what, exactly? That the genome is a bunch of unmodular spaghetti code, so that if you make a change one place, its liable to break something else in some other remote area of the code? And furthermore, this proves it was all designed in advance by a disembodied intelligent designer? The general theme of the Boston.com article seems to be ignorance: "As for the remaining 95 percent of the genome? "There's this weird lunar landscape of stuff we don't understand," Lander said. "No one has a handle on what matters and what doesn't." "No one knows what all that extra RNA is doing. It might be regulating genes in absolutely essential ways. Or it may be doing nothing of much importance: genetic busywork serving no real purpose." Scordova was talking above about code redundancy in the genome, so that you can change something without adverse effects appearing. Dave Scot mentioned the necessity of code traversal tools for program testing. IOW even through systematic testing there's liable to be vast swaths of code that you never even hit. You could put whatever garbage you want in these sections and it wouldn't make any difference. You tend to code for contingencies that never ever materialize. If this code never ends up being used it might as well be junk. Then to use the example of e coli, it could have the ability to use some nutrient, but then be an environment where that nutrient is never present, so the ability is never used. Then some mutation happens to it and that ability is broken, but it doesn't make any difference given its current environment. So to me, there's junk dna all over the place. If there are more rigorous papers than the Boston.com article that come right out and say junk dna is an erroneous concept, then you can point me to them if I need to know about them.JunkyardTornado
June 10, 2008, 01:42 PM PDT

Kairos, I wrote this following article in response to people continually debating with me that the DNA contains a majority of "Junk DNA". The Wonder of DNA To illustrate the complexity and wonder in the DNA of man, let's look at some of the work of Samuel Braunstein who is a quantum physicist at the Weizman Institute in Israel. Samuel Braustein was asked to present a talk to the science-fiction club in Rehover. What better topic, he thought, than quantum teleportation? Because of the limitations, imposed by the laws of physics, of ever teleporting any material object, Braunstein suggested the secret to teleportation would lie not in transporting people, or material objects, but would lie in teleporting the molecular information about whatever was to be teleported. Somehow, this Star Trek type teleporter must generate and transmit a parts list and blueprint of the object being teleported. This information could be used in reconstructing the object at its final destination. Presumably, the raw materials would be available to reconstruct the object at its final destination. Naturally this process raises a lot of questions that the script writers for Star Trek never answered. For example, just how much information would it take to describe how every molecule of a human body is put together? In a human body, millimeter accuracy isn't nearly good enough. A molecule a mere millimeter out of place can mean big trouble in your brain and most other parts of your body. A good teleportation machine must be able to put every atomic molecule back in precisely its proper place. That much information, Braunstein calculated, would require a billion trillion desktop computer hard drives, or a bundle of CD-ROM disks that would take up more space than the moon. The atoms in a human being are the equivalent to the information mass of about a thousand billion billion billion bits. Even with today's top technology, this means it would take about 30 billion years to transfer this mass of data for one human body from one spot to another. That's twice the age of the universe. "It would be easier," Braunstein noted, "to walk." Yet the DNA of man contains the parts list and blueprint of how all these trillions upon trillions of protein molecules go together in just 3 billion base pairs of DNA code. As well, the DNA code contains the “self assembly instructions” that somehow tells all these countless trillions of proteins molecules how to put themselves together into the wonder of a human body. Yet far from the billion-trillion computer hard drives calculated by Braustein, these 3 billion letters of information in the DNA of man could easily fit onto the single hard drive of the computer I’m writing this article on with plenty of room left to spare! That ratio of a billion trillion hard drives reduced to one hard drive is truly an astonishing amount of data compression that far exceeds the capacity of man to do as such. It is abundantly clear that all that required information for exactly how all the protein molecules of man are put together is somehow ingenuously encrypted in some kind of “super code” in the DNA of man. Amazingly, many evolutionary scientists “used” to say the majority of DNA that didn’t directly encode for proteins (genes) was leftover “junk” DNA from man’s falsely presumed evolutionary past. Now this blatantly simple-minded view of the required complexity that is inherent in the DNA of man has been solidly overturned. 
In June 2007, an international research consortium, named ENCODE, published a huge body of preliminary evidence that gives a glimpse into the world of the DNA’s complexity. This is a quote from a Science Daily article about the landmark study. In a group paper published in the June 14, 2007 issue of Nature and in 28 companion papers published in the June issue of Genome Research, the ENCyclopedia Of DNA Elements (ENCODE) consortium, which is organized by the National Human Genome Research Institute (NHGRI), part of the National Institutes of Health (NIH), reported results of its exhaustive, four-year effort to build a parts list of all biologically functional elements in 1 percent of the human genome. Carried out by 35 groups from 80 organizations around the world, the research served as a pilot to test the feasibility of a full-scale initiative to produce a comprehensive catalog of all components of the human genome crucial for biological function. The ENCODE consortium's major findings include the discovery that the majority of DNA in the human genome is transcribed into functional molecules, called RNA, and that these transcripts extensively overlap one another. This broad pattern of transcription challenges the long-standing view that the human genome consists of a relatively small set of discrete genes, along with a vast amount of so-called junk DNA that is not biologically active. The new data indicate the genome contains very little unused sequences and, in fact, is a complex, interwoven network. In this network, genes are just one of many types of DNA sequences that have a functional impact. The revelation of a complex interwoven network is a major blow to evolutionists. Now bear in mind, this is only a “feasibility study” of 1% of the Genome. The interwoven complexity is sure to be multiplied exponentially as the effort extends to decipher the remaining 99% of the DNA. This preliminary study, of how DNA is actually encoded, clearly indicates that most, if not the entire 100%, of the DNA is “poly-functional”. Poly-functional simply means the DNA exhibits extreme data compression in its character. “Poly-functional” DNA sequences will exhibit several different meanings on several different levels. For instance, if you were to write a (very large) book similar to the DNA code, you could read many parts of the book normally and it would have one meaning, you could read the same parts of the book backwards and it would have another completely understandable meaning. Yet then again, a third equally coherent meaning would be found by reading every other letter of the same parts. A fourth level of meaning could be found by using a simple encryption program to get yet another meaning. A fifth and sixth level of meaning could be found in the way you folded the parts of the book into specific two and three dimensional shapes. Please bear in mind, this is just the very beginning of the mind bending complexity scientists are finding in the DNA code. Indeed, a study by Trifonov in 1989 has shown that probably all DNA sequences in the genome encrypt for up to 12 different codes of encryption!! No sentence, paragraph, book or computer program man has ever written comes close to that staggering level of poly-functional encryption we find in the DNA code of man. Here is a quote on the poly-functional nature of the DNA from renowned Cornell Geneticist and inventor Dr. 
John Sanford from his landmark book, “Genetic Entropy”: There is abundant evidence that most DNA sequences are poly-functional, and therefore are poly-constrained. This fact has been extensively demonstrated by Trifonov (1989). For example, most human coding sequences encode for two different RNAs, read in opposite directions i.e. Both DNA strands are transcribed ( Yelin et al., 2003). Some sequences encode for different proteins depending on where translation is initiated and where the reading frame begins (i.e. read-through proteins). Some sequences encode for different proteins based upon alternate mRNA splicing. Some sequences serve simultaneously for protein-encoding and also serve as internal transcriptional promoters. Some sequences encode for both a protein coding, and a protein-binding region. Alu elements and origins-of-replication can be found within functional promoters and within exons. Basically all DNA sequences are constrained by isochore requirements (regional GC content), “word” content (species-specific profiles of di-, tri-, and tetra-nucleotide frequencies), and nucleosome binding sites (i.e. All DNA must condense). Selective condensation is clearly implicated in gene regulation, and selective nucleosome binding is controlled by specific DNA sequence patterns - which must permeate the entire genome. Lastly, probably all sequences do what they do, even as they also affect general spacing and DNA-folding/architecture - which is clearly sequence dependent. To explain the incredible amount of information which must somehow be packed into the genome (given that extreme complexity of life), we really have to assume that there are even higher levels of organization and information encrypted within the genome. For example, there is another whole level of organization at the epigenetic level (Gibbs 2003). There also appears to be extensive sequence dependent three-dimensional organization within chromosomes and the whole nucleus (Manuelides, 1990; Gardiner, 1995; Flam, 1994). Trifonov (1989), has shown that probably all DNA sequences in the genome encrypt multiple “codes” (up to 12 codes). Dr. John Sanford (PhD in Genetics; inventor of the biolistic “gene gun” process! Holds over 25 patents! In addition to the gene gun, Sanford invented both pathgen derived resistance, and genetic immunization. If you ate today you probably ate some food that has been touched by his work in manipulating the genetics of food crops!) Though the ENCODE consortium is about to undertake the task of deciphering the remaining 99% of the humane genome, I firmly believe that they, and all their super-computers, are soon to be dwarfed by the sheer and awesome complexity at which that much required information is encoded into the three billion letters of the DNA code of man. As a sidelight to this, it takes the most powerful super-computer in the world an entire year just to calculate how a single 100 amino acid protein sequence will fold into a 3-dimensional shape from its 1-dimensional starting point. Needless to say, this impressive endeavor by ENCODE to decipher the entire genome of man will be very, very interesting to watch. Hopefully ENDODE’s research will enable doctors to treat a majority of the over 3500 genetic diseases (mutational disorders) that afflict man without having to fully understand that much apparent complexity in the DNA of man. 
The only source for purely evolutionary change to DNA, that is available to the atheistic evolutionists, is the natural selection of copying errors that occur to DNA. This is commonly known as natural selection of random mutations to DNA. What evolutionists fail to ever mention is that natural selection is actually just some totally random selection of some hypothetical beneficial mutation that has never actually been clearly demonstrated to occur in the laboratory. For all practical purposes, All random mutations to DNA, that have been observed in the laboratory (we are talking millions of observations here), are either clearly detrimental or slightly detrimental, to the organism having the mutation. All mutations that are deemed to be somewhat beneficial to the organism, such as the anti-biotic resistance of bacteria, all turn out to involve loss of function in the genome. In fact, at least 99.9999% of the copying errors that do occur to DNA are proven to be somewhat harmful and/or to the organism having to mutation (Gerrish and Lenski, 1998). Evolution assumes a high level of beneficial flexibility for DNA. But alas for the atheistic evolutionists, the hard evidence of science indicates an astonishingly high level of integrity in the DNA code! A code which Bill Gates, the founder of Microsoft, states is far, far more complex than any computer code ever written by man. Sometimes a mutation to the DNA is found to be the result of a “complex feedback” of preexisting information that seems to be somewhat beneficial to the organism at the macroscopic level (such as lactase persistence). Yet, even in these extremely rare examples of “beneficial” mutations, the questioned beneficial mutation never shows a violation of what is termed “Genetic Entropy”. Genetic Entropy is a fundamental principle of science that means functional information in the DNA cannot increase “above the level of parent species” without an outside source of intelligence putting the information in the DNA. To be absolutely clear about this, evolutionists have never proven a violation of genetic entropy in the laboratory (Sanford; Genetic Entropy, 2005), thus they have never even proven a gain in information in the DNA of organisms above the level of parent species, thus they have never conclusively proven evolution as a viable theory at the molecular level in the first place! To make matters worse for the evolutionists, even if a purely beneficial random mutation were to ever occur it would be of absolutely no use to the evolutionary scenario for it would be swallowed in a vast ocean of slightly detrimental mutations. Yet evolutionists act like evolution has been conclusively proven on the molecular level many times % DNA is extremely resilient in its ability to overcome copying errors to the DNA, yet, as stated earlier, evolutionary scientists claim that the copying errors in the DNA that do occasionally slip through are what are ultimately responsible for the sheer and awesome complexity we find in the DNA code of man. Contrary to their materialistic beliefs, mutations do not create stunning masterpieces! As well, the overwhelming “slightly detrimental” nature of all observed mutations to DNA in the laboratory has been thoroughly established by Dr. J.C. Sanford, in his book “Genetic Entropy”. He shows in his book that there is a indeed a slightly negative effect for the vast majority of mutations. These slightly detrimental mutations are not readily apparent at the macroscopic level of the organism. 
These slightly negative mutations accumulate over time in all higher species since they are below the power of natural selection to remove them from a genome. These “slightly negative” mutations accumulate in a higher species until “genetic meltdown” occurs in a species. Indeed, if mutation rates for higher species have stayed similar to what they currently are, throughout the history of complex life on earth, then genetic meltdown is the most reasonable cause for the numerous mysterious extinctions in the fossil record. Over 90% of extinctions in the fossil record have occurred by some unknown natural mechanism. The average time for “mysterious extinctions” is rather constant at about 4 million years per species in the fossil record (Van Valen; A new evolutionary law, 1973). http://www.nap.edu/openbook.php?record_id=4910&page=117 Mysterious extinctions which are not part of any known major natural catastrophes in the history of the earth. I would like to point out that since the laws of physics have been clearly proven to have remained stable throughout the history of the universe, then, there is no compelling reason to suspect the naturally occurring mutations to DNA have change significantly from their present rate for any prolonged period of time. Thus the “genetic meltdown theory” is surprisingly strong as the solution to the fairly “constant rate” of mysterious extinctions of higher life-forms in the fossil record. I’ll end my paper with a bit of trivia. The capacity of the DNA molecule to store information is so efficient that all the information needed to specify an organism as complex as man weighs less than a few thousand-millionths of a gram. The information needed to specify the design of all species of organisms that have ever existed on earth (a number estimated to be one billion) could easily fit into a teaspoon with plenty of room left to spare for every book that has ever been written on the face of earth. Obviously, I am just barely touching the surface of the complexity that is apparent in the DNA of man. Yet even from this superficial examination, we find truly golden nuggets of astonishing evidence that we are indeed the handiwork of Almighty God. Psalm 139:14 I will praise You, for I am fearfully and wonderfully made;bornagain77
June 10, 2008, 01:36 PM PDT

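As a rough check on the storage claim in the comment above (that the 3 billion letters of the human genome fit easily on a single hard drive), here is a small sketch. It is not part of the original comment; it counts only the naive cost of storing the bare letter sequence at 2 bits per base and says nothing about the higher-order layers of encoding being discussed.

```python
# Back-of-the-envelope check of the raw storage cost of the human genome sequence.
# Values are the usual round figures; this ignores compression and any regulatory
# or structural information layered on top of the letter sequence.

BASE_PAIRS = 3_000_000_000   # approximate length of the human genome
BITS_PER_BASE = 2            # four possible bases (A, C, G, T) -> 2 bits each

total_bits = BASE_PAIRS * BITS_PER_BASE          # ~6e9 bits
total_megabytes = total_bits / 8 / 1_000_000     # ~750 MB

print(f"Raw sequence: {total_bits:.1e} bits, about {total_megabytes:.0f} MB uncompressed")
```

That works out to roughly 750 MB, well within a single 2008-era hard drive, which is consistent with the comparison drawn in the comment.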
#46 bornagain77 "something more akin to this illustration found on page 141 of Genetic Entropy by Dr. Sanford.

S A T O R
A R E P O
T E N E T
O P E R A
R O T A S"

Only for the sake of precision, I add that this famous palindrome has always been, to the best of our knowledge, associated with Christians. However, it is also found in the reversed form: rotas opera tenet arepo sator. In particular, this is the form in which it was written in the buried Pompeian town (this is the reason why we are sure that the palindrome dates from before 79 AD).

"The puzzle I listed is only poly-functional to 4 elements; as stated earlier the minimum genome is poly-constrained to approximately 500 elements (genes). For Darwinists to continue to believe in random mutations to generate the staggering level of complexity we find in life is absurd in the highest order!"

That's right. And this is the reason why the more science shows that DNA is not "junk" and is polyfunctional, the more Darwinian ideas look like mere faith in chance.
kairos
June 10, 2008, 01:18 PM PDT

Junkyard, I read and reread your post in 44, and it seems to me that you are trying to argue for large unused sequences in a genome. I think you are well aware that this is an evolutionary/materialistic presupposition and as well I think you know this is not the way the cutting edge evidence is going. But that is OK, If you want to continue to hold onto to your "vast swaths" of the DNA belief for - currently non-functional but awaiting future assignment be my guest. Myself I will await further work along the ENCODE lines and see the Theistic ID position validated in stunning fashion for its postulation of a loss of CSI for each divergence of a sub-species from a parent species.bornagain77
June 10, 2008, 01:11 PM PDT

#29 gpuccio "As you can see. it is not single mutations which are a myth (IMO), but “pathways” where each single step is selected for function gain. Maybe I did not express myself clearly, I apologize." I did understand so your point, and obviously I agree. "I have clearly stated that single mutations are perfectly accessible to all living beings, especially bacteria. Indeed, specific single mutations can happen quite often in bacteria." And this is clearly argued by Behe in EoE. "I have also said explicitly that two coordinated mutations, that is two mutations which have to be simultaneously present before there is a function gain, are another matter: here the two probabilities multiply, and for E. coli the probability of any specific set of two mutations becomes about 1 : 10^14, which is much lower. Still, that is in the range of bacteria in a reasonable time (decades), while not so much in the range, for in stance, of mammals. Indeed, Behe puts more or less there his “edge” for undirected evolution." Precisely. And he stated so very clearly. I don't understand how someone who claims to have read EoE could reasonably not to know this. "But I want to be more generous. I can accept that, very rarely, specific 3 mutation sets can be attained in bacteria, and selected if they confer gain function. Here the probability becomes 1 : 10^21, and we are already in a really problematic order of magnitude, but you know, luck happens. In bacteria or protozoa, at least. it could happen, although very very rarely. It’s not even the case to discuss higher forms of life here." That's right, although obviously NDEers are constrained to discuss application to higher forms of life. "But that’s all. If we add more necessary mutations to our set, we are out. That would no more be luck. That would have to be design." Or at least even the most fierce anti-Ider should, to be intellectually honest, admit: "OK, you're right; RM+NS don't work here; I don't know". "Obviously, there is the alternative possibility that the single steps are selected. That would dramatically make everything infinitely simpler (although not necessarily easy). We eould not have anymore to multiply probabilities, because the expansion of each mutation to all the population would make the probability the same for each new mutation (the previous ones having been fixed). In other terms, it could be done, if we really could justify that kind of single step fixation." This is the only point where I don't agree with you. Certainly step-by-step selection would yield more reasonable chances but it' not true that this would "make everything infinitely simpler"; in fact we would still have very low chances. Moreover it's very generous your following statement: "(although not necessarily easy)". Let us consider that, even in the more favourable condition, the Haldane paradox seems to put a very severe constraint to the actual occurrence of any reasonable evolution pathway. That is the myth: that function landscape can be traverse by specific pathways, where you have a “stepping stone” of higher function at each single mutation, or, if we want to be generous, and if we are discussing bacteria, at every 2-3 mutation distance. "There is no trace of the billions of functional intermediates that such a scenario would imply. In other words, that scenario is simply false." That's IMHO the final empirical proof that what seems theoretically impossible was impossible in the real world too.kairos
June 10, 2008, 01:04 PM PDT

#24 dmso74: in #29 and #45, gpuccio and Paul Giem did reply to your point.
kairos
June 10, 2008, 12:41 PM PDT

Ekstasis: "if we took our kids, surrounded them with a tiny bit of pizza and ice cream, and enormous amounts of broccoli and other assorted veggies, would they mutate fast enough to eat the veggies before the 'good' stuff runs out and they starve?" Maybe the following looks into that: "Vegetable acceptance by infants" - http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2366040
JunkyardTornado
June 10, 2008, 12:02 PM PDT

My apologies for the earlier math, let me rephrase. It took 30,000 generations to make one relatively minor functional change in E. coli. 30,000 generations of advanced primates is roughly equal to 600,000 years. Now, as mentioned, selecting out for multiple changes simultaneously can get very dicey. Advanced primates, transitioning to full humanity, would require thousands or millions of functional changes, particularly in the areas of the brain, would it not? How many 600,000-year cycles do we have to play with in order to fit with actual timeframes, and is this at all within the realm of reason?
Ekstasis
June 10, 2008, 12:01 PM PDT

I believe 20,000 generations is not the relevant measure but the number of reproductive events. It takes just one positive reproductive mutation to theoretically permeate the population. The number of reproductive events adds a few zeros to the exponent of the opportunities the bacteria have had to evolve. The number of reproductive events for primates, or any multicellular animal, is quite small compared to what has happened in Lenski's labs.
jerry
June 10, 2008, 11:59 AM PDT

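A minimal sketch of jerry's point above that total reproductive events, not generations alone, set how many chances mutation gets. It is not from the original comment; the population figures are assumed purely for illustration and are not taken from Lenski's paper or from primate population history.

```python
# Illustrative only: total opportunities for mutation scale with generations
# multiplied by population size. Both population figures below are assumptions.

def reproductive_events(generations: float, avg_population: float) -> float:
    """Total replication/birth events ~ generations x average population size."""
    return generations * avg_population

ecoli = reproductive_events(30_000, 1e8)     # assumed ~1e8 cells per flask
primates = reproductive_events(30_000, 1e5)  # assumed ~1e5 breeding individuals

print(f"E. coli:  ~{ecoli:.1e} replication events over 30,000 generations")
print(f"Primates: ~{primates:.1e} birth events over 30,000 generations")
```

Under these assumed figures the bacterial populations get several orders of magnitude more replication events than a primate lineage would over the same number of generations, which is the asymmetry jerry is pointing to.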
To illustrate the principle of poly-functionality, and thus poly-constraint, on a genome: Craig Venter talks about Mycoplasma genitalium, the smallest bacterium known, in DaveScot's video here: Craig Venter - 18 months to 4th generation biofuels https://uncommondescent.com/biology/craig-venter-18-months-to-4th-generation-biofuels/ Venter, reservedly, talks in the video of the interwoven complexity of the bacterium that prevents the genome from being reduced much below approximately 500 genes. (Note: this is a somewhat higher and more accurate figure than previous estimates, as this earlier study indicates: "An earlier study published in 1999 estimated the minimal gene set to fall between 265 and 350. A recent study making use of a more rigorous methodology estimated the essential number of genes at 382." John I. Glass et al., "Essential Genes of a Minimal Bacterium," Proceedings of the National Academy of Sciences, USA 103 (2006): 425-30.) So if we were to get a proper "beneficial mutation" in a polyfunctional genome of 500 interdependent genes, then instead of the infamous "Methinks it is like a weasel" single-function information problem for Darwinists, we would actually be encountering something more akin to this illustration found on page 141 of Genetic Entropy by Dr. Sanford:

S A T O R
A R E P O
T E N E T
O P E R A
R O T A S

which is translated: THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS. This ancient puzzle, which dates back to 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter we may get a new meaning for a single reading read any one way, as in Dawkins' weasel program, but we will consistently destroy the other three readings of the message with the new mutation. This is what is meant when it is said a poly-functional genome is poly-constrained to any random mutations. The puzzle I listed is only poly-functional to 4 elements; as stated earlier, the minimum genome is poly-constrained to approximately 500 elements (genes). For Darwinists to continue to believe in random mutations to generate the staggering level of complexity we find in life is absurd in the highest order!
bornagain77
June 10, 2008, 11:57 AM PDT

dmso74 In (18) you are able to handle the concept of multiple versus single mutations. In (27) you are able to handle the concept of two mutations. If you read gpuccio (29), perhaps you will be able to handle the concept of three mutations and four mutations. Then you will be able to understand what Behe was saying in The Edge of Evolution. What is being done here is using ID as a predictive theory. Instead of starting with the "knowledge" that there are no designers in natural history, and that therefore RV&NS had to be able to produce the variety of life we see because nothing else was around, and thus that an exact match exists between change over standard geologic time and the capabilities of RV&NS, ID starts out by allowing for the possibility that RV&NS is not the only creative force in biology. Thus it makes sense to ask the question, how much of what we see can be accounted for on the basis of RV&NS? That requires quantification. That means using numbers, like 1, 2, 3, 4, and so on. It also means using exponents, and probability theory (areas where Darwin was weak ;) ). There are two ways to proceed when developing a theory. The first is to use mathematical constructs of the theory. That is what has been done here. What is being said is that one specific single mutation is within easy reach of a single plate of bacteria, as long as it isn't too devastating. (Even there, it is within reach; it simply won't survive). However, two mutations is much harder, and puts us close to the edge of where mutations can get us during the lifetime of a researcher. Three mutations require considerable luck, and four mutations would be the equivalent of winning the lottery on one try. Five mutations? Fugeddaboudit. Twenty-seven mutations? We have now (at least in E. coli) passed the UPB, even allowing for more time and more bacteria. And we haven't even gotten one protein for the flagellum, let alone 35 (or is it 60 with the promoters, etc.?). The second way to develop a theory is to test it. The straightforward ID prediction is that any experimentally demonstrated change in the genome of E. coli will be limited to two, or at the most 3 separate steps, unless the steps can be demonstrated to be sequentially more fit. That's a scientific prediction, if you will, which would seem to make ID a scientific theory in the Popperian sense. Behe noted that chloroquine resistance required 2 separate mutations to happen. That was within the edge of evolution, but barely. Lenski has apparently demonstrated a multiple-step change. It will be interesting to see precisely how many of those steps it took, and whether any of them were in fact advantageous in the medium. If it took 4 steps, all of which were neutral or slightly disadvantageous, then current ID theory will have to be severely modified or abandoned. On the other hand, it does seem like standard evolutionary theory is at some risk as well. If this turns out to be a 3-neutral-step process, or especially a 2-neutral-step process, then the edge of evolution will be demonstrated to be where Behe says it is, and far too close to where an organism started to account for the variety of life as we know it. Perhaps more importantly, that would be experimental evidence, which is supposed to have more weight in science than theory does. It will be very interesting to see precisely what mutations were required to allow E. coli to utilize citrate in their environment, and how advantageous or disadvantageous those steps were. 
To all: Let's give up this nonsense of there being no neutral mutations. To claim that CUU instead of CUC in a protein coding sequence is somehow more deleterious (both coding for leucine), or vice versa, except in very special circumstances, is crazy. Furthermore, if all genetic changes are deleterious, then it follows that there must have at one time been, or at least been able to be, a perfect man (and woman). What was his (and her) skin color? Shape of nose? Distribution of body hair? Straightness of hair? Can we really say that more fat around the eyelids is more or less fit? Give me a break. Finally, it is at least theoretically possible that beneficial mutations could happen in the real world. If a mutant bacterium mutates back to the original, would that not be a beneficial mutation? I can understand the argument that such mutations should be rare, but to call them impossible seems to be going a bit too far.
Paul Giem
June 10, 2008, 11:57 AM PDT

BA77: Page3 20=21 Genetic Entropy ; Sanford ... "Are there truly neutral nucleotide positions? True neutrality can never actually be demonstrated experimentally (it would require infinite sensitivity). " Let's take that as a given. "However, for reasons we will get into later, some geneticists have been eager to minimize the functional genome, and wanted to regulate the vast bulk of the genome to “junk DNA”. So mutations in such DNA would be assumed to be entirely neutral". It seems certain that is not the case. No one wants to establish junk-dna as being in some state of unprovable platonic neutrality - only that any benefit or detriment it has is marginal at best. "If a nucleotide carries absolutely zero information, it is then by definition slightly deleterious - as it slows cell replication and wastes energy" The issue isn't how much information it contains. Junk dna could be a function that contained a lot of information, but a function that was hardly ever used if at all, for example because a new environment made it completely unnecessary. So you have an organism dragging around this functional code that is never used or accessed in any way. So if its something that's never used, mutations could make it nonfunctional and the organism would never care or know the difference (thus vestigal organs). Also, no one would consider dead code in a program as an unacceptable waste of energy because of the cost to replicate it. "Keep it in - maybe sometime down the road we'll debug it and be able to use it." "there are really no truly beneficial neutral letters in a encyclopedia, there are probably no truly neutral nucleotide sites in the genome." Is someone trying to make the case that there are letters that are "beneficial" and literally neutral at the same time? At any rate... "Let's see... these sentences here in the entry for zebra - do they really clarify anything? Let's go around the room...opinions? Where's Dave? Didn't he put this in? He's not in today? Better not change anything, till we here from him." Since Dr. Sanford made thus crushing critique in 2005... I would say more like self-evident and maybe slightly vacuous. http://www.genome.gov/25521554: "DNA sequences that do not code for proteins, interact in overlapping ways not yet fully understood..." http://www.boston.com/news/globe/health_science/articles/2007/09/24/dna_unraveled/?page=1: "a biological jungle deeper, denser, and more difficult to penetrate than anyone imagined" BA77: This means the DNA code is now much more severely limited in its chance of ever having a hypothetical beneficial mutation since almost the entire DNA code is now proven to be intimately connected to many other parts of the DNA code" All the above sources indicated (esp. boston.com) is increasing ignorance concerning the genome.JunkyardTornado
June 10, 2008 at 11:44 AM PDT
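The dead-code analogy in the comment above can be pictured with a toy Python sketch (the function names are made up for illustration): a routine that is defined but never called can be arbitrarily broken without the running program ever noticing.

    def legacy_pathway(x):        # hypothetical "unused gene": defined but never called
        return x / 0              # even an obvious bug here never triggers at runtime

    def active_pathway(x):        # the code path that is actually exercised
        return x * 2

    print(active_pathway(21))     # prints 42; the broken, unused function is never run

Whether junk DNA really behaves like never-executed code is of course the point under dispute; the sketch only illustrates the analogy, not the biology.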
It seems like a billion years would equal 50 million generations.
JunkyardTornado
June 10, 2008 at 11:41 AM PDT
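For anyone checking the arithmetic behind the 50-million figure, a quick back-of-the-envelope calculation, assuming the 20 years per generation used in the comment below:

    years_per_generation = 20                      # assumption taken from the thread
    generations = 30000
    print(generations * years_per_generation)      # 600000 years for 30,000 generations
    print((10**9) // years_per_generation)         # 50000000 generations in a billion years

So a billion years corresponds to about 50 million such generations, not "less than 2,000".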
Ekstasis: "Now, 30,000 advanced primate generations, at 20 years per generation, equals 600,000 years. So, a billion years equals less than 2,000 generations." ???
JunkyardTornado
June 10, 2008 at 11:19 AM PDT
Since everyone is performing these fantastic mathematical computations, I will simply conjure up a point backed up by third grade mathematics, and then return to my seat in the back of the class, and take the short bus home after the bell rings. 30,000 generations of E. coli, and presto, we get a new constructive function, the ability to utilize citrate. Now, 30,000 advanced primate generations, at 20 years per generation, equals 600,000 years. So, a billion years equals less than 2,000 generations. In this drop in the chronological bucket, is it feasible that our features and abilities, particularly mental ones, could have evolved through undirected genetic variation and natural selection? And for a final point, if we took our kids, surrounded them with a tiny bit of pizza and ice cream, and enormous amounts of broccoli and other assorted veggies, would they mutate fast enough to eat the veggies before the "good" stuff runs out and they starve? Or go the direction of the Great Lizards?
Ekstasis
June 10, 2008 at 10:28 AM PDT
Of interest to topic: pages 20-21, Genetic Entropy; Sanford:

"Are there truly neutral nucleotide positions? True neutrality can never actually be demonstrated experimentally (it would require infinite sensitivity). However, for reasons we will get into later, some geneticists have been eager to minimize the functional genome, and wanted to relegate the vast bulk of the genome to 'junk DNA'. So mutations in such DNA would be assumed to be entirely neutral. However, actual findings relentlessly keep expanding the size of the functional genome, while the presumed 'junk DNA' keeps shrinking. In just a few years, many geneticists have shifted from believing that less than 3% of the total genome is functional to believing that more than 30% is functional - and that fraction is still growing. As the functional genome expands, the likelihood of neutral mutations shrinks. Moreover, there are strong theoretical reasons for believing there is no truly neutral nucleotide position. By its very existence, a nucleotide position takes up space, affects spacing between other sites, and affects such things as regional nucleotide composition, DNA folding and nucleosome binding. If a nucleotide carries absolutely zero information, it is then by definition slightly deleterious - as it slows cell replication and wastes energy. Just as there are really no truly beneficial neutral letters in an encyclopedia, there are probably no truly neutral nucleotide sites in the genome. Therefore there is no way to change any given site without some biological effect - no matter how subtle. Therefore, while most sites are probably 'nearly neutral', very few, if any, should be absolutely neutral."

Since Dr. Sanford made this crushing critique in 2005, the genome of humans has now been shown by ENCODE to be virtually 100% severely poly-functional, with no "junk DNA" regions. Thus this principle is devastating to evolutionary theory (but boy do they do a song and dance around it!). This "complex interwoven (poly-functional) network" throughout the entire DNA code makes the human genome severely poly-constrained to random mutations (Sanford; Genetic Entropy, 2005; page 141). This means the DNA code is now much more severely limited in its chance of ever having a hypothetical beneficial mutation, since almost the entire DNA code is now proven to be intimately connected to many other parts of the DNA code. Thus even though a random mutation to DNA may be able to change one part of an organism for the better, it is now proven much more likely to harm many other parts of the organism that depend on that one particular part being as it originally was. Since evolution was forced, by the established proof of Mendelian genetics, to no longer view the whole organism as what natural selection works upon, but instead to view it as a collection of multiple independent genes that can be selected or discarded as natural selection sees fit, this "complex interwoven network" finding is extremely bad news, if not absolutely crushing, for the "Junk DNA" population genetics scenario of evolution (the modern neo-Darwinian synthesis) developed by Haldane, Fisher and Wright (pages 52 and 53: Genetic Entropy; Sanford 2005)!

http://www.genome.gov/25521554 - BETHESDA, Md., Wed., June 13, 2007: "An international research consortium today published a set of papers that promise to reshape our understanding of how the human genome functions. The findings challenge the traditional view of our genetic blueprint as a tidy collection of independent genes, pointing instead to a complex network in which genes, along with regulatory elements and other types of DNA sequences that do not code for proteins, interact in overlapping ways not yet fully understood."

http://www.boston.com/news/globe/health_science/articles/2007/09/24/dna_unraveled/?page=1 - "The science of life is undergoing changes so jolting that even its top researchers are feeling something akin to shell-shock. Just four years after scientists finished mapping the human genome - the full sequence of 3 billion DNA 'letters' folded within every cell - they find themselves confronted by a biological jungle deeper, denser, and more difficult to penetrate than anyone imagined."
bornagain77
June 10, 2008 at 09:21 AM PDT
Dave Scot: I recall you were involved with hardware design or assemblers or something, so you probably have better intuition even than I do as to why you can start changing bits around in machine code and be unscathed. I just think in terms like this: if a word in hardware is n bits and your instruction set only requires n-m bits, then the top m bits of any instruction are being ignored. I'm sure there are many other factors as well.

On substantive areas: if you allocate a dynamic array of 0xACDF bits (whatever that is in decimal) and a random bit change increases the size to 0xBCDF, then no harm (see the sketch after this comment). Also, I'm sure when you compile something, the compiler generates all sorts of boilerplate machine code, generalized for a number of potential scenarios only a handful of which may ever materialize in one particular program. This code could also be mangled without it making any difference. Scordova mentioned redundancy in the genome, so if there is redundancy there, in the form of junk DNA or whatever, obviously that can be mangled and not make a difference.

As far as compilers and high-level languages: I don't think nature is such that if one tiny thing is out of place it just shuts down and refuses to do anything. It will take what you give it and attempt to do something, which is what a computer processor does, I think. Don't know how much light this analysis sheds. The scenario I tested says whatever it says.
JunkyardTornado
June 10, 2008 at 08:36 AM PDT
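The size-field example in the comment above can be sketched in a few lines of Python. The hex values 0xACDF and 0xBCDF are the commenter's; they differ in a single bit, and the buffer length is only a stand-in for "some allocated data":

    original = 0xACDF             # 44255 in decimal
    mutated  = original ^ 0x1000  # flip bit 12 -> 0xBCDF (48351)

    buffer = [0] * mutated        # over-allocating is wasteful, but nothing breaks
    print(hex(original), hex(mutated), len(buffer))  # 0xacdf 0xbcdf 48351

Whether a given flip is harmless depends entirely on where it lands: a slack size field or an ignored high bit absorbs it, while an opcode the processor actually decodes may not.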
Or if anybody is interested in seeing a demo of my program, that could possibly be arranged as well.
JunkyardTornado
June 10, 2008 at 07:34 AM PDT
