
To dream the impossible dream: the quest for the 50-bit life form


Aleksandr Oparin and John von Neumann. Courtesy of Russavia, Beelaj and Wikipedia.

In two separate comments (see here and here) on a recent post of mine, Intelligent Design critic Dave Mullenix posed a question to ID supporters, which often comes up on this blog:

[W]hy do you ID people insist that the first living thing was complex? 500 to 1000 bits of information? Try 50 to 100. Think of a single polymer whose only capability is reproducing itself, and which is possibly imbedded in the kind of droplets that form naturally…

A simple self replicating molecule isn’t much compared to modern life, but if it self-replicates and allows evolution, it’s all the start we need and a small polymer would do it. Don’t worry about proteins, they come later. Don’t worry about metabolism – that’s also for advanced life. For first life, reproduction with the possibility of Darwinian evolution is all we need and a short polymer will do the trick.

Dave Mullenix confesses to not yet having read Dr. Stephen Meyer’s Signature in the Cell, although he has purchased a Kindle version of the book. I realize that he is a very busy man, and I also realize that other Intelligent Design critics have voiced similar objections previously, so I’ve written this post in order to explain why the scenario Dave Mullenix proposes will not work.

What motivates the quest for a 50-bit life form?

Dave Mullenix is surely well aware of the research of Dr. Douglas Axe, which has shown that the vast majority of 150-amino-acid sequences are non-functional, and that the likelihood of a single protein – that is, any working protein, never mind which one – arising by pure chance on the early earth is astronomically low. Nor can necessity account for the origin of DNA, RNA or proteins. All of these molecules are made up of biological building blocks – nucleotides in the case of DNA and RNA, and amino acids in the case of proteins. Just as the properties of stone building blocks do not determine their arrangements in buildings, so too, the properties of biological building blocks do not determine their arrangements in DNA, RNA and proteins.
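To give a rough sense of the numbers involved, here is a minimal Python sketch of the arithmetic behind the argument above. It assumes a 150-residue chain built from the 20 standard amino acids, and it plugs in the frequently cited estimate, attributed to Axe, that only about 1 in 10^77 such sequences yields a functional fold; that figure is quoted as an assumption, not derived here.

```python
# Back-of-the-envelope arithmetic for the 150-residue protein example above.
# Assumptions (not derived here): 20 standard amino acids per position, and
# the frequently cited estimate of ~1 functional sequence per 10^77 (Axe).

from math import log10, log2

residues = 150        # chain length considered in Axe's work
alphabet = 20         # standard amino acids

log10_space = residues * log10(alphabet)   # log10 of 20**150
bits = residues * log2(alphabet)           # same space measured in bits
functional_exponent = 77                   # assumed: ~1 in 10^77 functional

print(f"Sequence space for a 150-mer: ~10^{log10_space:.0f} possibilities (~{bits:.0f} bits)")
print(f"Assumed functional fraction: about 1 in 10^{functional_exponent}")
```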

If neither chance nor necessity can account for the appearance of fully functional RNA, DNA and proteins, then evolutionists have no choice but to assume that these molecules arose from something even simpler, which was capable of evolving into these molecules. This is the logic which underlies Dave Mullenix’s proposal regarding the origin of life.

Why a 50-bit life form wouldn’t work

Actually, a similar proposal was made by origin-of-life researcher Aleksandr Oparin in the late 1960s. In his original model, put forward in the 1920s and 1930s, Oparin had assumed that chance alone could account for the origin of the proteins which make cellular metabolism possible. However, the discovery of the extreme complexity and specificity of protein molecules, coupled with the inability of his model to explain the origin of the information in DNA, forced him to revise his original proposal for the chemical evolution of life on earth. Dr. Stephen Meyer continues the story in Signature in the Cell (HarperOne, New York, 2009), pages 273-277:

As the complexity of DNA and proteins became apparent, Oparin published a revised version of his theory in 1968 that envisioned a role for natural selection earlier in the process of abiogenesis. The new version of his theory claimed that natural selection acted on unspecified polymers as they formed and changed within his coacervate protocells.[5] Instead of natural selection acting on fully functional proteins in order to maximize the effectiveness of primitive metabolic processes at work within the protocells, Oparin proposed that natural selection might work on less than fully functional polypeptides, which would naturally cause them to increase their specificity and function, eventually making metabolism possible. He envisioned natural selection acting on “primitive proteins” rather than on primitive metabolic processes in which fully functional proteins had already arisen….

[Oparin] proposed that natural selection initially would act on unspecified strings of polypeptides of nucleotides and amino acids. But this created another problem for his scenario. Researchers pointed out that any system of molecules for copying information would be subject to a phenomenon known as “error catastrophe” unless these molecules are specified enough to ensure an error-free transmission of information. An error catastrophe occurs when small errors – deviations from functionally necessary sequences – are amplified in successive replications.[14] Since the evidence of molecular biology shows that unspecified polypeptides will not replicate genetic information accurately, Oparin’s proposed system of initially unspecified polymers would have been highly vulnerable to such an error catastrophe.

Thus, the need to explain the origin of specified information created an intractable dilemma for Oparin. If, on the one hand, Oparin invoked natural selection early in the process of chemical evolution (i.e. before functional specificity in amino acids or nucleotides had arisen), accurate replication would have been impossible. But in the absence of such replication, differential reproduction cannot proceed and the concept of natural selection is incoherent.

On the [other] hand, if Oparin introduced natural selection late in his scenario, he would need to rely on chance alone to produce the sequence-specific molecules necessary for accurate self-replication. But even by the late 1960s, many scientists regarded that as implausible given the complexity and specificity of the molecules in question…

The work of John von Neumann, one of the leading mathematicians of the twentieth century, made this dilemma more acute. In 1966, von Neumann showed that any system capable of self-replication would require sub-systems that were functionally equivalent to the information storage, replicating and processing systems found in extant cells.[16] His calculations established an extremely high threshold of minimal biological function, a conclusion that was confirmed by later experimental work.[17] On the basis of the minimal complexity and related considerations, several scientists during the late 1960s (von Neumann, physicist Eugene Wigner, biophysicist Harold Morowitz) made calculations showing that random fluctuations of molecules were extremely unlikely to produce the minimal complexity required for a primitive replication system.[18]…

As a result, by the late 1960s, many scientists had come to regard the hypothesis of prebiotic natural selection as indistinguishable from the pure chance hypothesis, since random molecular interactions were still needed to generate the initial complement of biological information that would make natural selection possible. Prebiotic natural selection could add nothing to the process of information generation until after vast amounts of functionally specified information had first arisen by chance.

References

[5] Oparin, A. Genesis and Evolutionary Development of Life, New York: Academic, 1968, pp. 146-147.

[14] Joyce, Gerald F. and Leslie Orgel, “Prospects for Understanding the Origin of the RNA World.” In The RNA World, edited by Raymond F. Gesteland and John F. Atkins, 1-25. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press, 1993. See especially pp. 8-13.

[16] Von Neumann, John. The Theory of Self-Replicating Automata. Completed and edited by A. Burks. Urbana: University of Illinois Press, 1966.

[17] Pennisi, Elizabeth. “Seeking Life’s Bare (Genetic) Necessities”. Science 272(1996): 1098-99.
Mushegian, Arcady, and Eugene Koonin, “A Minimal Gene Set for Cellular Life Derived by Comparison of Complete Bacterial Genomes”. Proceedings of the National Academy of Sciences USA 93 (1996): 10268-10273.

[18] Wigner, Eugene. “The Probability of the Existence of a Self-Reproducing Unit.” In The Logic of Personal Knowledge: Essays Presented to Michael Polanyi, edited by Edward Shils, pp. 231-235. London: Routledge and Kegan Paul, 1961. [But see here for a critique by physicist John C. Baez. – VJT]
Morowitz, Harold J. “The Minimum Size of the Cell,” in Energy Flow in Biology: Biological Organization as a Problem in Thermal Physics, New York: Academic, 1968, pp. 10-11.

(Emphases mine – VJT.)

In conclusion: there are good reasons for thinking that a 50-bit life-form would never work. Since it would not be capable of accurate self-replication, it would be unable to evolve into larger molecules such as RNA, DNA and proteins. Intelligent Design critics who attempt to overcome the astronomical odds against these molecules forming naturally by hypothesizing a simpler, 50-bit life-form that generated them are, like the man of La Mancha, dreaming the impossible dream.
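To make the contrast between a 50-bit and a 500-bit (or 1,000-bit) target concrete, here is a small illustrative sketch. It only converts bit counts into numbers of possibilities; it does not model any chemistry, and the bit figures are simply the ones quoted in the discussion above.

```python
# Compare the number of possibilities implied by 50, 500 and 1000 bits.
# These are just the figures quoted in the post, not chemical estimates.

from math import log10

for bits in (50, 500, 1000):
    possibilities_log10 = bits * log10(2)
    print(f"{bits:>4} bits  ->  ~10^{possibilities_log10:.0f} possibilities")

# Output (approximately):
#   50 bits  ->  ~10^15 possibilities
#  500 bits  ->  ~10^151 possibilities
# 1000 bits  ->  ~10^301 possibilities
```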

Let me finish my essay by quoting the beautiful lyrics of the song “The Impossible Dream,” composed by Mitch Leigh with lyrics by Joe Darion for the 1965 musical “Man of La Mancha”:

To dream the impossible dream
To fight the unbeatable foe
To bear with unbearable sorrow
To run where the brave dare not go

To right the unrightable wrong
To love pure and chaste from afar
To try when your arms are too weary
To reach the unreachable star

This is my quest, to follow that star
No matter how hopeless, no matter how far
To fight for the right, without question or pause
To be willing to march into Hell, for a Heavenly cause

And I know if I’ll only be true, to this glorious quest,
That my heart will lie peaceful and calm, when I’m laid to my rest

And the world will be better for this:
That one man, scorned and covered with scars,
Still strove, with his last ounce of courage,
To reach the unreachable star.

Comments
dmullenix and Mung: I agree that defining life unequivocally is difficult, and a very interesting question. However, lest we are ever tempted to think that there is some kind of smooth continuum from non-life to life, it is worth keeping in mind that, with only a small number of corner cases, it is typically very easy to distinguish between the two. Imagine a Venn diagram with two circles: living things and non-living things. The two circles would be massive, with literally millions of members in each circle. The two circles would also be essentially separate, but if we homed in with a magnifying glass on the edges where the circles are closest together we would see that there is a tiny region where the two circles touch, or almost touch, or perhaps overlap; we can't quite magnify it enough to be sure. At that specific point it is unclear, and teasing out whether there is an overlap, a touch, or a small separation is a very interesting ongoing issue. But step back a moment to look at the big picture, and it is very easy to identify 99.99% of the members of each class -- so easy that a small child can do it. In fact, when my kids were tiny toddlers I used to point to certain objects and ask them: "live or not alive?" For almost all the objects around us in the world, the answer is very obvious, even to a small child.
Eric Anderson
July 29, 2011, 08:23 AM PDT
dmullenix @79: "ID is the idea that life was designed. That can be investigated scientifically, but so far as I know the field restricts itself to trying to falsify evolution." Not quite. I know this is a nuance, but I think we need to be clear on this. ID is the idea that certain things exhibit the artifacts of intelligent activity. The concept can be, and is, applied across a wide spectrum of events, including forensics, archaeology, origin of life, and, yes, the complexity and diversity of life, which is an area where evolution also tries to provide an explanation. To the extent that the two ideas are mutually exclusive (in many senses of the word "evolution" they are not; but to the extent evolution claims to explain all apparent design through unguided processes such as RM+NS, for example, they are) ID does compete with evolution in a negative sense. As does evolution with ID. It is no more valid to say that ID is simply a negative case against evolution than it is to say that evolution is just a negative case against ID. Indeed, given Dawkins' and others' comments about the apparent design of life being obvious, evolution really functions as an attempt to challenge the concept of design.
Eric Anderson
July 29, 2011, 07:58 AM PDT
Mung at 76: Life is surprisingly hard to define. http://en.wikipedia.org/wiki/Life “It is still a challenge for scientists and philosophers to define life in unequivocal terms. Defining life is difficult—in part—because life is a process, not a pure substance. Any definition must be sufficiently broad to encompass all life with which we are familiar, and it should be sufficiently general that, with it, scientists would not miss life that may be fundamentally different from life on Earth.” The article then lists some of the descriptions of life’s properties: Homeostasis, organization, metabolism, growth, adaptation, response to stimuli and reproduction. After that, they give some proposed definitions and I like this one: “Life is a self-sustained chemical system capable of undergoing Darwinian evolution.” For first life, self reproduction is generally used, but if that definition is applied to all organisms then mules are dead. “Then what is your “life” the first of?” Life. I’m still wondering if you are a lawyer. Eric Anderson at 78: ID is the idea that life was designed. That can be investigated scientifically, but so far as I know the field restricts itself to trying to falsify evolution.dmullenix
July 29, 2011, 03:49 AM PDT
dmullenix @75: "But you're right, ID could be investigated scientifically, just as the existence of God can be. You have to stick with observable evidence and the evidence would have to support ID but if you do that and you actually found evidence, you could scientifically support ID to the same degree of confidence that science supports evolution. But nobody's looking that I know of. So far as I know, nobody even has an idea of where to look." What are you talking about? ID is a very limited concept, namely that intelligent activity sometimes leaves artifacts that can be identified after the fact and that when we see such artifacts in life the inference to the best explanation is that it was designed. It is totally based on observable evidence. People are looking -- almost everywhere we look in life we see the kinds of systems/CSI that ID argues are a reliable indicator of intelligent activity. Perhaps you are confusing ID with a search for the identity of the designer?
Eric Anderson
July 28, 2011, 10:38 AM PDT
"We’ve got to work on that reading comprehension." As someone said recently, "you people are amazing, really!"Ilion
July 28, 2011, 09:55 AM PDT
You'll need to forgive the numerous spelling errors; I neglected to spell check.
Mission.Impossible
July 28, 2011, 09:48 AM PDT
...continued... The question that needs to be answered is, where does the information encoded into the DNA come from? Where does the specification come from, the sequence which is read by enzymes and translated into an amino acid chain, or copied into an identical sequence (for replication)? It can't merely be the result of a self-replicating molecule, unless it could be shown that the molecule could account for the specification and the concrete product simultaneously -- that is, the DNA sequences and the protein-based hardware that performs the tasks of translation and transcription, sequencing and folding. We need to not only ask how the specification in the DNA came about, but how the concrete product, symbolically embedded into the DNA, came about at the same time, in order to form the functionally integrated whole. It's not just the complexity of what we observe, but the inter-dependent sophistication: that the hardware is dependent on the DNA, that the DNA is dependent on the hardware (for replication), and that the DNA codes for the hardware which operates upon it, along with coding for disparate other functional systems that are required for the cell to function properly. So not only do we need to know where the coding information in DNA comes from, but we have a set of necessary conditions for it to be useful in any way, and those include the presence of a set of enzymes which can perform the necessary tasks of translating and transcribing the DNA, and then sequencing and folding the protein. We must also account for how that specification became embedded in the DNA, specifying that which would also need to be present in concrete form at the genesis of this first self-replicating cellular organism. All of it needs to be present at one time, and any proposed OOL theory or CSI simulation will need to account for each together. Of note, the specification for any enzyme involved directly in the translation/sequencing/folding process could be considered a sort of meta-data, construing that which is required to use the rest of the data in the strand. The DNA codes for the proteins which comprise the special set of enzymes that are responsible for producing the other proteins in the cell. Even this description of intracellular operations pales in comparison to the integrated functional complexity that is present in even the simplest extant single-celled organism. Why is it again that molecular self-replication doesn't qualify as specified complexity? This is twofold: because a molecular self-replicator can be accounted for via chance and necessity -- a sufficient cause, and because specification needs to conform to an independently given pattern. There needs to be a communication of information via a protocol; and the DNA and the hardware which processes it represent the communication from one to the other. They are independent of one another, except via a necessary third party, the protocol, which deciphers one and instantiates the other. Also, the function of molecular self-replication, even on a 500 or 1000 bit molecule, would likely be readily expressed in a much shorter chain (at a much lower bit depth or sequence length) that would likely fall below the threshold for determining the presence of CSI; so you would need to judge the complexity of the function with regard to the lowest complexity at which it would operate.
What it basically comes down to is that any proposed primordial system, the first self-contained self-replicator that would be a candidate for CSI, would need to have present the following elements: it would need a storage medium, representative of DNA; it would need the sequence lexicon, which would be the specification for the sequences of proteins which are produced as a result of processing the coded regions of DNA; and it would need hardware which operated on the DNA and translated the DNA codons into the amino acid sequences which would later be folded into proteins. You need all of these things to satisfy the requirement for specified complexity and hence for the presence of CSI. m.i.
Mission.Impossible
July 28, 2011, 09:45 AM PDT
...continued... Let's say we came across a molecule in the wild, and it seemed to exhibit a high degree of complexity in that the permutations of the sequence space were significantly large, and we wanted to determine the process that formed the molecule. We could easily determine that law was responsible by observing the formation of that molecule via law-like processes, such as with crystal formation. This would provide empirical support for that view; we would have direct experience with a law-like process which assembles that molecule. Now suppose we find these molecules all over the place, but we don't ever observe the formation of them. At that point, we'd have an unknown cause for an observation. Now let's say later we discover a process which accounts for the molecule. We could then assume that, barring the discovery of a different process, this one was sufficient. Later, if we did find another process which could assemble the molecule, or other conditions under which it would form, we'd have a complementary process capable of accomplishing the same thing but perhaps via completely different laws or conditions. Until we discovered the second process, we would logically defer to the first as a sufficient cause. Given a sequenced, folded protein, we currently observe one process which accounts for it, one observed phenomenon which results in the finished product, and that's the intra-cellular processes of storing, translating, sequencing, and folding an amino acid chain into a protein. Whether there was a law-like process outside (or prior to) the cell would always remain a possibility, but we couldn't posit one until we could observe and document it, that is, demonstrate it empirically. Our explanation for the phenomenon would need to fall squarely on what we've observed; we wouldn't need to put forward as a complementary explanation the notion that it could alternatively be assembled otherwise. That said, for the origin of life we need a process that can assemble and fold proteins; and that process, as it's known, is defined by the machinery in the cell. So the only explanation for what we observe in the cell is the cell itself. We have a paradox. (One way around this is to postulate design. We didn't observe it being designed, but we know that the process of design, which is an attribute of agency, can solve this problem by lopping off huge sections of the search space via active imagination.) It's all inter-dependent. Even more so if we consider that when we observe the processes in the cell, we observe that the DNA codes for proteins, and some of those proteins actually process the DNA in order to produce other proteins. There's a paradox (readily accounted for by design); and since the necessary proteins can't sequence and fold themselves, the only place they can come from, to our knowledge, is inside the cell, via the observed processes which produce them. continued...
Mission.Impossible
July 28, 2011, 09:42 AM PDT
...continued... For a molecular self-replicator, chance and necessity are demonstrably at work and are sufficient to explain the associated phenomena. For this type of replicator, chance and necessity are known to be operating, and the process can be empirically studied. Chance and necessity are entirely sufficient explanations for what occurs in molecular self-replication. Even at 500 or 1000 bits, a molecular self-replicator would be acting out law, sufficiently explained by chance and necessity. Why couldn't we consider the replication process of a self-replicating molecule to be a function, and hence, at a sufficient bit depth, consider it sufficient to fulfill the requirement for specified complexity? Because specifications need to conform to an independently given pattern. One example is the data fed into a program which drives a robotic assembler; a set of blueprints by which an engineer builds a structure is another example. In both cases the specification is independent of the concrete product, and a protocol is required to act upon the specification to produce it. The computer program which drives the robotic assembler represents the protocol, and the 'thing' which gets assembled is the product. The engineer that builds the structure is acting as a protocol, and the structure is the product. The product is the result of the specification via the protocol, and not the specification itself. So why is a certain type of self-replicator -- specifically one with representative DNA, protein sequences, and certain hardware -- a requirement for exhibiting the bootstrapping of specified complexity? Inside the cell, translating between the DNA and the proteins that it codes for, we have hardware which acts as a protocol between two representative languages, which are entirely compatible. The first language, which codes for proteins -- the DNA -- is merely a set of symbols that provide a data source for an enzyme which translates those symbols into an amino acid chain -- a protein -- which is an item of another language, a sequence of amino acids which will be folded into a protein. The symbol table -- the DNA -- is data storage; it provides specification for the protein sequence by indicating every amino acid via a codon (a three-letter symbol which corresponds to a specific amino acid in the chain) and the order in which it appears. This is so that the enzyme can perform not only transcription for a replication event, but translation from the codon into the amino acid that it codes for. These amino acids, coded for in a given strand of DNA, comprise a protein. By chaining these amino acids together, we then have (after folding) a functional protein: something that exhibits high complexity, and exhibits function; but its form is only explicable via the process that created it. The falsification would be an empirical one: that we could observe the process of proteins being sequenced and folded in the wild, such that being machine-assembled wouldn't be a necessary condition -- we could observe the process occurring via law quite readily. Since the sequence of the protein can only be the result of a mechanical process within the cell, we have the requirements of specified complexity: we can trace back the chain of amino acids and arrive at the DNA sequence that specified them and we can determine the necessary role of the protocol. We have an independent pattern acting as a specification, and we have the product that results from the instantiation of the protocol that deciphers it.
continued...Mission.Impossible
July 28, 2011, 09:40 AM PDT
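As a purely illustrative aside on the comment above: the "inert symbol store plus translation protocol" picture can be sketched in a few lines of code. The tiny codon table below is a hypothetical four-entry subset used only to show the data-plus-protocol structure being described; it is not a model of real translation, which involves 64 codons, start and stop signals, tRNAs, and much else.

```python
# Toy illustration of the "symbol table + protocol" picture described above.
# The codon table here is a tiny hypothetical subset, not the full genetic code.

CODON_TABLE = {          # protocol: maps inert symbols to building blocks
    "ATG": "Met",        # these four assignments do match the standard code,
    "TGG": "Trp",        # but the table is deliberately incomplete
    "AAA": "Lys",
    "GGC": "Gly",
}

def translate(dna: str) -> list[str]:
    """Read the symbol store three letters at a time and emit the product."""
    codons = (dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3))
    return [CODON_TABLE[c] for c in codons if c in CODON_TABLE]

print(translate("ATGAAAGGCTGG"))   # ['Met', 'Lys', 'Gly', 'Trp']
```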
I don't mean to impose. I originally composed this for another thread, but it's no longer accepting comments, so I'm posting it here. It's intended to show what I believe would be required to demonstrate that a simulation capable of generating CSI had been bootstrapped from simpler elements. I also believe that a reasonable OOL scenario would need to account for the same. There were issues with this getting hung up in the moderation queue in the other thread, so I'm hoping this properly goes through. This is a continuation of some thoughts from my post at #244, which explains in more depth why I believe that, in order to determine that CSI has been generated by a blind process, either naturally or via simulation, multiple, interdependent systems need to be represented. Due to length I'll split it up into more than one post. It rambles a bit. In my defense, it would have taken me much longer to clean it up and condense it than it will take anyone to read it. Quoting myself from here:
Ending point: a self-replicating virtual cell, containing at least each of these: an information storage medium, and an information processing system which operates on the medium, into which the systems themselves are encoded. These items are needed because without an abstraction between information storage and functional implementation, we couldn’t do anything but violence to the concept of information, which needs some sort of encryption and decryption protocol between two sets of elements that can have nothing but an abstract link between them — and this protocol must represent a link between an inert symbolic medium and a functional element into which it translates. In other words, we need one language which describes the element being operated on, and another language which directly represents the element being operated on. This element must be functional, and the system which does the translating from one to the other must itself be encoded in both languages.
Let me define how I'm using some terms in case my usage differs some from alternate or more orthodox usage.
Specified Complexity: the presence of both specification and complexity.
Specification: a sequence of symbols which conforms to an independently given pattern or function.
Complexity: a contingent arrangement of matter exceeding 1000 bits. This level of complexity seems to account for every atom in the universe multiplied by every Planck-time quantum state that's ever occurred in its history, squared.
Information: the presence of specified complexity.
Chance: randomly contingent but otherwise inexplicable events, such as replication errors.
Necessity: That which must occur with specific arrangements of matter under a specific set of circumstances.
Law: That which governs necessity as expressed in the laws of physics and chemistry.
Protocol: A specification which describes a mapping between two compatible languages, OR an entity which performs the function of translation between the two.
continued...
Mission.Impossible
July 28, 2011, 09:38 AM PDT
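A quick back-of-the-envelope check on the "probabilistic resources" figure behind the complexity threshold in the definitions above. The inputs are the commonly used round numbers (roughly 10^80 atoms, 10^45 Planck-scale state changes per second, 10^25 seconds), so the result is an order-of-magnitude sketch, not a measurement:

```python
# Back-of-the-envelope check on the "probabilistic resources" figures quoted
# above. The inputs are commonly used round numbers, not measured values.

from math import log2, log10

atoms          = 1e80   # rough count of atoms in the observable universe
planck_per_sec = 1e45   # generous bound on state changes per second
seconds        = 1e25   # generous bound on the age of the universe (s)

resources = atoms * planck_per_sec * seconds      # total elementary "trials"
print(f"Resources: ~10^{log10(resources):.0f}  (~{log2(resources):.0f} bits)")
print(f"Resources squared: ~10^{2 * log10(resources):.0f}  "
      f"(~{2 * log2(resources):.0f} bits)")
# -> roughly 10^150 (~500 bits), and 10^300 (~1000 bits) when squared
```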
dmullenix:
We’ve got to work on that reading comprehension.
Your failure to put together a coherent argument has nothing to do with my reading comprehension. Here's what you wrote:
Successful reproduction before death is pretty much the standard definition for first life.
And all subsequent life is defined differently? Then what is your "life" the first of?Mung
July 28, 2011, 09:10 AM PDT
Mung at 61 All you folks who have not reproduced yet are not alive if you’re the first living thing. We’ve got to work on that reading comprehension. You too, Ilion in 69. Mung, I read on another board that you’re a lawyer. Is this true? vjtorley at 62: “In Signature in the Cell, on page 496, Dr. Stephen Meyer makes a testable, falsifiable prediction regarding specified complex information: No undirected process will demonstrate the capacity to generate 500 bits of new information starting from a nonbiological source. To falsify that claim, you need to do one of two things: EITHER show that a nonbiological process can generate that level of specified complexity, OR show that the specified complexity associated with (say) the first living cell is actually much less than 500 bits – i.e. that its origin was far more probable than Dr. Meyer claims it was.” The first part has been done. Darwinian evolution can add millions of bits of information to a genome. If we have reproducing polymers, they are engaging in Darwinian evolution. We can also use computer GA programs to show gains in information. Heck, just reproduction with the new genome getting stuck to the end of the original will double the size of a genome and that second half is ready to evolve out of redundancy and into active information. Unless Dr. Meyer is demanding ID style proof ala Thomas Cudworth in the “Why were so many Darwin defenders no shows” thread: “By an evolutionary pathway to the flagellum, I mean a step-by-step recipe for building a bacterium with a flagellum, out of a bacterium with no flagellum, not even a partial flagellum. I want to see the flagellum going up in stages before my very eyes, as I can watch a skyscraper going up in stories before my eyes. I want a morphological description of the bacterium for each intermediate stage, an explanation of the selection advantage of each stage, and a list of DNA bases that had to be altered to get to that stage, and what the substitutions were, and the exact locations where all this took place along the bacterial genome. And of course that implies I need a count of the number of necessary stages (10? 20? 100?), and also I need a full discussion of mutation rates and the time-frame that is being hypothesized, so that I can see whether wildly optimistic estimates of favorable mutations are being employed, etc.” “A 500-page book, minimum, complete with many diagrams of both DNA sections and morphological changes, would be needed to cover the details I’ve asked for.” He calls that “a plausible stepwise pathway” and we’re never going to see anything like that until we invent time travel so we can collect samples, so it’s a safe demand to make. The second part is what we’re discussing in this thread, once ID gets over it’s apparently unshakable belief that the first living entity was a complex cell. vjtorley at 63: “Would anyone care to comment? I strongly suspect that it would be impossible to construct a bacterium with a gene set of 7,000 or even 20,000 nucleotides.” You’re probably right, but any bacterium you see today is a late model honed by billions of years of evolution. 7,000 nucleotides would probably build a much simpler cell that would do pretty well in an early world where it didn’t have to compete with modern bacteria. Eric Anderson at 65: “I don’t have the impression that Meyer has been hesitant to say what his personal opinion on the age of the Earth is.” Try to find another instance of his giving his opinion on the matter. 
“I did not say the hearing was unfair. I said Meyer expected the hearing to be fair,” I think the ID team expected a cake walk. Evolutionary scientists boycotted the meeting and it looked like clear sailing ahead until one attorney showed up and started asking questions. You’re right about his tone. The DI was trying to slip one past the Kansas schools with the help of a board that had been captured by fundamentalists and that was not appreciated. vjtorley at 66: “the authors claimed that the length of the most primitive possible genome would be about 7,000 to 20,000 nucleotides.” While looking up “error threshold” on Wiki, I found this: “Relaxed error threshold (Kun et al., 2005) - Studies of actual ribozymes indicate that the mutation rate can be substantially less than first expected - on the order of 0.001 per base pair per replication. This may allow sequence lengths of the order of 7-8 thousand base pairs, sufficient to incorporate rudimentary error correction enzymes.” That article is at http://en.wikipedia.org/wiki/Error_threshold_(evolution) and the Kun 2005 article (from “Nature Genetis) is at http://www.nature.com/ng/journal/v37/n9/pdf/ng1621.pdf “Finally, one must keep in mind that this kind of research has little relevance for the study of the origin of life, since it is impossible to identify any of the abovementioned diverse solutions with the one adopted by the more primitive cells (63). This is especially true in the cases where a bacterium-centered approach is followed, as described in this paper.” In other words, we don’t know what the early organisms were like – they were probably very different from modern life. They certainly weren’t bacteria. vjtorley at 68: “I take it from your last post that you would regard Intelligent Design as a scientific theory, regardless of whether it is true or not. You acknowledged that a young Earth would require us to postulate a Designer. In other words, under at least one scenario (one which I don’t happen to accept, I might add), Intelligent Design would be verified! Interesting.” Such a scenario would rule out Darwinian evolution and when you get right down to it, that IS what ID is about. All of the ID papers (Dembski’s filters and meaningless huge search spaces, Behe’s irreducible complexity, Axe’s proteins – everything I can think of off hand seeks not to establish ID, but to falsify evolution. Even Stephen Barr at First Things has noticed this and he’s sympathetic to ID, or at least he’d like to be. http://www.firstthings.com/onthesquare/2010/02/the-end-of-intelligent-design And for that matter, even a 6000 year old earth wouldn’t falsify evolution. Many YECs believe that Noah only took a few instances of each group of species on the ark – two wolves, say, and two cattle and then all the various members of the dog family and cattle family somehow evolved from them in just a few thousand years. Evolution on steroids! Even if ID somehow did falsify evolution, that wouldn’t automatically prove ID. There could be some other natural process at work. To prove ID, you have to actually investigate it like Darwin investigated evolution and find evidence that actually supports ID, not just falsifies evolution. But you’re right, ID could be investigated scientifically, just as the existence of God can be. You have to stick with observable evidence and the evidence would have to support ID but if you do that and you actually found evidence, you could scientifically support ID to the same degree of confidence that science supports evolution. 
But nobody's looking that I know of. So far as I know, nobody even has an idea of where to look. I see nullasalus has covered a lot of this in #70. Nullasalus at 72: None of your examples (brute facts, Boltzmann brains, brute laws) are in any way "particular models of evolution".
dmullenix
July 28, 2011, 03:29 AM PDT
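Since the error threshold keeps coming up in this exchange, here is a minimal sketch of the classic Eigen-style relation L_max ≈ ln(s)/u, where u is the per-base copying error rate and s is the selective superiority of the master sequence. The superiority values below are illustrative assumptions, not figures from Kun et al.; the point is only to show how a per-base error rate of about 0.001, as quoted above, is compatible with genomes of a few thousand bases.

```python
# Eigen-style error-threshold sketch: L_max ≈ ln(s) / u, where u is the
# per-base copying error rate and s the selective superiority of the master
# sequence. The s values below are illustrative assumptions only.

from math import log

def max_genome_length(error_rate: float, superiority: float) -> float:
    """Longest genome maintainable against error catastrophe (classic form)."""
    return log(superiority) / error_rate

for u in (0.01, 0.001):              # per-base error rates discussed above
    for s in (2, 20, 1000):          # assumed superiority factors
        print(f"u = {u:<6} s = {s:<5} L_max ≈ {max_genome_length(u, s):,.0f} bases")
```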
vjtorley, what about the energy problem? No one seems to bring it up. What role does the Late Heavy Bombardment play in OOL? Some say life goes back to 4.2 bya, others 3.5 bya. The Earth was smashed at about 4 bya. What, in your opinion, are the implications?
junkdnaforlife
July 27, 2011, 06:50 PM PDT
"Eric, I have to disagree: the age of the earth is extremely relevant to ID. If the earth is young, or if the biosphere is young (i.e. a few thousand or tens of thousands of years) then Darwinian evolution simply can’t account for the variety of life. Some kind of Designer would have to be postulated. Darwinian evolution only works for a very old earth scenario. Which of course is the one that many independent sources of data support." Dr. Liddle, thank you for your response. I don't mean to sound condescending, but this is wrong on multiple accounts. 1- ID is not a theory that arises only once we decide the time is too short on the Earth for Darwinian evolution to do its job and so we look around for an alternate theory. True, an important implication of ID is that to even mimic minimal crude design you need massive resources and massive amounts of time. In that sense ID contradicts the evolutionary creation story, but not because the earth is 4.6BY old as opposed to 10KY old. I'll spot you the entire age of the universe -- shoot even 100BY -- you still won't find chance and necessity producing anything close to the diversity and complexity of life we see around us today. There is simply no reason to think otherwise, other than wishful thinking. The age of the earth is but a rounding error in the probabilistic calculation. 2- More important than the negative case against evolution, ID posits that we can draw an inference to design as the best explanation, not based on what we don't know, but precisely based on what we do know about the cause and effect relationships we regularly observe. 3- "Darwinian evolution only works for a very old earth scenario." I agree that the old earth gives Darwinists comfort, because they are easily impressed with all those zeroes. Unfortunately, there is no rational basis to think that even the billions of years could come close to creating what we see around us. Let's be clear: there is no direct observation of that creative power at play, there is no lab result that gives evidence that such creative power exists. It is only by gazing back at the distant past (vaguely through the mists of time and without running any actual numbers) that we can *imagine* that given all those billions of years Darwinian evolution could do the work of creation.Eric Anderson
July 27, 2011, 04:58 PM PDT
Good point, Nullasalus. But I can’t think of any. Can you? I can think of plenty. Brute facts: These things exist and began to exist, and there is no explanation. Nothing needs to be postulated. There are infinite universes and we live in one where various species got Boltzmann'd up by chance. There are brute laws that lead to a variety of things coming to exist without fail, and biological forms are among these things. The list can go on. Likewise, ID does not get falsified simply because the universe is old. In fact, some particular ID scenarios absolutely require an older universe.nullasalus
July 27, 2011, 04:54 PM PDT
Good point, Nullasalus. But I can't think of any. Can you? Not that there is a lot of point, because the evidence that the earth is about 4 and a half billion years old is pretty convincing.
Elizabeth Liddle
July 27, 2011, 04:26 PM PDT
Eric, I have to disagree: the age of the earth is extremely relevant to ID. If the earth is young, or if the biosphere is young (i.e. a few thousand or tens of thousands of years) then Darwinian evolution simply can’t account for the variety of life. Some kind of Designer would have to be postulated. No, one wouldn't - not in terms of logical possibility. A particular model of evolution would be knocked down as a result, but other possibilities would remain.nullasalus
July 27, 2011, 04:15 PM PDT
Mung @ 61, Thank you for sparing me having to make essentially the same comment.Ilion
July 27, 2011, 03:41 PM PDT
Hi Elizabeth, I take it from your last post that you would regard Intelligent Design as a scientific theory, regardless of whether it is true or not. You acknowledged that a young Earth would require us to postulate a Designer. In other words, under at least one scenario (one which I don't happen to accept, I might add), Intelligent Design would be verified! Interesting. Thanks anyway. I might add that by the same logic, if it could ever be demonstrated that life on Earth appeared almost immediately after (a) the time when the Earth became capable of sustaining it or (b) the time of the last major cataclysm that could have wiped out all life on Earth (e.g. collision with another large body) then that would also establish the existence of a Designer who created the first living thing (say, four billion years ago). ================================== UPDATE regarding the PLoS paper: Here's another comment I received on the paper, from a skeptical reader: "This doesn't really strike me as new. Kondrashov and others did lots of work showing that, if you reach a point where any further mutations are lethal, this will (obviously) halt further degeneration, provided the population doesn't go extinct. This approximates truncation selection. The issue with this is that most mutations are deleterious but low-impact, and thus accumulate. You may still reach an equilibrium point at which any further mutation is lethal, but you certainly can't have a net information gain." Thoughts, anyone?vjtorley
July 27, 2011, 12:03 PM PDT
Eric, I have to disagree: the age of the earth is extremely relevant to ID. If the earth is young, or if the biosphere is young (i.e. a few thousand or tens of thousands of years) then Darwinian evolution simply can't account for the variety of life. Some kind of Designer would have to be postulated. Darwinian evolution only works for a very old earth scenario. Which of course is the one that many independent sources of data support. That doesn't rule out an ID of course, but it means (IMO) that the Darwinian theory is viable, given an initial self-replicator capable of Darwinian evolution.Elizabeth Liddle
July 27, 2011, 11:23 AM PDT
Hi everyone, Here's some more feedback on the PLoS paper at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0021904 . As I pointed out above, the authors claimed that the length of the most primitive possible genome would be about 7,000 to 20,000 nucleotides. One of the papers they cited was this one by Gil et al.: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC515251/?tool=pubmed Microbiol Mol Biol Rev. 2004 September; 68(3): 518–537. doi: 10.1128/MMBR.68.3.518-537.2004 PMCID: PMC515251 Copyright 2004, American Society for Microbiology "Determination of the Core of a Minimal Bacterial Gene Set" Someone has kindly drawn my attention to the following passage at the end of the paper:
At any rate, we should accept that there is no conceptual or experimental support for the existence of one particular form of minimal cell, at least from a metabolic point of view. In this sense, our conclusions must be regarded as provisional. Different approaches, ours among others, should converge in several solutions (35, 49). Finally, one must keep in mind that this kind of research has little relevance for the study of the origin of life, since it is impossible to identify any of the abovementioned diverse solutions with the one adopted by the more primitive cells (63). This is especially true in the cases where a bacterium-centered approach is followed, as described in this paper. Any attempt to universalize the conclusions would necessarily include the comparison with archaeal genomes, more specifically the smallest ones (84). (Emphasis mine - VJT.)
If one looks at the PLoS paper, it is immediately apparent that L, the length of the genome for the most primitive organism, is critical to the authors' argument. They assume it is around 10,000. Of the two sources they cite, one (Gil) warns against using his team's research in connection with the origin of life. The other source (Kun A, Santos M, Szathmary, E .2005. Real ribozymes suggest a relaxed error threshold. Nature Genetics 37: 1008–1011) is not available online, but the abstract contains the following sentence:
Incidentally, this genome size [7,000 nucleotides] coincides with that estimated for a minimal cell achieved by top-down analysis, omitting the genes dealing with translation. (Emphasis mine - VJT.)
I for one would like to see some more experimental work on the question of the size of the genome for the first living cell.vjtorley
July 27, 2011, 11:03 AM PDT
dmullenix @59: I don't have the impression that Meyer has been hesitant to say what his personal opinion on the age of the Earth is. The age of the Earth is, however, (i) largely irrelevant to ID, and (ii) a hot button for most evolutionists who are trying to make a point or divide and conquer the various viewpoints that support ID. So, yes, he would be cautious in talking about the age of the Earth in those situations where it is irrelevant or may be construed in an inappropriate fashion. I did not say the hearing was unfair. I said Meyer expected the hearing to be fair. I'm not familiar with the entire hearing, nor do I have an overall impression of the hearing as a whole -- I'm just looking at the transcript you linked to. A witness has a right to know the purpose of the hearing and why they are there. Also, the witness can ask clarifying questions if needed. Go back and read the transcript -- this time not from the viewpoint of gloating about how such an ingenious questioner made Meyer admit something he didn't want to, but from a standpoint of general tone and human interaction -- and I think you will see that, while Meyer was being a bit stubborn because he didn't want his response to be taken in the wrong context or seen for more than it should be, the questioner was being a real jerk.
Eric Anderson
July 27, 2011, 10:17 AM PDT
Mung @53: I realize that some people view a crystal as an example of a self-replicating molecule. Even if it is, it doesn't teach us anything about the origin of life, of course, but humor me a minute: Is a crystal really self-replicating? Do we start with a crystal whose structure is somehow ascertained, copied and replicated in some fashion, or are we simply dealing with what essentially amounts to a dissolved substance precipitating out of a solution? If I let some salt water evaporate in a dish and precipitate out the salt, and then let some more salt water evaporate later, have the original salt crystals "self-replicated" in any meaningful sense of the word? Of course not. They've just had new deposits placed upon them, which for chemical reasons may line up to form nice shapes. Sorry, but I don't see crystal formation as even in the same category as the kind of self-replication we're looking for (not to mention the elephant in the room -- CSI).
Eric Anderson
July 27, 2011, 09:58 AM PDT
Hi everyone, While we're on the subject of falsifiable predictions, I notice that the paper I cited in #60 above ( http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0021904 ) claims that the length of the most primitive possible genome would be about 7,000 to 20,000 nucleotides. That shocked me when I read it, because according to http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/G/GenomeSizes.html , the smallest genome of any organism yet found is 490,885 base pairs (Nanoarchaeum equitans). The authors defend their low estimate by citing this paper: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC515251/?tool=pubmed Microbiol Mol Biol Rev. 2004 September; 68(3): 518–537. doi: 10.1128/MMBR.68.3.518-537.2004 PMCID: PMC515251 Copyright 2004, American Society for Microbiology "Determination of the Core of a Minimal Bacterial Gene Set" Rosario Gil, Francisco J. Silva, Juli Pereto, and Andres Moya. Would anyone care to comment? I strongly suspect that it would be impossible to construct a bacterium with a gene set of 7,000 or even 20,000 nucleotides.vjtorley
July 27, 2011, 09:38 AM PDT
dmullenix (#45) Thank you for your post. In response to my claim that proposed naturalistic origin-of-life scenarios fail to account for the origin of specified complexity in the first place, you write:
Whoa! Let’s stop and look at that for a moment. What is the specification here? The ability to reproduce itself at least once before it is destroyed. If it reproduces itself, it by definition meets the specification. The exact pattern isn't important, so long as it meets the specification of reproducing itself before it’s destroyed.
You're right to say that a self-replicating molecule possesses some degree of specificity. However, if the molecule is short and can be reached by an easy chemical pathway, the specified complexity will be low - well below the 500-bit threshold which many ID proponents have claimed that undirected natural processes cannot breach. In Signature in the Cell, on page 496, Dr. Stephen Meyer makes a testable, falsifiable prediction regarding specified complex information:
No undirected process will demonstrate the capacity to generate 500 bits of new information starting from a nonbiological source.
To falsify that claim, you need to do one of two things: EITHER show that a nonbiological process can generate that level of specified complexity, OR show that the specified complexity associated with (say) the first living cell is actually much less than 500 bits - i.e. that its origin was far more probable than Dr. Meyer claims it was.vjtorley
July 27, 2011, 09:16 AM PDT
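One way to make the bit thresholds in this exchange concrete is to convert them into minimum polymer lengths. The sketch below assumes the maximum-information case (log2 4 = 2 bits per nucleotide, log2 20 ≈ 4.3 bits per amino acid) and ignores redundancy and chemical constraints, so the lengths are only lower bounds on what each threshold implies.

```python
# Convert bit thresholds into minimum polymer lengths, assuming every position
# is fully contingent: log2(4) = 2 bits/nucleotide, log2(20) ≈ 4.32 bits/residue.
# Real sequences carry redundancy, so these are lower bounds only.

from math import log2, ceil

bits_per_nt = log2(4)    # 4 nucleotide letters
bits_per_aa = log2(20)   # 20 amino acids

for threshold in (50, 500, 1000):
    print(f"{threshold:>4} bits  ≈  ≥{ceil(threshold / bits_per_nt):>4} nucleotides"
          f"  or  ≥{ceil(threshold / bits_per_aa):>4} amino acids")
```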
Successful reproduction before death is pretty much the standard definition for first life.
And all you folks out there who have not reproduced yet are not yet alive, and if you don't reproduce before you die you will never have lived.Mung
July 27, 2011, 09:04 AM PDT
Hi everyone, An interesting paper which claims to resolve the error catastrophe problem has been published recently. Nick Matzke and Dave Mullenix might be interested in this one. I'd like to mention it in all fairness, because although it undercuts arguments I defended in this post relating to the error catastrophe, the paper does represent the latest research in the field. Here it is: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0021904 Saakian DB, Biebricher CK, Hu C-K (2011) "Lethal Mutants and Truncated Selection Together Solve a Paradox of the Origin of Life." PLoS ONE 6(7): e21904. doi:10.1371/journal.pone.0021904
Background: Many attempts have been made to describe the origin of life, one of which is Eigen's cycle of autocatalytic reactions [Eigen M (1971) Naturwissenschaften 58, 465–523], in which primordial life molecules are replicated with limited accuracy through autocatalytic reactions. For successful evolution, the information carrier (either RNA or DNA or their precursor) must be transmitted to the next generation with a minimal number of misprints. In Eigen's theory, the maximum chain length that could be maintained is restricted to [about 100] nucleotides, while for the most primitive genome the length is around [7,000 to 20,000]. This is the famous error catastrophe paradox. How to solve this puzzle is an interesting and important problem in the theory of the origin of life.
Methodology/Principal Findings: We use methods of statistical physics to solve this paradox by carefully analyzing the implications of neutral and lethal mutants, and truncated selection (i.e., when fitness is zero after a certain Hamming distance from the master sequence) for the critical chain length. While neutral mutants play an important role in evolution, they do not provide a solution to the paradox. We have found that lethal mutants and truncated selection together can solve the error catastrophe paradox. There is a principal difference between prebiotic molecule self-replication and proto-cell self-replication stages in the origin of life.
Conclusions/Significance: We have applied methods of statistical physics to make an important breakthrough in the molecular theory of the origin of life. Our results will inspire further studies on the molecular theory of the origin of life and biological evolution.
=================================
I would invite readers to comment on this paper. Here's a short comment sent to me by a scientist: "This paper is not a model of the origin of a genetic code per se (the term 'code' only appears once, and only in reference to Shannon optimal codes and error thresholds). It does give a model of the origin of a fairly long information carrying molecule (DNA, RNA, etc.) despite the Eigen paradox that predicts only short molecules can survive without some kind of error correction. This paper's solution is that the lack of reproduction for bad sequences saves the day." My own take on the paper is that the error catastrophe paradox should no longer be used as a knock-down argument against the possibility of life starting small. ID advocates would be better advised to focus on the key puzzle surrounding the origin of life: namely, the origin of specified complex information. By the way, Dave, if you're wondering where I got that sequence of letters in post #33 above, try whistling it. Sound familiar? It's Bach's Minuet in G major, transposed to the key of C, which is why it starts on G instead of D. You were on the right track. Cheers.
vjtorley
July 27, 2011, 08:54 AM PDT
Mung at 49: Me: "Remember, 'life' is when the first molecule reproduces itself before it is degraded and that takes into account any failed attempts." Mung: "Translation: Life is what I say it is regardless of any evidence to the contrary." Me: Successful reproduction before death is pretty much the standard definition for first life. I added the part about that taking into account any failed attempts to answer your objection. Life is in quotes because I'm defining it. Eric Anderson at 50: Stephen Meyer is notoriously coy about giving an age to the earth. That's the main reason I and many others thought he was a YEC, since otherwise why hide his opinion? I used the term "admission" because of his previous reluctance to disclose his opinion and I'm a little surprised the questioner got it out of him. In light of his answer, perhaps his reluctance is because he doesn't want to jeopardize the Discovery Institute's funding. In what way was the hearing unfair? Were Dr. Meyer and the other witnesses prevented from testifying? Wasn't that the hearing where the religious conservatives on the Kansas school board were trying to force the Discovery Institute's "Critical Analysis of Evolution" lesson plan into the science curriculum? Did Dr. Meyer expect the Board to roll over and play dead? Or was it unfair because they ultimately rejected the DI's plan? Mung at 55: "If dmullenix has the basic correct response, what need is there of a model explaining how you might gradually get above the error catastrophe? Basically Dave did a bunch of hand-waving and asserted without factual basis that there is no 'error catastrophe' problem. And then you come in and say he's right but he's wrong. Go figure. Dave asserted that the error catastrophe problem is not relevant to simple enough polymers. This doesn't affect the simple polymer theory of OOL. So at what point does it become relevant?" Me: When the organism gets complex enough so the genome has to specify lots of non-genomic material that is essential for reproduction. The "simple polymer" is the genome for first life. If it gets clobbered, it's a small loss and you just try again. If the next try produces a duplicate of the original polymer, the genome lives through another round. If it's slightly different but still reproduces itself and the genome makes an accurate copy of itself on the next try then you've got evolution in action. But if that genome is specifying a lot of other "finely tuned" molecules that are necessary for more efficient reproduction then small errors in the genome can sabotage their function and you're into error catastrophe zone. "At some point the length of the genome must increase, and when it does error threshold does in fact become an issue." Me: Why? So long as it's a naked genome, slight errors can be tolerated. In fact, they can lead to evolution.
dmullenix
July 27, 2011, 03:55 AM PDT
The abiotic synthesis of RNA is an ongoing project and progress is being made on several fronts. For published examples see:
1. Chemoselective Multicomponent One-Pot Assembly of Purine Precursors in Water. J. Am. Chem. Soc. (2010)
2. Phosphate-Mediated Interconversion of Ribo- and Arabino-Configured Prebiotic Nucleotide Intermediates. Angew. Chem. Int. Ed. (2010)
3. A Stereoelectronic Effect in Prebiotic Nucleotide Synthesis. ACS Chem. Biol. (2010)
Starbuck
July 26, 2011, 09:56 PM PDT
Mung: "There is no magical Darwinian evolution kicks in and error catastrophes can’t stop it." There you go again! thinking that 'science' can't have Magick.Ilion
July 26, 2011, 03:31 PM PDT
