Uncommon Descent Serving The Intelligent Design Community

You Don’t Need Darwin to Explain the Degradation of Information


In today’s Washington Post, one reads:

If Darwin was right, for example, then scientists should be able to perform a neat trick. Using a mathematical formula that emerges from evolutionary theory, they should be able to predict the number of harmful mutations in chimpanzee DNA by knowing the number of mutations in a different species’ DNA and the two animals’ population sizes.

“That’s a very specific prediction,” said Eric Lander, a geneticist at the Broad Institute of MIT and Harvard in Cambridge, Mass., and a leader in the chimp project.

Sure enough, when Lander and his colleagues tallied the harmful mutations in the chimp genome, the number fit perfectly into the range that evolutionary theory had predicted.

COMMENT: Darwin’s theory does not require harmful mutations but only beneficial ones — competition for scarce resources would then provide the necessary sieve. There is no requirement in Darwin’s theory for mutations that are inherently lethal or maladaptive. Indeed, the accumulation of such mutations says nothing about the emergence of biological innovation; it merely points to the degradation of information. The same problem arises with vestigial structures (like cave fish with functionless eyes). It’s not the loss of information/function that requires explaining, but its origination in the first place.

Comments
I think this quote summarizes it best: "Design is not merely an argument but also a scientific theory. Specified complexity in particular provides an information-theoretic apparatus for understanding the design features of the physical world. Whereas the work of a design argument is done as soon as one uncovers a designing intelligence, this is only the start for a theory of intelligent design. To analyze the information in a design structure, to trace its causal history, to determine its function and to ascertain how it could have been constructed are just a few of the questions that a theory of intelligent design addresses. Intelligent design far exceeds the design arguments of the past."Gumpngreen
October 3, 2005, 09:05 AM PDT
Very well then...forensics, criminology, SETI, cryptography. Those are just a few examples of where design arguments are used in the real world. In his books Dembski states he would like to see ID used in a variety of scientific disciplines. Now, the design arguments in actual use are usually not as rigorously defined as Dembski's work, from what I've seen. For example, I was watching a science program where Japanese/Indonesian scientists claimed to have found an ancient temple underwater along an island coastline. The problem was that this discovery conflicted with current historical narratives. Though the structure contained large blocks with right angles, several other scientists who investigated later thought the "temple" was the result of natural processes (geology, wave motion). The original scientists used a design argument and several pieces of evidence (small, internal rock cuts comprised of right angles) in their defense. Since their design arguments were weaker than Dembski's methods, the final result was pretty much inconclusive, with no clear "winner" as defined by the program. When the program ended I was left thinking that the temple would make an interesting test case for ID (hey, Bill, like scuba diving?). Your other objections are covered in depth in Dembski's books, and this isn't the place to rewrite them. Though... "It is a threat to science if done the way it currently seems to be done, by fighting court battles because of religious motivations." You do realize that in court cases like Dover the ID side is the defendant? It's not like they WANT to be dragged into court. And you're right, the plaintiffs apparently do have religious/philosophical motivations...Gumpngreen
October 1, 2005, 09:22 AM PDT
Gumpngreen wrote: "Dembski considers your primary objection to be what he calls a 'gatekeeper' objection." I suppose it is. I don't think, based on my limited reading, that ID qualifies as science. I don't care about Karl Popper or other philosophical arguments because I have my own intuitive "science detector." It works this way: Real science engages the real world whenever it can. Miller and Urey engage the chemicals of life, fossil hunters engage fossils, programmers write genetic algorithms... Dembski's ideas might be used to engage the real world in other ways, but they are not being used to do so. For example, if Dembski can really detect specified complexity then he should get some people to go out into the real world and actually measure the amount of specified complexity in, perhaps, animal communications. There is a controversy about whether dolphins have a language: engage that controversy. Do dolphins have a language? Shouldn't Dembski's concepts have value there? http://www.dauphinlibre.be/langintro.htm Compare the specified complexity of dolphin language, bird songs, whales, octopi, etc. Make it at least a real scalar value (if not a multidimensional one) by testing the concept against the real world. If you don't, you're arguing about how many angels can dance on the head of a pin. Once you do that, the real world will challenge your ideas with its reality, just like a good theory about mitochondrial DNA can get shot down by one little fact. Gumpngreen wrote: "...objections are made in attempts to find fault with design because of the threat that design is claimed to pose to 'science'…" It is a threat to science if done the way it currently seems to be done, by fighting court battles because of religious motivations. Gumpngreen wrote: "...philosophies improperly equated with being science." My philosophy is simple: engage the real world and stop sounding like you're arguing about angels dancing on pins.
Gumpngreen wrote: "These objections are not made because the theoretical or empirical case for design is scientifically substandard." A lot of scientists say it is substandard, and I'm inclined to take their word for it because it agrees with my own subjective evaluation. Gumpngreen wrote: "I suggest you try reading Dembski’s books before you attempt further critiques." I should, you're right. But I'm not that motivated to. In the end my opinion will not matter. Before I take more interest in ID than I do now, I have to see it engage the real world. I have to see scientists using it on something other than a negative argument against evolution.Norman Doering
September 30, 2005, 08:21 PM PDT
PaV wrote: “…if you, the Designer, were to build a system that was completely “error-proof” (please carefully understand this term within the given context), then it would be “unchanging”, and hence, less than perfect in a “changing” environment!” Norman wrote: No. You can set up L-systems to evolve according to some aesthetic criterion and get quite a lot of change you never expected. It’s done quite often, sometimes to study plant biology. It’s not unchanging — the change is just limited.PaV
September 30, 2005, 08:19 AM PDT
Ah, I see. I was so focused on your first post in this thread (the original point I thought you were complaining about) that I didn't realize your focus was on a later point. Try being more specific next time. Dembski considers your primary objection to be what he calls a "gatekeeper" objection. These objections are made in attempts to find fault with design because of the threat that design is claimed to pose to "science"...in particular, philosophies improperly equated with being science. These objections are not made because the theoretical or empirical case for design is scientifically substandard. As in, your objection is not a scientific objection. When it comes to Dembski's own religious beliefs, the answer to your objection is in two words: "The Fall." I suggest you try reading Dembski's books before you attempt further critiques.Gumpngreen
September 30, 2005, 08:09 AM PDT
PaV wrote: "In other words, what might a “perfectly” designed system look like. That is, our first hunch about a “perfectly” designed genetic system might be, as you’ve stated, one in which there are no errors. That’s really your point, Norman, isn’t it?" As far as that part of my argument goes, yes. If there were such a thing as a benevolent monotheistic God, this is not the world it would make. This is not a benevolent world. PaV wrote: "...if you, the Designer, were to build a system that was completely “error-proof” (please carefully understand this term within the given context), then it would be “unchanging”, and hence, less than perfect in a “changing” environment!" No. You can set up L-systems to evolve according to some aesthetic criterion and get quite a lot of change you never expected. It's done quite often, sometimes to study plant biology. It's not unchanging -- the change is just limited. Like I said before, you can't write a word processor or spreadsheet with turtle or with L-system languages, but you can experience the potential for more benevolent change than you could ever deal with in your lifetime.Norman Doering
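[Editor's note: for readers unfamiliar with L-systems, the "change is just limited" point can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the single rewrite rule is the classic textbook branching-plant rule, not anything from the thread): each generation rewrites the whole string in parallel under fixed rules, so the output keeps changing and growing, but only in ways the rules permit.]

```python
# Minimal deterministic L-system: parallel string rewriting.
# "F" means draw forward; "[" / "]" push/pop turtle state; "+" / "-" turn.
# The single rule below is the classic textbook branching-plant rule.
RULES = {"F": "F[+F]F[-F]F"}

def step(s: str) -> str:
    """Rewrite every symbol in parallel; symbols without a rule pass through."""
    return "".join(RULES.get(ch, ch) for ch in s)

def generate(axiom: str, generations: int) -> str:
    for _ in range(generations):
        axiom = step(axiom)
    return axiom

# Each generation elaborates the branching pattern, but only within
# what the fixed rules allow -- change happens, yet it is bounded.
print(generate("F", 1))        # F[+F]F[-F]F
print(len(generate("F", 3)))   # 311 symbols after three rewrites
```

Interpreted by a turtle renderer, successive generations look like increasingly bushy plants; no sequence of rewrites ever escapes the vocabulary the rules define.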
September 29, 2005, 10:15 PM PDT
I said - and retracted: “...if there were no harmful mutations that would be a sign of “design”” DaveScot wrote: "Well, I’m glad you retracted that silly canard. You are using a religious presumption that the designer must be perfect." Yes. It is generally assumed that the Designer in ID is God. However, I now see where you're going - it wasn't clear to me: DaveScot wrote: "ID is a scientific theory, Norman. It makes no presumptions whatsoever about the character of any designer or designers." I disagree. I think you have to say something about the designer or else the theory has no consequences and cannot be tested. If you can't test it in some way, it's not science. Also, there's the question of how you even define "intelligence". If you wanted to you could define -- in fact you must -- intelligence in ID as only "that unknown which creates apparent Specified and Irreducible complexity." You cannot assume that the intelligence has desire, forethought, intention, emotion, etc. That would be anthropomorphic. If you do that, then the claims for ID are only negative claims against a certain, possibly misconceived, version of evolutionary theory. It offers nothing else and so falls into a god-of-the-gaps argument. DaveScot wrote: "Languages like Turtle can make meaningless pictures too." What do you mean by "meaning"? Some would say that "life is meaningless." A lack of meaning is not a fatal error. DaveScot wrote: "Also, Turtle can’t make any pictures without intelligent guidance." That's not entirely true. You could set up turtle to make random drawings. In fact, one of the things people do with L-systems (a 3D turtle-like language) is use genetic algorithms to evolve things that look like plants. The only intelligent guidance there is setting up the system, and at best that's similar to the fine-tuning argument for intelligence. Is Intelligent Design distinct from the fine-tuning argument, or does it overlap with it?
Darwinian evolution and the fine-tuning argument using a non-involved god are compatible. If so, ID would not be an alternative theory and all its negative arguments are cast aside. DaveScot wrote: "In fact Turtle itself wouldn’t exist without an intelligent designer." Unless it evolved. But it didn't, because we know that from history.Norman Doering
September 29, 2005, 10:01 PM PDT
I just found this on a recent post on this blog (Missense Proteins): The narrow range of tolerance of deviations from optimum characteristics and the significant effects of mutations give rise to a substantial degree of epistasis for fitness. Moreover, mutations simultaneously affect function, stability, aggregation and degradation. For these reasons, mutations might be selectively beneficial on some genetic backgrounds and deleterious on others. Fortuitously, this fits in ideally with my point in the last post.PaV
September 29, 2005, 06:48 PM PDT
Norman, it seems to me--if I can butt into this discussion--that you're hung up on the presence of mutations; specifically, "harmful mutations”. So, for example, in the case of thalassemia (which I have), the homozygous presence of this "mutation" is very deadly; in fact, it's more deadly than sickle-cell anemia (of which it is a variant). Now, first, how do we know that it is a mutation? Because it's the exception and not the rule? However, if it can be handed down from parent to child--as it has in my family for generations--then why does it persist? Why hasn’t this “mutation” been shed by the various DNA correction mechanisms that exist in humans? Is it because this "mutation" is, in fact, not "harmful", but adaptive? That is, given the right conditions--the presence of malaria-carrying mosquitoes (just as for sickle-cell)--this trait actually is a veritable "life-saver". In other words, if we are to choose between alive-and-anemic and not-anemic-but-dead, the choice is apparent. So, yes, there is an advantage--given certain circumstances--to having this gene present in the gene pool. (N.B. Darwinists will talk about all this in terms of "fitness", but as 'taciturnus' pointed out already, this discussion is somewhat pointless.) The point to be taken here is that when we start talking about "harmful" and "mutations" (and especially when we combine the two), we need to be careful. Do we really understand enough about how genetics works to use these terms precisely? As a take-off point for my answer, here's an exchange between you and DaveScot: Norman says: “I said if there were no harmful mutations that would be a sign of “design”” DaveScot replies: "Well, I’m glad you retracted that silly canard. You are using a religious presumption that the designer must be perfect."
My answer would be that in order to use the terms “mutation” and “harmful” properly, in terms of biological systems, we have to have a very good idea of what “constraints” this system might be designed for. In other words, what might a “perfectly” designed system look like. That is, our first hunch about a “perfectly” designed genetic system might be, as you’ve stated, one in which there are no errors. That’s really your point, Norman, isn’t it? But there is this problem with the presence of no errors: it is UNCHANGING. Now, pardon me for the caps, but this, I think, is a very important point: if you, the Designer, were to build a system that was completely “error-proof” (please carefully understand this term within the given context), then it would be “unchanging”, and hence, less than perfect in a “changing” environment! Thus, we’re forced to ask: what does a “perfectly-designed” system mean? Well, it depends on what it’s being designed for. And if you want organic life to continue on in ever-changing environments, then the “perfect” design needs to have a “system of adaptations” already “built-in”. So a very wise Designer would throw in the capacity to allow certain changes to take place in the “information system” under certain adverse conditions. We see this in the case of bacteria; and, in the case of thalassemia, we see it in humans. In the case of bacteria, it might work by allowing bacteria to simply change protein patterns around and see what survives, and then capitalize on the survivors. In the case of humans, it might work in the same way, or it might work by already having in place certain protein permutations that would, as occasion might warrant, become more prevalent. 
(That is, in the case of global warming, when surrounded by malarial insects, it’s a good thing to have thalassemic and sickle-cell anemic-carrying individuals; and through a process of “adaptation” (that can involve what we understand as “natural selection”) this “harmful” “mutation” becomes more common (its gene frequency increases) in the race of individuals.) There’s a nuance in this line of argumentation. And I’m not claiming that this is the way things are. I only say that to term certain genetic sequences “harmful mutations” presupposes more than we probably know right now. And then to take the further step of saying the presence of these “harmful mutations” is “proof” that it wasn’t “designed”, is really presupposing much more than the evidence can now support. Hope this helps.PaV
September 29, 2005, 06:27 PM PDT
ps- my cut and paste didn't render the exponents correctly. That's 10 to the 18th power for example.MGD
September 29, 2005, 11:58 AM PDT
This is interesting: "Recently, scientists from the University of Bath (U.K.) and from Princeton University worked to quantify the error-minimization capacity of the genetic code. Early work indicated that the naturally occurring genetic code withstands the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with codon assignments different from the universal genetic code.[16] This initial work overlooked the fact that some types of substitution mutations occur more frequently than others in nature. For example, an A-to-G substitution occurs more frequently than does either an A-to-C or an A-to-T mutation. When researchers incorporated this correction into their analysis, they discovered that the naturally occurring genetic code performed better than one million randomly generated genetic codes. They also found that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity.[17] Nature's universal genetic code is truly one in a million—or better! The genetic code's error-minimization properties are actually more dramatic than these results indicate. When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution where the naturally occurring genetic code's capacity occurred outside the distribution.[18] Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This finding means that of 10^18 possible genetic codes, few, if any, have an error-minimization capacity that approaches the code found universally in nature.
Obviously concerned about the implications, some researchers have challenged the optimality of the genetic code.[19] The teams from Bath, Princeton, and elsewhere, however, have effectively responded to these challenges.[20]" from: http://www.reasons.org/resources/fff/2002issue08/index.shtml#deciphering_design also: http://www.idthink.net/biot/code/index.html "And they look like religious blinders to me." Norman, you describe yourself as an atheist/agnostic. Could it be that your religious beliefs are a determining factor in your beliefs? Or are you also a hypocrite?MGD
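[Editor's note: the error-minimization comparison described above can be reproduced in miniature. The Python sketch below is a toy version of the published method, with stated assumptions: it scores a code by the mean squared change in Kyte-Doolittle hydropathy over all single-base substitutions (the published studies used "polar requirement", a different amino-acid property, and weighted mutation types unequally), and it builds random codes by permuting amino acids among the standard code's codon blocks, which preserves the code's redundancy structure.]

```python
import random

# Standard genetic code, bases in T,C,A,G order (first base varies slowest).
BASES = "TCAG"
CODE64 = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
STANDARD = dict(zip(CODONS, CODE64))

# Kyte-Doolittle hydropathy values, standing in for the "polar
# requirement" measure used in the published studies (an assumption
# of this sketch).
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
         "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
         "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
         "K": -3.9, "R": -4.5}

def error_cost(code):
    """Mean squared hydropathy change over all single-base substitutions."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":                      # skip stop codons
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = code[codon[:pos] + b + codon[pos + 1:]]
                if mut == "*":
                    continue
                total += (HYDRO[aa] - HYDRO[mut]) ** 2
                count += 1
    return total / count

def random_code():
    """Permute amino acids among codon blocks, preserving redundancy."""
    aas = sorted(set(CODE64) - {"*"})
    perm = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: (aa if aa == "*" else perm[aa]) for c, aa in STANDARD.items()}

random.seed(1)
standard_cost = error_cost(STANDARD)
costs = [error_cost(random_code()) for _ in range(1000)]
better = sum(c <= standard_cost for c in costs)
print(f"standard: {standard_cost:.2f}; random codes at least as good: {better}/1000")
```

On a typical run the standard code scores lower (better) than the vast majority of the 1,000 permuted codes, echoing the direction of the "one in a million" result, though this simplified metric will not reproduce the exact published numbers.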
September 29, 2005, 11:55 AM PDT
Languages like Turtle can make meaningless pictures too. Also, Turtle can't make any pictures without intelligent guidance. In fact Turtle itself wouldn't exist without an intelligent designer. Maybe there's more to your analogy than I thought! Thanks! You've provided a great bit of evidence for design! ;-)DaveScot
September 29, 2005, 11:30 AM PDT
Norman says: "I said if there were no harmful mutations that would be a sign of “design”" Well, I'm glad you retracted that silly canard. You are using a religious presumption that the designer must be perfect. ID is a scientific theory, Norman. It makes no presumptions whatsoever about the character of any designer or designers. Let's stick to science and keep your religious beliefs about perfect designers out of it, okay? Thanks in advance.DaveScot
September 29, 2005, 11:26 AM PDT
Actually Darwin didn't posit random mutation as a significant source of variability at all. He was quite Lamarckist and believed that inheritance of acquired characters drove evolution. Origin - chapter 1, subsection: sources of variability, paragraph 2 "But I am strongly inclined to suspect that the most frequent cause of variability may be attributed to the male and female reproductive elements having been affected prior to the act of conception. Several reasons make me believe in this; but the chief one is the remarkable effect which confinement or cultivation has on the functions of the reproductive system; this system appearing to be far more susceptible than any other part of the organization, to the action of any change in the conditions of life." Origin - chapter 5, subsection: effects of use and disuse, paragraph 1 From the facts alluded to in the first chapter, I think there can be little doubt that use in our domestic animals strengthens and enlarges certain parts, and disuse diminishes them; and that such modifications are inherited. Under free nature, we can have no standard of comparison, by which to judge of the effects of long-continued use or disuse, for we know not the parent-forms; but many animals have structures which can be explained by the effects of disuse. http://www.literature.org/authors/darwin-charles/the-origin-of-species/chapter-05.html It's absolutely amazing how many people either never read, completely misunderstood, or purposely misrepresent Darwin's theory.DaveScot
September 29, 2005, 11:20 AM PDT
"Even your example of male mitochondria appears to be some rare error, not common." It's not so rare that it doesn't totally destroy the ability to determine the passage of time by the number of mutations in mitochondrial DNA. It was rare enough to escape notice until now, is all. As for falsification - evolution makes the theoretical presumption that bacteria billions of years ago were using DNA. No ancient DNA exists. How may that claim be either verified or falsified? If you can't answer, then by your own definition it's pseudoscience, innit? HAHAHAHAHAAHA! I won't go quite that far; I bring it up just because I like to see neoDarwinian narrative apologists hoist by their own petards. It becomes a theoretical science when it can be neither verified nor falsified by experiment or observation. Repeat after me, Norman, enough times until it sinks in: Modern biology is the study of living tissue. It's an experimental science. Historical biology is the study of imprints in rocks. It's a theoretical science.DaveScot
September 29, 2005, 11:02 AM PDT
Gumpngreen wrote: "Changing the subject…? YOU were the one who brought up computers as an analogy to DNA in the first place." Only partly right: I brought up the subject of computer languages, not the internal workings of computers. (And I don't know much about error correction in chips or BIOS, and it seems irrelevant to the subject and my point.) You and Dave started talking about error correction, which is irrelevant to the DNA-and-language-types point I was making, and misleading, since errors obviously happen in our DNA. Even your example of male mitochondria appears to be some rare error, not common. My point was about DNA necessarily being like machine code or assembler, and not like "turtle" or even the L-system languages, where mistakes can almost never be really harmful or fatal. I said if there were no harmful mutations that would be a sign of "design" but -- now that I think about it -- why couldn't an error-preventing language evolve in the first few hundred million years of evolution? It would be a case of evolving towards evolvability -- evolving so that your line of DNA could change and adapt rapidly and thus beat lines not programmed to evolve. However, a genetic language that couldn't make mistakes is not what we have. Error correction or not -- there are changes and variability in human DNA, and a lot of them are harmful, a lot more are neutral, and some beneficial -- your mitochondria example would be an example of a harmful and rare mutation. You can argue for all the error-correcting features you want, but that will not change the fact that variation creeps in. To argue about error correction in the face of measured errors is to be wearing blinders. And they look like religious blinders to me. I said evolution of a sort would occur in an error-limited language, but probably only in a very limited and narrow way, because if you rid a language of the possibility of errors you also rid it of some of its creative power.
Languages like "Turtle" can make pretty pictures but not word processors and spreadsheets. So, I still stand by my point, and nothing you've said on that seems to affect it. I also tried to introduce the concept of "search space", which is, I think, exactly relevant to Dembski's claim -- and none of you have even picked up on that. I doubt you even know what I'm talking about. To you it seems... what, nonsense? Yet it's actually the critical point. So, it looks to me like you guys are avoiding the relevant, which you don't even seem to understand, to sidetrack into the irrelevant and misleading. I may not be winning your respect -- but you're not winning mine either.Norman Doering
September 28, 2005, 10:07 PM PDT
Changing the subject...? YOU were the one who brought up computers as an analogy to DNA in the first place. Not to mention, your first post/original point in this thread was in relation to the number of generations required for speciation from a common primate ancestor. I already posted something relevant to that and pointed it out. Using an ad hominem attack as a smokescreen for the deficiencies in your arguments won't buy you any points.Gumpngreen
September 28, 2005, 08:57 PM PDT
Norman - how on earth do you falsify mud-to-man macroevolution? You don't. You can't. That would mean mud-to-man macroevolution isn't "real science," which is the problem. The evidence isn't there. These changes and new discoveries prove this fact more every day. This is a problem for the supposed common ancestor of man and chimps! That means there's a major problem for chimp-to-man evolution, period, which also means there's a major problem for mud-to-man evolution.jboze3131
September 28, 2005, 07:44 PM PDT
Gumpngreen wrote: "...DaveScot isn’t a Biblical Creationist, he is an agnostic." Well, he may say so, but both of you are still changing the subject and going off on an irrelevant tangent, and you are more insulting than informative. Looks like creationist tactics to me. "And I imagine he posted that article by Dawkins since it discussed information theory in relation to computers." I imagine he did -- but why is it relevant? What is the point in relation to the original subject?Norman Doering
September 28, 2005, 07:14 PM PDT
jboze3131 wrote: "isnt it funny how all these new discoveries can overturn much of what we thought we know about this subject and change biology..." It's called falsifiability, a trait that real science has. One has to make predictions about the real and measurable world.Norman Doering
September 28, 2005, 06:56 PM PDT
Isn't it funny how all these new discoveries can overturn much of what we thought we knew about this subject and change biology and the study of it in general - yet no matter how much evidence comes about, they can never find it in their brains to change any aspect of mud-to-man macroevolutionary theory? So, that means they must force the newly emerging facts into that dogmatic mindset no matter what the facts tell us. I'm starting to think an intelligent designer could come out and give a worldwide press conference from design central, and they'd try to take that evidence and cram it into their mud-to-man worldview.jboze3131
September 28, 2005, 03:26 PM PDT
paternal mitochondria link (no subscription required) http://www.newscientist.com/article.ns?id=dn2716DaveScot
September 28, 2005, 03:23 PM PDT
Gumpngreen Evidently more than just knowledge of computer architecture is deficient in Norman's case. Yours is rather impressive though. There are a good number of erudite posters here. Interestingly not a one of them has been a neoDarwinian narrative apologist. If they exist they don't show up to make comments on this blog. I'm still trying to find a place where they DO show up. Panda's Thumb was a total bust. So far it's a null set.DaveScot
September 28, 2005, 03:12 PM PDT
Also, your original point related to chimps to man is pretty much discussed here: https://uncommondescent.com/index.php/archives/360 Oh, and DaveScot isn't a Biblical Creationist, he is an agnostic. And I imagine he posted that article by Dawkins since it discussed information theory in relation to computers.Gumpngreen
September 28, 2005, 01:37 PM PDT
Norman: http://sciencenow.sciencemag.org/cgi/content/full/2004/514/1 “Mitochondrial Eve,” the hypothetical mother of all modern humans who lived about 150,000 years [sic] ago, might be lying about her age. A key assumption in determining how long ago she lived—that molecules of mitochondrial DNA do not swap segments with one another—is false, researchers now say. Their findings call into question a multitude of findings in evolution, early human migration, and even the relations between languages. ............ The mitochondria in our cells, organelles that provide the ATP power supply, contain small amounts of DNA. You may have heard that we inherit this mitochondrial DNA only from our mothers. Now, scientists have found evidence that male mitochondrial DNA can be inherited, and might be mixed in with the rest of the mitochondrial DNA. Since the implication is that this is going on all the time in our cells, mitochondrial DNA would be rendered untrustworthy as a genealogical tracer and dating method.Gumpngreen
September 28, 2005, 01:32 PM PDT
DaveScot wrote: "If you can’t figure it out on your own I’m afraid this isn’t the place to provide you the missing education. Here’s a beginning from a source you might trust: http://www.skeptics.com.au/articles/dawkins.htm" I see, a warning from Richard Dawkins about how creationists distort things. I've already noted that you guys keep changing the subject rather than dealing with the original point. I wasn't making any points about redundancy, error detection, correction, and unexecuted code. But you want to. I see no need to because there are, no matter how good the detection and correction in our cellular systems, mutations that are measured in our population. Thus, anything you have to say about those systems of error correction can't change what is already measured and would be irrelevant in the context of this discussion. Or, to put it simpler, trying to prove mutations and errors are rare isn't going to work when we can measure the rate of mutation in our own population. We don't need to know about the cellular systems and how mutations creep in when we can measure the mutation rate. They even used this rate as a clock to come up with a "Mitochondrial Eve." So, again, instead of calling me ignorant, just make your point if you have a relevant one to make.Norman Doering
September 28, 2005 at 12:33 PM PDT
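The "mutation rate as a clock" idea mentioned above can be sketched numerically. This is a minimal illustration only; the segment length, difference count, and substitution rate below are placeholder values chosen for round numbers, not the actual figures behind the mitochondrial Eve estimate:

```python
# Minimal molecular-clock sketch: estimate time since two lineages
# diverged from the number of observed differences and an assumed
# constant mutation rate. All numbers are illustrative placeholders.

def divergence_time(differences, sites, rate_per_site_per_year):
    """Years since two lineages shared an ancestor.

    Differences accumulate along both lineages independently,
    hence the factor of 2 in the denominator.
    """
    per_site_divergence = differences / sites
    return per_site_divergence / (2 * rate_per_site_per_year)

# e.g. 30 differences over a 1,000-site mtDNA segment, at an
# assumed 1e-7 substitutions per site per year:
t = divergence_time(differences=30, sites=1000, rate_per_site_per_year=1e-7)
print(f"estimated divergence: {t:,.0f} years")  # ~150,000 years
```

Note that the whole estimate stands or falls on the clock assumptions; if mtDNA recombines, as the quoted article suggests, the constant-rate premise is exactly what breaks.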
The neutral-mutation idea in species with obligatory sexual reproduction is probably specious. A notable feature of such species is that they all become extinct after some period of time. Greater than 99% of all such species that ever lived are no longer with us.

This may be handily explained with a computer-programming analogy to what's commonly called "unexecuted code". Unexecuted code is simply code that hasn't been tested. Most commonly it's error-handling code in which the error that triggers it has never been encountered. Less often, but still frequently, it's code that handles situations the programmer envisioned but the software testing never duplicated.

The analogous situation with DNA is that much of it has no known function in the normal life cycle of the organism. In other words, it doesn't appear to get executed. Mutations in these stretches of code appear to be neutral. But what if they're only neutral in the short term? Imagine that these stretches of DNA may be used in some manner by descendants of the organism in which the mutation occurred. Accumulation of such neutral mutations would then make the species less and less robust going forward in time and eventually drive it to extinction because, as we all should know, random DNA mutations that are actually expressed usually kill the carrier in the embryonic stage and otherwise almost always have a harmful effect.

One might try to make the case that so-called "junk DNA" is evolutionary baggage that was once functional but lost its function and will never be used again. This thinking is in direct opposition to the theory of natural selection. Replicating a DNA molecule is a big time- and energy-consuming job for cellular machinery. Stretches of it that have no function should, in theory, be quickly removed by random deletion events; when the deleted code has no function, deletion increases the reproductive efficiency of the cell, giving it an edge over competitors carrying the extra burden.

This raises the question of the c-value paradox. One might reasonably assume that as the complexity of an organism increases, so too does the length of its DNA code. In fact, this is generally not the case, and the situation is known as the c-value paradox: there is little correlation between species complexity and genome size.

Here's a little table illustrating the ranges by kingdom/phylum size grouping: http://www.genomesize.com/Cvals.jpg

Here's a list of individual organisms sorted by genome size: http://www.cbs.dtu.dk/databases/DOGS/abbr_table.bysize.txt

I wonder what water lilies, pine trees, and amoebas are doing with 10x the amount of DNA that humans possess. What lurks in all that unexecuted code? Why do these organisms carry it when, according to standard theory, it's a survival disadvantage? Given that the universal common ancestor on Earth might have been designed and/or evolved somewhere else and was transported to this planet (panspermia), why couldn't that ancestor have been something like Amoeba dubia, with the core specifications for all the irreducibly complex structures we see in living things today residing in its vast genome, unexpressed, unexecuted, until the appropriate conditions triggered eventual expression millions or billions of years later? So I wonder what wonders lurk in the unexecuted DNA code of

DaveScot
September 28, 2005 at 9:38 AM PDT
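The accumulation scenario in the comment above can be put in code, in the spirit of its own programming analogy. Below is a toy simulation only; the region size, mutation rate, and generation count are arbitrary made-up values, not measured biological rates:

```python
import random

# Toy illustration of the 'unexecuted code' scenario: mutations in an
# unexpressed region are invisible to selection, so nothing removes
# them and they simply accumulate. All parameters are arbitrary.

random.seed(42)  # reproducible run

UNEXPRESSED_SITES = 1_000  # bases with no current function
MUTATION_RATE = 1e-4       # per site, per generation (arbitrary)
GENERATIONS = 1_000

# 0 = ancestral base, 1 = mutated away from the ancestral state
region = [0] * UNEXPRESSED_SITES

for _ in range(GENERATIONS):
    for i in range(UNEXPRESSED_SITES):
        if random.random() < MUTATION_RATE:
            region[i] ^= 1  # a hit flips the site's state

degraded = sum(region) / UNEXPRESSED_SITES
print(f"unexpressed sites altered after {GENERATIONS} generations: {degraded:.1%}")
```

Because nothing in the loop penalizes a flipped site, the altered fraction only drifts upward toward equilibrium; an expressed region under selection would not accumulate damage this way.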
Well... Dave is making comments about such occurrences being the cause of built-in adaptive mechanisms. And if you're going to compare DNA and computers: DNA uses modular design, genetic algorithms, compression, encryption, multiple secure backups, exception handling, self-compiling, gradients and switches that allow its operations to be context-sensitive, feedback loops, and self-generated 'test patterns' that allow the system to tune itself. While the system is designed for preventing mutations and maintaining stasis within certain boundaries, mutations can occur. In fact, "mutations" or self-modifications are purposely introduced during certain functions, but these changes are kept under tight constraints.

Due to recent DNA research, we now know that in order for mutations to be beneficial they have to be precise and made in multiple exact locations. This is because of pleiotropy, where a mutation to one gene results in a cascade of changes, since each gene is expressed in multiple ways. In order to maintain their models for the evolution of DNA, some scientists have invented a new single-effect gene termed a "generalist" to make their models plausible. As far as I know, this gene has never been observed in nature.

In E. coli, replication proceeds along DNA at a rate of about 1000 nucleotides per second, and a wrong base is incorporated about once every 10^5 steps. But the polymerase contains a proofreading activity which can catalyze the hydrolysis of the phosphoester bond. This proofreading activity itself makes an error about once every 100 hydrolyses. Together, this gives an error rate of about 1 in 10^7. And for reference, that 1-in-10-million rate doesn't take into consideration error correction by separate DNA repair enzymes, which brings the overall error rate down to about 1 in 10^10. The human genome contains about 3.2×10^9 base pairs, which means that on the order of one error is made during each genome replication.

Example error rates in human-designed technology, for comparison:

- 1 in a million: acceptable voice quality through a T1 channel
- 1 in a million: microwave signaling error rates
- 1 in a billion: a modern hard drive's acceptable error rate
- 1 in 10 billion: DNA

Oh, and I pulled most of the above from a little article I'm working on... if you see any errors, please point them out. In one of my previous jobs I worked with several hardware engineers (I'm more software-oriented) on reducing the error rate in a particular company's satellite communications hardware that was used by the US, UK, Australia, and some other countries. By the end of the project we had the best hardware in the entire defense industry (maybe not in gee-whiz features...), but it was still nowhere near what DNA achieves.

Gumpngreen
September 28, 2005 at 8:33 AM PDT
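The error-rate arithmetic in the comment above can be checked in a few lines. This sketch just reproduces the comment's own cited figures (polymerase error about 1 in 10^5, proofreading missing about 1 in 100 of those, repair enzymes bringing the total to about 1 in 10^10):

```python
# Combining the replication error rates cited above. All figures are
# the comment's numbers, reproduced here for arithmetic only.

polymerase_error = 1e-5   # wrong base per nucleotide incorporated
proofread_miss = 1e-2     # fraction of wrong bases proofreading misses

after_proofreading = polymerase_error * proofread_miss
print(f"after proofreading: about {after_proofreading:.0e} per base")

overall_error = 1e-10     # after separate repair enzymes (cited figure)
genome_size = 3.2e9       # human genome, base pairs

errors_per_replication = genome_size * overall_error
print(f"expected errors per genome replication: {errors_per_replication:.2f}")
```

With these figures the expectation works out to roughly a third of an error per replication, i.e. on the order of one error every few genome copies.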
"Why? What point do you want to make about redundancy, error detection and correction, and unexecuted code?"

If you can't figure it out on your own, I'm afraid this isn't the place to provide you the missing education. Here's a beginning from a source you might trust: http://www.skeptics.com.au/articles/dawkins.htm

DaveScot
September 28, 2005 at 8:20 AM PDT
DaveScot wrote: "If you want to go beyond the trivially stupid in making analogies..."

But that's exactly how everyone here argues: trivial metaphors and analogies.

"... better start talking about redundancy, error detection and correction, and unexecuted code, among other things."

Why? What point do you want to make about redundancy, error detection and correction, and unexecuted code?

Norman Doering
September 28, 2005 at 6:05 AM PDT