Uncommon Descent Serving The Intelligent Design Community

Postscript to ID, QM, and Info


Hawkmoth

One of the comments on the previous post caused me to do some further analysis, which I had said I wouldn’t post, but have reconsidered. The comment was:

This seems like an odd tack on Dennis’ part and I don’t understand the point.
If Dennis’ position (I’m going to call it “Agnostigner” – someone who is agnostic about the Designer) is correct and the Designer is irrelevant, then what does ID bring to any table scientifically?
If the Designer is irrelevant, what does the explanation of design tell us about the world/universe? Does it impact any other scientific explanation in any way and if so, how?

So let’s start by analyzing the “odd tack” of Dennis, which seemed odd to me as well, until I realized it was a version of the demarcation problem.


Comments
Elizabeth, your demon, where does it get the information as to how to set up the simulation to model a deterministic universe? If it's not getting it from observation of the universe, don't you have a slight problem to resolve?
Mung
July 25, 2011, 09:07 PM PDT
Elizabeth Liddle:
So what do we mean by information? Let’s define information, for present purposes, as “data that results in reduction of uncertainty” (and where “data” is defined, literally, from the Latin, as “what is given”).
To be perfectly frank, it sounds like you made that up. Did you? As we've discussed before, to have a reduction in uncertainty there must be uncertainty about something. In order for there to be uncertainty about something there must be an expectation. Neither of these makes sense without a mind.
Mung
July 22, 2011, 05:59 PM PDT
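Whatever its provenance, the definition Mung is querying ("data that results in reduction of uncertainty") is essentially Shannon's. A minimal sketch of how a datum reduces uncertainty, measured in bits; the fair-die example is my own illustration, not from the thread:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: the uncertainty of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uncertainty about a fair six-sided die roll before any datum arrives:
before = entropy_bits([1 / 6] * 6)   # log2(6), about 2.585 bits

# The datum "the roll is even" leaves three equally likely outcomes:
after = entropy_bits([1 / 3] * 3)    # log2(3), about 1.585 bits

# The information conveyed by the datum is the reduction in uncertainty:
gained = before - after              # exactly 1 bit
print(round(gained, 6))
```

On this reading, Mung's objection amounts to asking whose probability distribution the "uncertainty" belongs to, which is a fair question about the definition, not a flaw in the arithmetic.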
Because all models are wrong, although some are less wrong than others. More importantly, all models are incomplete, but sometimes a simpler, though more incomplete model is more useful than a more complex, though more complete one. - Elizabeth Liddle
That would include a model of a deterministic universe, which would be required in order to program a computer simulation of a deterministic universe. So where does the demon get the perfect model it needs?
Mung
July 22, 2011, 05:37 PM PDT
Elizabeth, where does your magical demon get the information it requires to program a computer simulation of a deterministic universe? That simulation must be built according to some model mapped appropriately to the deterministic system under study. There is no generic "deterministic system simulation" running around out there, even in the world of demons. So where did the information come from to program the appropriate model into the demon's computer? If the demon observes the state of the entire universe at time t1, what is there in that observation that lets the demon know that the universe under observation is deterministic? What makes a universe deterministic and where does your demon get that information? You're off in fantasy land.
Mung
July 22, 2011, 05:22 PM PDT
Mung:
Elizabeth Liddle:
You could argue that for a deterministic universe, an external demon (“Laplace’s demon”) could look at the topography of the early universe, enter that data into a huge computer, and predict every subsequent event.
You could argue that, but why would you? Assuming the demon can observe the current state, where does the demon get the information needed to predict the future states?
In a deterministic universe he just runs the original state through a computer simulation. That's the joy of deterministic systems - if you truly know all the starting conditions, you have all you need.
The early universe therefore contains enough information for the demon to construct the entire history of the universe with complete certainty.
But that’s a non-sequitur. Doesn’t that concern you in the slightest?
It would if it were, but it isn't.
And it’s horribly circular.
Nor is it circular.
You say that if the universe has all the information required for an external demon to look at it and predict from its current state all future states, an external demon could look at the current state of the universe and predict all future states.
In a deterministic universe, yes.
And then you say that because the demon can look at the current state of the universe and predict all future states, it follows that “the early universe therefore contains enough information for the demon to construct the entire history of the universe with complete certainty.”
Yes.
To take a very straightforward example that we would both, I’m sure, regard as “information” – namely this post
Given that you are speaking nonsense, why would we agree that your post contains information?
Given that assumption, we wouldn't. But it is not given. So we might.
Elizabeth Liddle
July 22, 2011, 01:32 PM PDT
Ilion, I disagree. If you want to rule out who designed the designer by saying it's the same as asking who is the Big Banger, then that presupposes all the evidence of design points to a source outside the universe. However, there is no scientific reason to take such a position. It cannot be ruled out that some or all of the design originated within this universe, thus the comparison fails. Consequently the possibility of multiple designers must be clearly articulated as a fundamental tenet of Intelligent Design - otherwise the implication is that a singular designer must be responsible for all that ID detects, which simply isn't true.
rhampton7
July 22, 2011, 10:55 AM PDT
Try feeding a document into a shredder and tell me how much information you are left with. Alternatively, ask Rob - he made the original claim.
Elizabeth Liddle
July 22, 2011, 09:45 AM PDT
Elizabeth Liddle:
You could argue that for a deterministic universe, an external demon (“Laplace’s demon”) could look at the topography of the early universe, enter that data into a huge computer, and predict every subsequent event.
You could argue that, but why would you? Assuming the demon can observe the current state, where does the demon get the information needed to predict the future states?
The early universe therefore contains enough information for the demon to construct the entire history of the universe with complete certainty.
But that's a non-sequitur. Doesn't that concern you in the slightest? And it's horribly circular. You say that if the universe has all the information required for an external demon to look at it and predict from its current state all future states, an external demon could look at the current state of the universe and predict all future states. And then you say that because the demon can look at the current state of the universe and predict all future states, it follows that "the early universe therefore contains enough information for the demon to construct the entire history of the universe with complete certainty."
To take a very straightforward example that we would both, I’m sure, regard as “information” – namely this post
Given that you are speaking nonsense, why would we agree that your post contains information?
Mung
July 22, 2011, 09:45 AM PDT
Heat qua work can both create and destroy information, and, in the end, will destroy any information previously created.
How can information, an immaterial thing, be destroyed by work?
Mung
July 22, 2011, 09:09 AM PDT
UPD: yes, I'm following along. The post below is rather lengthy, but inspired by something in Rob's original article, and, I think, relevant, if with a long lead, to our own discussion.

Rob: In your original article you write:

A slightly more esoteric problem with the Big Bang is that really hot explosions rarely make cool machines--like people. The laws of entropy would suggest that heat is really bad for information, and really hot stuff can't cool unless something else absorbs the entropy. Or we can say it the other way, information is the inverse of entropy, and where is the information hiding in the Big Bang?

I’d like to take you up on your statement (or inference) that “heat is really bad for information”, because this is critical, and I think there is a hidden equivocation that needs sorting out (not that I am accusing you of deliberate equivocation!), namely equivocation between “heat” and “hot stuff”, or rather “thermal energy”. And I suggest that once we resolve that issue, your question is readily answered.

If we define “heat” as “work”, a la Lord Kelvin, the answer to the question as to whether information is destroyed or not by that work (leaving aside for now the question as to how we define “information”) depends on what the work done is – and that work can include, I suggest, creating information. To take a very straightforward example that we would both, I’m sure, regard as “information” – namely this post: in order to generate the information in this post, work has been done, and energy utilised with less than 100% efficiency. That work includes the work of keeping my brain and body going (fuelled by my bowl of breakfast cereal) as well as the work of the various mechanisms involved in changing states within my computer; ditto in the servers that sit between me and you; and ditto in your computer, fuelled by various kinds of energy sources. Information has been created by work, fuelled by energy (stored in hot things) and, as a result, the total entropy of the universe has increased a little – we have moved slightly nearer to “heat death”, in which entropy is maximal, and no energy is available to do any work at all, because the entire universe is at a uniform temperature.

In other words heat does not “destroy information”; heat is, rather, the cost of information creation. When information is created, some cold thing somewhere will get a little warmer, and some warm thing somewhere will get a little colder. Ultimately, of course, all information will be destroyed, i.e. information-destroying work will be done, undoing any information-creating work that was done earlier, but that is very different from saying that “heat destroys information”. Heat qua work can both create and destroy information, and, in the end, will destroy any information previously created.

Hot stuff, i.e. stuff with high thermal energy, exists in a highly non-uniform universe, i.e. a low entropy universe, in which thermal energy passes from hot things (things in a high energy state) to cooler things (things in a lower energy state), doing work as it does so, aka creating heat. In a highly entropic universe, in contrast, one near heat death, there is just as much thermal energy as there was in the low entropy early universe, but it is now almost perfectly evenly distributed, severely limiting thermal energy flow, and thus the work (heat) that can be done. So it’s not that “really hot stuff can’t cool unless something absorbs the entropy” so much as that “when really hot stuff cools, thermal energy is transferred and work is done”. I am suggesting that that work can include both the creation of information and its destruction. And that the really important thing about the early universe is not that it was “hot”, i.e. had lots of concentrated thermal energy, but that it inflated unevenly, resulting in a non-uniform energy distribution that enabled work to be done. Not so much a Big Bang as a Big Volcano, down which thermal energy must flow, creating, as it does so, intricately structured canyons, ravines, caverns, valleys, and later meanders and meadows, and plains, until eventually the mountain is gone, and all is flat.

So what do we mean by information? Let’s define information, for present purposes, as “data that results in reduction of uncertainty” (and where “data” is defined, literally, from the Latin, as “what is given”).

You could argue that for a deterministic universe, an external demon (“Laplace’s demon”) could look at the topography of the early universe, enter that data into a huge computer, and predict every subsequent event. The early universe therefore contains enough information for the demon to construct the entire history of the universe with complete certainty. However, this does not work in reverse – given a flat, “heat dead” universe, the demon cannot reconstruct the history of the universe with any degree of confidence at all. The dead universe contains no information regarding its previous states. This is because even in a deterministic universe, one event can have many consequences, but equally, one consequence can have many causes, making perfect prediction possible for the demon, given the state of the universe at any given time, but perfect post-diction impossible (although partial post-diction may be possible). In other words time, in a deterministic universe, presents the demon with an “inverse problem”: given the present it can perfectly pre-dict the future, but given the future, it cannot perfectly post-dict the present. So, as the initial non-uniformity of the early universe decays into the maximal uniformity of the end-state universe, the information present in the early universe, which enabled the demon to construct the entire history of the universe from that information alone, is constantly being lost.

Now, we know (or at least current evidence suggests) that we don’t actually live in a deterministic universe. So our hypothetical demon does not, in fact, start off with enough information to predict the entire history of the universe. In fact, at nanosecond 1, the number of possible histories that the demon must posit is near infinite. However, as time goes by, that initial information is constantly being supplemented by stochastic events that, while being predictable in aggregate, statistically, are not predictable individually, and we know from chaos theory that tiny events (a butterfly in Peking; the neutron that killed Schroedinger’s cat) can have big consequences, and that with every new piece of information, large numbers of possibilities (including a live cat, in the case of Schroedinger) are eliminated. So our demon starts off with very little information; as the universe goes on, many of its originally posited scenarios are ruled out, while others become much more probable. Thus, as time goes on, the number of possible future histories reduces, and each successive universe-state contains more information about its own future (decreasing uncertainty for the demon). Unfortunately, simultaneously, with each new state of the universe, the number of possible past histories increases, leaving the demon little net gain. At the penultimate moment in the life of our universe, given the state of the universe at that moment, the demon can predict with absolute confidence what the next moment will bring: heat death. However, it can post-dict with no certainty at all what scenarios preceded the current moment; any one of its initial scenarios could have preceded this moment.

So to answer your question, “where is information hiding in the Big Bang?”: the answer is that it is hiding in the non-uniformity of the energy distribution. And in a deterministic universe, that would be a colossal amount of information. But in a non-deterministic universe, it isn’t very much information at all – information continues to be created, by stochastic processes, throughout the time course of the universe. Moments after the Big Bang, the demon has minimal information about the future history of the universe, i.e. maximal uncertainty, although maximal information about what has just occurred (minimal uncertainty), but as time goes on, that uncertainty rapidly decreases, and the information increases. However, time also erases information, as we have seen, and so there will be a point in the history of the universe where the total information contained is maximal for the demon; it can rule out many future universes and many past universes, leaving a relatively small number of possible entire histories. However, as time continues, the number of past universes that can be ruled out decreases, even though the number of possible future universes diminishes, resulting in a net increase of uncertainty, as the number of un-ruled-out histories grows.

So, let’s make some simplifying assumptions: let t be the total time that elapses from Big Bang to Heat Death, and let’s divide t into A Very Large Number of intervening states, the time that elapses between one and the next being Delta(t). And let’s call the Big Bang moment Alpha, and the Heat Death moment Omega. At state Alpha (Big Bang), the Demon is completely certain about state 1 of the history of the universe (1 bit) but completely uncertain (0 bits) about every subsequent state. In other words, it knows only that of the possible histories, the one for THIS universe starts with state Alpha. At state Alpha + Delta(t), the Demon may have some slight increase in uncertainty about the previous state Alpha (information has been lost; perhaps a few Alphas are possible given Alpha + Delta(t)), but a much bigger reduction in uncertainty about future states (perhaps already it is clear that heavy metals will form), so the total information in the universe at Alpha + Delta(t) is 1 bit minus a tiny bit, for the past history, and several extra bits for the future. So the universe state Alpha + Delta(t) has more information for the demon than Alpha (more ruled-out histories), so it is on an upward trajectory. Now, fast forward to near Heat Death, Omega - Delta(t): now the demon is almost completely certain about the final state of the universe (near 1 bit), but has almost total uncertainty about its prior history (only a few extra bits). All it can infer is that the final state is Omega; it can infer little about any other prior state. Moreover, if Alpha and Omega are the only two possible starting and ending conditions for any universe, then those states give the demon no reduction of uncertainty at all; the demon loses even those initial 1-bit starting- and end-state information, and the universe goes from zero information for the demon through considerable information and back to zero again.

In other words, events create information, whether they are chance events (quantum things) or Designed events (this post). Heat does not destroy information; it is merely the cost of its creation. This means that we need not posit a Designer to account for the presence in the universe of information; all we need posit is what we know to have existed in the early universe, which is a state of non-uniformity, which could be the result of stochastic events OR design. But given that initial non-uniformity, plus the non-deterministic nature of the universe, we can easily infer that information will steadily increase for the first part of the universe, and steadily reduce back to zero during the second part. Which makes living cells perfectly possible within either scenario, as long as it's somewhere in the middle region. If not, why not? :) Lizzie
Elizabeth Liddle
July 22, 2011, 08:35 AM PDT
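The asymmetry Liddle's argument leans on (perfect pre-diction, ambiguous post-diction) amounts to the observation that a deterministic update rule is a function going forward in time but can be many-to-one going backward. A toy sketch, assuming an invented eight-state "universe"; nothing here comes from the thread itself:

```python
# Toy deterministic universe: states 0..7, each with exactly one successor.
# Forward evolution is a function, so a "demon" can predict perfectly; but
# several states share a successor, so post-diction can be ambiguous.
SUCCESSOR = {0: 4, 1: 4, 2: 5, 3: 5, 4: 6, 5: 6, 6: 7, 7: 7}

def predict(state, n):
    """Run the deterministic rule forward n steps: always a single answer."""
    for _ in range(n):
        state = SUCCESSOR[state]
    return state

def postdict(state):
    """All states that could have preceded `state`: often more than one."""
    return sorted(s for s, t in SUCCESSOR.items() if t == state)

print(predict(0, 3))   # unique future: 7 (state 7 plays the role of "heat death")
print(postdict(7))     # ambiguous past: [6, 7]
print(postdict(4))     # ambiguous past: [0, 1]
```

Every run ends in state 7, after which the past is unrecoverable: that is the "inverse problem" of the comment in miniature.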
... except that "the Intelligent Designer" is an obsession of the anti-IDists. The IDists are about identifying the Design ... you know, in much the same way that the County Coroner doesn’t seek to identify The Murderer, but only The Murder.
Ilion
July 21, 2011, 10:51 PM PDT
From what I understand, ID Theory does not have a means of detecting one versus multiple sources of design, nor does it offer a prediction. So wouldn't it be more honest to replace the phrase "Intelligent Designer" with the phrase "One or more Intelligent Designers"?
rhampton7
July 21, 2011, 04:52 PM PDT
hehe. See Rob, I had a point :)
Mung
July 21, 2011, 04:06 PM PDT
Dr Sheldon, thank you for a great post. Loved it. - - - - - - -
I’m not sure what you were going to do with your quote. The reference is to work done in the 1940's and 1950's on the nature of the “information” molecule in the cell. People like Gamow, Schroedinger and Polanyi argued that in order to not lose its memory, it had to be inert to changes in pressure, temperature, pH, and all things chemical. One of those things was the “chemical potential”, which is another way of saying “chemical entropy”. That is, replacing nucleotide A with T or C or G cannot result in a lower energy state, or else chemical processes would drive the DNA toward that composition. (BTW, this requirement is a lot harder to achieve than you might think.) This chemical inertness is exactly opposite to a Darwinian functional optimization process. There is nothing for Darwin to optimize on the DNA itself. So DNA is immune from natural selection--as it must be! And that is Abel’s point about the cybernetic cut.
Dr Liddle, are you following along here?
Upright BiPed
July 21, 2011, 02:42 PM PDT
Robert Sheldon,
The “appeal” argument may not be the best one to use; Abel’s papers on “functional information” include a whole pile of jargon to convey this concept more precisely. Let me say it a few more ways. To move a rock, I need leverage. If the rock is perfectly smooth, and I can’t find a place to put a lever on it, I can’t move it no matter how big a lever or how big my muscles are. If I want a hawkmoth to have a long tongue, then a slightly longer tongue must be an advantage, or there is nothing to select for. Even if a foot-long tongue is really useful, but 1" to 11" tongues had no change in usefulness, then 12" tongues won’t happen. Flat fitness landscapes have no gradients, no bumps, nothing for natural selection to apply leverage to move anywhere. They are a smooth rock. Second order selection, where nothing happens when I modify X, but only after X is itself modified by Y, cannot provide the leverage. Even non-linear first order selection is a problem. Only linear, first-order, smoothly varying fitness landscapes have even a ghost of a chance at diffusive progress. It doesn’t matter that there’s this really great “devil’s tower” in the landscape, if everything around it is flat (or even worse, rocky). On the other hand, if a designer knows about this great location, he can set up a mechanism to get there–sails, wheels, roads, etc.–that violate no physical laws, but have a purpose. This is what ID says. ID says purpose + natural selection can get that hawkmoth a 12" tongue in no time at all. We can even ask, “what piece of protein machinery would I modify to achieve this?” and start looking for development paths, homeobox genes, etc. that would get there from here. The Darwinist, if she were honest, would be looking for incremental changes to some system that eventually would cause this modification. And since nothing about homeobox genes is incremental, this wouldn’t even be a likely place to start.
So right away, ID and Darwin give different strategies for understanding this biological novelty. Okay, one last try. Because Darwin wants things to move without purpose through the fitness landscape, every motion is local and short range, every step is random, and progress must be diffusive. Diffusion distance goes as the square root of the number of steps. So it is always slow. By contrast, purpose is a long range interaction. It can be linear in the number of steps. It may even be super-linear if bootstrapping strategies are used. Darwin restricts the researcher to looking for slow, smoothly varying, diffusive changes to populations, whereas ID includes all that Darwin offers, plus the advantages of linear and superlinear progress through the fitness landscape. The data increasingly support superlinear progress, and Darwinists are running out of “just-so” stories to explain why diffusion can act like purpose. That’s the advantage of ID.
It's not quite as simple as just a straight-forward "flat landscape" issue or a 1" is good, 2"-11" doesn't change anything, 12" is great scenario. For example, one of the things we find quite often in various populations is exaggerated selection based on normal variation. Basically, a minor change becomes an advantage and is selected for, but because of the nature of the change, some of the offspring have exaggerated forms of the advantage. So long as the exaggeration is not a disadvantage, the variation in the trait stays and offspring keep popping up with more and more exaggerated features. In many cases this can be exacerbated by a change in selective pressure. A good example of this is the eruption of size in some organism populations. Being even a little larger tends to provide an advantage against predation. In populations where having a little bit of size increase becomes an advantage, offspring tend to vary across a range of larger sizes. That alone can account for organisms such as the hawkmoth with an elongated tongue, but there's another factor that can contribute to this as well. In a number of cases changes that convey some physical advantage - say a size increase - can quickly become a mating preference trait. Once the selective pressure shifts from predator protection to sexual preference, the exaggeration of the change can occur rather rapidly. I'm not suggesting that's what happened in the case of the hawkmoth. The point is that landscapes are almost never flat; environments are always changing and organisms are always changing. Even little changes can create opportunities for new selective pressures, which then turn a relatively flat fitness landscape into a very rugged one.
Doveton
July 21, 2011, 01:45 PM PDT
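Doveton's point, that ordinary heritable variation plus a selective pressure can ratchet a trait upward generation after generation, can be illustrated with a toy truncation-selection model. A sketch only: the population size, variance, and generation count below are invented for illustration, not taken from any biological data in the thread:

```python
import random

def generation(pop, rng, sd=0.5):
    """Keep the half of the population with the larger trait value, then
    let each survivor leave two offspring that vary normally around it."""
    survivors = sorted(pop)[len(pop) // 2:]
    return [parent + rng.gauss(0, sd) for parent in survivors for _ in range(2)]

rng = random.Random(0)
pop = [rng.gauss(1.0, 0.5) for _ in range(200)]   # initial trait values, mean 1.0

for _ in range(30):                                # 30 generations of selection
    pop = generation(pop, rng)

mean = sum(pop) / len(pop)
print(round(mean, 2))   # the mean trait ends up well above the initial 1.0
```

Because each generation's offspring scatter around their parents, selection always has variation to act on, which is Doveton's "exaggerated selection based on normal variation" in miniature.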
Diffusion distance goes as the square root of the number of steps. So it is always slow. By contrast, purpose is a long range interaction. It can be linear in the number of steps. It may even be super-linear if bootstrapping strategies are used.
Is this perhaps another way of conveying Dembski's concept of compressibility?
Mung
July 21, 2011, 01:33 PM PDT
Mung, I'm not sure what you were going to do with your quote. The reference is to work done in the 1940's and 1950's on the nature of the "information" molecule in the cell. People like Gamow, Schroedinger and Polanyi argued that in order to not lose its memory, it had to be inert to changes in pressure, temperature, pH, and all things chemical. One of those things was the "chemical potential", which is another way of saying "chemical entropy". That is, replacing nucleotide A with T or C or G cannot result in a lower energy state, or else chemical processes would drive the DNA toward that composition. (BTW, this requirement is a lot harder to achieve than you might think.) This chemical inertness is exactly opposite to a Darwinian functional optimization process. There is nothing for Darwin to optimize on the DNA itself. So DNA is immune from natural selection--as it must be! And that is Abel's point about the cybernetic cut.
Robert Sheldon
July 21, 2011, 01:32 PM PDT
Doveton, The "appeal" argument may not be the best one to use, Abel's papers on "functional information" includes a whole pile of jargon to convey this concept more precisely. Let me say it a few more ways. To move a rock, I need leverage. If the rock is perfectly smooth, and I can't find a place to put a lever on it, I can't move it no matter how big a lever or how big my muscles are. If I want a hawkmoth to have a long tongue, then a slightly longer tongue must be an advantage, or there is nothing to select for. Even if a foot-long tongue is really useful, but 1" to 11" tongues had no change in usefulness, then 12" tongues won't happen. Flat fitness landscapes have no gradients, no bumps, nothing for natural selection to apply leverage to move. anywhere. They are a smooth rock. Second order selection, where nothing happens when I modify X, but only after X is itself modified by Y, cannot provide the leverage. Even non-linear first order selection is a problem. Only linear, first-order, smoothly varying fitness landscapes have even a ghost of chance at diffusive progress. It doesn't matter that there's this really great "devil's tower" in the landscape, if everything around it is flat (or even worse, rocky). On the other hand, if a designer knows about this great location, he can set up a mechanism to get there--sails, wheels, roads, etc--that violate no physical laws, but have a purpose. This is what ID says. ID says purpose + natural selection can get that hawkmoth a 12" tongue in no time at all. We can even ask, "what piece of protein machinery would I modify to achieve this?" and start looking for devolopment paths, homeobox genes, etc that would get there from here. The Darwinist, if she were honest, would be looking for incremental changes to some system that eventually would cause this modification. And since nothing about homeobox genes are incremental, this wouldn't even be a likely place to start. 
So right away, ID and Darwin give different strategies for understanding this biological novelty. Okay, one last try. Because Darwin wants things to move without purpose through the fitness landscape, every motion is local and short range, every step is random, and progress must be diffusive. Diffusion distance goes as the square root of the number of steps. So it is always slow. By contrast, purpose is a long range interaction. It can be linear in the number of steps. It may even be super-linear if bootstrapping strategies are used. Darwin restricts the researcher to looking for slow, smoothly varying, diffusive changes to populations, whereas ID includes all that Darwin offers, plus the advantages of linear and superlinear progress through the fitness landscape. The data increasingly support superlinear progress, and Darwinists are running out of "just-so" stories to explain why diffusion can act like purpose. That's the advantage of ID.Robert Sheldon
July 21, 2011
July
07
Jul
21
21
2011
01:12 PM
1
01
12
PM
PDT
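Sheldon's claim that "diffusion distance goes as the square root of the number of steps" is the standard result for an unbiased random walk, and the contrast with "purposeful" linear progress can be checked numerically. A quick Monte Carlo sketch under those assumptions; the function names and parameters are mine:

```python
import random

def random_walk_distance(steps, trials=2000, seed=1):
    """Mean absolute displacement of an unbiased 1-D random walk."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos = 0
        for _ in range(steps):
            pos += rng.choice((-1, 1))
        total += abs(pos)
    return total / trials

def directed_distance(steps):
    """A 'purposeful' walker takes every step in the same direction."""
    return steps

# Diffusive progress grows roughly like sqrt(steps); directed progress is linear,
# so quadrupling the steps roughly doubles the former but quadruples the latter.
for n in (100, 400):
    print(n, round(random_walk_distance(n), 1), directed_distance(n))
```

The simulation bears out the square-root scaling itself; whether biological search is better modeled as a pure unbiased walk is, of course, exactly what Sheldon and Doveton are disputing.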
Thanks for the response Robert! I do see a few problems with your thesis however. Here's one as an example:
Let's see how this works. Darwin says, "No, the tongue of the hawkmoth wasn't made a foot long just to get the nectar of the star orchid, rather, the hawkmoth accidentally discovered that a longer tongue got more nectar, so it evolved toward longer tongues." Ignoring the special pleading for Lamarckian evolution, what Darwin is saying is that functionally useful stuff will be selected by Natural Selection. But what happens when the functionally useful stuff has an intermediate step? What if the hawkmoth could survive by appealing to human beings who bred it in captivity? How is "human appeal" a functional thing that natural selection would select for? It's completely arbitrary, and not a linear thing, like "the more red the wings, the more appeal it will have".
First off, survival itself isn't really a key component of evolution - passing on traits is. While the former can be correlated with the latter, the latter is still the focus. All sorts of organisms survive as a result of using a combination of traits in a variety of environments; the key in evolution is having traits that allow the group to which you belong to have more offspring who, on average, out-compete other organisms for resources, at avoiding predators, or at moving across environments, etc. So what about the appeal idea? From an evolutionary perspective, the question wouldn't be whether appeal led to survival alone, but whether appeal provided an advantage in a given environment. For example, suppose some percentage of humans began setting up backyard environments to attract the moth. Then the appeal trait would in fact be selected for, given the newly established environments that the moth was not merely well-suited to thrive in, but encouraged to thrive in. The moth populations would soar and, all other environment-to-trait conditions being equal, the moth "appeal" trait would become fixed. Not a problem for evolution. This also explains why DNA combinations are selected for - certain arrangements of DNA do indeed functionally excel in given environments and thus are replicated more often.
Doveton
July 21, 2011, 12:07 PM PDT
Likewise, the DNA has to be functionally inert, or it wouldn't store information very well. Chemists knew this had to be true of any information molecule long before DNA was discovered to hold information.
Mung
July 21, 2011, 11:39 AM PDT
