Uncommon Descent Serving The Intelligent Design Community

What are the speed limits of naturalistic evolution?


What are the speed limits of naturalistic evolution? We know from experience it takes time to evolve a species. Would naturalistic evolution be fast enough in geological time to turn a cow into a whale, an ape-like creature into a human? What are the speed limits of evolution?

To give an illustration of just how hard it might be to evolve a population, consider that there are about 6.5 billion geographically dispersed people on the planet. Suppose a single advantageous mutation (say a single point mutation or indel) occurred in one of those 6.5 billion individuals. How long would it take for that mutation to propagate until every human on the planet had it? A very long time: sufficient numbers of people would have to have their descendants exchange genes for centuries. And this measly change is but one nucleotide in 3,500,000,000 base pairs!
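For a rough feel of why spreading even one new variant is slow, here is a minimal Wright-Fisher sketch (a textbook population-genetics model; the population size and selection coefficient below are illustrative assumptions, not figures from this post). A single beneficial mutant in a well-stirred population is usually lost to drift, and the runs that do fix take many generations.

```python
import random

def generations_to_fixation(N, s, seed=0):
    """Toy Wright-Fisher model: one copy of a beneficial variant
    (selection coefficient s) in a well-mixed haploid population of
    constant size N. Returns generations until fixation, or None if
    the variant is lost to drift."""
    rng = random.Random(seed)
    count = 1  # a single initial carrier
    gen = 0
    while 0 < count < N:
        p = count / N
        # selection biases the sampling probability toward the variant
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # binomial resampling of the next generation models drift
        count = sum(1 for _ in range(N) if rng.random() < p_sel)
        gen += 1
    return gen if count == N else None

# Most runs lose the variant; the survivors still take many generations.
outcomes = [generations_to_fixation(200, 0.05, seed=i) for i in range(50)]
losses = sum(1 for r in outcomes if r is None)
print(losses, "of 50 runs lost the variant")
```

Even in this well-stirred best case the variant usually disappears; in a large, geographically structured population the spread would be slower still, which is the point of the paragraph above.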

The Darwinists will argue, “but that wasn’t the way it was in the past, it was easier to evolve things millions of years ago.” Perhaps. Evolving a large geographically dispersed population is a colossal problem for Darwinian evolution as you can see. Thus (using DarLogic) since Darwinian evolution is true (cough), we must assume this implies populations in the past were much smaller and “well-stirred” (meaning geographic barriers are dismissed and every individual has the same chance of mating with anyone else in the population). Bear in mind also, the population can’t be too small either, since evolution needs a certain number of individuals to be generating a sufficient number of beneficial mutations.

Haldane

So given optimal conditions, how fast could we evolve a population? Haldane (pictured above) suggested that, on average, 1 "trait" per 300 generations could be fixed in a population of mammals. In the modern sense, we can take this "trait" to be even a single nucleotide [in the traditional sense we look for phenotypic traits, but the problem of evolving a single nucleotide in the genome still remains, thus for the sake of analysis a single nucleotide can be considered something of a "trait"].

But such change is obviously too slow to account for 180,000,000 differences in base pairs between humans and chimps. [Chimps have about 180,000,000 base pairs more DNA than humans; if anyone has better figures, please post.] This poses something of a dilemma for the evolutionary community, and this dilemma has been dubbed "Haldane's dilemma". If Haldane's dilemma seems overly pessimistic, ponder the example I gave above even for a smaller population (say 20,000 individuals within a 200-mile radius). In light of this, 1 nucleotide per 300 generations might not seem like a stretch. If anything, Haldane's dilemma (even by his own admission) seems a bit optimistic!
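The arithmetic behind the dilemma can be checked directly. Assuming the round numbers usually used in this debate (about 10 million years since the human/chimp split and 20-year generations, both assumptions rather than figures from this post), Haldane's rate allows on the order of 1,700 substitutions:

```python
# Back-of-envelope check using the round numbers common in this debate:
# ~10 million years since the human/chimp split, ~20-year generations.
# Both are assumptions, not measurements.
years = 10_000_000
generation_time = 20
generations = years / generation_time   # 500,000 generations
substitutions = generations / 300       # Haldane's 1-per-300 limit
print(round(substitutions))             # 1667, the figure ReMine cites
```

That total is tiny next to the 180,000,000 base-pair difference quoted above, which is the quantitative core of the dilemma.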

Various solutions have been explored to Haldane’s dilemma, such as multiple simultaneous nucleotide substitutions. But such “solutions” have their own set of fatal problems. One could make a good case, Haldane’s dilemma has never been solved, nor will it ever be….

Motoo Kimura
And if Haldane’s dilemma were not enough of a blow to Darwinian evolution, in the 1960’s several population geneticists like Motoo Kimura demonstrated mathematically that the overwhelming majority of molecular evolution was non-Darwinian and invisible to natural selection. Lest he be found guilty for blasphemy, Kimura made an obligatory salute to Darwin by saying his non-Darwinian neutral theory “does not deny the role of natural selection in determining the course of adaptive evolution”. That’s right, according to Kimura, adaptive evolution is visible to natural selection while simultaneously molecular evolution is invisible to natural selection. Is such a position logical? No. Is it politically and intellectually expedient? Absolutely….

The selectionist viewpoint is faced with Haldane’s dilemma. But does the neutralist viewpoint (Kimura) have an alternative mechanism that will get around Haldane’s selectionist dilemma? It seems not, as neutral theory has other sets of problems.

What has since resulted has been a never-ending war between the selectionists and neutralists. The selectionists argue that natural selection shaped the majority of molecular evolution; the neutralists argue it did not. Each warring camp finds fatal flaws in the ideas of its opponent. The neutralists rightly argue from first principles of population genetics that selection did not have enough resources to evolve billions of nucleotides, and the selectionists rightly point out that large amounts of conserved sequence fly in the face of neutralist theories. The net result is that the two camps demonstrate each other to be dead wrong.

To make matters worse, there are even more dilemmas to deal with, such as Nachman's U-Paradox. Looming on the horizon, and even more problematic, would be the fact that DNA might only be a FRACTION of the actual information that is used to create living organisms. This idea of the organism (or at least a single cell), rather than the DNA alone, being the totality of information was suggested by one of the finest evolutionary biologists on the planet, Richard Sternberg. He argues his case in the peer-reviewed article Teleomorphic Recursivity. And by the way, these discussions of selectionist speed limits assume the multitude of Irreducibly Complex structures in biology are somehow visible to natural selection....

What then will we conclude if we find functionality in those large regions of molecules which evolved independent of natural selection? How do we account for designs that cannot possibly be the result of natural selection? Can we attribute them to the random chance mechanisms of neutral theory? Unlikely. Evo-devo might offer some relief, but the proponents of Evo-Devo do not yet seem to realize that even if they are right, the ancestral life forms might have to be in a highly improbable, specified state, exactly the kind of state that suggests front-loaded intelligent design.

I'm opening this thread to continue a discussion of these and other topics which I also raised at PandasThumb in this thread. I found a commenter named Caligula who gave very substantive criticisms of my ideas in a precise and technical manner, which I found worthy of a fair and civil hearing here at UD. I also invited the authors at PT to air their objections here (with the exception of PvM, who has been banned). If Caligula and I must take the discussion outside of UD, we will be glad to, but I thought the topics would be of interest and educational to readers of both weblogs.

With that, I’ll just let the conversation continue in the comment section as we try to answer the question, “what are the speed limits of naturalistic evolution?”

Salvador
PS Two books relevant to this discussion by ID proponents are Genetic Entropy by respected Cornell geneticist John Sanford, and The Biotic Message by electrical engineer and pioneer of Discontinuity Systematics, Walter ReMine.

Comments
[Trackback] Prominent NAS member trashes neo-Darwinism | Uncommon Descent, quoting the Kimura passage from this post.
July 18, 2007, 08:18 AM PDT
Here's a paper by Grant and Flake from 1974 that addresses "soft selection": http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=434284 Go to the "Full Text" section and click on "complete text". ReMine, in his Appendix, completely dismantles the argument Grant and Flake make.

PaV
January 26, 2007, 07:02 PM PDT
Also, for the general case that ReMine formulates, the limit is based on reproductive excess rates and the assumption of serial substitution (i.e. no two substitutions overlap during the process).

Atom
January 26, 2007, 06:06 PM PDT
PaV: In his clarification paper ReMine provides all the formulas necessary to derive Haldane's formula for the specific case and calculate the numbers. You seem pretty adept at understanding this topic; would you take a swing at calculating the numbers?

Atom
January 26, 2007, 06:04 PM PDT
Sal: "I'm tempted to say 'soft selection' is thus superfluous (no added insight) to the idea of changes in gene frequency. At issue is whether it leads to faster substitution rates, and whether Haldane's model includes population behavior that can be characterized as soft selection."

Here's a reference to "soft selection": http://www.blackwellpublishing.com/ridley/a-z/Hard_&_soft_selection.asp In a nutshell, as you'll find in Ridley's citation above, if you have N adults in a population, with each female giving 2F eggs on average, then FN-N progeny must be eliminated to keep the population at a constant size. So, then, the question is this: does "hard selection" (the kind that is due completely to lesser fitness) eliminate all the FN-N, or does "soft selection" (the kind that is due to 'density-dependent' factors such as disease and competition for food and space, which whittle down 'reproductive excess') do so? Obviously, it's a combination of both. Prescinding from this proposed division between hard and soft, why don't we just look at the big picture here. Haldane finds out about Kettlewell's experiment. At last, he thinks, evolution that can be seen right before our eyes. He then reasons that since this is so, we can rightly assume this is as 'fast' as evolution can go. He then develops an equation to address Biston betularia and suggests that this represents a fitness of 0.31, meaning that only 31 out of 100 organisms will survive to the juvenile stage (which is quite a high loss of reproductive capacity), corresponding to 43 generations. I think the starting point here is that nature itself is setting an upper limit on the speed of evolution, which is 43 generations, since no other examples of this type occur. (As I pointed out to Caligula above, the Galapagos finch example happens so fast that it cannot be evolution at work.) The problem then becomes this: how does Haldane go from 43 generations (fitness = 31 out of 100) to 300 generations (fitness = 90 out of 100)?

Here's all he says in the paper: "I doubt if such high intensities of selection (speaking of Kettlewell's experiment) have been common in the course of evolution. I think n=300, which would give I=0.1, is a more probable figure." This is not a lot to go on. But that's where Walter ReMine's latest paper comes in. In it he re-interprets Haldane using his own concept of "reproductive excess"; but I don't believe he has given us numbers yet. I think he's working on that as we speak.

PaV
January 26, 2007, 04:52 PM PDT
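PaV's reading of Haldane's Discussion section above can be checked numerically. With the total cost of a substitution taken as roughly 30 population sizes, a substitution lasting n generations gives an intensity of selection I = 30/n, and e^(-I) is the corresponding mean fitness relative to the optimal genotype. A small sketch (note that 30/43 gives about 0.70, slightly above the 0.67 the thread quotes from Haldane's ln 2 route):

```python
import math

def intensity(n):
    # Haldane (1957): I = 30/n, with the total cost of a substitution
    # taken to be roughly 30 population sizes
    return 30 / n

def mean_fitness(n):
    # corresponding mean fitness relative to the optimal genotype, e^(-I)
    return math.exp(-intensity(n))

print(intensity(300), round(mean_fitness(300), 2))  # 0.1 and ~0.9
print(round(intensity(43), 2))                      # ~0.7 for n=43
```

So n=300 does correspond to I=0.1 and a mean fitness near 90 out of 100, exactly the pairing PaV asks about.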
Reading through ReMine's paper, I notice he sets out to characterize the Cost of Substitution of a new trait by focusing on the new trait's reproductive excess, without having to take into account the reproduction of the old trait. This simplification simply asks, "to get a trait from one individual to one million, how much reproductive excess is needed per generation, given N generations?" From there, he shows that this can usefully set a speed limit on evolution (or actually on any form of replication). Dealing with soft selection he writes: "These optimally low costs require the trait to have a constant growth rate throughout all generations. Since nature does not provide this constancy, real cases will always have higher costs. Also, if the new trait decreases even momentarily, then the total cost increases, because some costs will be incurred more than once. Some theorists believe cost problems can be solved by a non-constant growth rate—such as frequency- or density dependent fitness, as employed in soft selection. But that does not reduce the problem, at least not for single substitutions, as shown above. Rather, constancy is required to minimize the problem, as it allows the lowest possible total cost for a substitution of any given duration." (From http://saintpaulscience.com/CostTheory1.pdf page 5)

Atom
January 26, 2007, 02:13 PM PDT
The posts seem mixed up right now. I assume the following quote--not seen before--is from Caligula: "It is this density effect that soft selection is based on. When e.g. 6 offspring are produced per couple, and the ecological niche is filled by the population, unavoidably 4 of 6 the offspring will die. Now, Haldane would apparently assume that these 4 offspring somehow always die before selection is applied. But this hardly holds universally. More often selection (e.g. predation) and background mortality (e.g. sheer starvation) apply in parallel. And when they do, selection reduces background mortality, by reducing density." I think you're looking at it from one point of view only, and not from the point of view that Haldane was looking at it. What I mean is that you're looking at how selection affects background mortality and not the other way around; i.e., how background mortality affects selection, which is exactly what Haldane did. In passing, he stated the obvious (given that most predation, diseases, etc., affect individual organisms randomly, and thus NOT selectively): that density-dependent factors "slow down" evolution; i.e., the randomness of death that DDF involve slows down selection at a particular locus. Haldane, impressed with this example from nature of "evolution in action", wants an answer to the question of 'how fast can evolution act'. His answer: 300 generations per locus. (And, by extension, more generations than this if DDF are involved.)

PaV
January 26, 2007, 07:36 AM PDT
Soft selection is an important topic. I don't even have it explicitly stated as "theory" in my population genetics book by Hartl and Clark. Anyone have a recommendation? If I may offer a quick speculation as to why. Say a population of 30 has the following:

10 individuals with trait A at locus #1
10 individuals with trait B at locus #2
10 individuals with trait C at locus #3

Each set of 10 individuals is disjoint from the other sets. Let's suppose all the individuals with trait B at locus #2 fail to reproduce into the next generation. Whatever happens to individuals with traits A and C might be characterized as soft selection, or as a change in gene frequencies, which is well described by existing PopGen literature. I'm tempted to say "soft selection" is thus superfluous (no added insight) to the idea of changes in gene frequency. At issue is whether it leads to faster substitution rates, and whether Haldane's model includes population behavior that can be characterized as soft selection. Another thing: how can soft selection be modelled? Trait B not reproducing can alternatively be modelled as trait B being deleterious or causing a reduction in fitness with respect to a genotype! Some of these discussions remind me of how in math and physics we regularly picked the most convenient coordinate system to solve problems. There was the question of which perspective was the most practical, not that any perspective (as long as it was faithful to the problem at hand) was wrong. One thing I sense is that we are confronted with a similar problem here. Is soft selection another "coordinate system" for something we have already dealt with, or something that has been implicitly explored in existing literature?

scordova
January 25, 2007, 01:42 PM PDT
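scordova's toy population above can be written out in a few lines; the point is that "soft selection" here reduces to plain bookkeeping on gene frequencies. The numbers are the ones from the comment.

```python
# scordova's toy population: 30 individuals in three disjoint trait classes.
pop = {"A": 10, "B": 10, "C": 10}

# All trait-B individuals fail to reproduce into the next generation.
survivors = {k: (0 if k == "B" else v) for k, v in pop.items()}

total = sum(survivors.values())
freqs = {k: v / total for k, v in survivors.items()}
print(freqs)  # {'A': 0.5, 'B': 0.0, 'C': 0.5}
```

Whether we call the elimination of B "soft selection" or a fitness of zero for B, the frequency arithmetic is identical, which is the "coordinate system" question the comment raises.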
Sal, you're welcome. What those who don't have the .pdf paper don't realize is that it is a digitized copy of his original paper, which is a bit grainy to start with. So a copy of the pdf, well, that must have been some kind of bad. Hope your eyes get better! ;)

PaV
January 25, 2007, 01:12 PM PDT
Caligula: "A locus (pl. loci) is a location in coding DNA where diploid organisms have two gene alleles, one in each strand." It's clear from Haldane's paper that a 'loci', to him, was an "allel" (sic, his spelling); I just wanted to side-step the whole SNPs versus "allel" type discussion. "There is no biologically sound reason to always assume that, when population size remains constant, (most of) reproductive excess is randomly removed before selection is applied. Rather, density-dependent deaths like starvation may well take place in parallel with selection." Indeed, Haldane acknowledges that selection and density-dependent factors can act in parallel. He points out that even density-independent factors and selection can act in tandem, giving an example of this for Biston betularia. However, your saying, "There is no biologically sound reason to always assume that, when population size remains constant, (most of) reproductive excess is randomly removed before selection is applied," is completely puzzling to me since, as I read the paper, the very reason that Haldane uses juveniles is--as I pointed out in my last post--so that his equations will highlight evolution working at its fastest. IOW, if he included the diminishment due to density-dependent factors in his equations, then the "intensity of selection" would have to be less, which in turn means "n" would have to be even higher than 300, which here you've already argued is too high. Hence, my puzzlement. So I think you should be happy that he keeps it out (of course, I'm also sure this made his equations a lot easier to handle!). "However, I must defend Walter's cause here. I can't remember if 20 years is the correct time interval. But since it is the average beak size that oscillates, my guess is that the frequencies of competing alleles oscillate at one or more loci, without any complete substitution ever taking place." I think I'm in relative agreement with you on this point.

I think there is, in fact, an "oscillation" taking place. However, we're dealing--from what has been reported--with "fixation", since I don't remember the Grants saying that, e.g., 55% of those formerly with small beaks now have larger beaks, but rather that a complete transition from smaller to larger had taken place; i.e., 100% of the smaller were now larger. This means that "evolution" is not taking place, but something else entirely, since 10 generations, as I calculated it in the prior post, means 95% of one generation of such finches would die off--a fact that would have been splashed all over the headlines of newspapers the world over. Even if we take n=20 (generations=years), fitness=22%. IOW, 78% of the finches would have to die off each year. Again, this is stuff for headlines, wouldn't you agree? "I thank you for your questions PaV, but it really is time for me to say goodbye to this forum." Sorry to see you go; I was ready to propose that we leave Haldane behind and head right to ReMine's paper, since he deals with the kinds of "density-dependent" factors that Haldane leaves off the plate. I thus thought we might, using ReMine's paper, circumvent some of the uncertainty that Haldane's paper doesn't explicitly resolve. As to PT: well, you can't have a civil discussion over there, so you won't find me going there.

PaV
January 25, 2007, 12:36 PM PDT
I would very much appreciate a simple example of "soft selection" worked out with numbers, since you feel it solves Haldane's Dilemma. In my mind, I seem to think that differential reproduction and replacement is what the dilemma deals with, so soft selection doesn't help, but maybe I'm understanding it wrong. Perhaps you can clarify what I misunderstand. Say you start with 10 replicators. That is your population. One of them gets a variant trait and now will begin to spread that trait throughout the population, if the trait helps it to do so by upping its replication rate, by allowing it to produce a net of more copies than its 9 other competitors. Let's say the replicator with the good trait has 4 offspring, and they all have the trait, and all reach the reproduction stage. Without the trait you are only likely to have 2 offspring reach that stage. In my mind, this would allow the replicators with the trait to eventually replace those without it. Now, when you bring in soft selection, from my understanding what you're saying is that instead of the replicator in question "breeding true" the trait, perhaps half his copies have the trait and half don't. Those with it survive better and reach replication age, and the process repeats. This, to me, would actually hinder the spread of the trait, not speed it up. Perhaps you can clarify what you mean. If you instead mean that the replicator has 6 copies, 2 of which would be eliminated by environment or other causes, and only those with the trait would remain (they are "soft-selected"), I still end up with a net of 4 copies with the trait, ready to spread in the population. So unless soft selection can raise the relative number of copies per replicator with a trait (i.e. improve fitness) contra competitor replicators, I don't see how it can help the problem, let alone solve it. Which is why I request your clarification.

Atom
January 25, 2007, 11:40 AM PDT
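Atom's replicator example above can be tabulated. This sketch keeps his assumed numbers (carriers net 4 surviving copies per generation, non-carriers net 2) and, like the comment, ignores any constant-population constraint; the carrier fraction climbs toward 1 on its own.

```python
# Atom's toy numbers: 1 carrier of the new trait among 10 replicators;
# carriers net 4 surviving copies per generation, non-carriers net 2.
carriers, others = 1, 9
for gen in range(1, 6):
    carriers *= 4
    others *= 2
    frac = carriers / (carriers + others)
    print(gen, round(frac, 3))  # carrier fraction rises each generation
```

As the comment argues, the spread is driven entirely by the 4-versus-2 differential; relabeling the eliminated copies as "soft-selected" does not change these numbers.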
PaV, thanks for the link to Haldane's paper! My copy is a second-hand PDF of a scanned xerox copy. I could hardly read it!

Sal

scordova
January 25, 2007, 11:25 AM PDT
Caligula, many lurkers here (like myself) are learning from your posts and enjoying the open, calm discussion of the problem at hand. Your presence here is greatly appreciated; I would love it if you continued.

Atom
January 25, 2007, 11:19 AM PDT
PaV: "for the fixation of what, in Haldane’s day, they called a loci" A locus (pl. loci) is a location in coding DNA where diploid organisms have two gene alleles, one in each strand. Humans are currently estimated to possess 20,000-25,000 loci. The term locus is still in wide use (at least in Finland, it is introduced in high school biology). A substitution is the fixation of a new beneficial allele at a locus. "We have to remember that Haldane was focusing ONLY on juveniles" And he wouldn't have allowed us to go any further. Haldane uses a common oversimplification where viability is tested entirely during the juvenile phase. After that, survivors get the whole "cake", i.e. they get to reproduce in peace and refill the population size with their offspring. However, Haldane's discussion on density does involve two successive phases: 1. "background mortality" applied randomly, and 2. viability test or selection. This may apply well to moths, but it is hardly considered to apply universally. There is no biologically sound reason to always assume that, when population size remains constant, (most of) reproductive excess is randomly removed before selection is applied. Rather, density-dependent deaths like starvation may well take place in parallel with selection. As for your example on finches. I think such attempts, if successful, would mainly cause headache for e.g. Walter ReMine. If we truly did know from direct observation that substitutions easily take place in mere 10 generations, ReMine's "magic number" would be devastated; it is Walter who insists that the 300 generation limit is universal and unavoidable, not modern population genetics. However, I must defend Walter's cause here. I can't remember if 20 years is the correct time interval. But since it is the average beak size that oscillates, my guess is that the frequencies of competing alleles oscillate at one or more loci, without any complete substitution ever taking place. 
You should verify it from research, of course. I thank you for your questions, PaV, but it really is time for me to say goodbye to this forum. You can reach me at PT if you wish to.

caligula
January 25, 2007, 08:48 AM PDT
PaV: Kettlewell....finches

I would argue that even these examples are suspect. In these cases there was no overtake; the "favored" trait did not have a monopoly. How long would it have taken, if ever, for overtake to happen? Darwinists have used this to try to illustrate rapid change--change in frequency, yes, but not a fixation event. These are inappropriate counter-examples to Haldane's dilemma. Haldane's dilemma involves fixation, and these cases were not fixation events.

scordova
January 25, 2007, 07:50 AM PDT
Sal, I do have Haldane's paper. It can be downloaded free in .pdf format: http://www.blackwellpublishing.com/ridley/classictexts/haldane2.pdf I do have a degree in biology, but I've never used it. Population genetics is something I've looked at in trying to figure out how evolution might solve the macroevolution process, but no more. But I must add, Haldane did presume an "intensity of selection" of 0.1, as Caligula points out; which, in turn, means he selected the figure of 300 generations for the fixation of what, in Haldane's day, they called a loci. Caligula says that the 0.1 intensity is way too low. Yet, I think we have to value Haldane's perspective somewhat. We have to remember that Haldane was focusing ONLY on juveniles, so that whatever juveniles survived this first 'fitness test' would then be subject to predation, and any other density-dependent conditions that would exist. Also, Haldane seems quite influenced in his re-calculation by the then just-published Kettlewell experiment--which, of course, has questions of its own. However, the point to be taken here is that Haldane took note of this "evolution in action" experiment of Kettlewell's: IOW, evolutionists tell us that "evolution acts too slowly for us to take note of it", but now here's evidence of "evolution in action", and so Haldane calculates how "slow" evolution normally works using, I believe, the Biston betularia experiment as an upper limit. So 43 generations--the number from his paper--represents an "upper limit" for how fast evolution works. Obviously, then, anywhere else that evolution might be at work, it must be working more slowly than in the Kettlewell experiment, or else we would "see" it. If we go one step further, we can analyze the Galapagos finches example of "evolution at work". There, if I remember correctly, in a twenty-year period, the finch beak size went from large to small, and then back to large (relatively speaking). So that represents 20 generations.

But we had--using Haldane's language--TWO loci changed; i.e., first the population moved in one direction, and then it moved back in the other direction. So, that means 10 generations per locus fixation. Using Haldane's formula, that means their "fitness" is e^-3, or 5%, meaning that 95% of the population would have to 'die off' each generation for the 'loci' to be fixed. I think the zoologists there on the islands would have noticed that 95% of a species of finch had died. So, evolution, per Haldane, cannot realistically explain this other example of "evolution in action", I'm afraid. In the meantime, we can negotiate, perhaps, with Caligula on some general figure for fitness. Finally, as a note in the history of science, I was reading a paper the other day by a prominent microbiologist who referenced the great excitement occurring in the early 50's when Monod and others began studying bacteria and discovered the "logarithmic" phase of bacteria. For them, too, this was "evolution in action". Then comes Kettlewell's experiment, and by the end of the 50's Darwinism/Modern Synthesis was doctrine, since there was so much 'evidence' of it at work. Nevertheless, "Haldane's Dilemma" remained unanswered. But perhaps Caligula can ride to the rescue. We await.

PaV
January 25, 2007, 07:29 AM PDT
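PaV's finch figures above follow from Haldane's relation: a substitution completed in n generations implies a fitness of about e^(-30/n) for the disfavored type. The short loop below reproduces the ~5% figure for n=10 and ~22% for n=20 (the n values are the ones used in the thread):

```python
import math

# Implied fitness e^(-30/n) for a substitution completed in n generations,
# taking Haldane's total cost of a substitution as ~30 population sizes.
for n in (10, 20, 43, 300):
    print(n, round(math.exp(-30 / n), 3))
```

So n=10 gives roughly 0.05 and n=20 roughly 0.22, matching the 5% and 22% figures in the comment, while n=300 gives the ~0.9 fitness Haldane considered more probable.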
PaV: Apparently I somehow deleted a large portion in the middle of my last writing before posting. You seem to be confusing D=30 (the number of population sizes lost to selective deaths) with Haldane's eventual "magic number" n=300 (substitution time in generations). So the process takes 300 generations, during which 30 times the populations size is lost to selective deaths. Were Haldane's "magic number" really n=30, ReMine's respective "magic number" would be 16,667 and not 1,667. With n=300, I=0.1. Haldane's treatment of density consists of a single example, where larvae in an overcrowded population are harvested by parasites, against which they apparently can't evolve. During this phase, the larvae are assumed not to be subject to any kind of selection. Then, early adult (or late juvenile) moths are subject to predation (selection) but not effects of density. Haldane omits a qualitatively different density-dependent factor in such a treatment: ecological niches are finite. A growing population will eventually meet a limit where their environment can't provide viable room for more individuals (especially with territorial animals). I.e. excess ("background") mortality will follow. It is this density effect that soft selection is based on. When e.g. 6 offspring are produced per couple, and the ecological niche is filled by the population, unavoidably 4 of 6 the offspring will die. Now, Haldane would apparently assume that these 4 offspring somehow always die before selection is applied. But this hardly holds universally. More often selection (e.g. predation) and background mortality (e.g. sheer starvation) apply in parallel. And when they do, selection reduces background mortality, by reducing density. In summary: Haldane seems to assume that background mortality is somehow fixed, and whatever selection takes place, comes at the risk of extinction. 
Modern population genetics argues that selection reduces density, thus reducing background mortality, and effectively making "room" for itself.

caligula
January 24, 2007, 11:48 PM PDT
It is rare, but there is a truly decent comment at PT regarding the issues being raised. Passing reference has been made to Nachman's paradox. The following post by David Wilson was so impressive, I thought I would reference it here: Comment 156785 on Nachman's U paradox. I could not post portions of it here since it was math-symbol intensive...

Sal

scordova
January 24, 2007, 06:28 PM PDT
PaV, you seem quite knowledgeable about this topic. I presume you have Haldane's paper. I would like to keep this discussion going. Do you have a background in Population Genetics (PopGen)?

Sal

scordova
January 24, 2007, 06:07 PM PDT
The intensity of selection calculation I gave above is wrong. Haldane defines the "intensity of selection" in two ways: as ln(s-optimal/S-beginning), and, after his calculations, as 30/n. Thus, if n=30, I=30/30=1.0. So, the "intensity of selection" is not 0.43, but 1.0. We would all agree that's quite "intense". In fact, I don't see how it could get any higher, since 1.0 means every carrier of the allele is killed.

PaV
January 24, 2007, 06:02 PM PDT
Caligula, thank you for your response. "If you look at 'Discussion' in Haldane (1957), you see that he estimates intensity of selection with I=30/n. He first calculates that if n=43 => I=0.67, and then suggests conservative values n=300 => I=0.1. Intensity of selection, when the fitness of the optimal genotype is assumed 1.0, as it is here, can be directly subtracted from 1.0 to get an estimate of the average fitness of the population at the beginning of his scenario." Fitness is defined as e^(-30/n). Intensity of selection is I=ln(s-optimal/S-beginning). Only if "s" is small can you substitute s-optimal minus S-beginning for the intensity. Here the optimum is assumed, as you say, to be 1.0. You divide that by e^(-30/n) and take the natural log, and you get I=30/n. In his example, he is using S=1/2, while s-optimal is 1.0. This is then ln(1/0.5) = ln 2 = 0.69. If I = 0.69 = 30/n, then n = 43 generations. BUT the "intensity of selection" is NOT 0.1 but 0.31, and, as Haldane states, of the order seen in Biston betularia. As I calculated above, for 30 generations, I=1/2.33=0.43. This is NOT a small "intensity of selection." You wrote: ". . . Rather, I'm saying Haldane is implicitly adding together the cost of selection and background mortality, which is likely why his intensity of selection is so low. Modern population genetics argues that background mortality may well decrease when selection increases, because selective deaths decrease density. (Most density factors don't apply to larvae, after all!)" Indeed, I don't believe Haldane is lumping background mortality with the "cost of selection". In fact, he is explicitly using "juveniles" so as to 'back out' "density-dependent" factors. Why does he do this? He says: "Negative density-dependent factors must, however, slightly lower the overall efficiency of natural selection in a heterogeneous environment.
If, as the result of larval disease due to overcrowding, the density is not appreciably higher in a wood (type of tree) containing mainly carbonaria than in a wood (type of tree) containing the original type, the spread of the gene C by migration is somewhat diminished." So, basically, he's saying that when density-dependent factors are involved--which they would be in the case of adults--this makes NS work more slowly. Likewise, if selection is quite high, it would in turn slow down background mortality. In the example he gives, 90% of the Biston betularia larvae are killed by a parasite. Likely, "background mortality" would be diminished by at least 50% or so in such a situation. But here Haldane is concerned with how "fast" evolution can work, so he uses juveniles so that "background mortality" doesn't slow it down.PaV
January 24, 2007 at 05:48 PM PDT
Caligula: "it seems that you remain too busy to read or address my arguments"
I have reviewed this thread, which I began Friday. A discussion of this complexity will not go quickly, and I mentioned that numerous times. The other issues presented here are relevant to your points. I have addressed the issue of the 6% figure at least three times now, most recently by citing a PNAS paper saying the 1.5% figure is probably in error; the GenBank numbers suggest 6% as well. I have invited a correction of this figure. You criticized my interpretation of the Nachman paradox earlier. It appears you have reversed one of your positions on the paradox. Was my original interpretation correct after all? I spent a few hours reviewing the equations to begin a defense of my interpretation, so silence on my part does not mean I was too busy to address your objections. I spent a few hours looking into junk DNA and deeply conserved regions to argue that the non-coding regions may have significant functional regions, in addition to the fact that deep conservation exists. I say this because your remark about me being too busy is not completely fair. These issues will not be resolved easily and require thought and study. And in fact I have begun responding to your points. I appreciate your willingness to acknowledge when there is an open question. I'm not so sure that soft selection cures Haldane's dilemma. At PT, you said ReMine did not address soft selection. Elsewhere here I was about to post ReMine's response to the soft selection "fix". I am also awaiting Mike Dunford's input on the issue.scordova
January 24, 2007 at 03:48 PM PDT
PaV: Before I leave, I think your post requires a response. If you look at "Discussion" in Haldane (1957), you see that he estimates intensity of selection with I=30/n. He first calculates that if n=43 => I=0.67, and then suggests the conservative value n=300 => I=0.1. Intensity of selection, when the fitness of the optimal genotype is assumed 1.0, as it is here, can be directly subtracted from 1.0 to get an estimate of the average fitness of the population at the beginning of his scenario. (With multiple loci, you can just sum up the coefficients, because multiplicative fitness and additive fitness are approximately the same when I is small.) I'm not claiming the opposite of what Haldane says. Rather, I'm saying Haldane is implicitly adding together the cost of selection and background mortality, which is likely why his intensity of selection is so low. Modern population genetics argues that background mortality may well decrease when selection increases, because selective deaths decrease density. (Most density factors don't apply to larvae, after all!)caligula
January 24, 2007 at 02:33 PM PDT
It seems that you remain too busy to read or address my arguments
I can understand how you may perceive it that way, but I had mentioned in advance that I thought the thread would take a while (perhaps weeks) to address the issues. This has only been the first week. The issue of soft selection related to this thread is being discussed, and as that discussion reaches resolution, your arguments about soft selection might get addressed.
I thank you for the time you had to spare for this discussion.
Thank you for your time as well, Caligula.scordova
January 24, 2007 at 01:34 PM PDT
LOL @ DSAtom
January 24, 2007 at 08:49 AM PDT
Whoa! This article somehow jumped back up near the top of the page. I'm just guessing here but that might've happened as a result of me changing the timestamp on the article from 1/18/06 to 1/23/06. Weird, huh? ;-)DaveScot
January 24, 2007 at 08:17 AM PDT
I should have pointed out as well that Haldane was discounting "density-dependent" forces (which slow things down further), and that his model is based on juvenile viability.PaV
January 24, 2007 at 08:04 AM PDT
Caligula: "Haldane's limit 0.1 is quite small, isn't it? Surely, a population can compensate a much higher rate of mortality than 10% with reproductive excess?" I believe you're misrepresenting what Haldane asserts. His 10% was not added mortality, but a fitness level. His mathematics points out that for diploids, depending not so much on the 'intensity of selection' but more on the initial frequency of the 'loci', you get a number from 10 to 100 added deaths; he took 30 as a reasonable number of deaths, or equivalently, generations. Now, in the discussion section he defines "fitness" (population fitness) as I=e^(-30/n), where n = the number of generations. If you reduce "n", then the fitness of the population is lowered, and, hence, its susceptibility to extinction thereby increases. Using n=30, Haldane's number, I=e^-1, which is equal to 1/2.33=0.43, or 43% fitness. That means that in one generation 43 out of 100 of the species will die. Haldane used 30 generations--probably a reasonable number (the number varies depending on whether the 'loci' is dominant or recessive). Even using the lowest figure of 10 generations, fitness = e^(-10/n), which then implies that even at 10 generations, for the 'loci' to be fixed, each generation has to be reduced by 43%. As to "soft selection", Haldane says that "density-dependent" selective forces reduce the speed at which evolution by NS can take place. I think you're implying the opposite.PaV
January 24, 2007 at 08:01 AM PDT
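The 1,667 figure that ReMine is credited with earlier in this thread follows directly from Haldane's one-substitution-per-300-generations limit. The sketch below shows the arithmetic; the 10-million-year timespan and 20-year generation time are the assumptions commonly used in this derivation, not figures stated in the comments above.

```python
# Sketch of where ReMine's oft-quoted ~1,667 figure (mentioned earlier in
# this thread) comes from. The timespan and generation time are the usual
# assumptions for the ape-to-human lineage, not numbers from the comments.

YEARS = 10_000_000       # assumed timespan for the lineage
GENERATION_TIME = 20     # assumed years per generation
N = 300                  # Haldane's substitution time in generations

generations = YEARS // GENERATION_TIME   # 500,000 generations
substitutions = generations // N         # ~1,666 fixed substitutions

print(generations, substitutions)
```

Integer division gives 1,666; rounding 500,000/300 up yields the familiar 1,667 beneficial substitutions available on these assumptions.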
Sal, it's time to say farewell for the time being. It seems that you remain too busy to read or address my arguments, and I feel I have said what I have to say about yours. The thread seems to have been recreated on the front page, where I was told that I contribute nothing and that I'd better be banned. I complained that GilDodgen tries to present a mathematical argument without any math in it, which was declared uncivil behavior. I thank you for the time you had to spare for this discussion.caligula
January 24, 2007 at 06:33 AM PDT
At PT, I have asked you to give me a pointer to whatever other sources you may have for this belief,
Caligula, I know there has been a flurry of posts, so you probably missed that I gave my answer several times. I just want to establish, for the readers, that I have responded to your query at least twice. I recognize there has probably been a misunderstanding somewhere; on the other hand, I wish to reassure the readers that I have not evaded the issue. I gave a link to GenBank: chimps have about 180,000,000 more base pairs than humans. Homo sapiens genome size = 3,400,000,000; Pan troglodytes = 3,577,500,000. See: GenBank. The difference between Pan and Homo is about 180,000,000 just based on base pair count, and that's about 6%. These comparisons include nucleotides that are non-coding; I argue the non-coding 6% needs to be accounted for. Also, see this article from PNAS: Divergence between samples of chimpanzee and human DNA is 5% counting indels
Five chimpanzee bacterial artificial chromosome (BAC) sequences (described in GenBank) have been compared with the best matching regions of the human genome sequence to assay the amount and kind of DNA divergence. The conclusion is that the old saw that we share 98.5% of our DNA sequence with chimpanzee is probably in error. For this sample, a better estimate would be that 95% of the base pairs are exactly shared between chimpanzee and human DNA. In this sample of 779 kb, the divergence due to base substitution is 1.4%, and there is an additional 3.4% difference due to the presence of indels. The gaps in alignment are present in about equal amounts in the chimp and human sequences. They occur equally in repeated and nonrepeated sequences, as detected by REPEATMASKER (http://ftp.genome.washington.edu/RM/RepeatMasker.html).
If I have misinterpreted something here and 6% is wildly off, I welcome a correction. For the sake of argument I can accept a 1.5% difference, but let's not throw out all the other data points just yet that point to a 6% difference. Sal. P.S. I thank you for your meticulous examination of everything and your willingness to keep this backpage thread alive on this important topic.scordova
January 23, 2007 at 11:58 AM PDT
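The divergence figures quoted in the comment above can be checked with a few lines of arithmetic. This is my own sketch, combining the PNAS numbers (1.4% base substitution plus 3.4% indels) with the GenBank genome sizes Sal cites; note the raw size gap works out to roughly 5%, which the comment rounds to 6%.

```python
# Quick check (my own arithmetic, not from the PNAS paper itself) of the
# divergence figures quoted above.

# PNAS sample: base-substitution divergence plus indel divergence.
substitution_divergence = 0.014
indel_divergence = 0.034
total_divergence = substitution_divergence + indel_divergence  # ~0.048, i.e. ~95% identity

# GenBank genome sizes as cited in the comment above.
human_bp = 3_400_000_000
chimp_bp = 3_577_500_000
size_gap_fraction = (chimp_bp - human_bp) / human_bp  # ~0.052

print(round(total_divergence, 3), round(size_gap_fraction, 3))
```

Both routes land in the 5% neighborhood: 4.8% from the BAC alignment and about 5.2% from the raw base-pair counts.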