
Gambler’s ruin is Darwin’s ruin

The same day I first watched “Expelled” in theaters, I also watched “21”, a movie based on the true story of MIT students who made a fortune in Las Vegas casinos through the use of mathematics.

The real story behind the movie began with an associate of Claude Shannon, Dr. Edward O. Thorp of MIT. In the early 1960s, Thorp published a landmark mathematical treatise on how to beat casinos. His research was so successful that Las Vegas casinos shut down many of their card tables for an entire year until they could devise countermeasures to impede Thorp’s mathematics.

Thorp is arguably the greatest gambler of all time. He extended his gambling science to the stock market and made a fortune; his net worth is in the fractional to low billions. He is credited with independent discoveries that were foundational to the Black-Scholes-Merton equation, which adapts the mathematics of heat diffusion to stock option pricing. The work won a Nobel Prize and was the subject of the documentary The Trillion Dollar Bet.

Thorp would probably be even richer today if Rudy Giuliani had not falsely implicated him in the racketeering scandal involving Michael Milken. Thorp, by the way, keeps a dartboard with Giuliani’s picture on it… :-)

The relevance of Thorp’s math to Darwinism is that Thorp was a pioneer of risk management (which he used to create the world’s first hedge fund). In managing a hedge fund or managing wagers in a casino, one is confronted with the mathematically defined problem of gambler’s ruin. The science of risk management allows a risk manager or a skilled gambler to defend against the perils of gambler’s ruin. Unfortunately for Darwinism, natural selection has little defense against those perils.

Even if an individual has a statistical advantage over a casino game, he can still lose. Let’s say a skilled player has, on average, a 1% advantage over the casino. He wanders into the casino, looks for a favorable opportunity, and wagers $500,000.00.

If he has a 1% statistical advantage, that means he has a 50.5% chance of winning and a 49.5% chance of losing. Even though he has a slight edge, he still has a very substantial chance of losing. It would be unwise to bet $500,000.00 if that is his life savings!

The movie “21” romanticized the advantage skilled players have. It portrayed the MIT students as people who could sit at card tables and bilk casinos like ATMs. That’s not how it works, as attested by one of the more noteworthy members of the real MIT team, Andy Bloch. Bloch reported that during his tenure as manager of the MIT team, the team was once in the red for 9 months before recovering. Skilled players lose big bets not quite 50% of the time. It is not unusual, on average, to have a losing streak of 8 hands in a row once every 256 rounds. Ben Mezrich reported in his book, Bringing Down the House, an incident where the Big Player of the MIT team lost 3 hands in a row in 45 seconds of play for a total of $120,000.00! It happens…

A skilled player with a 1% advantage might expect to play 50,000 hands before his expected value exceeds the effect of one standard deviation of bad luck. That means he might have to play a looooong time before he realizes a profit….
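As a rough sketch of why the wait is so long (my own back-of-the-envelope figures, not from any gambling text): if the per-hand edge is mu betting units and the per-hand standard deviation is sigma units (sigma of about 1.1 is a commonly quoted blackjack value, and is an assumption here), the expected gain mu*n only overtakes k standard deviations of luck, k*sigma*sqrt(n), after n = (k*sigma/mu)^2 hands:

```python
def breakeven_rounds(mu: float, sigma: float, k: float) -> float:
    """Number of rounds n at which the expected gain mu*n equals k standard
    deviations of cumulative luck, k*sigma*sqrt(n).
    Solving mu*n = k*sigma*sqrt(n) gives n = (k*sigma/mu)**2."""
    return (k * sigma / mu) ** 2

# Assumed figures: 1% edge, per-hand standard deviation of 1.1 units.
print(round(breakeven_rounds(0.01, 1.1, 1)))  # one-sigma horizon: 12100 hands
print(round(breakeven_rounds(0.01, 1.1, 2)))  # two-sigma horizon: 48400 hands
```

The exact horizon depends on the per-hand variance assumed, but the point stands: a slight edge only separates itself from noise after tens of thousands of trials.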

What does this have to do with Darwinism? Darwin argued that

“Natural selection acts only by taking advantage of slight successive variations; she can never take a great and sudden leap, but must advance by short and sure, though slow steps.”

But mathematically speaking, that is complete nonsense because of the problem of gambler’s ruin. It is not surprising that Darwin could not see the flaw in his argument: he could not manage even high-school algebra despite substantial effort. The lack of basic math and logic pervades his flawed theory.

The problem is that selectively advantaged traits are still subject to random events. The most basic random event is whether a parent will even pass the gene down to a child in the first place! Added to that is the nature of random events in general: a genetically advantaged individual may die by accident, get consumed by a predator, etc.

And the problem gets worse. Even if a selectively advantaged trait spreads to a small percentage of the population, it still has a strong chance of being wiped out by the sum total of random events. The mathematics of gambler’s ruin helps clarify the effect of random “selection” on natural selection.

Without going into details, I’ll quote the experts who investigated the issues. Consider the probability a selectively advantaged trait will survive in a population a mere 7 generations after it emerges:

if a mutant gene is selectively neutral the probability is 0.79 that it will be lost from the population
….
if the mutant gene has a selective advantage of 1%, the probability of loss during the first seven generations is 0.78. As compared with the neutral mutant, this probability of extinction [with natural selection] is less by only .01 [compared to extinction by purely random events].
….

Theoretical Aspects of Population Genetics
Motoo Kimura and Tomoko Ohta

This means that natural selection is only slightly better than random chance. Darwin was absolutely wrong to suggest that a novel trait, once it emerges, will be preserved in most cases. It will not! Except under extreme selection pressures (like antibiotic resistance, pesticide resistance, or anti-malaria drug resistance), selection fails to make much of an impact.
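Kimura and Ohta’s figures can be reproduced with a textbook branching-process calculation: model a single new mutant as leaving a Poisson-distributed number of copies each generation (mean 1 + s), and iterate the extinction probability through the Poisson generating function. A minimal sketch (my own illustration, not their code):

```python
import math

def loss_probability(s: float, generations: int) -> float:
    """Probability that a single new mutant lineage is lost within the given
    number of generations, under a branching process with Poisson(1 + s)
    offspring per copy. Iterates q_{t+1} = exp(lam * (q_t - 1)), the Poisson
    probability generating function evaluated at q_t."""
    lam = 1.0 + s
    q = 0.0  # probability of loss by generation 0
    for _ in range(generations):
        q = math.exp(lam * (q - 1.0))
    return q

print(round(loss_probability(0.00, 7), 2))  # neutral mutant: 0.79
print(round(loss_probability(0.01, 7), 2))  # 1% advantage: 0.78
```

Seven iterations yield 0.79 for the neutral case and 0.78 for a 1% advantage, matching the numbers quoted above.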

The contrast between a skilled gambler and natural selection is that a skilled player can wager small fractions of the money he sets aside for his trade. If a skilled gambler has $50,000, he might wager $100 at a time until the law of large numbers causes his statistical advantage to assert itself. He can attempt many, many trials until his advantage eventually prevails. In this manner a skilled gambler can protect himself against the mathematics of gambler’s ruin.
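The classical gambler’s-ruin formula makes the effect of bet sizing concrete. For an even-money game with win probability p > 1/2, a player who bets one unit per round from a bankroll of a units (with no stopping target) is eventually ruined with probability (q/p)^a, where q = 1 − p. A sketch using the figures above:

```python
def ruin_probability(p: float, units: int) -> float:
    """Probability of eventual ruin when betting one unit per round with win
    probability p > 0.5, starting from `units` units and playing indefinitely:
    the classical result (q/p)**units."""
    q = 1.0 - p
    return (q / p) ** units

# Bankroll staked as a single betting unit: eventual ruin is nearly certain.
print(ruin_probability(0.505, 1))    # about 0.98
# A $50,000 bankroll wagered in $100 units (500 units): ruin is negligible.
print(ruin_probability(0.505, 500))  # below one in ten thousand
```

The same 1% edge gives a 98% chance of eventual ruin when the bankroll is one betting unit, but a vanishing one when it is split into 500 units.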

But natural selection is a blind watchmaker. It does not know how to perform risk management like a skilled player or the great math wizard, Edward Thorp. For natural selection to succeed the way Thorp succeeded in the great casinos of Nevada and on Wall Street, it has to hope the same mutant appears spontaneously many, many times in many individuals. But for complex genes, this doesn’t happen. Truly novel and beneficial mutations are rare. They don’t repeat themselves very often, and when they arise, they will likely be wiped out unless there is fairly intense selection pressure (as we see in pesticide resistance, antibiotic resistance, anti-malaria drug resistance, or the malaria resistance associated with sickle cell anemia).

A further constraint on the selective advantage of a given trait is the problem of selection interference and the dilution of selective advantage when numerous traits are involved. If one has a population of 1000 individuals and each has a unique, novel, selectively advantaged trait that emerged via mutation, one can see this leads to an impasse: selection can’t possibly work in such a situation, since the individuals effectively cancel out each other’s selective advantages.

This illustrates that there has to be a limit to the number of innovations appearing in a population simultaneously for selection to work. The emergence of advantageous mutations in a population has the net effect of diluting the selective advantage of all the traits.

If trait A has a large selective advantage in relation to trait B, trait A dilutes the selective advantage of trait B. Thus trait B is exposed more and more to gambler’s ruin because of the existence of trait A. For example an individual with better eyesight (trait A) might prevail over an individual with higher intelligence (trait B). An otherwise good trait (intelligence) is lost because another trait (good eyesight) interferes with the ability of that trait (intelligence) to be maintained…

One can thus see why many advantageous traits must be “slight”: the problem of interference. But “slight” implies they are subject to gambler’s ruin, and thus unlikely to be preserved as Darwin asserted. Darwin was dead wrong…

John Sanford gives a more rigorous treatment in his book Genetic Entropy, where he derives more exact numbers on the limits of selective advantage from problems such as interference. Sanford shows that a 1% selective advantage is a fairly generous assumption; the real figure is usually less than 1%. [I emphasize the word "usually".]

Most ironic is that Fisher’s own analysis of the effect of gambler’s ruin essentially trashes his theorem, Fisher’s Fundamental Theorem of Natural Selection. The Malthusian notions of “fitness” in the fundamental theorem do not account for random events taking out selectively advantaged traits. The theorem assumes evolution is noise-free with respect to fitness: that advantageous traits always result in more offspring. We know empirically and theoretically this cannot possibly be true, even under an approximate model of Mendelian inheritance.

For reasons such as those I laid out, many believe molecular evolution had to be mostly invisible to selection. Attributing even 5% of molecular evolution to Darwinism would be extremely generous. See: Kimura’s Neutral Theory.

Kimura gave an obligatory salute to Darwin by claiming adaptive features (like morphology) are exempt from his math. I’ve seen nothing supporting Kimura’s obligatory salute to Darwin. His neutralist ideas seem to apply quite well beyond the molecular realm. NAS member Masatoshi Nei has finally been bold enough to assert that most everything else about evolution, not just molecular evolution, is under much less selection pressure than previously assumed. I think Nei is right.

Yesterday afternoon I showed Kimura’s books to an ID-friendly senior in biology. His jaw dropped. He had studied molecular genetics, but our conversation yesterday helped him make the connections he had not made before. The math clearly indicates Darwin couldn’t possibly be right, and by way of extension, neither can Richard Dawkins.

These fairly obvious considerations were not lost upon Michael Lynch:

the uncritical acceptance of natural selection as an explanatory force for all aspects of biodiversity (without any direct evidence) is not much different than invoking an intelligent designer

Michael Lynch
The Origins of Genome Architecture, p 368

Notes:

1. I created a Microsoft Excel spreadsheet to illustrate these concepts. I used a random number generator to simulate the progress of 10 equally skilled gamblers in a casino. Press “F9” to redraw the graph. One can see that even “selectively” advantaged individuals can lose. The important thing to grasp is that “slight selective” advantages do not look very different from random walks except in the long run. The problem for natural selection in the wild is that there usually is no “long run” for a newly emerged trait that suffers gambler’s ruin. The “long run” exists for skilled and intelligent risk managers like Edward Thorp; it does not exist, statistically speaking, for most selectively advantageous traits.

A copy of my spreadsheet can be accessed here.

Sometimes pressing “F9” will cause most of the gamblers to win, and other times it will cause most of them to lose. This underscores the strong effect of random events even when one possesses an inherent statistical advantage, such as gambling skill or a selectively advantaged trait.
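For readers without Excel, the spreadsheet experiment can be sketched in a few lines of Python (my own re-implementation under assumed parameters: unit bets, a 1% edge, a 50-unit bankroll, 1,000 rounds):

```python
import random

def simulate_gamblers(players=10, rounds=1000, p=0.505, bankroll=50, seed=None):
    """Simulate equally skilled gamblers making unit bets with win probability p.
    A player whose bankroll hits zero is ruined and stops playing, as in the
    revised spreadsheet. Returns each player's final bankroll."""
    rng = random.Random(seed)
    finals = []
    for _ in range(players):
        cash = bankroll
        for _ in range(rounds):
            if cash <= 0:
                break  # gambler's ruin: out of the game
            cash += 1 if rng.random() < p else -1
        finals.append(cash)
    return finals

print(simulate_gamblers(seed=1))  # ten final bankrolls for one run
```

Rerunning with different seeds plays the role of pressing “F9”: sometimes most players finish ahead, sometimes most finish behind or ruined, despite every one of them holding the same 1% edge.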

2. Here is a nice pic of Bill with a standard casino die.

In the 1970s, casinos had to redesign their craps tables to foil skilled dice throwers who exploited slightly non-random behaviors of dice. Laws were passed in Las Vegas preventing skilled players from using their specially designed tosses, which would exhibit non-random, statistically advantageous behavior.

Some people still claim to be able to influence dice so as to create non-random outcomes in a legal way. However, even skilled craps shooters need principles of risk management and precautions against gambler’s ruin to succeed.

[UPDATE:

1. 5/5/08 World-renowned geneticist Joe Felsenstein responds to my essay here: Gambler's Ruin is Darwin's Gain.

2. 5/5/08 HT: ICON-RIDS:

Natural Selection is daily and hourly scrutinising, throughout the world, the slightest variations; rejecting those that are bad, preserving and adding up all that are good.

Charles Darwin, On the Origin of Species, 6th edition, Ch. 4, “Natural Selection”

This is an even better quote showing how wrong Darwin was in light of these discussions.

See: this comment

3. Thanks to pantrog of PT for his editorial correction about sickle cell anemia. That was my editorial mistake for not catching it in the first place. My error was pointed out here.

4. 5/8/08 One could easily modify the spreadsheet to stop a gambler’s progress when zero is hit; but had I done that, most of the lines would abort early and be hard to see, and the few survivors would give a misleading impression of large-scale progress. See this comment:
Comment about Spreadsheet

5. I wrote: If he has a 1% statistical advantage, that means he has a 50.5% chance of winning and a 49.5% chance of losing. To clarify, the outcomes are complicated by double-downs, splits, blackjacks, etc., so the notion of "win" in this thread means the effective average win per round over time… I didn't want to get into these specifics earlier, as they were peripheral to the thread…

6. 5/31/08 In response to various comments at UD and Panda's Thumb, I created another spreadsheet with some improvements. See them at: ruin_olegt_mod1.xls. The principal changes were suggested by a very fine physicist, Olegt, who sometimes posts at Telic Thoughts and PT. The new simulation has more rounds and actually prevents a player from playing once he is ruined.

]


175 Responses to Gambler’s ruin is Darwin’s ruin

  1. Further notes:

3. HT: Atom and others for insisting I revisit biophysicist Lee Spetner’s writings. Spetner’s writings strongly influenced the above essay. Spetner wrote the book Not by Chance right around the time he retired from Johns Hopkins. Nobel Laureate Christian Anfinsen gave a strong endorsement to Spetner’s book; Anfinsen retired from Johns Hopkins around the time he gave it. Incidentally, Paul McHugh, after criticizing Darwin, shortly thereafter retired from Johns Hopkins. I guess they picked ideal moments to express dissent. :-)

    Not by Chance is one of 2 ID-sympathetic books endorsed by Nobel Laureates. [the other endorsement being Nobel Laureate Richard Smalley's endorsement of Origin of Life by Fuz Rana and Hugh Ross. Even though Ross publicly criticizes ID, he must surely believe an intelligence made life.]

4. Claude Shannon and Edward Thorp were the first reported team of scientists to try to defeat roulette by using a hidden computer to estimate trajectories and exploit architectural flaws in roulette tables. They were arguably the first real MIT “team”.

5. The real “Ben Campbell” of the movie “21” was Jeffrey Ma, who made a cameo in the movie as a dealer. There were actually 2 MIT teams: the Amphibians and the Reptiles. The Reptiles were the offshoot of the Amphibians (presumably the Reptiles evolved from the Amphibians), and the Reptiles were the subject of the book that inspired the movie… Edward Thorp was not “Mickey” in the movie, because the real MIT teams existed from the ’90s to the early 2000s, long after Thorp was already filthy rich on Wall Street… “Mickey” had to be someone other than Thorp.

6. Interestingly, Thorp played only briefly in Nevada. He was bankrolled and accompanied by an ex-mobster named Manny Kimmel. An MIT academic with an ex-mobster? Truth is sometimes stranger than fiction. Thorp dropped out of the casinos after he was poisoned with a tranquilizer. He feared for his life, left the casinos, and became a true gambler on Wall Street.

  2. Thanks, Sal. Very interesting and well laid-out post. There is, of course, an ongoing effort to deny that these principles apply to Darwinian theory, or if they apply, then evolutionary theory can overcome them. Nevertheless, the mathematical problems of the Darwinian paradigm bear repeating, and the more pressure that can be brought to bear the better. After all, if Darwinism is sound, it should be able to withstand, nay, it should even be supported by mathematical scrutiny; if not, it deserves to fall.

  3. JunkyardTornado

    The contrast between a skilled gambler and natural selection is that a skilled player can wager small fractions of the money he sets aside for his trade. If a skilled gambler has $50,000, he might wager $100 at a time until the law of large numbers causes his statistical advantage to be asserted. He can attempt many many trials until his advantage eventually prevails. In this manner a skilled gambler can protect himself against the mathematics of gamblers ruin.
    But natural selection is a blind watchmaker. It does not know how to perform risk management like a skilled player or the great math wizard, Edward Thorp.

    I was surprised, given the subject of the article, that the “house advantage” was not mentioned once. How much skill does it take to put a 0 and 00 on a roulette wheel?

    In natural selection, would it not be the case that, in the long run, mutations that confer any sort of advantage will predominate over mutations that confer none? Certainly it is understandable how even mutations conferring a 1% advantage will be wiped out in huge numbers. But if you look at the mutations that have been preserved, the “house advantage” would seem to dictate that those conferring any sort of advantage should predominate drastically.

    Truly novel and beneficial mutations are rare. They don’t repeat themselves very often, and when they arise, they will likely be wiped out unless there is fairly intense selection pressure
    It seems that arguments against evolution often come down to the observed rate of mutation. So, does this mean there is some mutation rate that would have made the whole process viable? Is it possible that a huge amount of random genetic variation (a good bit of it pure garbage) was generated at the very start of the biological process on this planet, and that it is this random genetic variation that has been sorted out ever since?

    A further constraint on selective advantage of a given trait is the problem of selection interference and dilution of selective advantage if numerous traits are involved. If one has a population of 1000 individuals and each has a unique, novel, selectively-advantaged trait that emerged via mutation, one can see this leads to an impasse –selection can’t possibly work in such a situation since all the individuals effectively cancel out each other’s selective advantage.

    So would this mean that if, in a population of 1000 individuals, each had a harmful, debilitating, selectively disadvantaged mutation, the disadvantages would be cancelled out and these mutations rendered effectively neutral? If so, these traits would maintain the same ratio relative to each other in the population, but the species would dwindle to extinction.

    As for beneficial mutations rendered “neutral”: if the entire population experiences a sharp peak (IOW, all traits are resulting in increased reproduction), then there are more chances for even “neutral” mutations to be preserved, right?

  4. Was surprised given the subject of the article that the “house advantage” was not mentioned once. How much skill does it take to put a 0 and 00 on a roulette wheel?

    If the player’s advantage is 1%, the house advantage is -1%. The house advantage was implicitly stated in my essay.

    Skilled gamblers like those on the MIT team did not play games with conditional independence (like roulette), but games where the probability of an outcome was influenced by the observation of past events. Their minds were sufficiently keen to recognize when table games offered an advantage.

    For example, a skilled gambler can look at a table and watch the cards being dealt. In certain games, if a high proportion of fours, fives, and sixes have been dealt relative to tens and aces, the player can have an edge of as much as 5% over the house. The house edge in that case is -5%.

    What the MIT team did was to station spotters at the tables. The spotters played $25 a hand. When the dealer had dealt out a disproportionate number of low-value to mid-value cards (2, 3, 4, 5, 6, 7), the conditional probability density was highly favorable.

    The spotter would signal the big player, who would then wander over to the table, pretend he was drunk, and lay down a $5,000, $10,000… $80,000 bet. I think the record for the MIT team was when Semyon Dukach laid down an $80,000 bet.

    With a 5% advantage, the expected value for an $80,000 bet is:

    $80,000 * .05 = $4,000

    The problem is that with an $80,000 bet you generally win or lose; you never get the expected value on any single hand. The expected value is realized over many trials through averaging…

    Risk management entails picking the right percentage of the total reserves to hold in cash and wagering a fraction of the reserves on each bet or portfolio position…

    Wager too much and gambler’s ruin results. Wager too little, and too little money is made. The appropriate proportion was determined by Bell Labs scientist John Kelly. Thorp was the first gambler to:

    1. figure out how to achieve an advantage by avoiding games with conditional independence and choosing games with conditional dependence

    2. use Kelly risk management techniques

    Thorp realized some roulette wheels had design flaws which favored certain numbers. He and Claude Shannon built a hidden wearable computer that they brought to the casino to analyze and exploit roulette wheel flaws…
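    The Kelly sizing mentioned above can be sketched as follows (both formulas are textbook results; the specific numbers, including the blackjack variance figure, are illustrative assumptions of mine):

```python
def kelly_even_money(p: float) -> float:
    """Kelly fraction for an even-money bet with win probability p:
    f* = p - q = 2p - 1."""
    return 2.0 * p - 1.0

def kelly_variance_approx(edge: float, variance: float) -> float:
    """Common approximation for games with payout variance:
    f* is roughly edge / variance."""
    return edge / variance

print(kelly_even_money(0.505))           # bet 1% of bankroll at a 1% edge
print(kelly_variance_approx(0.05, 1.3))  # roughly 3.8% of bankroll at a 5% edge
```

    Betting the Kelly fraction (or, in practice, a conservative fraction of it) is what lets the edge compound while keeping the probability of ruin negligible.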

  5. Scordova, great article.

  6. Great article indeed. I thumbed it up on stumbleupon.

    I’m glad someone has done the research on the odds of a beneficial mutation taking over the population. But these statistics start out assuming a beneficial mutation has taken place.

    Is there any research you could point me to to show the statistics of a mutation being beneficial? Or is that too broad of a term?

  7. Thermodynamics and heat flow are two different areas of physics and mechanical engineering, with separately derived sets of equations for problem solution. Heat flow problems were classically defined for solids early on, but are also defined for other states of matter. The Black-Scholes-Merton equation is derived from the laws of heat flow. Thermodynamics is concerned with change of state, kinetic energy, potential energy, and energy conversion.

  8. Or is that too broad of a term?

    Yes, I’d say more precision is needed. We are looking for examples of mutations that are not only beneficial in relation to fitness but also in relation to the progressive/positive creation/significant (> UPB) modification of existing CSI. But that’s a different thing than the generally used “beneficial mutations”. If there is a generally-accepted term that encapsulates what you are looking for I’m not aware of it. It’s not CSI in general since that could be negative in relation to fitness. For example, if I were to tack a spoiler (like on a vehicle) and a retractable anchor onto a bird I think that would not be too beneficial…

    In Behe’s new book, the majority of the examples he discussed involved destructive albeit positively selected mutations, but not all. Behe also discussed the antifreeze glycoprotein gene in Antarctic notothenioid fish. In short, he says that it looks reasonably convincing as an example of Darwinian evolution, but that it’s a relatively minor development, and probably marks the limit of what Darwinian processes can reasonably be expected to do in vertebrate populations. So what we’re primarily looking for is the limitations on “constructive” positively selected beneficial mutations.

    The Edge of Evolution is an estimate and it was derived from the limited positive evidence for Darwinian processes that we do possess. This estimate would of course be adjusted when new evidence comes into play or abandoned altogether if there is positive evidence that Darwinian processes are capable of large scale constructive positive evolution (or at least put in another category if it’s ID-based). The bulk of the best examples of Darwinian evolution are destructive modifications like passive leaky pores (a foreign protein degrading the integrity of HIV’s membrane) and a leaky digestive system (P. falciparum self destructs when it’s system cannot properly dispose of toxins it is ingesting, so a leak apparently helps) that have a net positive effect under limited/temporary conditions (Behe terms this trench warfare). I personally believe that given a system intelligently constructed in a modular fashion (the system is designed for self-modification via the influence of external triggers) that Darwinian processes may be capable of more than this, but we do not have positive evidence for this concept yet. But that’s foresighted non-Darwinian evolution in any case, and even if there are foresighted mechanisms for macroevolution they might be limited in scope.

  9. I’m glad someone has done the research on the odds of a beneficial mutation taking over the population.

    This was done in the 1960s – N.T.J. Bailey gives the results in one of his books.

    Is there any research you could point me to to show the statistics of a mutation being beneficial?

    Yes – there was a review last year. It’s of the order of 1% or so, but the estimates vary, and it’s difficult to get really good estimates, because mutations are rare in themselves.

  10. Scordova
    Great demonstration of the Doom of Darwin.

    Now I encourage you to turn the argument around and argue the Design Defense.

    Take the foundational Design Principle of:
    “Preserve the Design”.

    Apply the benefits of the Founder’s Effect with sexual reproduction and DNA repair.

    Then show by the laws of large numbers or repeated replication etc. that the Design is Preserved relative to harmful mutations.

    (Within the limits of Sanford’s Genetic Entropy, of increasing accumulation of harmful mutations.)

    Thus to the first order, I posit the following ID hypothesis:

    If the probability of a mutation being preserved is m (under neo-Darwinian evolution),
    then the probability D of the Design being preserved under mutation is D = (1 - m).

    As you showed, as the probability m of a mutation being preserved goes to zero, the design-preservation probability D = (1 - m) tends to 1.

    That is the general concept. Now turn it over to you mathematical types to work up the quantitative proofs and demonstrations.

  11. JT:


    In natural selection would it not be the case that in the long run, mutations that confer any sort of advantage at all will predominate over mutations that confer no advantage. Certainly it is understandable how even mutations conferring a 1% advantage will be wiped out in huge numbers. But if you look at the mutations that have been preserved, the “house advantage” would seem to dictate that those conferring any sort of advantage should predominate drastically.

    Don’t forget about genetic drift. Genetic drift is the tendency of populations to become more homogeneous over time: minority variations are wiped out by stochastic processes. In order for a new mutation to take over a whole population, it has to be so advantageous that it overcomes the general tendency of populations to weed out such mutations.

    Bottom line: genetic drift is another way the deck is stacked in favor of the original design, not the mutant.

    Don’t forget about genetic drift. Genetic drift is the tendency of populations to become more homogeneous over time

    No, genetic drift is the tendency for allele frequencies to change randomly, because of finite sampling. The effect is for populations to become less homogeneous over time. Different “minority variants” become fixed in different populations.
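    Drift in this sense is easy to see in a minimal Wright-Fisher sketch (a standard model; the parameters below are illustrative choices of mine): allele counts are binomially resampled each generation, and the frequency wanders randomly until the allele is either lost or fixed:

```python
import random

def wright_fisher(freq=0.1, pop_size=100, generations=1000, seed=None):
    """Wright-Fisher drift: each generation, resample 2N gene copies with
    success probability equal to the current allele frequency. Returns the
    final frequency; 0.0 means the allele was lost, 1.0 means it fixed."""
    rng = random.Random(seed)
    n = 2 * pop_size  # diploid population: 2N gene copies
    count = round(freq * n)
    for _ in range(generations):
        if count in (0, n):
            break  # absorbed: lost or fixed
        count = sum(1 for _ in range(n) if rng.random() < count / n)
    return count / n

print(wright_fisher(seed=3))  # typically 0.0 (lost) or 1.0 (fixed) by the end
```

    With no selection at all, a run usually ends in loss or fixation purely through finite sampling, which is the random change in allele frequencies described above.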

  13.

    Is there any research you could point me to to show the statistics of a mutation being beneficial?

    “I have seen estimates of the incidence of the ratio of deleterious-to-beneficial mutations which range from one in one thousand up to one in one million. The best estimates seem to be one in one million (Gerrish and Lenski, 1998). The actual rate of beneficial mutations is so extremely low as to thwart any actual measurement (Bataillon, 2000, Elena et al, 1998). Therefore, I cannot …accurately represent how rare such beneficial mutations really are.” (Sanford; Genetic Entropy page 24)

    The fate of competing beneficial mutations in an asexual population (Philip J. Gerrish & Richard E. Lenski)

    “Clonal interference is not the only dynamic that inhibits the progression of beneficial mutations to fixation in an asexual population. A similar inhibition may be caused by Muller’s ratchet (Muller, 1964; Haigh, 1978), in which deleterious mutations will tend to accumulate in small asexual populations. As shown by Manning and Thompson (1984) and by Peck (1994), the fate of a beneficial mutation is determined as much by the selective disadvantage of any deleterious mutations with which it is linked as by its own selective advantage.”

    http://myxo.css.msu.edu/lenski.....Lenski.pdf

    Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? (Thomas Bataillon)

    Abstract

    … It is argued that, although most if not all mutations detected in mutation accumulation experiments are deleterious, the question of the rate of favourable mutations (and their effects) is still a matter for debate.

    http://www.nature.com/hdy/jour.....7270a.html

    High Frequency of Cryptic Deleterious Mutations in Caenorhabditis elegans ( Esther K. Davies, Andrew D. Peters, Peter D. Keightley)

    “In fitness assays, only about 4 percent of the deleterious mutations fixed in each line were detectable. The remaining 96 percent, though cryptic, are significant for mutation load…the presence of a large class of mildly deleterious mutations can never be ruled out. ”

    http://www.sciencemag.org/cgi/...../5434/1748

    ” Bergman (2004) has studied the topic of beneficial mutations. Among other things, he did a simple literature search via Biological Abstracts and Medline. He found 453,732 “mutation” hits, but among these only 186 mentioned the word “beneficial” (about 4 in 10,000). When those 186 references were reviewed, almost all the presumed “beneficial mutations” were only beneficial in a very narrow sense- but each mutation consistently involved loss of function changes-hence loss of information.”

    Trying to find an actual “hard” number for the “truly” beneficial mutation rate is, in fact, what Dr. Behe tried to do in his book “The Edge of Evolution”.

    Dr. Behe states in The Edge of Evolution, on page 135:

    Generating a single new cellular protein-protein binding site (in other words, generating a truly beneficial mutational event that would explain the generation of the complexity we see in life) is of the same order of difficulty or worse than the development of chloroquine resistance in the malarial parasite.

    That order of difficulty is put at 10^20 replications (births) of the malarial parasite, by Dr. Behe.

    Thus, the actual rate for “truly” beneficial mutations, that would account for the complexity we see in life, is rarer than one in a hundred-billion-billion (10^20) mutational events.

    Thus, this one-in-a-million number that is often bandied about for “truly” beneficial mutations is actually far, far too generous for the evolutionists to be using in their hypothetical calculations.

    In fact, from consistent findings such as these, it is increasingly apparent that Genetic Entropy is the overriding foundational rule for all of biology, with no exceptions at all, and that the belief in “truly” beneficial mutations is nothing more than wishful speculation on the naturalist’s part, with no foundation in empirical science whatsoever:

    The foundational rule of Genetic Entropy for biology can be stated something like this:

    All adaptations away from a parent species for a sub-species, which increase fitness to a particular environment, will always come at a loss of the original integrated complex information in the parent species genome.

    Professional evolutionary biologists are hard-pressed to cite even one clear-cut example of evolution through a beneficial mutation to DNA that would violate the principle of genetic entropy. Although evolutionists try to claim the lactase persistence mutation as a lonely example of a beneficial mutation in humans, lactase persistence is actually the loss of an instruction in the genome to turn the lactase enzyme off, so the mutation clearly does not violate genetic entropy. Yet at the same time, the evidence for the detrimental nature of mutations in humans is clearly overwhelming, for doctors have already cited over 3500 mutational disorders (Dr. Gary Parker).

    “Mutations” by Dr. Gary Parker

    http://www.answersingenesis.or.....ations.asp

    Mutations: The Raw Material for Evolution?

    http://www.icr.org/articles/print/3466/

    “It is entirely in line with the accidental nature of naturally occurring mutations that extensive tests have agreed in showing the vast majority of them to be detrimental to the organism in its job of surviving and reproducing, just as changes accidentally introduced into any artificial mechanism are predominantly harmful to its useful operation.” H.J. Muller (received a Nobel Prize for his work on mutations to DNA)

    “But there is no evidence that DNA mutations can provide the sorts of variation needed for evolution… There is no evidence for beneficial mutations at the level of macroevolution, but there is also no evidence at the level of what is commonly regarded as microevolution.” Jonathan Wells (PhD. Molecular Biology)

    “The neo-Darwinians would like us to believe that large evolutionary changes can result from a series of small events if there are enough of them. But if these events all lose information they can’t be the steps in the kind of evolution the neo-Darwin theory is supposed to explain, no matter how many mutations there are. Whoever thinks macroevolution can be made by mutations that lose information is like the merchant who lost a little money on every sale but thought he could make it up on volume.” Dr. Lee Spetner (Ph.D. Physics – MIT)

  14. cont..

    The human genome, according to Microsoft founder Bill Gates, far, far surpasses in complexity any computer program ever written by man. The data compression (multiple meanings) of some stretches of human DNA is estimated to be up to 12 codes thick (Trifonov, 1989)! No line of computer code ever written by man approaches that level of data compression (poly-functional complexity). Further evidence for the inherent complexity of DNA is found in another study. In June 2007, an international team of scientists, the ENCODE consortium, published a study indicating that the genome contains very little unused sequence and is, in fact, a complex, interwoven network. This “complex interwoven network” throughout the entire DNA code makes the human genome severely poly-constrained to random mutations (Sanford; Genetic Entropy, 2005; page 141). This means the DNA code is now much more severely limited in its chance of ever having a hypothetical beneficial mutation, since almost the entire DNA code is now proven to be intimately connected to many other parts of the DNA code. Thus, even though a random mutation to DNA may be able to change one part of an organism for the better, it is now proven much more likely to harm many other parts of the organism that depend on that one particular part being as it originally was. Since evolution was forced, by the established proof of Mendelian genetics, to no longer view the whole organism as the unit upon which natural selection works, but instead to view the organism as a collection of multiple independent genes that can be selected or discarded as natural selection sees fit, this “complex interwoven network” finding is extremely bad news, if not absolutely crushing, for the population genetics scenario of evolution (the modern neo-Darwinian synthesis) developed by Haldane, Fisher and Wright (pages 52 and 53: Genetic Entropy: Sanford 2005)!

    http://www.genome.gov/25521554

    BETHESDA, Md., Wed., June 13, 2007 – “An international research consortium today published a set of papers that promise to reshape our understanding of how the human genome functions. The findings challenge the traditional view of our genetic blueprint as a tidy collection of independent genes, pointing instead to a complex network in which genes, along with regulatory elements and other types of DNA sequences that do not code for proteins, interact in overlapping ways not yet fully understood.”

    http://www.boston.com/news/glo.....ed/?page=1

    “The science of life is undergoing changes so jolting that even its top researchers are feeling something akin to shell-shock. Just four years after scientists finished mapping the human genome – the full sequence of 3 billion DNA “letters” folded within every cell – they find themselves confronted by a biological jungle deeper, denser, and more difficult to penetrate than anyone imagined.”

  15. Your spreadsheet doesn’t reflect the Gambler’s ruin problem exactly: a path which reaches zero should stay at zero.
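
    The absorbing-barrier correction in this comment can be sketched in a few lines (a toy simulation; the function name is mine): once the bankroll hits zero, every later value of the path must also be zero, unlike a plain random walk that could wander back up.

```python
import random

def bankroll_path(start, p_win, steps, rng):
    """Random walk with an absorbing barrier at zero: once the
    bankroll reaches 0, every later value is 0 (ruin is permanent)."""
    x = start
    path = [x]
    for _ in range(steps):
        if x > 0:
            x += 1 if rng.random() < p_win else -1
        path.append(x)  # x stays 0 once absorbed
    return path

# A player who always loses is absorbed on the first step and stays broke:
path = bankroll_path(start=1, p_win=0.0, steps=10, rng=random.Random(0))
# path == [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```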

  16. As well, what is crushing to the “beneficial mutation scenario” is that “slightly deleterious mutations” are far below the power of natural selection to remove from the genome. Thus, even if a hypothetical “truly” beneficial mutation were to occur, it would be of no benefit in a progressive Darwinian scenario, since this multitude of slightly deleterious mutations is far below the power of natural selection to remove from the genome (Sanford 2005).

  17. Bob O’H,

    “No, genetic drift is the tendency for allele frequencies to change randomly, because of finite sampling. The effect is for populations to become less homogeneous over time. Different “minority variants” become fixed in different populations.”

    I just want to clarify something. When you use the word ‘populations’ it is in the plural. Are you referring to separate populations of the same species that may be separated from each other, as opposed to using it in a general way to refer to populations of several species?

    Within a population of a species, genetic drift will cause the population to become more homogeneous as certain variants of an allele are eliminated, but it will be non-homogeneous with other sister populations that faced different situations.

    And as a population breaks off from a mother population, wouldn’t the sub-population become more homogeneous a large percentage of the time? It may not have some of the variants of the mother population, and being smaller it would be more likely to succumb to genetic drift, and natural selection might eliminate certain alleles faster if it is in a new environment.

  18. scordova
    From what I understand of your excellent post on the Gambler’s ruin, the biotic analogy of the relative size of the bet would be the ratio of the size of the individual mutated entity to the total population.
    Thus “evolution,” for the small chance it has, might work in a small way in bacteria, where there are very large populations and the “bet” is small.

    However, as we get to larger organisms, the relative population drops precipitously. Behe (Edge of Evolution 2007, p 153) states:

    Workers at the University of Georgia have estimated that 10^30 single-celled organisms are produced every year; over the billion-year-plus history of the earth, the total number of cells that have existed may be close to 10^40.

    By contrast, the elephant population was about 3 million in 1960 (and dropped since then by poaching to a few hundred thousand.) Local herds are that much smaller – and an elephant mutation is a correspondingly larger “bet” than a single cell mutation.

  19. Bob O’H:

    No, genetic drift is the tendency for allele frequencies to change randomly, because of finite sampling. The effect is for populations to become less homogeneous over time. Different “minority variants” become fixed in different populations.

    You’re adding a second variable — multiple isolated populations — that confuses the issue.

    Individual populations become more homogenous over time due to genetic drift, because minority variants are weeded out.

    If you have multiple isolated populations, each of the isolated populations will become more homogenous over time — although they will likely become homogenous with respect to different sets of traits from other, isolated populations.

    All this is beside the point, however. The point is that new mutations tend to be weeded out by genetic drift, and in order to survive, a mutation would need to be sufficiently advantageous to overcome this “stacking of the deck” by genetic drift.

  20. bornagain77,

    “As well, what is crushing to the “beneficial mutation scenario” is “slightly deleterious mutations” are far below the power of natural selection to remove from the genome. Thus if a hypothetical “truly” beneficial mutation, it would be of no benefit from a progressive Darwinian scenario since this multitude of slightly deleterious mutations will be far below the power of natural selection to remove from the Genome (Sanford 2005)”

    If this is true, then these deleterious mutations should show up in the genomes of various related species. For example, there are speculations that there are as many as 300,000 species of beetles. All these deleterious mutations should be in evidence in their genomes.

    If they are not found in these genomes or genomes of other related species or only represent a small insignificant part of the genomes then Dr. Sanford’s ideas may be suspect. Also by studying the genomes of various related species one will eventually be able to determine just what makes each different since the vast majority of the genomes will probably be very similar.

    This research is getting to be within the reach of modern biology as over 4500 genomes have been mapped and more are coming on line each day. There is speculation that the cost to map a genome may get as low as a thousand dollars in the near future and the computer programs to easily analyze them may be forthcoming too. So the proof will be in the pudding. Meanwhile the beetles seem to be doing just fine.

  21. Jerry:

    Within a population of a species, genetic drift will cause the population to become more homogeneous as certain variants of an allele are eliminated but it will be non homogeneous with other sister populations that faced different situations.

    Bingo.

  22. Further to #10, regarding beneficial vs non-beneficial mutations.

    Conceptually, the slope in each variational dimension can be positive, neutral or negative, as the configuration varies from a designed or evolved configuration.

    From a design perspective, a design principle is:

    Optimize the design

    i.e., a designed system would be expected to exhibit some degree of optimization. (Apparent lack of full optimization would likely be caused by failure to recognize the full design and optimization involved.)

    Consequently, variations (mutations) away from that design would result in poorer function than the design, so nearby mutations would be harmful, i.e. have negative fitness relative to the designed system.

    Similarly in neo-Darwinian evolution, biotic systems that are “more fit” locally, would show a reduced fitness for immediate variations away from that configuration.

    In both cases, the local “fitness” has a local “hill” or upwardly convex surface. Deviations from the local configuration have a lower or negative fitness, resulting in a locally negative slope away from the design or evolved configuration.

    Consequently, in the gambler’s scenario described, the gambler has a disadvantage relative to the house, not an advantage.

    Applying this to the model design hypothesis, local variation from a designed system will experience a negative slope and local reduction in function. Similarly with the “fitness” of an “evolved” structure.

    The challenge for evolution is how to “jump” from one configurational mountain to the next configurational mountain, when the variation required to move away from the local mountain results in substantial variational regions of negative slope in function, or a reduced fitness.

    When looking at the map of genomic space, the designed functions are more like tiny mesas of function amid vast deserts of intermediate non-functional space.

    Darwinists would posit that variations in environment might change the relative fitness space to give a locally positive slope towards another evolved species with locally beneficial fitness. However the mathematical probabilities of such movement are daunting to say the least.

    If we consider bridging the very strong species barriers, combined with the local negative slopes away from a design or local fitness, the odds become astronomically small.

    So back to the gambler’s ruin: it would help to show the next steps of modeling hillocks and cusps in local design or fitness space, which must be overcome to bridge from one design to the next, or one evolved configuration to the next.

    Combine this locally negative slope with Behe’s “tentative molecular edge of evolution” of 10^30 or so trials, (Edge of Evolution 2007, Fig. 7.4 on p 144.)

    Neo-Darwinists nominally have to show a rising “ridge” of beneficial mutations to vary the system from one evolved configuration to the next evolved configuration for each achievable variation parameter (e.g., for each codon in a gene, and for each gene of a system). The actual code space of distantly spaced mesas rapidly negates such “just-so” speculation.

    Consequently, the probability of systems evolving from one species to another species rapidly goes to zero (towards 1/infinity).

    Conversely, the probability of preserving an optimized design tends towards one (i.e., towards 1-1/infinity).
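
    The “mesa” picture above can be illustrated with a toy greedy search (purely illustrative; the fitness map and function name are mine): a climber that never accepts a downhill step stalls on its local peak and never crosses the valley to a higher peak.

```python
import random

def hill_climb(fitness, start, steps, rng):
    """Greedy local search on the integers 0..10: move to a random
    neighbor only if its fitness is at least as high as the current
    point's, so downhill steps are never accepted."""
    x = start
    for _ in range(steps):
        candidate = rng.choice([max(0, x - 1), min(10, x + 1)])
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

# Two "mesas" separated by a valley of zero fitness: a lower peak at
# x=2 (height 3) and a higher peak at x=8 (height 5).
landscape = {0: 0, 1: 1, 2: 3, 3: 1, 4: 0, 5: 0, 6: 0, 7: 2, 8: 5, 9: 2, 10: 0}
end = hill_climb(landscape.get, start=2, steps=1000, rng=random.Random(1))
# end == 2: the climber is stuck on the lower mesa, because every path
# toward x=8 first goes downhill.
```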

  23. You’re adding a second variable — multiple isolated populations — that confuses the issue.

    In your original post you referred to populations in the plural, which I interpreted, incorrectly, as meaning several populations.

    All this is beside the point, however. The point is that new mutations tend to be weeded out by genetic drift, and in order to survive, a mutation would need to be sufficiently advantageous to overcome this “stacking of the deck” by genetic drift.

    Unless the relative fitness is large, this is the wrong way round. A new mutation is always going to suffer from drift, regardless of the population size. In a small population, however, it does not have to drift as far to get to fixation. So, the probability of fixing a new mutant increases as population size decreases. The smaller the population size, the larger the effect of drift, so the higher the probability that a new mutant becomes fixed.

    For a neutral mutation the probability of fixation in a diploid population is 1/(2Ne), where Ne is the effective population size. For an advantageous mutation, the probability increases, but not significantly unless the relative fitness is quite large.

    The full analysis will depend on Ne and s, and I’m not going to check it right now!
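
    The figures in this comment can be checked against Kimura’s standard diffusion approximation (a sketch; the function name and the simplifying assumption Ne = N are mine): for a single new mutant, P ≈ (1 − e^(−2s)) / (1 − e^(−4Ns)), which reduces to 1/(2N) when s = 0 and to roughly 2s for an advantageous mutation in a large population.

```python
import math

def fixation_probability(N, s):
    """Kimura's diffusion approximation for the probability that a
    single new mutant (initial frequency 1/(2N)) eventually fixes in
    a diploid population of size N (taking Ne = N for simplicity)."""
    if s == 0:
        return 1.0 / (2 * N)  # neutral case: 1/(2N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

# Even a 1% selective advantage fixes with probability of only about 2%,
# i.e. the mutant is lost roughly 98% of the time:
p = fixation_probability(1_000_000, 0.01)
```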

  24. Jerry, you stated:

    If this is true, then these deleterious mutations should show up in the genomes of various related species. For example, there are speculations that there are as many as 300,000 species of beetles. All these deleterious mutations should be in evidence in their genomes.

    If they are not found in these genomes or genomes of other related species or only represent a small insignificant part of the genomes then Dr. Sanford’s ideas may be suspect.

    In the few studies I have been able to look at, this “Genetic Entropy” principle holds up, i.e. loss of information (genetic diversity) is always found for younger “species”.
    In fact, among the human races we find that the younger races (Chinese, Europeans, American Indians, etc.) are losing genetic information for skin color when compared to the original race of humans that is thought to have migrated out of east Africa some 50,000 years ago.

    “We found an enormous amount of diversity within and between the African populations, and we found much less diversity in non-African populations,” Tishkoff told attendees today (Jan. 22) at the annual meeting of the American Association for the Advancement of Science in Anaheim. “Only a small subset of the diversity in Africa is found in Europe and the Middle East, and an even narrower set is found in American Indians.” Tishkoff; Andrew Clark, Penn State; Kenneth Kidd, Yale University; Giovanni Destro-Bisol, University “La Sapienza,” Rome, and Himla Soodyall and Trefor Jenkins, WITS University, South Africa, looked at three locations on DNA samples from 13 to 18 populations in Africa and 30 to 45 populations in the remainder of the world.

    This fact is totally contrary to what we would expect to find if the variation found in the sub-species were truly wrought by random mutations in the DNA generating novel information for variability! And this result is to be totally expected if the parent species were indeed created with a certain amount of flexibility for adaptation to differing environments already programmed in its genetic code! Yet, naturalists conveniently ignore the hard conclusive fact that the variation in the sub-species or pure breed is severely limited when it is compared to the much larger variability that is found in the parent species.

    as well as this:

    African cichlid fish: a model system in adaptive radiation research

    http://www.pubmedcentral.nih.g.....id=1635482

    of special note:

    Interestingly, ecological opportunity (the availability of an unoccupied adaptive zone), though explaining rates of diversification in radiating lineages, is alone not sufficient to predict whether a radiation occurs. The available data suggest that the propensity to undergo adaptive radiation in lakes evolved sequentially along one branch in the phylogenetic tree of African cichlids, but is completely absent in other lineages. Instead of attributing the propensity for intralacustrine speciation to morphological or behavioural innovations, it is tempting to speculate that the propensity is explained by genomic properties that reflect a history of repeated episodes of lacustrine radiation: the propensity to radiate was significantly higher in lineages whose precursors emerged from more ancient adaptive radiations than in other lineages.

    Thus as you can see, the evolutionists are mystified that the radiations are not happening for “sub-species” of cichlids but are always radiating from the “more ancient” lineage. This fits in perfectly with Genetic Entropy.

    This principle also holds for the genetic studies of wolves/dogs and sheep I have looked at, i.e. sub-speciation always comes at a loss of genetic diversity from the more ancient lineage (loss of genetic information).

    As well, this following study is very interesting in that it shows genetic entropy being obeyed in trilobites over their 270 million year run in the fossil record:

    In fact, I think the principle of Genetic Entropy allows us to trace the CSI to its point of implementation with a large amount of confidence.

    A Cambrian Peak in Morphological Variation Within Trilobite Species
    Mark Webster

    http://www.sciencemag.org/cgi/.....7/5837/499

    This following study is excellent proof for Genetic Entropy! It is a study of trilobites over their 270 million year history in the fossil record since their “abrupt” appearance at the beginning of the Cambrian explosion. Of special note: it studies within-species variation instead of just among-species variation. Within-species morphological variation, over deep time, for the entire spectrum of trilobites, gives us a peek at CSI “degeneration” within trilobites over their 270 million year history.

    It follows Genetic Entropy to a “T”!

    “Early and Middle Cambrian trilobite species, especially, exhibited greater morphological variations than their descendants. This high within-species variation provided more raw material upon which natural selection could operate, Webster says, potentially accounting for the high rates of evolution in Cambrian trilobites. Such findings may have implications for our understanding of the nature of evolutionary processes, he says.

    Why the early trilobites were so morphologically diverse is a whole different mystery.”

    Guess what, we know the answer to the mystery! CSI degeneration, aka Genetic Entropy! (Sanford 2005)

    And it gets even better: if you go into the actual studies themselves, you find that all trilobites that branch off the “parent” trilobite species quickly lose variability that is found in the parent stock of trilobites.

    I am extremely confident that this study, when it is fully fleshed out in all its detail, will fit the ID/Genetic Entropy model perfectly (as well as supporting environmentally driven adaptations).

    http://www.geotimes.org/july07.....72707.html

  25. The criticism has been offered that “natural selection is a tautology”. I argue it is too charitable to say natural selection is a tautology. Tautologies are at least self consistent. They are of the form:

    E = E

    A contradiction, a nonsensical statement, an oxymoron is of the form

    E = not-E

    A “square circle” is an inherently nonsensical statement.

    Darwinian evolution is not even logically self-consistent enough to be elevated to the status of tautology, much less a serious scientific theory. I think it more appropriate to say Darwinism is inherently inconsistent. Berlinski said as much in the movie Expelled: “The question is whether the theory is clearly stated enough that it has a chance of being correct.”

    What Kimura and others demonstrated is that even granting that natural selection works on occasion, the problem of “random selection” is quantifiably large enough to render “natural selection” almost irrelevant.

    Darwin said the majority mechanism is natural selection, but Darwin was wrong. Natural selection is not even the majority mechanism, it is not even 5%, it may not even be 1%. “Random selection” overpowers “natural selection”.

    As the saying goes, “it’s better to be lucky than good.”

    The next time someone says, “Darwinian Evolution is non-Random,” give them some lessons in the concepts of Gambler’s ruin.

    It is instructive to note how Dawkins skirted Kimura’s work in his book The Blind Watchmaker. Dawkins devotes 1.5 pages or so and uses masterful rhetorical spin to address the very body of literature that destroys his main thesis. Dawkins was helped, of course, by Kimura’s obligatory salute: Kimura said his theory applied to molecular evolution, not adaptational evolution. Logical? No. Politically and intellectually expedient? Absolutely.

    Dawkins spin:

    the great Japanese geneticist Motoo Kimura…

    As far as we [we selectionist ultra-Darwinists] are concerned, a neutral mutation might as well not exist, because neither we, nor natural selection, can see it. A neutral mutation isn’t a mutation at all, when we are thinking about legs and arms and wings and behavior! To use the recipe analogy again, the dish will taste the same even if some of the words of the recipe have ‘mutated’ to a different print font. Molecular geneticists [like Kimura and Michael Lynch] are like pernickety printers. They care about the actual form of the words in which recipes are written down. Natural selection doesn’t care, and nor should we when we are talking about the evolution of adaptation.

    page 303-304

    This was a brilliant “rebuttal” by Dawkins, not because what he said was true, but because he masterfully evaded the core issues and made it appear all is well in the church of Darwin.

    Even if we grant that Kimura might personally agree with Dawkins, Kimura’s math says something else.

    Dawkins says natural selection may not care about neutral mutations. That is not the real issue. The real issue is that natural selection is mostly impotent when “random selection” is factored in.

    The fact of “random selection” leads to the mathematical proof that neutral mutations must be the majority. But lost in the shuffle is also the fact that “random selection” makes natural selection a weak minority player in evolution.

    As an aside, Dawkins says the rival to Darwinian evolution is mutationism (mutation without much selection). That is Nei’s position.

    It is hard to comprehend now but, in the early years of this century when the phenomenon of mutation was first named, it was not regarded as a necessary part of Darwinian theory but an alternative theory of evolution!

    That’s partly because Darwin believed in the inheritance of acquired traits, not the modern theories of inheritance. Neo-Darwinism relies on mutation, whereas Darwin’s own Darwinism emphasized acquired traits (or some other flawed notion of inheritance)….

    All of this ended up being a smokescreen in that so much attention was focused on “random mutation” that the real problem, the problem of “random selection” was lost in the fray.

    But “random selection” is anathema to the idea of the guiding hand of “natural selection.” It is thus not surprising that focus was directed away from the fact that selection is mostly random, not “natural” in the Darwinian sense.

    Again we see equivocation and obfuscation and confusion in the term “natural” selection. “Natural selection” in the Darwinian sense is not what happens in nature. Darwin used equivocation and double-speak to justify his theory….he did not use sound logic or math.

  26. Unless the relative fitness is large, this is the wrong way round. A new mutation is always going to suffer from drift, regardless of the population size. In a small population, however, it does not have to drift as far to get to fixation. So, the probability of fixing a new mutant increases as population size decreases. The smaller the population size, the larger the effect of drift, so the higher the probability that a new mutant becomes fixed.

    True. On the other hand, the smaller the population, the fewer individuals are available to “host” the new mutation. That is to say, if you’ve got a population of 10,000,000, you’ve got a much better chance of getting some lucky mutations than if you’ve got a population of 10.

    Seems like something of a catch 22. To get the mutations you have to have a large population, but having a large population prevents the new mutation from setting, and ultimately wipes it out unless there’s a substantial selection advantage.

    The diluvial model, of course, accounts rather admirably for the population bottleneck necessary to set characteristics in isolated populations. If 7 pairs of primal “cats” of substantial heterozygosity stepped off the boat, genetic drift would “set” them into a number of distinct species within a matter of a few generations.

  27. Scordova:

    Thank you for the very interesting article, which adds more fuel to the arguments against darwinian evolution, and clarifies many important aspects.

    I would try to make a brief summary of some important points, as we often discuss them separately, and can’t see them in their logical relations:

    1) Random variation (RV) is the only available engine of variation in theories which exclude design. All the recognized causal mechanisms of variation (single point mutations, deletions, inversions, duplications, genetic drift, and so on) are in essence random.

    2) The power of RV to create new useful complex information (CSI) is strictly dependent on the ratio of functional results to the whole search space. The transition from a preexisting condition to a new condition which exhibits new CSI (and which, therefore, could in theory be selected) is, as far as we can judge, a completely negligible possibility, due to the huge search space of even the simplest proteins. The probabilities become even more prohibitive for any multiple level of information, involving for instance multiple proteins in specific relation.

    3) If the new level of functionality is irreducibly complex (IC), then it cannot be deconstructed in simpler functional units (with the same function), and should be achieved in its entirety before being selected. The alternative of “cooption” poses even more formidable improbabilities, having to realize a critical concurrence of independent functions in the huge space of all the possibilities.

    4) Anyway, most mutations seem to be neutral, and therefore cannot be selected. Neutral mutations can be “fixed” only by genetic drift, which, being a totally random process, adds nothing to the considerations of points 1 and 2.

    5) Even if, overcoming the impossibilities of the first 4 points, some beneficial mutation can emerge in an individual, its probabilities of being selected are extremely small, as is well shown in the above article. It could be useful to remember that, when we speak of a beneficial mutation giving a 1% advantage of reproduction, we are assuming a lot. Most single mutations, even if potentially beneficial, would not really be able to generate a 1% reproductive advantage. A reproductive advantage, even a small one, is really a big thing, and usually would require a lot of new, coordinated CSI. The only exceptions (and, indeed, the only examples of natural selection really known) are scenarios of extreme selective pressure (like antibiotics) combined with the possibility that simple, essentially destructive mutations can protect from the pressure. In that case, and only in that case, many of the difficulties described in the above points do not apply.

    6) All those mechanisms, however, even if improbable or quite impossible, anyway require lots of reproducing beings and short inter-reproductive times. To try to apply them to complex, and rare, and slow animals, like mammals, is not only a fairy tale, but a really bad one.

  28. Regarding the frequently asked question of how Sanford’s genetic entropy concept may be compatible with the actual survival of biological beings, we could perhaps consider that intelligently designed mechanisms to preserve DNA information are universally active in the biological world. Those mechanisms are a fundamental part of the real scenario, because the real scenario is one of constant interaction (and fight) between design and purpose on one side, and chance and entropy and meaninglessness on the other.

  29. DLH,

    Regarding the question of gambler’s ruin as it relates to casino games and biology…

    Let’s say you spent 2 years training to join the MIT 21 team. You have the skill to watch a stream of cards come out of a deck. You know that if certain cards are dealt, that gives you information about what cards remain in the deck. You fine-tune your skill such that you know to lay down a bet at the casino when you have an advantage of 1% over the casino (“the house”)…..

    But let’s say you wish to set a record by turning $100 into $1,000,000.

    You attempt this by putting $100 in your wallet and wandering over to a nice casino. You lay down a $100 bet and decide you will just keep laying down $100 bets until you either go broke or have $1,000,000.

    The ordinary intuition is that your first bet has a 50.5% chance of success; thus, if you win that first bet, you should be good to go, and your overall chance of success would seem to be on the order of 50.5%, or something in that ballpark. The hard reality is that your chance of success is under 2% (the exact figure depends on the betting details).

    So even though you have a slight advantage, it will usually not prevail.
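For anyone who wants to check this figure, the classic gambler’s-ruin formula can be evaluated in a few lines of Python. This is only a sketch under simplifying assumptions (flat even-money bets with a fixed 50.5% win probability per bet, which real blackjack payoffs don’t exactly match), and it lands in the same ballpark as the figure quoted above:

```python
def ruin_success_prob(p: float, start: int, target: int) -> float:
    """Probability of reaching `target` betting units before going broke,
    starting from `start` units, with per-bet win probability p and
    even-money payoffs (the classic gambler's-ruin formula)."""
    q = 1.0 - p
    if p == q:
        return start / target  # fair game: chance is linear in the bankroll
    r = q / p
    return (1.0 - r ** start) / (1.0 - r ** target)

# $100 bankroll, flat $100 bets, $1,000,000 goal: 1 unit out of 10,000
print(f"{ruin_success_prob(0.505, 1, 10_000):.4f}")  # ~0.0198, i.e. about 2%
```

Note that the answer is essentially 1 minus q/p, because with a positive edge the (q/p)^10000 term is astronomically small.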

    A similar situation occurs when an individual with an advantageous trait is introduced into a population of 1,000,000 other individuals. That individual is unlikely to overtake the population and spread its trait to the entire population.
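The biological analogue can be sketched with Kimura’s textbook diffusion approximation for the fixation probability of a single new mutant (the formula is standard population genetics, not anything from the comment itself; s = 0.01 and N = 1,000,000 are illustrative values):

```python
import math

def fixation_prob(s: float, N: int) -> float:
    """Kimura's diffusion approximation for the probability that a single
    new mutant with selection coefficient s eventually fixes in a diploid
    population of effective size N (2N gene copies)."""
    if s == 0:
        return 1.0 / (2 * N)  # a neutral mutant fixes with probability 1/(2N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

# A mutant with a 1% advantage entering a population of 1,000,000:
print(f"{fixation_prob(0.01, 1_000_000):.4f}")  # ~0.0198, roughly 2s
```

So even a 1% advantage is lost about 98% of the time: the population-genetics counterpart of the gambler’s-ruin figure above.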

    Darwinists point to pesticide and antibiotic resistance as cases where this does happen, where a single individual overtakes a population. But this is the logical fallacy of hasty generalization, since selection rarely operates with such strength in the wild.

    Also, when selection of this sort happens, lots of other “beneficial mutations” are lost. Such examples actually destroy Darwin’s theory if one is willing to look at the problem of “selection interference” which antibiotic and pesticide resistance create. Who knows how many “slightly advantageous” traits were lost in the powerful selective sweeps that happen in antibiotic and pesticide resistance…

  30. True. On the other hand, the smaller the population, the fewer individuals are available to “host” the new mutation. That is to say, if you’ve got a population of 10,000,000, you’ve got a much better chance of getting some lucky mutations than if you’ve got a population of 10.

    Seems like something of a catch 22.

    Exactly!!!! Spetner was keen to realize this. His book highlights the catch-22.

  31. Scordova:

    Exactly!!!! Spetner was keen to realize this. His book highlights the catch-22.

    Fun! I’ll have to read the book.

  32. Scordova:

    Exactly!!!! Spetner was keen to realize this. His book highlights the catch-22.

    I was just thinking, and there’s a counterpoint to this catch 22 — the larger the population, the less intense the pressure from genetic drift. That might permit new mutations to survive long enough to get picked up by natural selection. Did Spetner address this in his book to your knowledge?

  33. bornagain77:

    In your last post, you stated: “The human genome, according to Bill Gates the founder of Microsoft, far, far surpasses in complexity any computer program ever written by man.”

    I’ve seen this statement before in the literature. Do you, or does anyone else, know the source? I’m just curious.

  34.

    Here’s the source:

    The understanding of life is a great subject. Biological information is the most important information we can discover, because over the next several decades it will revolutionize medicine. Human DNA is like a computer program but far, far more advanced than any software ever created.

    The Road Ahead; Bill Gates pg. 188

  35.

    I believe Bill Gates wrote that in 1995, yet all the comprehensive studies conducted by ENCODE have backed his claim up immensely.

    As well;

    There are about….
    Three-billion letters of code on that six feet of DNA. The DNA contains the “complete parts list” of the trillions upon trillions of proteins that are in your body; plus, it contains the blueprint of how all these countless trillions of proteins go together; plus, it contains the self-assembly instructions that somehow tell all these countless proteins how to put themselves together in the proper way.

    If you were to read the code aloud, at a rate of three letters per second for twenty-four hours per day (about one-hundred-million letters a year), it would take you over thirty years to read it. The capacity of a DNA molecule to store information is so efficient that all the information needed to specify an organism as complex as man weighs less than a few thousand-millionths of a gram. The information needed to specify the design of all species of organisms that have ever existed (a number estimated to be one billion) could easily fit into a teaspoon, with plenty of room left over for every book ever written on the face of the earth. For comparison’s sake, if mere man were to write out the proper locations of all those proteins in just one human body, in the limited mathematical language he now uses, it would take a bundle of CD-ROM disks greater than the size of the moon, or a billion-trillion computer hard drives, and that’s just the proper locations for the protein molecules in one human body; that billion-trillion computer hard drives would not contain a single word of instruction telling those protein molecules how to self-assemble.

    The coding system used for living beings is optimal from an engineering standpoint. Of all possible mathematical combinations, the ideal number for storage and transcription has been calculated to be four letters. This is exactly what has been found in the DNA of every living thing on earth—a four-letter digital code. As Werner Gitt states: “The coding system used for living beings is optimal from an engineering standpoint.”

    The atoms in a human being are equivalent to an information mass of about a thousand billion billion billion bits. Even with today’s top technology, it would take about 30 billion years to transfer this mass of data from one point to another. That’s twice the age of the universe.

    There are about…..
    One-hundred trillion cells in the average person. Every human spent about half an hour as a single cell. Each cell has over a million unique structures and processes (a complexity comparable to a large city). Each cell consists of artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction, and a capacity not equalled in any of our most advanced machines, for one of our most advanced machines would have to be capable of self-replicating its entire structure within a matter of a few hours. Every one of those trillions of cells (except for the brain cells) is regenerated and replaced on average every seven years! Each cell has about ten-thousand times as many molecules as our Milky Way galaxy has stars.

    When considering this absolutely staggering level of complexity, it is apparent that life “accidentally” evolving by unintelligent means is impossible.
    Darwinian Evolution is sheer fantasy of the highest magnitude as far as hard science and evidence is concerned.

  36. JunkyardTornado

    DLH wrote:
    a designed system would be expected to exhibit some degree of optimization. (Apparent lack of full optimization would likely be caused by failure to recognize the full design and optimization involved.)

    Consequently, variations (mutations) away from that design would result in poorer function than the design. Consequently nearby mutations would be harmful or have a negative fitness relative to the designed system.

    Similarly in neo-Darwinian evolution, biotic systems that are “more fit” locally, would show a reduced fitness for immediate variations away from that configuration.

    In both cases, the local “fitness” has a local “hill” or upwardly convex surface. Deviations from the local configuration have a lower or negative fitness, resulting in a locally negative slope away from the design or evolved configuration.

    Applying this to the model design hypothesis, local variation from a designed system will experience a negative slope and local reduction in function. Similarly with the “fitness” of an “evolved” structure.

    The challenge for evolution is how to “jump” from one configurational mountain to the next configurational mountain, when the variation required to move away from the local mountain results in a substantial variational region of negative slope in function, or reduced fitness.

    So, the premise of everything you wrote, if I understand correctly, is that species tend to be highly optimized for a specific niche, so any variation away from that will be detrimental in the short run; the question is how this obstacle would be surmounted.

    But it seems there are multitudinous examples of species that are highly optimized in certain ways and yet regularly make forays into other environments where they are comically ill-equipped. In such areas there could be change without affecting at all the areas where they were already optimized.

    With marine mammals and birds this is very much in evidence. A walrus is highly optimized and graceful in the sea, but lumbers around like a big slug on the land, where it spends most of its time. Would not a marginal increase in land motility give male walruses competing with other males for breeding purposes a big advantage, without affecting at all their means of making a living in the sea? Same example with birds that dive beneath the surface of the water for fish (not an increase in land motility, but possibly something that increases capability in the sea). Then you have polar bears, with webbing between their toes and other adaptations for swimming, yet capable of interbreeding with grizzlies. Also, occasionally you can see a roach fly, but they’re not all that good at it. The more I think about your idea, the less sense it makes. Would an increase in lung capacity decrease the level of optimality of a dolphin? Apologies if I’ve missed your point.

  37. Bob O’H:

    (quoting Sal)

    Is there any research you could point me to to show the statistics of a mutation being beneficial?

    Yes – there was a review last year. It’s of the order of 1% or so, but the estimates vary, and it’s difficult to get really good estimates, because mutations are rare in themselves.

    Bob, I can only get the abstract of the article you cite. But looking at a similar article from a year earlier, it appears that it works only with computer simulations. And, of course, that means inputting values for variables that may or may not be realistic (i.e., there’s some guessing involved), and a mathematical formalism that has not, as of yet, been subjected to a rigorous comparison to genomic data for populations. So I think any conclusions they come up with should be taken with a grain of salt, given these limitations. (There are other limitations, which are cited in the earlier paper.)

  38. In natural selection, would it not be the case that, in the long run, mutations that confer any sort of advantage at all will predominate over mutations that confer no advantage?

    No. Kimura’s math showed most mutations that reach fixation are at best neutral.

    The way to visualize this is to consider a population of 10 individuals. Each individual has 4,000,000,000 nucleotides.

    Let’s say 5 of the 10 individuals pass on their genes. You’ve guaranteed that lots of novel good mutations (if they even emerged in the first place) have been wiped out and lots of bad ones fixed. That’s the problem of selection interference.

    In fact, if bad-to-neutral mutations substantially outnumber beneficial mutations (say by a ratio of 10,000 to 1), then the majority of what gets fixed is bad to neutral, not beneficial.
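The arithmetic of that 10-individual picture can be made explicit. This is only the comment’s own toy model; the counts (1 beneficial and 20,000 bad-to-neutral mutations per germline) are its illustrative figures, not measured rates:

```python
# Toy model: 10 individuals, each germline carrying 1 beneficial mutation
# and 20,000 bad-to-neutral ones; 5 of the 10 reproduce.
individuals, good_each, bad_each = 10, 1, 20_000
reproducers = individuals // 2

good_kept = reproducers * good_each                   # half the good mutations survive
good_lost = (individuals - reproducers) * good_each   # the other half are wiped out
bad_kept = reproducers * bad_each                     # bad-to-neutral ride along

frac_good = good_kept / (good_kept + bad_kept)
print(good_kept, good_lost, bad_kept, f"{frac_good:.6f}")
```

Whichever 5 reproduce, half the new good mutations are lost, while 100,000 bad-to-neutral mutations are transmitted alongside the 5 good ones.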

  39. JunkyardTornado

    “In natural selection would it not be the case that in the long run, mutations that confer any sort of advantage at all will predominate over mutations that confer no advantage”

    (Sal:) No. Kimura’s math showed most mutations that reach fixation are at best neutral.

    I actually did know about neutral mutations and understood that what I was saying contradicted it. But I’m trying to figure out how the house-edge concept would apply. Mutations do occur. Even those conferring a 1% advantage will of course be repeatedly wiped out (just like someone beating the house over the long run will still lose repeatedly). However, in the long run in nature as well, the mutations that are left would have to be the ones that had at least conferred some marginal advantage. (Just like a gambler with a 51% advantage and enough time will eventually have everything.) I don’t know what sort of victory this would be for Darwinists; I’m just applying the principle you describe to Natural Selection. If you said something to disprove this, I missed it.

    Also, as long as I have your attention, how would you respond to the following from my original post:
    (Not sure if I’m thinking clearly on this, just wondered what the response was.)

    (sal:)”A further constraint on selective advantage of a given trait is the problem of selection interference and dilution of selective advantage if numerous traits are involved. If one has a population of 1000 individuals and each has a unique, novel, selectively-advantaged trait that emerged via mutation, one can see this leads to an impasse –selection can’t possibly work in such a situation since all the individuals effectively cancel out each other’s selective advantage.”

    (JT:) So would this mean that if, in a population of 1000 individuals, each had a harmful, debilitating, selectively-disadvantaged mutation, the disadvantages would be cancelled out and these mutations would be rendered neutral? If so, these traits would maintain the same ratio relative to each other in the population, but the species would dwindle to extinction.

    As far as beneficial mutations rendered “neutral” go: if the entire population experiences a sharp peak (IOW, all traits are resulting in an increase in reproduction), then there are more chances for even “neutral” mutations to be preserved (right?)

  40. JT:

    Just like a gambler with a 51% advantage and enough time will eventually have everything

    That’s the key mistake. A gambler with 51% advantage and enough time will not necessarily win everything — because he could (and likely will) have lengthy losing streaks along the way — and if one of his losing streaks is so bad that he runs out of money entirely, he’s out of the game.
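This can be demonstrated with a quick Monte Carlo run. The bankroll and target below are hypothetical choices (5 units, aiming for 100, flat even-money bets at a 51% win rate), picked just to make the losing streaks visible:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def reaches_target(p: float, bankroll: int, target: int) -> bool:
    """Simulate flat 1-unit even-money bets until ruin or the target."""
    while 0 < bankroll < target:
        bankroll += 1 if random.random() < p else -1
    return bankroll >= target

trials = 2000
ruined = sum(not reaches_target(0.51, 5, 100) for _ in range(trials))
print(f"ruined in {ruined / trials:.0%} of trials despite the 51% edge")
```

The closed-form gambler’s-ruin answer for these particular numbers is about 82% ruined, so an edge per bet is no guarantee of winning everything.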

  41. JT:

    Just like a gambler with a 51% advantage and enough time will eventually have everything

    NO! NO! NO! You seem to be the only one here who fails to see that Natural Selection does not behave like a skilled gambler who applies risk management strategies.

    Most newly emerged selectively advantaged traits will not live to see the long run because “random selection” destroys most of their advantage early on.

  42. Jerry:

    If this is true, then these deleterious mutations should show up in the genomes of various related species. For example, there are speculations that there are as many as 300,000 species of beetles. All these deleterious mutations should be in evidence in their genomes.

    Correct. I have stated that Solexa technology could in principle give us the accuracy to actually see this in real time.

    If your question, then, is “why don’t we see it now,” you presume biotic reality has existed for the last several hundred million years. Sanford is boldly predicting that his Genetic Entropy thesis will be at variance with this view. In any case, it will be a testable hypothesis with the advent of Solexa technology.

    Perhaps it is premature to debate this in detail until the data is in hand…

  43. JunkyardTornado

    OK, please print the following:

    Each new mutation is a new hand. Some of those mutations (hands) have a built-in advantage of 1% or higher. All of these mutations are different and only similar in that they confer some sort of advantage. After thousands of rounds, who will be the winner and have all the resources? The one playing the hands with at least a slight advantage. Certainly individual hands with a slight advantage will lose repeatedly. BUT WHAT HAPPENS IN THE LONG RUN?????? FOR CRYING OUT LOUD I’M NOT THE ONE IN ERROR HERE. Please get a clue.

    Getting hostile, are we? Please understand that ID proponents do not hold Darwinian processes to be completely toothless. They do have an effect. The key point is that the effect is so limited. — UD Admin

    Sincerest and Kindest regards-
    Junk

  44. There are so many mathematical problems with neo-Darwinian theory that they would be impossible to enumerate. The bottom line is that the “theory” suggests that entropic processes can produce negentropic results.

    It’s the biggest get-something-for-nothing (or even worse, get-something-for-less-than-nothing) scam in the history of science, and it’s not that hard to figure out.

  45. Interesting post and connection Sal. I’m glad I could help inspire it in some small way.

  46. slightly off topic:

    Noted historian Victor Davis Hanson, on his website yesterday, posted a reprint of a review of Behe’s Edge of Evolution by Terry Scambray. The review originally appeared in the Fall 2007 Faith and Reason.

    In discussing the term “irreducible complexity,” Scambray gives one of the most succinct descriptions of the culture war in which we are engaged that I’ve ever read (emphasis added).

    …The 18th-century naturalist theologian William Paley argued that the existence of something complicated like a watch means that there must be a Watchmaker who purposely designed such a complicated artifact. However, generations of students have been told that David Hume, who lived before Paley, had delivered a lethal blow to such an argument. First, argued Hume, given enough time, nature could self assemble anything. Secondly, human artifacts have wheels, cogs and gears; nature was categorically different than a watch with its discrete parts. Nature apparently was a swirl of things that somehow functioned.

    Hume’s first rationale against design was undercut by the Big Bang and the findings of modern paleontology, both of which severely restrict the amount of time available for natural forces to get working.

    And then professor Behe came along and showed in his first book, Darwin’s Black Box, that a naturally occurring object like the bacterial flagellum with its propeller, shaft, o-rings and dozens, even hundreds of other precisely tailored parts, all of which function harmoniously, are not merely comparable to, but exactly like a humanly designed engine. Behe introduced the phrase “irreducible complexity” to describe such unimaginably sublime complexity, symmetry and harmony.

    Since then, “irreducible complexity” has gained wide usage in the way that the word “existential” became a staple among the cognoscenti since the end of WWII. But a profound difference separates the two expressions, a difference which illustrates the major divide in modernity. For “irreducible complexity” suggests a world of ordered complexity which can only be the product of a purposeful mind — or Mind; whereas “existential” suggests a contingent world, where meaning is an afterthought, congealed accidentally from colliding atoms. The contrast between these words reveals the basis for much of the contentiousness and conflict in the 20th century.

    Actually, the scientifically-based counterattack against last century’s materialism has only just begun.

  47. untgss @ 26 (and relevant to your post 32) –

    Seems like something of a catch 22. To get the mutations you have to have a large population, but having a large population prevents the new mutation from setting, and ultimately wipes it out unless there’s a substantial selection advantage.

    For neutral mutations, the rate of accumulation of mutations is independent of the population size, because of this.

    The effect of population size on fixation of an advantageous mutant is non-linear. At small population sizes, drift dominates through the whole process, so a smaller population leads to a greater chance of fixation. In large populations, fixation is independent of population size, because extinction is only likely when the number of copies of the mutant is small. Once its numbers are large enough, the Law of Large Numbers comes into play, and the dynamics are essentially deterministic (or at least independent of demographic stochasticity).
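The first point, that neutral accumulation is independent of population size, comes from a simple cancellation, sketched here (assuming a diploid population of effective size N and a per-copy neutral mutation rate mu; the numbers are illustrative):

```python
# Why the neutral substitution rate is independent of population size:
# each generation about 2*N*mu new neutral mutants arise, and each one
# fixes with probability 1/(2*N), so the N's cancel and the rate is mu.
def neutral_substitution_rate(N: int, mu: float) -> float:
    expected_new_mutants = 2 * N * mu   # new neutral mutations per generation
    p_fix_neutral = 1.0 / (2 * N)       # neutral fixation probability
    return expected_new_mutants * p_fix_neutral

for N in (10, 10_000, 10_000_000):
    print(N, neutral_substitution_rate(N, 1e-8))  # ~1e-8 every time
```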

    PaV @ 37 –

    Bob, I can only get the abstract of the article you cite. But looking at a similar article from a year earlier, it appears that it works only with computer simulations.

    No, they review real data.

    At some point I might get onto Sal’s actual post – I want to dig through some books first (rather than try to derive the proofs I need myself).

  48. Interesting . . .

    Just for fun, I wish to toss in a further ingredient, courtesy Loennig, circa 2004:

    . . . examples like the horseshoe crab [morphological stasis over an estimated 250 mn yrs] are by no means rare exceptions from the rule of gradually evolving life forms . . . In fact, we are literally surrounded by ‘living fossils’ in the present world of organisms when applying the term more inclusively as “an existing species whose similarity to ancient ancestral species indicates that very few morphological changes have occurred over a long period of geological time” [85] . . . . Now, since all these “old features”, morphologically as well as molecularly, are still with us, the basic genetical questions should be addressed in the face of all the dynamic features of ever reshuffling and rearranging, shifting genomes, (a) why are these characters stable at all and (b) how is it possible to derive stable features from any given plant or animal species by mutations in their genomes? . . . .

    A first hint for answering the questions . . . is perhaps also provided by Charles Darwin himself when he suggested the following sufficiency test for his theory [16]: “If it could be demonstrated that any complex organ existed, which could not possibly [Selective hyperskepticism alert . . .] have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” . . . Biochemist Michael J. Behe [5] has refined Darwin’s statement by introducing and defining his concept of “irreducibly complex systems”, specifying: “By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning” . . . [for example] (1) the cilium, (2) the bacterial flagellum with filament, hook and motor embedded in the membranes and cell wall and (3) the biochemistry of blood clotting in humans . . . .

    One point is clear: granted that there are indeed many systems and/or correlated subsystems in biology, which have to be classified as irreducibly complex and that such systems are essentially involved in the formation of morphological characters of organisms, this would explain both, the regular abrupt appearance of new forms in the fossil record as well as their constancy over enormous periods of time. For, if “several well-matched, interacting parts that contribute to the basic function” are necessary for biochemical and/or anatomical systems to exist as functioning systems at all (because “the removal of any one of the parts causes the system to effectively cease functioning”) such systems have to (1) originate in a non-gradual manner and (2) must remain constant as long as they are reproduced and exist. And this could mean no less than the enormous time periods mentioned for all the living fossils hinted at above. Moreover, an additional phenomenon would also be explained: (3) the equally abrupt disappearance of so many life forms in earth history . . . The reason why irreducibly complex systems would also behave in accord with point (3) is also nearly self-evident: if environmental conditions deteriorate so much for certain life forms (defined and specified by systems and/or subsystems of irreducible complexity), so that their very existence be in question, they could only adapt by integrating further correspondingly specified and useful parts into their overall organization, which prima facie could be an improbable process — or perish . . . .

    According to Behe and several other authors [5-7, 21-23, 53-60, 68, 86] the only adequate hypothesis so far known for the origin of irreducibly complex systems is intelligent design (ID) . . . in connection with Dembski’s criterion of specified complexity . . . . “For something to exhibit specified complexity therefore means that it matches a conditionally independent pattern (i.e., specification) of low specificational complexity, but where the event corresponding to that pattern has a probability less than the universal probability bound and therefore high probabilistic complexity” [23]. For instance, regarding the origin of the bacterial flagellum, Dembski calculated a probability of 10^-234[22].

    Have fun!

    I’ll be watching this thread

    GEM of TKI

    PS: Very good job, Sal!

  49. Once its numbers are large enough, the Law of Large Numbers comes into play, and the dynamics are essentially deterministic (or at least independent of demographic stochasticity).

    There is a nuance to this observation that is worth investigating.

    One way to look at it is the probability of extinction in an infinitely large population.

    If the initial infusion of mutants was not one mutant but, say, thousands of identical mutants, then the law of large numbers is very helpful in preventing extinction, and the non-linear considerations apply. We don’t worry about takeover of the population so much as extinction from the population. I seem to recall Michael Lynch’s book discussing the issue.

    If however the initial infusion is only one mutant, then one must look at the probability of extinction, which is extremely high.

    But since novel mutations involving more than a few nucleotides rarely appear in more than one individual at a time, the risk of ruin is high. Unless…

    we are dealing with something like malaria and chloroquine resistance. We could, of course, factor in multiple appearances of the same mutant over time. It appears chloroquine resistance in malaria appeared multiple times, at different times and geographical locations; thus “ruin” of chloroquine resistance was prevented by multiple entry (the mutation was relatively trivial and thus appeared abundantly in numerous places… not to mention, if I recall correctly, we’re dealing with haploids, not diploids, making the odds even more favorable)…
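The “multiple entry” effect can be put in rough numbers with Haldane’s classic approximation, under which a single new beneficial mutant survives drift with probability about 2s. The s and k values below are hypothetical, chosen only to illustrate the trend:

```python
def all_lost_prob(s: float, k: int) -> float:
    """Probability that k independent appearances of a mutant with
    advantage s are all lost to drift, using Haldane's ~2s survival
    approximation for each appearance."""
    return (1.0 - 2.0 * s) ** k

for k in (1, 10, 100):
    print(k, round(all_lost_prob(0.01, k), 3))
# one appearance: ~98% chance the mutation is lost; 100 appearances: ~13%
```

So a mutation that keeps reappearing (because it is simple and common) escapes ruin, while a rare, complex mutation that appears once almost always does not.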

    But Behe’s point in Edge of Evolution was that HIV and malaria were still slow considering the very large number of trials involved, and the mutations involved were relatively trivial; the Edge of Evolution showed that natural selection has little foresight for irreducible complexity, and that Dawkins’ hope of natural selection overcoming combinatorial barriers is deeply suspect.

  50. The problem posed by Gambler’s Ruin and “Random Selection” is severe for natural selection, especially on the side of weeding out what is bad. Contrast this with Darwin’s misunderstanding of basic population genetics:

    “[The]preservation of favourable individual differences and variations, and the destruction of those which are injurious, I have called Natural Selection, or the Survival of the Fittest.” —

    “Natural Selection is daily and hourly scrutinising, throughout the world, the slightest variations; rejecting those that are bad, preserving and adding up all that are good”. — C.DARWIN sixth edition Origin of Species — Ch#4 Natural Selection


    NO! NO! NO! I have shown why the slightest good has little chance of surviving. Furthermore, if “Random Selection” overpowers Darwin’s natural selection and the ratio of bad to good mutations is fairly high, then the infusion into the population of what is bad will be substantially higher than that of what is good. I pointed out the problem of weeding out the bad in Nachman’s U-Paradox.

    However, as I said, the easy way to conceptualize this is to consider 10 individuals, each with 4,000,000,000 nucleotides in their genome. Each one gets 1 good mutation and 20,000 bad-to-neutral mutations in its germline.

    5 successfully pass on their genes; the other 5 are unsuccessful. It is clear that, whichever 5 you pick, we’ll be killing half of what few good mutations occurred and ensuring that lots and lots of what is bad is infused into the population. The trend is clear: an increasing proportion of what is bad. We call this Genetic Entropy.

    It is clear from this example, Darwin could not possibly be right.

    Darwin couldn’t do high-school algebra even after much effort and a tutor to spoon-feed it to him. He doesn’t deserve to be in the “genius corner” of Westminster Abbey. Perhaps we ought to transplant his coffin to the dunce corner.

    HT: ICON-RIDS weblog for the Darwin quotes.

  51. JunkyardTornado

    sal:

    This will be the third time I have posted the following: a point I made (really just a query) about a contention from your own article which no one else has commented on. Since I’m not an expert, I wouldn’t rule out the possibility of error in the following. Mind pointing out what it is?

    Also, as long as I have your attention, how would you respond to the following from my original post:
    (Not sure if I’m thinking clearly on this, just wondered what the response was.)

    (sal:)A further constraint on selective advantage of a given trait is the problem of selection interference and dilution of selective advantage if numerous traits are involved. If one has a population of 1000 individuals and each has a unique, novel, selectively-advantaged trait that emerged via mutation, one can see this leads to an impasse -selection can’t possibly work in such a situation since all the individuals effectively cancel out each other’s selective advantage.

    (JT:) So would this mean that if, in a population of 1000 individuals, each had a harmful, debilitating, selectively-disadvantaged mutation, the disadvantages would be cancelled out and these mutations would be rendered neutral? If so, these traits would maintain the same ratio relative to each other in the population, but the species would dwindle to extinction.
    As far as beneficial mutations rendered “neutral” go: if the entire population experiences a sharp peak (IOW, all traits are resulting in an increase in reproduction), then there are more chances for even “neutral” mutations to be preserved (right?)

  52. It will be the third time I have posted the following – a point I made (really just a query) about a contention from your own article which no one else has commented on.

    It appears that my example of 1000 individuals confused the issue for you. I provided a simpler example with:

    the easy way to conceptualize this is to consider 10 individuals, each with 4,000,000,000 nucleotides in their genome. Each one gets 1 good mutation and 20,000 bad-to-neutral mutations in its germline.

    5 successfully pass on their genes; the other 5 are unsuccessful. It is clear that, whichever 5 you pick, we’ll be killing half of what few good mutations occurred and ensuring that lots and lots of what is bad is infused into the population. The trend is clear: an increasing proportion of what is bad. We call this Genetic Entropy.

    So if this example is a clearer conception of Random Selection, Selection Interference, and Genetic Entropy, take that as my response to your question. You may argue I confused the issue with my example of 1000 individuals. Fine. I provided something a little simpler and, hopefully, clearer.

    Darwinism fails on mathematical grounds alone. I hope that is clear. Genetic Entropy, not Darwinism, is the more accurate description of evolution.

  53. groovamos wrote:

    Thermodynamics and heat flow are two different areas of physics and mechanical engineering, with separately derived sets of equations for problem solution. Heat flow problems were classically defined for solids early on, but are also defined for other states of matter. The Black-Scholes-Merton equation is derived from the laws of heat flow. Thermodynamics is concerned with change of state, kinetic energy, potential energy, and energy conversion.

    That is a nit-pick, and an erroneous one at that.

    See the discussion of Black-Scholes from the perspective of Brownian motion and Statistical Mechanics Here.

    That discussion includes thermodynamics and heat flow.

    Equation 10 is the famous Black-Scholes equation. Its solutions are widely adopted for financial analysis by traders, fund managers, economists, and so on. The equation can be further transformed into the standard form of the heat diffusion equation from Physics (Thermodynamics)

  54. Salvador,

    If natural selection or some other naturalistic process is not working, then where did all the variety of species in the world come from, and why do they seem to fit their ecology just fine?

    Don’t tell me each was created separately to fit its environment. No one will take you seriously, including nearly all the top ID writers.

    And just for those of you who read this and react reflexively, I strongly accept ID.

  55. Jerry wrote:

    If natural selection or some other naturalistic process is not working, then where did all the variety of species in the world come from, and why do they seem to fit their ecology just fine?

    Don’t tell me each was created separately to fit its environment. No one will take you seriously, including nearly all the top ID writers.

    Front Loaded Evolution (Mike Gene and others) or Prescribed Evolution Hypothesis (PEH) by Davison. Spetner argues for NREH (non-Random Evolutionary Hypothesis).

    The human immune system is an illustration of designed mutations.

    Not even creationists believe in the fixity of species and God creating every species for every adaptation. That was the strawman version of creationist theory which Darwin promoted.

    The idea of evolution shaping species was pioneered by creationists like E. Blyth. See: Was Blyth the true scientist and Darwin merely a plagiarist and charlatan?.

    Darwin said:

    I never happened to come across a single [naturalist] who seemed to doubt about the permanence of species …

    This is an outright falsehood, since Darwin knew the creationist Blyth, and Blyth argued for the “indefinite radiation” of species from ancestral forms.

    The current creationist-frontloading synthesis is expressed by biologist Chris Ashraft in Evolution: God’s Greatest Creation.

  56. JunkyardTornado

    the easy way to conceptualize this is to consider 10 individuals, each with 4,000,000,000 nucleotides in their genome. Each one gets 1 good mutation and 20,000 bad-to-neutral mutations in its germline.

    5 successfully pass on their good genes; the other 5 are unsuccessful. It is clear that whichever 5 you pick, we’ll be killing half of what few good mutations occurred and ensuring that lots and lots of what is bad is infused into the population. The trend is clear: an increasing proportion of what is bad. We call this Genetic Entropy.

    If, once a beneficial mutation appears, there are still all sorts of random hurdles it has to clear that have nothing to do with its fitness, then you might as well say it hasn’t even occurred until it clears all those hurdles. Just say it’s a nonentity, that it never happened at all, until it reaches a point where its benefit can be manifested. (Sort of a general response to the article.)

    Of those 20,000 “bad to neutral” mutations that accompany it (1 bad, 19,999 neutral?), it doesn’t seem like they’re all that bad if they can be passed around, propagated, and multiplied for any length of time.

    Just some informal remarks. You don’t have to respond to this.

  57. Salvador

    “Front Loaded Evolution (Mike Gene and others) or Prescribed Evolution Hypothesis (PEH) by Davison. Spetner argues for NREH (non-Random Evolutionary Hypothesis).”

    So these are the mechanisms for all the tens of millions of various species in all the ecological nooks and crannies all over the world.

    Is this what you subscribe to? No place for natural selection in all this variety.

  58. About this time last May (May 2007), Robert Marks contacted me and offered me the opportunity of tuition and salary to work at his newly formed Evolution Informatics Lab at Baylor. The value to me was between $30,000 and $40,000…

    I got the offer because of a very strong referral from Bill….

    The offer was for a Master of Science in Electrical and Computer Engineering.

    The issues of Darwinian evolution should, in the modern day, be scrutinized from the fields of math, engineering, and population genetics….

    One of the things that would have been a worthy research project is comparing the fidelity of Evolutionary Algorithms like Genetic Algorithms to Darwinian Evolution and Real Evolution.

    It appears to me that Genetic Algorithms succeed because they do not have to be confronted with the intense effects of “random selection” that we see in nature. I would argue that the elimination of realistic “random selection” from genetic algorithms invalidates them as a model of real biological evolution. Thus genetic algorithms like Avida, to the extent they do not model biological reality, ensure the appearance of a “free lunch”. And we know there is No Free Lunch (NFL).

    The issues raised in this thread would have been valuable issues to investigate had Baylor not shut down the lab. I’m confident, however, that Baylor probably helped ensure that interest in these esoteric but important topics will only grow. The martyrdom of the Evolution Informatics Lab will not be forgotten, and others will rise to carry on!

  59. Salvador,

    By the way Mike Gene was on ASA arguing for what seemed to be the Darwinian view of evolution.

  60. Contrast this with Darwin’s misunderstanding of basic population genetics:

    Sal, are you aware that we have gone beyond Darwin’s understanding? It doesn’t matter that Darwin misunderstood something: he wasn’t omniscient, and we have built on his ideas since then.

    Your point that most mutations go extinct is correct, and well known. You stopped at the point where you should have started: showing that the rate at which beneficial mutations become fixed is too slow for evolution to have occurred.

    So these are the mechanisms for all the tens of millions of various species in all the ecological nooks and crannies all over the world.

    Is this what you subscribe to? No place for natural selection in all this variety.

    Statistically speaking, perhaps 5% could be attributed to natural selection. That would be extremely generous, in my humble opinion. 1% would still be too generous.

    So NS has a place, not much of one.

    Dawkins pointed out that Mutationism was the view competing with Darwinism.

    The kind of mutationism that may have been front-loaded does not appear to be operating in large measure today.

    We see some hints of Adaptive Mutation.

    I keep hearing of mutations being “random with respect to fitness”. But how is fitness defined? Malthusian? Dobzhansky-Wright? Or how about the problems of defining fitness as pointed out by Lewontin?

    If we use the Malthusian notion, it is clear that considerations from Gambler’s Ruin effectively trash the viability of the Malthusian definition of fitness (except in computer programs that don’t model reality).

    A fruitful research program was indirectly suggested by Behe: trying to find the remnants of front-loaded evolution in the existing genome, and trying to replay the evolution of various life forms from a synthetic ancestral form. A crude approach would be to hybridize a creature from what we believe are ancestrally related species and see if they naturally unwind into the forms we see today. A Zorse or Hebra is an example of the idea…

    For example, hard as it is, could we (through breeding and engineering) create a synthetic ancestor that would de-repress into wolves, coyotes, foxes, dogs, jackals, etc.?

    The grand prize would probably be synthesizing the ancestor(s) involved in Marsupials and Placentals: a case of front-loaded, pre-programmed, designed evolution.

  62. Bob OH,

    Thank you by the way for the paper you referenced about the selective advantage being about 1%.

    Noremacam,

    Thank you for advertising this discussion.

    Sal

  63. Salvador,

    So where do the 99% of the tens of millions of species come from, if not natural selection? Any estimate, by percentages? Please don’t cite theories you do not believe in.

    Where did all the beetle species come from? Cichlids? Birds? Other fish? Other insects?

    You see I claim that 99.5% of the species come from naturalistic means. If you do not agree then where did they come from and how?

  64. From what I’m gathering, NS can’t actually create any diversity; it can only eliminate it. So while one can possibly point to NS as the reason we see a predominance of a certain species in specific geographical locations, it does nothing whatsoever to account for the origin of biodiversity.

    NS is a destructive force that reduces variety in populations. The mechanism for the diversity is found elsewhere (Front Loading, RV, ?)

    Here’s an image from the Natural Selection article on Wikipedia:

    Schematic representation of how antibiotic resistance is enhanced by natural selection. The top section represents a population of bacteria before exposure to an antibiotic. The middle section shows the population directly after exposure, the phase in which selection took place. The last section shows the distribution of resistance in a new generation of bacteria. The legend indicates the resistance levels of individuals.

    In the above, Natural Selection is a description of the final effect, where the surviving population resulted from a mutation that conferred antibiotic resistance. The net genetic diversity of the population is reduced.

    I think some would agree that NS isn’t itself a force, but our description of the result of environmental and other natural factors’ effects on preexisting biodiversity.

    What I’m gathering from this Gambler’s Ruin post is that, excepting more extreme environmental factors, we will generally observe very little Natural Selection in action. Varieties conferring only a slight survival advantage will be treated by natural factors very much the same as the rest of the population. Random Selection is quite possibly a much more significant effect. RS+NV, anyone?

    I’d appreciate correction if I’ve missed the point.

  65. Bob O’H:
    “It doesn’t matter that Darwin misunderstood something”

    The elitists of the day didn’t care much about the niceties either… they just wanted to push the theory because it served their evil purposes and gave them a sort of scientific justification for subjugating the weak and/or those of a lower natural order, you know, the whole ‘survival of the fittest’ and ‘struggle for existence’ mantra.

    It is no coincidence that Darwin the English scientist gave great impetus to Rothschild the English banker…

    (poor Abe Lincoln…if only he had obliged and not printed that fiat paper money to finance the civil war)

    …and it continues to this day even though “we have gone beyond Darwin’s understanding”.

    Yeah, “fundies say the darndest things.”

  66. Apollos,

    I agree with much of what you are saying. But what is random selection? What actually causes the different varieties? And what determines if the variety persists in the ecology it finds itself in?

  67. You see I claim that 99.5% of the species come from naturalistic means.

    Natural selection (as Darwin defined it) isn’t naturalistic; it’s not even a tautology; it’s an incoherent ideology.

    If you don’t like my answer of front loaded special creation, fine. But I surely don’t think natural selection (in the Darwinian sense) is the answer and it is double speak to call it “natural selection” because the Darwinian conception is anything but what we actually see in nature.

    The phrase “Natural Selection” is like calling someone who is homosexual and dying miserably from AIDS “gay”. That person is anything but “gay” in the traditional sense. Darwin was a rhetorician skilled in double speak.

    So in answer to your question, “so where did the 99% come from”, since you phrased the question to exclude any possibility of front-loaded creation, the best I can say apart from those possibilities is, “I don’t know.”

    I do know, however, that Darwinism is not the answer, and it does not even qualify as a naturalistic possibility, because it is logically and empirically refuted through purely naturalistic assumptions, as I have laid out in this thread.

    Mutation and random drift, combined with geographical isolation, seem to be good “naturalistic” answers for the origin of species from ancestral forms.

    I think however the ancestral forms were created, but that answer appears to be unacceptable to you. Fine. Perhaps, “I don’t know” would be an acceptable answer to you.

    And as far as the ancestral form being created: Darwin suggested that the progenitor of all species was created, in Origin of Species, Chapter 14:

    the first creature, the progenitor of innumerable extinct and living descendants, was created.

    Charles Darwin

    Since you asked me a question, to which I responded, “I don’t know”, I’ll ask you: do you think Darwin was right that the first life “was created”?

  68. Jerry:

    “You see I claim that 99.5% of the species come from naturalistic means. If you do not agree then where did they come from and how?”

    Just for curiosity:

    I understand your claim, you have made it repeatedly, and now you are even pretty quantitative about it. No problem, everyone is entitled to have his own ideas.

    But why are you so sure that species did not come by design, as all the other levels of biological information that you apparently believe have been designed?

    Have you any real argument for that strange conviction? Is it perhaps the huge number of final species which scares you? Do you think the designer would get tired of designing them? Or have you some other type of argument?

    For instance, we could just analyze what is known of two very similar (morphologically almost indistinguishable) species of worms, among the simplest in the world: C. elegans and C. briggsae. Their genomes have been completely sequenced. C. elegans is, indeed, probably the best studied multicellular organism in the world: we know individually each cell, each neuron, each neuronal connection. C. briggsae is being studied very extensively too, and numerous studies of comparative genomics are available for the two species.

    Have you, or anyone else, with all that abundance of knowledge, a clear theory of how that speciation took place by RV + NS? And if you cannot say how it happened in that very simple instance, how can you be so sure that RV + NS did the trick for all the incredibly numerous (as you correctly affirm) species of higher animals, almost all of them much more complex and more different one from the other?

  69. As a bit of an aside, let us recall the exchange I had with Dave Thomas at PandasThumb last year:
    Dave Thomas says Cordova’s Algorithm is Remarkable.

    Well, I thoroughly agree with Dave Thomas that my algorithm was remarkable! :-)

    But why did it work? Was it because of Natural Selection? If by “natural selection” one means the sort of selection intelligently designed to preclude random selection, then a qualified, “yes”. But to call it “natural selection” is double-speak, it is not natural in the sense of what we see in biology based on the considerations I laid out in this thread…..

    The algorithm succeeded because gambler’s ruin, genetic entropy, and the real behavior of mutations were not allowed into the algorithm. Thus the typical genetic algorithm is not an appropriate model of real evolution in biological reality….

    To uphold such examples as evidence in favor of Darwinism is very disingenuous…

  70. Salvador,

    “I think however the ancestral forms were created, but that answer appears to be unacceptable to you.”

    I have never said any such thing, so why attribute this to me.

    gpuccio,

    0.5% of 10,000,000 is what? It is 50,000.

    The designer I believe in is very intelligent and probably could have produced all the biological wonderment in this world by creating much less than this number and then providing a mechanism that would lead to the rest. You see I believe in a really intelligent re-do the basic plan.

  71. Jerry:

    “The designer I believe in is very intelligent and probably could have produce all the biological wonderment in this world by creating much less than this number and then provided a mechanism that would lead to the rest.”

    That’s just a philosophical and religious statement, not even an argument. I was wondering if you had any scientific argument at all.

    The last line above got cut in one place.

    It is supposed to read

    “You see I believe in a really intelligent designer that does not have to constantly re-do the basic plan.”

  73. gpuccio,

    A perfectly accepted theory is available to explain how nearly all these species came about, and it is called the modern evolutionary synthesis. That it fails on many examples does not mean it fails on all. As a theory it can explain most of the species but, like Newton’s theory, fails to explain everything.

  74. Pardon for my potential ignorance, but

    for JT’s question / argument to have any traction, in order for natural mutations to be analogous to the MIT gambling, wouldn’t the beneficial mutation have to continually recur? That is, the gamblers can continually make the same bet over and over again until they succeed. In nature, this would be analogous to the same beneficial mutation occurring again and again until it finally succeeds. But of course this doesn’t happen (except for trivial cases like malaria and quinine): mutations are rare, beneficial mutations are rarer, and repeated identical beneficial mutations are extremely rare.

    Second, nature never produces gross beneficial traits such as “walrus moves better on land”, especially in more complex animals. Most mutations have an effect that is very small (unless it is lethal), and an effect like increased land-based mobility requires a number of coordinated beneficial mutations.

    If I get this correctly (I would like to explain this to someone), one of the points of Scordova et al. is that mutation generation and selection is NOT like repeated card playing, and that even with repeated card playing the slight favourable probability only occurs because of (1) playing games with conditional probability (which natural mutation generation and selection is not), and (2) risk management in doling out one’s wagers (controlling the starting fund and the subsequent money bet on each card play, and controlling when big bets are made), which is also unlike nature, in which each mutation is all or nothing and it is unlikely that the same mutation will be repeated frequently enough to make any difference in the outcome.

    signed,
    I hope I got this right
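    The betting analogy above can be made concrete with the classical gambler’s-ruin formula (a sketch; the 1% edge, the one-unit starting stake, and the 1000-unit target are illustrative assumptions, not figures from the thread):

```python
def ruin_probability(p, start, target):
    """Classical gambler's ruin: probability of hitting 0 before reaching
    `target` units, starting from `start` units and winning each one-unit
    bet independently with probability p."""
    q = 1 - p
    if p == 0.5:
        return 1 - start / target   # fair-game limit of the general formula
    r = q / p
    return (r ** start - r ** target) / (1 - r ** target)

# A 1% edge (p = 0.505) with only a one-unit stake: ruin is still ~98% likely.
print(ruin_probability(0.505, start=1, target=1000))   # ~0.9802
print(ruin_probability(0.5, start=1, target=1000))     # 0.999 (no edge at all)
```

    Note the suggestive parallel: with a 1% per-round edge and a single unit at stake, the chance of escaping ruin is only about 2%, numerically similar to the roughly 2s fixation probability of a slightly beneficial new mutation, which likewise starts as a single “unit” in the population.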

  75. Jerry:

    “A perfectly accepted theory is available to explain how nearly all these species came about and it is called the modern evolutionary synthesis”

    Well, I understand you just have faith in the commonly accepted theory, but only partially. Just wanted to understand.

  76. Jerry,

    I borrowed the phrase Random Selection from Sal’s comment #25 above:

    What Kimura and others demonstrated is that even granting that natural selection works on occasion, the problem of “random selection” is quantifiably large enough to render “natural selection” almost irrelevant.

    Darwin said the majority mechanism is natural selection, but Darwin was wrong. Natural selection is not even the majority mechanism, it is not even 5%, it may not even be 1%. “Random selection” overpowers “natural selection”.

    Some examples would be the effects of predatory behaviors, disease, or other environmental factors such as extreme weather conditions. Any individual manifesting a minor selective advantage would be subject to natural forces of greater effect, such as being eaten by a predator. Random Selection helps to make achieving fixity of a minor selective advantage very difficult, as I’m understanding the argument.

    Both RS and NS can only reduce diversity, but they each have a different modus operandi. Where NS would tend to reduce a diverse population to one much more limited (such as in the case of antibiotic resistance), RS would tend to stamp out novel emergences indiscriminately, except in more extreme environmental conditions. The resistant novelties in a population of bacteria, for instance, would only manifest significantly in the presence of a specific antibiotic. They are otherwise invisible to NS and subject to the same random factors as their coevals.

    Neither RS nor NS can be credited with creating diversity, only reducing it. Natural Selection can, however, quite probably be credited as the reason we see concentrations of reduced diversity in certain geographical locations, for instance. The net effect is still reduction. The variation mechanism needs to come from somewhere else, or perhaps the genetic information for the expression of extreme diversities was already present in the genome.

  77. A perfectly accepted theory is available to explain how nearly all these species came about and it is called the modern evolutionary synthesis.

    I believe Kimura’s math shatters the modern synthesis. Kimura’s PhD advisor was Sewall Wright, one of the architects of Neo-Darwinism.

    For that matter, Haldane and Fisher, also architects of Neo-Darwinism, provided the math to shatter the modern synthesis. How can they be excused for putting forward a theory which their own math cast doubt on? I speculate that the principal reasons are:

    1. They did not understand the mechanisms of heredity in detail. Their principal work was developed even before the genetic code was elucidated. Watson and Crick’s discoveries happened around the time of the origination of the Modern Synthesis, and it was years before molecular evolution would become a serious area of research. Ernst Mayr lamented the advance of the molecular evolutionists versus the organismal evolutionists….

    2. They did not appreciate the complexity of biology that we see today.

    3. They did not fully appreciate the contradictions at the time.

    Mutation in the absence of selection seems to be a far more accurate model than modern synthesis.

    And as Dawkins observed, mutation was not part of Darwin’s original theory. Mutationism was actually seen as a competitor to Darwinism…

    Neo-Darwinism tries to recruit mutation as a source of variation for natural selection. Let us grant mutations happen, but I still don’t think selection can play a majority role.

    I have highlighted the work of Fisher (a neo-Darwinist) which strongly casts doubt on the role of selection on organisms. I don’t think the modern synthesis can possibly be correct on the grounds of population genetics alone.

    I believe the Altenberg 16 are meeting this summer because modern synthesis needs to be trashed.

    There will be advocates of self-organization at that meeting. I predict that might be the “new” modern synthesis.

  78. Salvador,

    “I believe the Altenberg 16 are meeting this summer because modern synthesis needs to be trashed.”

    I believe if you scratch each one of the attendees at Altenberg, you will find underneath someone who believes in selection. That was certainly true of Jablonka and Lamb, who must mention it 500 times in their book, and they are among those pushing for a new synthesis. I sincerely doubt that any revision of the synthesis will trash selection. It is only looking at other naturalistic ways by which changes occur to a population over time, ways that supplement and explain selection. Jablonka and Lamb’s book covers a lot of them, and none are a threat to ID.

    If someone completely rejects selection, then they had better have something unusual up their sleeve, because there is nothing out there at the moment that makes sense, especially self-organization. Do you really believe that self-organization has anything to do with species origin? Remember, the modern synthesis is not interested in OOL.

    That was certainly true of Jablonka and Lamb, who must mention it 500 times in their book, and they are among those pushing for a new synthesis.

    I appreciate your mention of Jablonka and Lamb, as they are correct that the gene-centric view of evolution is slowly declining….

    But let us suppose they are right: that evolution must take care of information outside of genes as well, the epigenetic, plus two more dimensions. Let us even suppose that, in addition to the evolution of genetic regions, we have evolution of regulatory regions, and that most “junk DNA” is functional. We must also consider the evolution of proteomes, not just genomes.

    This means that my illustration of 4,000,000,000 nucleotides might understate the staggering amount of information which natural selection must actually manage. This makes the problem of “Random Selection” far worse than we might have supposed…

    If I had to pick a “naturalistic theory” that really captures the imagination, it would have to be Koonin’s Many Worlds. It solves the problems of OOL and the modern synthesis in one fell swoop….

  80. Oh my goodness.

    Apparently I caused a major stink. One of the world’s most renowned geneticists took time to offer his critique:

    Joe Felsenstein at PT.

  81. Oh my goodness.

    Apparently I caused a major stink. One of the world’s most renowned geneticists took time to offer his critique:

    His critique is nonsense. He compares the 2% chance of fixation of an s = .01 trait to the .00005% chance for a neutral trait, and screams, “See! Natural selection works! It’s much higher than for the neutral trait!”

    But the elephant in the room is that there’s only a 2% chance of fixation, and a 98% chance of elimination.

    Genetic drift is in fact the enemy of new mutations, even if advantageous, and it kills them off 98% of the time. And we’re supposed to be impressed by the “power of natural selection.”
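    The two percentages being argued over here can be checked directly against Kimura’s 1962 diffusion result (a sketch assuming a diploid population with Ne = N and a single new mutant at initial frequency 1/(2N)):

```python
import math

def fixation_prob(s, N):
    """Kimura's (1962) fixation probability for a single new mutant with
    selective advantage s in a diploid population of size N (initial
    frequency 1/(2N), Ne = N assumed); the s = 0 case is the neutral
    result 1/(2N)."""
    if s == 0:
        return 1 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 1_000_000
print(fixation_prob(0.01, N))   # ~0.0198: about 2% fix, ~98% are lost
print(fixation_prob(0.0, N))    # 5e-07: the 0.00005% neutral figure
```

    For large N and small positive s the denominator is essentially 1, so the formula reduces to the familiar approximation of roughly 2s, which is where the 2% figure for s = 0.01 comes from.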

  82. The problem becomes even more severe in smaller populations. He assumes a population of 1M. What of smaller populations?

    And what of the problem of sexual reproduction? In sexual reproduction, only half of each parent’s genetic variation is passed on to each offspring. That’s a 50% loss in the case of an only child. The problem is mitigated with more children, but not solved. Sexual reproduction increases recombination, but substantially decreases the odds that mutations will be passed on …

  83. Joe Felsenstein writes:

    This would be a shocking disproof of decades of work in population genetics—if it accurately reflected the ultimate fate of those mutants. Fortunately, we can turn to an equation seven pages later in Kimura and Ohta’s book, equation (10), which is Kimura’s famous 1962 formula for fixation probabilities.

    I happen to have Kimura’s book in hand.

    Let us see what Kimura and Ohta have to say after they state equations 10, 11, and 12:

    Formula (12) shows that for slightly advantageous new mutants only a tiny minority are lucky to spread into the entire species, while the remaining majority are lost by chance even if they have a definite selective advantage…for every mutant gene having selective advantage of 0.5% that becomes fixed in the population, 99 equally advantageous mutants have been lost, without ever being used in evolution

    This flies in the face of Darwin’s claim:

    Natural Selection is daily and hourly scrutinising, throughout the world, the slightest variations; rejecting those that are bad, preserving and adding up all that are good.

    C.DARWIN sixth edition Origin of Species — Ch#4 Natural Selection

    Darwin uses the word “all” not “some”. The reality is that it is not even “most”, much less “all”.

    I have refuted Darwin’s claim, as stated in Origin’s, decisively.

    Kimura and Ohta point out:

    The fact that the majority of mutations, including those having a slight advantage, are lost by chance is important in considering the problems of evolution by mutation, since the overwhelming majority of advantageous mutations are likely to have only a slightly advantageous effect. Note that a majority of mutations with large effect are likely to be deleterious. Fisher (1930b) emphasized that the larger the effect of the mutant, the less is its chance of being beneficial.

    In our opinion, this fact has not been fully acknowledged in many discussions of evolution. It is often tacitly assumed that every advantageous mutation that appears in the population is inevitably incorporated.

    page 11

    Darwin was responsible in large part for the false assumption that “every advantageous mutation that appears in the population is inevitably incorporated”. But Kimura and Ohta demonstrate this claim is false by orders of magnitude.
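    Kimura and Ohta’s “99 lost for every 1 fixed” figure for s = 0.005 also drops out of a toy branching-process model (a sketch under an assumed offspring distribution, not Kimura and Ohta’s derivation): suppose each copy of a new mutant independently leaves either 2 copies, with probability (1+s)/2, or 0 copies, so the mean is 1+s. The extinction probability q then solves q = (1-s)/2 + (1+s)q^2/2, whose relevant root is q = (1-s)/(1+s):

```python
def survival_prob(s):
    """Survival probability of a lone new mutant in a toy branching process
    whose copies each leave 2 offspring with prob (1+s)/2, else 0.
    Extinction prob q = (1-s)/(1+s), so survival = 1 - q = 2s/(1+s) ~ 2s."""
    return 2 * s / (1 + s)

s = 0.005
p = survival_prob(s)
print(p)             # ~0.00995: roughly 1 in 100 such mutants persists
print((1 - p) / p)   # ~99.5: the lost-to-fixed ratio, matching "99 ... lost"
```

    The toy model’s 0.00995 agrees closely with the value of Kimura’s diffusion formula at s = 0.005, which is why the “approximately 2s” rule of thumb appears in both settings.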

  84. ungtss:

    “Sexual reproduction increases recombination, but substantially decreases the odds that mutations will be passed on …”

    I agree with you. Sexual reproduction, and the shuffling of alleles in it, are usually overemphasized as a means of generating new information. As far as we know, the shuffling of existing alleles only allows a single species to express various polymorphisms which are already in the genetic pool of the species itself, and is the basis for the expression of variety among individuals and, probably, of the minimal systematic variation between races in a single species. We really don’t know how the different polymorphisms arise: for instance, what is the origin of the different blood groups, or HLA groups, or colours of the eyes and of the hair? That could be a product of random variation (not necessarily of selection), but I don’t think we really know. But there is no evidence that allele shuffling through sexual reproduction and chromosome crossing over may in any way be an explanation for the appearance of really new functions, least of all for speciation, at any level (from phylum to simple species).

    Each new species, or more dramatically each new family, class or phylum, is really a new project, which requires new information, a lot of it: new genes, new coordination of genes, new functions, new meta-functions, and so on. In a sense, the general barrier to procreation between different species (even with its exceptions) could be considered a natural prevention of the dangerous shuffling of genes between different projects, and a way to preserve more easily the identity and stability of each design.

    The “explanations” of the official theory are completely flawed. Natural selection is a ghost, derived from a few extreme examples of selection of “lucky and probable errors” under heavy and unnatural pressure (as in antibiotic therapy). Those few, non-typical examples have been abnormally extended to generate a theory without evidence, a dogma without reason.

    It is rather obvious that most mutations or variations are negative or neutral; probably almost all of them. That’s why intelligent systems of defense against variation have been incorporated into the design of living beings. I don’t know how many neutral mutations can be fixed by genetic drift, and I don’t care. Anyway, genetic drift is another kind of random variable, and cannot add anything to the scenario. I have never understood the enthusiasm of some darwinists for that concept, as though it could be an answer to anything.

    Probably, the only natural role of natural selection is to eliminate the most extreme negative mutations. Even in that sense, it is not a very efficient principle, otherwise we would not have so many genetic diseases in our populations, most of which are certainly not “selected” for any subsidiary advantage (S hemoglobin being the exception, rather than the rule). In that sense, NS is really a tautology: those organisms which are too genetically damaged to survive, do not survive. It’s as simple as that. To assume that such a mechanism is the real “author” of biological information is pure folly.

    At the cell level, the problem is solved with even more elegance: the cell whose genome has been damaged is “quarantined”, and repair is attempted, while reproduction is suspended. If repair does not succeed, the cell spontaneously enters a very complex pathway, a series of coordinated events leading to a controlled death (apoptosis), in such a way that the death of the damaged cell is not a danger to the other cells in the organism.

    Does that sound like design?

  85. So, Sal, are you then saying that the incidence of mutations on a population is non-random, but shows some bias in favor of beneficial mutations which furthers the (intelligently designed) development of the species?

    And that the survival of individuals with beneficial mutations is also not due to natural selection, or random selection, but that it shows a bias in favor of those individuals which further the intelligent design of the species?

  86. I encourage the pro-ID readers to be gracious to Dr. Felsenstein. He has honored me and Uncommon Descent by even taking time to respond.

    He may disagree with me, but I am grateful he is engaging us in dialogue.

  87. I encourage the pro-ID readers to be gracious to Dr. Felsenstein. He has honored me and Uncommon Descent by even taking time to respond.

    Sal, you’re right, but let’s take into account that the main reason Dr. F. spent his time arguing against your (pretty evident) claims seems not at all related to some sort of kindness; more plausibly, it’s because you hit the nail on the head …

    He may disagree with me, but I am grateful he is engaging us in dialogue.

    Surely it’s better than the behavior of some well-known D. bulldogs .-)

  88. JunkyardTornado

    Is there a reason why the following post of mine is being rejected?

  89. bornagain77:

    Thanks for the citation from Bill Gates. Your post #35 was an interesting read. Thanks again.

  90. JunkyardTornado

    My point regarding the house edge which I don’t think was ever understood:

    First of all, you have to consider that the casino’s standard way of earning billions is just the house edge and nothing more. It’s nothing brilliant or surreptitious like the stratagems described in the article, employed by gamblers trying to “beat the house”. Since it already is the house, a casino just has to put a simple straightforward edge for itself into every game, a 0 and 00 on a roulette wheel for example, as well as built-in advantages in everything else (the line on sports betting, and so on). Then they just sit back and passively wait for the money to roll in (so to speak). I guess the point is, it’s not a function of intelligent design as such, which is what I think was implied more than once in the opening article. It’s just a built-in edge and enough money to back it up. (OK, that’s my understanding anyway.)

    But nature would be like a casino or gambler with UNLIMITED funds, and this point was never brought out. If a mutation is a bet (and that’s the analogy that Sal was using), then nature can make bets continuously until the end of the world and never run out of money.

    I understand perfectly well that for any single mutation the gambler’s dilemma applies, as even nature can’t keep making a bet on that specific mutation over and over again. But nature can keep making advantageous mutations at a certain rate over and over again and never run out of money, and I was trying to figure out if that possibly implied that ultimately only advantageous mutations would exist. I’m not sure it does now, but to reiterate, nature is like a gambler with unlimited funds that can keep making bets forever and never run out of money, and for some reason this obvious analogy was never noted.

  91. There is one big misconception going on here. Most of the time natural selection is not operating on beneficial mutations or any changes to the gene pool but will produce new species from within the current gene pool. Everybody assumes natural selection works before they denounce it since they constantly talk about how deleterious mutations get weeded out. Folks, that is natural selection at work.

    But that is only a small thing that is going on. Natural selection shuffles the current elements of the genome in the gametes during sexual reproduction and the offspring have an entirely different set of genes and other elements than their parents. Some of these combinations may be more advantageous than what others are born with based on the current environment and they will probably go on to produce more offspring accordingly.

    How hard is this to understand? No new novel genes or genetic elements, but a difference in the offspring based on the reshuffling. Just look at the people around you to see the differences in the gene pool, and these are only surface differences.

    If a challenging environment comes up then those with the best genetic basis will survive the most often. Just what it is will be hard to predetermine but while some of it may be luck, most of the time it will not. Thus certain combinations of genetic elements get passed on.

    This is a no brainer and arguing against it will get nowhere. When a sub population gets isolated it may develop a gene pool through natural selection that may not let its members mate with the original population and voila, we have a new species. The genomic differences could be small but we have a new species. And lo and behold, we have 300,000 species of beetles.

    So arguing against natural selection is like banging your head against a wall. Now looking at natural selection and trying to figure how quickly a new feature can penetrate a gene pool is fair game but it has nothing to do with how most species come about because most new species are not the result of mutations.

  92. JunkyardTornado

    90 – Well stated.

  93. My point regarding the house edge which I don’t think was ever understood:

    First of all, you have to consider that the casino’s standard way of earning billions is just the house edge and nothing more. It’s nothing brilliant or surreptitious like the stratagems described in the article, employed by gamblers trying to “beat the house”.

    JT,

    Do you understand the techniques used by the MIT Team? If not, you are demonstrating a serious misunderstanding of what I wrote and who Edward Thorp is….

    You would do well to read up on Dr. Thorp and his work on casino games. Do you understand the difference between games with and without conditional independence?

    If you are playing a single deck “21″ game, tell me what the house edge is if the dealer has dealt out the following cards to 1 player sitting at his table and himself.

    to player: 6,4,8
    to himself: 6,5,3,2,7

    Give me an estimate of the house edge on the next round.

    Answer: House edge is around -2.7% and player advantage is conversely around +2.7%.

    The reason the house edge is negative is that the properties of the cards remaining in the deck have been partially revealed. It is now rich in 10s and Aces: a condition which favors the player.

    Given that situation, the player would do well to raise his bet substantially.
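    A minimal sketch of how a card counter would quantify the situation above, using the common Hi-Lo system (an editorial illustration; the thread does not specify which counting system is meant). Cards 2–6 count +1, 7–9 count 0, and tens, faces and aces count −1, so a positive running count means the remaining deck is rich in tens and aces:

```python
def hi_lo_count(cards):
    """Hi-Lo running count for a list of card values (use 1 for an ace,
    10 for tens and face cards)."""
    count = 0
    for c in cards:
        if 2 <= c <= 6:
            count += 1          # low cards leaving the deck help the player
        elif c >= 10 or c == 1: # tens, faces and aces leaving hurt the player
            count -= 1
    return count

# The eight cards dealt in the example: 6,4,8 to the player, 6,5,3,2,7 to the dealer
dealt = [6, 4, 8, 6, 5, 3, 2, 7]
print(hi_lo_count(dealt))  # +6: six low cards seen, no high cards
```

    With six low cards gone and none of the tens or aces, the count is strongly positive, which is the quantitative reason the remaining single deck favors the player.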

    I’m afraid you don’t understand what you are talking about. Sorry…

    Salvador

    PS
    The player would do well statistically to raise his bet regularly in such cases, however, the casinos frown on such practices since it affects their bottom line. Such players can expect to be shown the door…

  94. re 91: I’m not convinced.

    First, the only genes being shuffled are for ones that are variations on existing traits: eye colour, hair colour, limb length, amount of fatty tissue, etc. We do not by reshuffling get new traits, such as additional functional limbs, antennae in addition to eyes, the ability to spit poison, the ability to fart explosive gas.

    So, for beetles, how does shuffling alleles lead to beetles that produce lethal poison, explosive gases, dependencies on radically different food stuffs, etc.? It’s not by shuffling.

    Second, you assume too large a jump when you assume that the shuffling produces a new trait that makes the carrier and expressor of that trait a new species. I’m not aware of any shuffling or mutation that, even after several shuffles or mutations, is significant enough to create a new species. The changes are so marginally small (else they can neither arise nor be passed on) that they are overwhelmed by other, random factors (e.g., the new bug gets stepped on by an apatosaurus).

    Third, and this is in response to 90 & 92, there has not been an unlimited supply of time or funds. We have a discrete maximum age for the universe and the earth. And we have a discrete and calculable number of cell reproduction events since the earth has existed (let’s ignore the fact that life has to appear first before it can evolve as life). Therefore, when looking at the current state of complexity of life we are dealing with a set number of hands of cards played since life began.

    Have I correctly understood the problems with the reasoning of JunkyardTornado and Jerry?

    regards,
    jct

  95. Sal, I’m surprised at your ignorance of the history of science:

    Watson-Crick’s discoveries happened around the time of the origination of the Modern Synthesis, …

    Fisher published his first big paper on the subject in 1918. Wright started a couple of years later, but his first main papers were published in the first half of the 1920s.

    Crick was born in 1916, and Watson in 1928. Watson must be the world’s greatest genius if he did his big work before he was even conceived!

  96. jct,

    What is classified as a species is very problematic. Wolves and dogs are considered different species but can breed, so are essentially one species. Most wild cats can interbreed, such as a tiger and a lion, so they are also one species but functionally classified as different species. American buffalo, or bison, can breed with cows, so they are one species even though each is identified as a different species. The examples go on and on.

    Now I know nothing about beetles and you bring up some interesting cases. But your exceptions do not negate that most beetle species may be like wolves and dogs or, if they cannot interbreed, are almost identical genetically. Some birds are almost identical genetically except that they have different song patterns.

    I am not trying to undermine your examples because, as I said, I know little about beetles. The lethal poisons and explosive gases seem unique, but dependence on radically different food stuffs could actually be a behavioral trait that is easily explained, or maybe not.

    Eventually we will learn the genetic basis for the traits you mention. That is where a lot of research is leading. We are also only at the tip of the iceberg on a lot of things. For example, each cell type has exactly the same DNA, so why the different cell types? It appears the difference is in which genes get expressed, and this study is in its infancy. Also some capabilities are unexpressed in the genome but are there. Why? Again, this is in its infancy.

    The point I am trying to make is that the reshuffling produces unique characteristics each time and some of them might lead to small changes that prevent interbreeding, and a new species originates. Then there are the morphologically different variants that are the same species, e.g. all the types of dogs we see, and such things as lions and tigers in nature.

    Now examples like the giraffe, bats and your beetles with self defense mechanisms seem to defy the ability of natural selection to produce them. I never said that natural selection was responsible for everything, but a very large percentage of species probably owe their existence to naturalistic causes, which include natural selection.

    The point I am making is that it is fruitless to argue against natural selection, but also recognize that even if one accepts it as real, it is very limited in terms of producing new capabilities. Someday we will be able to know how much of the genome is really different between things like a hummingbird and a penguin, and know how much had to change to get one versus the other and how much was outside the possibility of naturalistic changes.

    Thanks for the examples.

  97. From Wiki:

    This synthesis was produced over a period of about a decade (1936–1947)

    Watson-Crick 1953
    here.

    I actually was too generous to say the modern synthesis originated around the time of Watson-Crick. This strengthens my original claim that Haldane and Fisher did not have access to the modern understanding of the molecular basis of heredity at the time the modern synthesis was conceived.

    Thank you Bob OH for strengthening my point that neo-Darwinism originated in ignorance of the molecular basis of heredity….

  98. Bob O’H,

    I think the modern synthesis was first developed in the late 1930s or early 1940s.

  99. JunkyardTornado

    Sal:

    Unfortunately (or maybe fortunately for you) you picked a topic filled with gambling terms which can’t be used in responses (except by the elite at UD), thus preventing rebuttal. Thus I am unable to fully respond to your insult of my gambling knowledge.
    Here’s a link on “The House Edge”. It says that everything but blackjack and video poker has a fixed edge for the casino.

    Wow, I can’t even give the link because it has Casino in the title.

    Once again though, it seems you’ve ignored my main point (here it is again):

    JT: “If a mutation is a bet (and that’s the analogy that Sal was using), then nature can make bets continuously until the end of the world and never run out of money.

    I understand perfectly well that for any single mutation the gambler’s dilemma applies, as even nature can’t keep making a bet on that specific mutation over and over again. But nature can keep making advantageous mutations at a certain rate over and over again and never run out of money”

    And as I stated, I had been trying to ascertain the significance of that. It seems like you would have at least pointed it out in your original post (an ostensibly thorough overview of the betting analogy as it applied to natural selection).

  100. jct

    Depends on how much stuff there is to shuffle.

    Chromosomal reorganizations, by the way, result in drastic changes in the ability to produce fertile hybrids. Who knows what else can be done with radical changes in DNA architecture like that. Position effect in the chromosomes might very well be a saltational mechanism. Read the sidebar articles under John A. Davison.

    But nature can keep making advantageous mutations at a certain rate over and over again and never run out of money

    Not if the mutation is sufficiently rare. Behe pointed out that the multiple mutations required to evolve a two-binding-site feature would be exceptionally rare, perhaps requiring more organisms than the number of humans that have ever lived.

    The scenario will also fail if the species goes extinct because of genetic entropy or random events. So nature will run out of chances on that account as well….

    In any case, the major point was Darwin was wrong:

    Natural Selection is daily and hourly scrutinising, throughout the world, the slightest variations; rejecting those that are bad, preserving and adding up all that are good.

    I’m glad Joe Felsenstein weighed in on the discussion and introduced the Robertson-Hill paper. Sanford sketched out the problem of how many nucleotide positions Natural Selection can monitor simultaneously, and it isn’t much. Robertson-Hill seems marginally related to Sanford’s claims, but I don’t know.

    Sanford places the figure at 700 nucleotides that can be successfully monitored by natural selection, and that’s pretty small for something as large as the human genome with roughly 3,000,000,000 base pairs.

    The 700 nucleotides are assuming adequate resources are available. If adequate population resources are not available, then genetic entropy results.

  102. I fail to see why Darwin being wrong (in that not ALL good variations are preserved) was the “major point”. Darwin was wrong on many points. But that is only to be expected given the state of knowledge in the 19th century and the fact he couldn’t even have been aware of genes and the actual mechanisms by which variation is transmitted. That is, frankly, nitpicking.

    It seems to me the major point is that beneficial variations can be fixed and that Felsenstein’s post on PT indicates why. Claiming that the major point is that not all of them do is a distraction and suggests you are conceding the real major point to Felsenstein.

  103. It seems to me the major point is that beneficial variations can be fixed and that Felsenstein’s post on PT indicates why.

    But deleterious mutations can be fixed as well and at a higher rate than beneficial mutations. I partially demonstrated that Darwin’s conception of inevitable progress was suspect at best. A more comprehensive dismantling of Darwinism is in John Sanford’s book.

    My lengthy post and discussion here is only a partial attack on Darwin’s flawed theory.

    Dr. Felsenstein does not properly account for the effect of the fixation of deleterious mutations that have a low selection value.

    Random effects can cause selectively advantaged traits to go extinct, and they can also cause deleterious mutations to go to fixation. I covered only random effects causing selectively advantaged traits to go extinct…

    In any case, even conceding that my essay might be revised to make it less vulnerable to Dr. Felsenstein’s critique, Darwin was fundamentally wrong about the efficacy of “natural selection” to bring about positive change. Dennett’s algorithm, which supposedly argues the correctness of Darwinism, is an idealization that is not in line with biological reality.
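    The claim that deleterious mutations can drift to fixation can be illustrated with Kimura's standard diffusion approximation (an editorial sketch; neither side in the thread presents this formula): a new mutation with selection coefficient s in a population of effective size Ne fixes with probability roughly (1 − e^(−2s)) / (1 − e^(−4·Ne·s)).

```python
import math

def fixation_probability(s, Ne):
    """Kimura's diffusion approximation for a new mutation (initial
    frequency 1/(2*Ne)) with selection coefficient s."""
    if s == 0:
        return 1.0 / (2 * Ne)  # neutral limit
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * Ne * s))

neutral = fixation_probability(0, 1000)        # 1/(2*Ne) = 0.0005
slightly_bad = fixation_probability(-1e-4, 1000)
print(neutral, slightly_bad)
```

    When |s| is much smaller than 1/(4·Ne), the result is close to the neutral value 1/(2·Ne): mildly deleterious mutations fix almost as readily as neutral ones, which is the regime the thread is arguing about.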

  104. JunkyardTornado

    jct wrote: “Third, and this is in response to 90 & 92, there has not been an unlimited supply of time or funds. We have a discrete maximum age for the universe and the earth. And we have a discrete and calculable number of cell reproduction events since the earth has existed (let’s ignore the fact that life has to appear first before it can evolve as life). Therefore, when looking at the current state of complexity of life we are dealing with a set number of hands of cards played since life began.

    (BTW in 92 I was responding to Jerry’s post, not my own – an additional message was inserted in the thread making it appear I was commending my own post.)

    I do recognize the earth has a lifespan:

    Me: “But nature would be like a casino or gambler with UNLIMITED funds, and this point was never brought out. If a mutation is a bet (and that’s the analogy that Sal was using), then nature can make bets continuously until the end of the world and never run out of money.”

    Imagine a gambler with a 51% edge, unlimited funds, and at the very least several thousand years to make bets. That is the picture I was looking for.
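    The role of the bankroll in this picture can be made concrete with the classical gambler's-ruin formula (an editorial sketch, using the 51% edge imagined above): a gambler who bets one unit at a time with win probability p > 1/2 against an opponent with unlimited funds is ever ruined with probability (q/p)^K, where q = 1 − p and K is the starting bankroll in units.

```python
def ruin_probability(p, K):
    """Probability of eventual ruin for a unit-bet gambler with win
    probability p and bankroll K facing unlimited opposing funds."""
    q = 1.0 - p
    if p <= 0.5:
        return 1.0  # with no edge, ruin is certain against unlimited funds
    return (q / p) ** K

print(ruin_probability(0.51, 1))    # a one-unit bankroll: ruin is near-certain
print(ruin_probability(0.51, 100))  # a deep bankroll makes the 1% edge reliable
```

    Even with an edge, a shallow bankroll is usually ruined; only deep (or unlimited) funds let the edge dominate, which is the sense in which "unlimited funds" changes the game.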

    Of course natural selection doesn’t have a 51% edge, but it does have unlimited funds and, according to some, millions of years.

    Remember, the betting analogy for mutations was Sal’s. Can nature keep producing mutations for as long as it exists? That means it has unlimited funds. Will beneficial mutations continue to occur at a certain rate? Yes.

    Let me emphasize that I am in full agreement with Jerry that most of the variation is coming from continual reshuffling, rather than mutations. If a lot of new random data is continually thrown into the mix in the form of neutral or not-too-harmful “mutations”, there’s more stuff to reshuffle. I’m thinking a lot of raw random data was dumped into the system at the very start of the biological process on the planet, with random organic material being disgorged in great quantities from the center of the earth (speculation, and I don’t know where I read that first).

    But suppose we can examine the mechanism as currently proposed and say, “Wait a minute – there’s not enough here to account for what we’re seeing.” What is ID’s response: “This proves this magical design-making black-box is responsible”. (Remember that intelligence for ID is defined as whatever is not randomness and mechanism. If it ain’t mechanism it can’t be examined.) Science would say, “well it just proves there are other mechanisms at play we haven’t discovered.” But ID would just shake its head with a sad smile, with pity for pathetic science in its futile search for mechanisms, blind to the obvious necessity of ID’s inscrutable magical black box.

    In response to Sal regarding casinos: I don’t know how the edge is implemented for every single game. But just take roulette, which is a HUGE percentage of casino revenue. And how is the edge implemented there? By 0 and 00 on the wheel, so when the ball lands there everyone loses but the casino. Do you think it requires a book to explain how that works? Sure, the casino has to watch for some wise guys casing the wheel and break a few hands occasionally, but is it a concern with 98% of the bettors? No.

  105. JunkyardTornado

    Sal: I do thank you for taking the time to write this article. I am not a “Darwinist” either and my own views are much more nuanced than can be fully elaborated in this thread. I would look into that book you mentioned, but since gambling is probably a sin I couldn’t make use of it.

  106. scordova,

    what is the process that causes deleterious mutations to become fixed? The whole point is that deleterious mutations will confer a disadvantage on organisms that makes it less likely they will pass on the mutation, whereas the converse is true for advantageous mutations.

    Random effects, being random, will tend to affect all mutations. You claim that random effects can cause deleterious mutations to go to fixation. How so? The point is that they are deleterious, so they act to the disadvantage of the organism. That disadvantage will still hinder the transmission of the mutation to later generations, regardless of the random effect (unless the random effect was so extreme as to render what was a deleterious mutation into an advantageous mutation).

    And Darwin wasn’t wrong about “the efficacy of ‘natural selection’ to bring about positive change.” Where he was wrong was claiming that all advantageous variations would be selected, which, as I pointed out earlier, is something of a nitpick.

  107. But deleterious mutations can be fixed as well and at a higher rate than beneficial mutations.

    Eh? Can you explain, please. You can’t be assuming that everything else is equal.

  108. Jerry, I can see by your post that I missed what you were getting at; now I get it, so thanks for your reply.

    For Junkyard Tornado: the hypothetical that “with unlimited time nature is like a bettor with unlimited resources” is irrelevant because we are not concerned with whether it would be hypothetically possible or inevitable to arrive at a certain level of development and complexity of life.

    The problem is that nature has not had unlimited resources to get to this point. So how did it get here? You rightly note that there could be other factors or operations at work than genetic mutation. However, you are wrongly unidirectional in stating that ID proponents cut off scientific inquiry by positing ID to be a mysterious black box that provides the necessary mechanism to get us to this point of development in just a billion years or so. The argument also cuts the other way, in that you cut off research into ID by assuming (on faith) that scientists will be able to find (so far missing) materialist mechanisms to overcome the inadequacies of current (materialist) evolution.

    Given that, historically, belief in a designer God did not impede the advance of science, it would seem that ID merely opens an additional avenue of investigation (i.e., the possibility of design) without inherently closing off investigation into material causes. At least I’m not aware of anyone significant in the ID camp that says “ID is the solution for all gaps, so stop looking for any other solution”.

    In fact, it was the scientists who did NOT believe in God who effectively delayed and hindered research into “junk DNA” because of the evolutionary explanation for it, which has now been found to be wrong; in fact there are likely only trivial amounts of junk DNA.

  109. Bob OH wrote:

    Eh? Can you explain, please. You can’t be assuming that everything else is equal.

    Of course not. Kondrashov makes the point better than I

    why have we not died 100 times over?

    It is well known that when s, the selection coefficient against a deleterious mutation, is below ≈ 1/(4Ne), where Ne is the effective population size, the expected frequency of this mutation is ≈ 0.5, if forward and backward mutation rates are similar. Thus, if the genome size, G, in nucleotides substantially exceeds the Ne of the whole species, there is a dangerous range of selection coefficients.…

    Mutations with s within this range are neutral enough to accumulate almost freely, but are still deleterious enough to make an impact at the level of the whole genome.

    In many vertebrates Ne ≈ 10^4, while G ≈ 10^9, so that the dangerous range includes more than four orders of magnitude. If substitutions at 10% of all nucleotide sites have selection coefficients within this range with the mean 10^-6, an average individual carries ≈ 100 lethal equivalents. Some data suggest that a substantial fraction of nucleotides typical to a species may, indeed, be suboptimal. When selection acts on different mutations independently, this implies too high a mutation load. This paradox cannot be resolved by invoking beneficial mutations or environmental fluctuations.

    I don’t think Kondrashov’s soft selection solution or “synergistic epistasis” is the answer, but I would welcome hearing the other side.
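    A back-of-the-envelope check of the "more than four orders of magnitude" figure in the quote (an editorial sketch; the precise bounds of the dangerous range here are a reading of the excerpt, not code from Kondrashov's paper): selection coefficients between roughly 1/G and 1/(4·Ne) are weak enough to escape selection yet numerous enough to matter genome-wide.

```python
import math

def dangerous_range_orders(Ne, G):
    """Orders of magnitude spanned by the 'dangerous' range of selection
    coefficients, read as roughly [1/G, 1/(4*Ne)] (an assumed reading)."""
    lower = 1.0 / G          # weaker than this: assumed too rare to matter
    upper = 1.0 / (4 * Ne)   # stronger than this: selection purges the mutation
    return math.log10(upper / lower)

print(dangerous_range_orders(1e4, 1e9))  # ~4.4 for the quoted vertebrate values
```

    With the quoted Ne ≈ 10^4 and G ≈ 10^9 this gives about 4.4 orders of magnitude, consistent with the excerpt's "more than four".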

    PS
    By the way, if you’d like Bob OH, I’d be happy to mail you Genetic Entropy as my personal thanks for your participation here at UD.

  110. JunkyardTornado

    Sal:

    an average individual carries ≈ 100 lethal equivalents. Some data suggest that a substantial fraction of nucleotides typical to a species may, indeed, be suboptimal…

    So, if I’m reading the abstract (which is all that’s available) correctly, it is a known fact that neutral mutations are accumulating in a typical species to such a great extent that a lot of harmful configurations of this introduced genetic material have occurred, and yet it is also known that somehow these suboptimal configurations are being dealt with (by some unknown mechanism?). So if they’re being dealt with, it’s presumably not by miraculous intervention. Or maybe you’re saying the rate of harmful accumulation is such that, if biology had been around for longer than a few thousand years, it would already be exterminated completely.

    Just a question as a layman: do you reject that many innovations have been observed to occur via natural selection among microorganisms? IOW, while understanding that the numbers may not add up for RM-NS among large land mammals, are you saying it also does not occur among microorganisms?

    And what I’m getting at (also as a layman) is: what if the vast majority of (RM-NS) innovations occurred eons ago when all that existed were microorganisms, and the innovations we see on a macro scale now were actually developed in that primitive environment and have only been transmitted or mapped to a different scale?

  111. JunkyardTornado

    “When selection acts on different mutations independently, this implies too high a mutation load”

    It’s not clear why separate mutations would have to be acted upon independently. If it’s a phenotype that’s rejected, wouldn’t the associated genes (plural) decrease in frequency?

  112. Junk,

    Bob OH is a professional population geneticist. Perhaps he can answer your questions.

    Sal

  113. scordova,

    What do you think about this rebuttal?

    http://scienceavenger.blogspot.....o-sum.html

  114. Hi Mike1962,

    I gave a partial answer to ScienceAvenger.

    here

    He resorted to misreading what I wrote. When someone does that, I don’t invest much time even if he had points worth discussing.

    However, let me know if you have specific concerns you’d like me to address….

    If you want me to respond to the whole thing, well…I can do it in pieces.

    Sal
    PS

    I’m probably more knowledgeable than he gives me credit for. I can tell that from what little I read…

  115. UNGTSS,

    So what did you think of your experience debating with the Pandas?

    Sal

  116. Sal –

    I don’t think Kondrashov’s soft selection solution or “synergistic epistasis” is the answer, but I would welcome hearing the other side.

    This is not a debate I’ve been following, but some of the work on evolution of sex in finite populations is relevant to Kondrashov’s work. Sally Otto talked about them at the ESEB meeting last year.

    PS
    By the way, if you’d like Bob OH, I’d be happy to mail you Genetic Entropy as my personal thanks for your participation here at UD.

    Thank you, I’d be grateful. Do you have my address? If not, email me (or google me, and make sure you send it to the guy in Helsinki!).

    JunkyardTornado -

    It’s not clear why separate mutations would have to be acted upon independently. If it’s a phenotype that’s rejected, wouldn’t the associated genes (plural) decrease in frequency?

    Yes, you’re right. But if the genes are independent (technically if there is no epistasis or linkage disequilibrium) then at the population level they can be treated as being independent. I had been intending to chase up the maths, but other stuff intervened.

    Gentlemen,

    I’ll be on travel for a bit, but I’ll return.

    I also have a discussion at PT ongoing. I have to be cordial there as a professor from my school is also participating and it is my hope we conduct ourselves in a manner which honors the institution we are a part of. He has certainly been cordial to me and I’ll endeavor to reciprocate.

    Also, Dr. Peter Olofsson e-mailed me this, and I thought I’d pass it on (nothing sensitive or private):

    Just read your post about gambling. As a Swede I have to point out that there is no “true” Nobel Prize in Economics. Alfred Nobel was smart enough not to include it in his will and it was instituted by the Bank of Sweden much later.

  118. Sal –
    in #15, I criticized your spreadsheet. I’d like to elaborate a little bit: your spreadsheet doesn’t reflect the problem you discuss, i.e., gambler’s ruin. This fact gets hidden by the type of diagram you chose: the cumulated sum of paths…
    The problem: a path doesn’t stay at zero once it has reached zero. In other words, you are looking at a process {S_n}, n ∈ N, with P(S_{n+1} − S_n = d) = p, P(S_{n+1} − S_n = −d) = 1 − p = q, and S_0 ≡ K, where K is your starting capital and d is the possible gain or loss in each game (you chose K = d = 50), while you should have looked at the stopped process {S_{τ∧n}}, n ∈ N, with τ := inf{k : S_k = 0}….
    The difference becomes obvious when you take the expected value after a couple of rounds. What’s the average capital after the 77 rounds in your spreadsheet? It’s 103.9. And what happens if we take K = 0 or K = −50? Well, on average we get a capital of 53.9 resp. 3.9, and that cannot be, as a bankrupted gambler isn’t allowed to play.
    A quick calculation yields:
     S_0  –  E[S_77]  –  E[S_{τ∧77}]
       0  –     53.9  –        0
      50  –    103.9  –     59.8
     100  –    153.9  –    118.0
     150  –    203.9  –    175.0
     200  –    253.9  –    230.8
     250  –    303.9  –    285.5
     500  –    553.9  –    548.8
    1000  –   1053.9  –   1053.7

    So, why this nitpicking? If you don’t get the easy things right, i.e., a simple model of gambler’s ruin as an excel-sheet, how can I trust you with the more complicated concepts of your post?
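    DiEb’s free-vs-stopped distinction can be checked mechanically. The following Python sketch is my own construction, not DiEb’s code; the per-round edge p = 0.507 is inferred from the 103.9 figure above (a drift of 0.7 per round with d = 50). It propagates the exact capital distribution of the stopped walk, so no random sampling is involved:

```python
def stopped_expectation(K, d, p, rounds):
    """E[S_{tau ^ n}] for a +/-d random walk started at K, absorbed at 0."""
    dist = {K // d: 1.0}                 # capital measured in units of d
    for _ in range(rounds):
        new = {}
        for k, pr in dist.items():
            if k == 0:                   # ruined: the walk stays at zero
                new[0] = new.get(0, 0.0) + pr
            else:
                new[k + 1] = new.get(k + 1, 0.0) + pr * p
                new[k - 1] = new.get(k - 1, 0.0) + pr * (1 - p)
        dist = new
    return d * sum(k * pr for k, pr in dist.items())

K, d, p, n = 50, 50, 0.507, 77
free = K + n * d * (2 * p - 1)           # E[S_77] of the unstopped walk: 103.9
stopped = stopped_expectation(K, d, p, n)
```

    With these numbers the unstopped expectation is 103.9 while the stopped expectation is strictly smaller, in the direction DiEb’s table shows: absorption at zero eats part of the average gain.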

  119. Hm, the preview shows me that my tags have an effect – which is lost in the displayed post… that’s quite annoying!

  120. So, why this nitpicking? If you don’t get the easy things right, i.e., a simple model of gambler’s ruin as an excel-sheet, how can I trust you with the more complicated
    concepts of your post?

    Where did I say it was nitpicking?

    The graph was to ILLUSTRATE:

    The important thing to grasp is that “slight selective” advantages do not look very different from random walks except in the long run.

    This underscores the strong effect of random events even when one possesses an inherent statistical advantage such as a gambling skill or a selectively advantaged trait.

    The name of the file had the words gambler’s ruin to associate it with this discussion. It was not formally a simulation of actual ruin but was meant to ILLUSTRATE:

    that “slight selective” advantages do not look very different from random walks except in the long run.

    One could easily modify the spreadsheet to stop progress when zero is hit, except that if I did this, one would not easily see all the lines, since most of them abort early, thus giving a misleading impression of large-scale progress.

    If you don’t get the easy things right, i.e., a simple model of gambler’s ruin as an excel-sheet, how can I trust you with the more complicated
    concepts of your post?

    Do trust me then.

    I updated the above post to add a caveat in light of your most uncharitable reading of what I wrote.

  121. - Sal
    1. Thanks for adding your caveat
    2. Do trust me then.
    Sorry, I followed your musings about the Fourier Transform – I’m afraid it will take some time before I trust you anywhere in the regions of higher mathematics. Sorry if I’m sounding uncharitable…

  122. You promoted falsehoods about me regarding Fourier transforms. I have no reason to give you air time here at UD until you offer a retraction.

    Do you still maintain that I don’t know the difference between a Fourier Series and a Fourier Transform?

  123. -Sal,
    I assume that no one wants to rehash this discussion. As I said, I followed your musings – and I didn’t find your presentation convincing: you seem to answer your critics sometimes on a google-first-hit basis (as above with your link for the thermodynamics vs. heat flow question). This behaviour suggests that you haven’t fully incorporated the underlying (mathematical) knowledge for the topics in debate. But of course, that’s just my subjective impression – nevertheless, I have to work with it for my Bayesian classifier :-)

  124. You assume wrong DiEb as this thread has now run its course, we can settle the issue.

    Do you know for a FACT that I did not know the difference between a Fourier Series and Fourier Transform?

    In certain conventions, are Dirichelet conditions applicable to determining if a function can have Fourier Transform, even aperiodic functions?

    You shouldn’t have too much problem answering these simple questions should you?

  125.

    You shouldn’t have too much problem answering these simple questions should you?

    It would probably be easier for him if his posts were allowed to appear.

  126. Dieb can post a comment, but he’ll need to wait until it’s released from moderation by an admin.

  127. Trying to post a paradox: “This post doesn’t appear”
    FYI: my last tries to post something on this board weren’t dignified with the usual “your post is awaiting moderation” screen, but didn’t appear at all.
    I’ll give it a shot…

  128. BTW, were it not for the plucky folks of AtBC, I’d never have found that Sal revived this thread after [i]three weeks[/i]…

  129. I don’t know if Sal has even noticed that you guys are commenting here…

  130. Anytime you have two PhD’s you have a paradox.

  131. Sal has noticed. He revived the thread with his last post, #124, which appeared twenty days after the previous post from DiEb. Sal and DiEb are having a very. slow. argument.

    DiEb, use angle brackets <>, not square brackets, for format statements. HTH!

  132. I’m monitoring this thread. I don’t want DiEB to get away with false accusations. He can offer an apology for spreading falsehoods about me or answer the questions I posed.

    How about it DiEB… do functions satisfying Dirichlet conditions have Fourier Transforms? :-)

  133. 5/31/08 In response to various comments by those at UD and PandasThumb, I created another spreadsheet with some improvements. See the improvements at: ruin_olegt_mod1.xls. The principal changes were in response to suggestions by a very fine physicist by the name of Olegt who sometimes posts at TelicThoughts and PT. The new simulation has more rounds and actually prevents a player from playing once he is ruined.
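    The improved sheet itself isn’t reproduced here, but the behavior it describes — many rounds, with a ruined player barred from further play — can be sketched in a few lines of Python. This is my own illustration, not the spreadsheet; the stake d, edge p, round and trial counts are assumed values, not ones taken from the file:

```python
import random

def simulate(K=50, d=50, p=0.507, rounds=500, trials=20000, seed=1):
    """Monte Carlo gambler's ruin: a ruined player makes no further bets."""
    random.seed(seed)
    ruined, total = 0, 0.0
    for _ in range(trials):
        capital = K
        for _ in range(rounds):
            if capital <= 0:             # absorbed at zero: stop playing
                break
            capital += d if random.random() < p else -d
        ruined += capital <= 0
        total += max(capital, 0)
    return ruined / trials, total / trials

fraction_ruined, mean_capital = simulate()
```

    Even with a positive per-bet edge, a player who starts with only one betting unit is ruined in the large majority of trials, and the average ending capital falls well short of the drift-only expectation — the point of the gambler’s-ruin discussion.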

  134. 1. Your post #133 answers to my post #15 – Thanks.
    2. Do you know for a FACT that I did not know the difference between a Fourier Series and Fourier Transform?
    For a FACT? Heck, for what I know, you could be Hilbert reincarnated. But by the entries of your blog, you succeeded in giving the impression that you are standing at the very beginning of your journey into the interesting world of Fourier Analysis. You should read your articles again one year from now and judge for yourself…
    3. In certain conventions, are Dirichelet [sic] conditions applicable to determining if a function can have Fourier Transform, even aperiodic functions?
    Re aperiodic functions – one could, but one wouldn’t: yes, you found a script where for motivational reasons aperiodic functions were regarded as a limit case of periodic functions. This is quite artificial: you could compare it to gluing moths to trees just to illustrate a point of interest of your students… it’s not what happens in the real world, but it’s close enough.
    4. How about it DiEB…do functions satisfying Dirichelet [sic] conditions have fourier Transforms?
    In this context, this question is ill-posed – see above :)

    PS: Do I know for a FACT that you can’t spell the name of Johann Peter Gustav Lejeune Dirichlet? No, of course not, but somehow you manage to impersonate someone who can’t…

  135. Re aperiodic functions – one could, but one wouldn’t:

    But one did:

    You seem to forget this little item here :-)

    If on every finite interval, f satisfies the Dirichlet conditions and if the improper integral exists, the following integral

    F(ω) = ∫_{−∞}^{+∞} f(t) e^{−iωt} dt

    is known as the Fourier transform,

    Try harder DiEB. :-)

    So DiEB, now that I’ve disproven your claim that one wouldn’t use Dirichlet conditions to identify functions with sufficient conditions to have a Fourier transform, are you going to continue to argue otherwise?

    Please don’t obfuscate as you usually do. This is a simple question.

    PS
    By the way, just so I can be assured of your knowledge on the matter. Can there be two functions
    say f(x) and g(x), and there exists some x where f(x) not equal to g(x), but their Fourier Transforms are equivalent?

  136. Quotemining seems to be becoming a habit of yours…
    I wrote: Re aperiodic functions – one could, but one wouldn’t: yes, you found a script where for motivational reasons aperiodic functions were regarded as a limit case of periodic functions. This is quite artificial: you could compare it to gluing moths to trees just to illustrate a point of interest of your students… it’s not what happens in the real world, but it’s close enough.
    The second part of this paragraph (not the bold part you quoted) shows that your conclusion

    But one did:

    You seem to forget this little item here :-)

    If on every finite interval, f satisfies the Dirichlet conditions and if the improper integral exists, the following integral

    F(ω) = ∫_{−∞}^{+∞} f(t) e^{−iωt} dt

    is known as the Fourier transform,

    Try harder DiEB. :-)

    isn’t – how to phrase this civilly – anchored in reality, as this second part is just about “this little item here”.

    So DiEB, now that I’ve disproven your claim that one wouldn’t use Dirichlet conditions to identify functions with sufficient conditions to have a Fourier transform, are you going to continue to argue otherwise?

    I’m afraid you have disproven nothing…

    By the way, just so I can be assured of your knowledge on the matter. Can there be two functions
    say f(x) and g(x), and there exists some x where f(x) not equal to g(x), but their Fourier Transforms are equivalent?

    Ever heard of the concept of a null-set?

  137. Perhaps, I should have elaborated my last answer a little bit, as you don’t seem to be familiar with the concept of Lebesgue integration: when we are speaking of a function in Lp, we use a kind of sloppy language as we really speak about a class of functions, the equivalence class of functions under the relation: f ~ g μ{f=g}=0
    So, in this concept, your question doesn’t make much sense.
    OTOH, as you’re thinking in Fourier series, you may take a look at the behaviour of a periodic function on an interval at its points of discontinuity: the actual value at such a point doesn’t influence the Fourier series, as the sum of the Fourier series at that point is given by the mean of the left and right limits of the function there…
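    That last point is easy to see concretely. As a small illustration (my own example, not DiEb’s): for the unit step f(x) = 0 for x < 0, f(x) = 1 for x > 0 on [−π, π), the cosine coefficients vanish and b_n = (1 − cos nπ)/(nπ), so every partial sum equals exactly 1/2 at the jump — the mean of the one-sided limits — whatever value we assign the function at that single point:

```python
import math

def partial_sum(x, N):
    # Fourier partial sum of f(x) = 0 (x < 0), 1 (x > 0) on [-pi, pi):
    # a0/2 = 1/2, a_n = 0, b_n = (1 - cos(n*pi)) / (n*pi)
    s = 0.5
    for n in range(1, N + 1):
        b_n = (1 - math.cos(n * math.pi)) / (n * math.pi)
        s += b_n * math.sin(n * x)
    return s
```

    Away from the jump the partial sums converge to f, while at x = 0 they sit at 1/2 for every N — the value f(0) itself never enters the coefficients, which is the null-set point in miniature.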

  138. I’ll give the equivalence relation again, as parts of it don’t appear:
    f ~ g iff μ{f=g}=0

  139. How exasperating – once more, with feeling:
    f ≅ g : ⇔ μ{f≠g}=0
    And if this doesn’t show up right:
    Two functions are said to be equivalent iff they differ only on a null-set.
    BTW: Sal – your spelling of Dirichlet improved – congrats!

    Is there anywhere a list of tags which can be used safely in this forum? The preview window at least is sensitive to another set of tags than the post itself ( and for example)

  140. You’re obfuscating again DiEB. I use the term obfuscating because the term I’d rather use (which is more appropriate) is a bit uncivil. But I’ll be nice to you, since you’re humoring my questions.

    Perhaps, I should have elaborated my last answer a little bit, as you don’t seem to be familiar with the concept of Lebesgue integration:

    I happen to have 3 undergrad degrees, one in mathematics. I do seem to recall studying Lebesgue integrable functions and I happen to still have the textbooks which discuss Lebesgue integration, several on Fourier Transforms.

    In fact one of my textbooks delves into the relationship of Dirichlet integrals, Fourier Integrals, Riemann-Lebesgue lemma, etc.

    I sense you just bungled a bit with your last answer. hehehe

    And let me show you why. To do so, let me back up a bit. Are Riemann integrable functions also Lebesgue integrable? That should be an easy enough question for you, one which you shouldn’t obfuscate. Your answer will lead to a discussion of other issues, not the least of which will demonstrate you just bungled one of your answers. :-)

  141. Sal-
    I don’t know what you want to achieve by posing your little questions (and yes, every Riemann-integrable function is Lebesgue-integrable)…
    Just out of curiosity: can you trade your three undergrad degrees for one graduate degree?

  142. DiEB claimed:

    Re aperiodic functions – one could, but one wouldn’t:

    If I may say, “oh really?”

    From Mathematical Analysis 2nd edition by Apostol
    page 323-324

    if the given function is already defined everywhere on (-infinity, +infinity) and is not periodic, then there is no hope of obtaining a Fourier series which represents the function everywhere on (-infinity, +infinity). Nevertheless, in such a case the function can sometimes be represented by an infinite integral rather than by an infinite series. These integrals, which are in many ways analogous to Fourier series, are known as Fourier integrals, and the theorem which gives sufficient conditions for representing a function by such an integral is known as the Fourier integral theorem. The basic tools used in the theory are, as in the case of Fourier series, the Dirichlet integrals and the Riemann-Lebesgue lemma.

  143. -Sal, do me – do yourself – a favour and read the following paragraph carefully (or, at least, read it at all.) This may spare you some future embarrassment:
    In mathematics, many concepts are named after great mathematicians. Some of them even have several different concepts bearing their names. Though these concepts are often intertwined, most times they are not interchangeable: a question about Gaussian integers can’t be answered by hinting at a Gaussian distribution (though there may be exceptions :-) )… Sometimes, one name may even describe two different concepts – and you have to be aware which branch of mathematics you’re in at the moment: Dirichlet conditions normally refer to Fourier series, but it may be a short form for Dirichlet boundary conditions, a concept from the world of PDEs. For short: one should always read mathematical texts carefully, and not mix up different concepts – just because they sound similar.

    So, what have you done, Sal?
    I stated that one generally wouldn’t apply Dirichlet conditions to aperiodic functions. You point me to a text where – in the proof of the Fourier integral theorem – a Dirichlet integral is used. Read the text carefully, and you’ll see that the conditions for the function of the theorem are not the Dirichlet conditions, but a) a kind of locally bounded variation or b) a condition for the right and left limits of the function at a point.

    That is a very sloppy example of equivocation!

  144. What strikes me as particularly ironic is that Apostol, in the book you presented, doesn’t even use Dirichlet conditions regarding the convergence of a Fourier series. He gives two other criteria:
    1. Jordan’s test and
    2. Dini’s test
    (both Tom Mike Apostol: Mathematical Analysis 2nd ed., p. 319)

  145. Are you saying that the f(x) in the Fourier Integral Theorem does not satisfy the following properties?

    on every finite interval, f(x) is bounded and has at most a finite number of local maxima and minima and a finite number of discontinuities

  146. Sal-
    I don’t know what you want to achieve by posing your little questions (and yes, every Riemann-integrable function is Lebesgue-integrable)…

    The reason I asked is because I’m math challenged, and I wanted to learn from a great master like you the answer to the following question, which relates to our discussion:

    Can you tell me what the Fourier Transform is of x1 and x2

    x1(t) = sin (2 pi f t)

    and the Fourier Transform of

    x2(t) = 5 for t = 0
    and sin (2 pi f t) for t everywhere else

    Since it seems you perceive me as non-comprehending with respect to math, perhaps you can assist me in my understanding. Please provide the Fourier Transforms for x1 and x2.

    Just out of curiosity: can you trade your three undergrad degrees for one graduate degree?

    No, that’s why I’m going to grad school.

  147. Are you saying that the f(x) in the Fourier Integral Theorem does not satisfy the following properties?

    on every finite interval, f(x) is bounded and has at most a finite number of local maxima and minima and a finite number of discontinuities

    Yes: a function satisfying the conditions for the Fourier integral theorem as stated by Apostol does not necessarily satisfy the Dirichlet conditions, as Jordan’s test is a generalization of these conditions (every function satisfying the Dirichlet conditions is – locally – of finite variation; the converse isn’t true). Similarly for the Fourier series: define, for example, a periodic function in L²[0,1] by setting:

    f(x) := x-2^n for x∈[2^(-n),2^(-n+1)[

    n∈N

    While f(x) is of finite variation – and it is bounded, it has an infinite number of discontinuities and local maxima. Nevertheless, the sum of the Fourier series converges to the function – at least at the points where the function is continuous.

  148. -Sal
    For both cases:
    F(ω) = i π (δ(ω + 2 π f) + δ(ω – 2 π f))
    Now, back to you: as what kind of function was x1 (resp. x2) treated here? Hint: x1∉L¹

  149. arrgh, errata:
    f(x):= x-2^(-n) for x∈[2^(-n),2^(-n+1)[…. infinite number of … local minima

    And here’s a useful hint which points to the world behind the Dirichlet conditions: http://eom.springer.de/f/f041090.htm

  150. -Sal
    For both cases:
    F(ω) = i π (δ(ω + 2 π f) + δ(ω – 2 π f))

    Why thank you DiEB. My calculations resulted in a different form, but given that

    omega = 2 pi f, I suppose it’s the same result

    x1(t) = sin (2 pi f t)

    should be

    x1(t) = sin (2 pi f0 t)

    X1(f) = (1/2i) δ(f − f0) − (1/2i) δ(f + f0)

    and because x2 is essentially x1 except for a single point of finite-valued discontinuity,
    X1(f) = X2(f). So at least we are in agreement there.

    But this of course serves as a counter example to your claim:

    So, in this concept, your question doesn’t make much sense.

    My question was:

    Can there be two functions
    say f(x) and g(x), and there exists some x where f(x) not equal to g(x), but their Fourier Transforms are equivalent?

    Your showing that X1(f) = X2(f) is an example that my question made perfect sense. One only needs to make the independent variable “t” instead of “x”, and use x1 and x2 instead of f and g.

    note: From Signals Continuous and Discrete 2nd ed, by Ziemer, Tranter, and Fannin
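    As an aside, the disputed point can be illustrated numerically. The sketch below is my own, not from Ziemer and Tranter; the window length, grid size, and frequency range are arbitrary choices. It approximates X(f) = ∫ x(t) e^{−2πi f t} dt by a midpoint rule; since the midpoint grid never contains t = 0 exactly, the single-point change in x2 cannot affect the sums — the numerical face of the null-set argument:

```python
import numpy as np

f0 = 3.0                                  # frequency of the sine (assumed)
T, M = 4.0, 100_000                       # window [-T, T], number of midpoints
h = 2 * T / M
t = -T + (np.arange(M) + 0.5) * h         # midpoint grid: never contains t = 0

x1 = np.sin(2 * np.pi * f0 * t)
x2 = x1.copy()
x2[t == 0.0] = 5.0                        # the single-point change: a no-op on this grid

freqs = np.linspace(-5, 5, 41)
# midpoint-rule approximation of X(f) = integral of x(t) e^{-2 pi i f t} dt over [-T, T]
X1 = np.array([h * np.sum(x1 * np.exp(-2j * np.pi * f * t)) for f in freqs])
X2 = np.array([h * np.sum(x2 * np.exp(-2j * np.pi * f * t)) for f in freqs])
```

    Any quadrature evaluates the integrand at finitely many points, so a change on a set of measure zero is invisible to it; the spectra agree exactly, with their energy concentrated at ±f0 as expected for a sine.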

  151. While f(x) is of finite variation – and it is bounded, it has an infinite number of discontinuities and local maxima. Nevertheless, the sum of the Fourier series converges to the function – at least at the points where the function is continuous.

    Well then, there are functions which do not satisfy Dirichlet conditions but which have a Fourier Series representation. I never said that there weren’t; I even cited Fejer’s 1904 theorem, which shows Dirichlet conditions are not necessary, but sufficient, conditions.

    However, if, even knowing Dirichlet conditions are more restrictive than Fejer’s criteria, we still use them in connection with Fourier Series, why are you raising such a stink about applying them to Fourier Transforms?

    I never argued they were necessary conditions, and the links I provided never argued they were necessary conditions, only sufficient ones.

  152. -Sal
    I beg to differ: the x1 and x2 of your example were representatives of the same function in L¹loc – and so your question is trivial – and therefore didn’t make much sense…

    BTW:

    Are you saying that the f(x) in the Fourier Integral Theorem does satisfy the following properties:
    on every finite interval, f(x) is bounded and has at most a finite number of local maxima and minima and a finite number of discontinuities (i.e., perhaps you could react to my posts #143, #144, and #147)

    PS: Generally, there are three definitions of the Fourier transform – they differ in where to place this annoying factor of 2π (that’s 2 pi). Therefore, answers to questions like yours may vary by a multiplicative constant… But, you should have seen that the ω in my expressions is the variable of the transformed function, i.e., your renamed “f”.

  153. My question was:

    Can there be two functions
    say f(x) and g(x), and there exists some x where f(x) not equal to g(x), but their Fourier Transforms are equivalent?

    to which

    Dieb responded:

    So, in this concept, your question doesn’t make much sense.

    Now consider, Ziemer and Tranter:

    consider two periodic signals x(t) and y(t) which are identical except at a single point….both x(t) and y(t) have identical Fourier series

    And that is true of their Fourier transforms as well…

    As I promised, I showed DiEB bungled.

  154. -Sal
    A single point is a null-set.

  155. -Sal

    As I promised, I showed DiEB bungled.

    LOL – this has to be one of the worst “gotcha” moments in history… how does consider two periodic signals x(t) and y(t) which are identical except at a single point….both x(t) and y(t) have identical Fourier series contradict anything I said before? From post #136 onwards, I tried to inform you about the idea of L¹ functions (and functions in similar classes)! Really, your view of mathematics is sooo 19th century :-)

  156. LOL – this has to be one of the worst “gotcha” moments in history… how does consider two periodic signals x(t) and y(t) which are identical except at a single point….both x(t) and y(t) have identical Fourier series contradict anything I said before?

    Hiya Dwieb,

    In the example discussed, are x1(t) and x2(t) different functions or not?

    You can’t argue that, merely because you can create a class of functions where both x1(t) and x2(t) are members, the two functions are identical.

    What you said before was:

    your question doesn’t make much sense.

    and my question was:

    Can there be two functions
    say f(x) and g(x), and there exists some x where f(x) not equal to g(x), but their Fourier Transforms are equivalent?

    The question made sense, and the answer was “yes”. Instead you began to obfuscate and talk about L1 and say the question didn’t make sense. You were wrong and you pile more obfuscations to cover your error. Did you think I wouldn’t catch your error? :-)

  157. Dwieb wrote:

    But, you should have seen that the [omega] in my expressions is the variable of the transformed function, i.e., your renamed “f”.

    Of course I saw it, that’s exactly why I said above:

    omega = 2 pi f,

    Are you not reading what I wrote, and then accusing ME of not reading, when you are clearly the one who didn’t see?

    By the way, Dwieb, for the reader’s benefit, tell them if functions satisfying Dirichlet conditions have Fourier Transforms.

    PS

    That is a very sloppy example of equivocation!

    In such case I’ll try to make my equivocations a little tighter next time. I’m surprised you’d objected, I figured such forms of argumentation are acceptable to you since I suspect you believe in Darwinian evolution (which is based on equivocation and double speak). Am I wrong to suppose you believe in Darwinian evolution?

  158. Sal-
    1. It’s DiEb, not Dwieb, as it is Dirichlet, not Dirichelet – or was this an attempt at humour? As a moderator of this board, shouldn’t you be above this kind of name-calling?
    2.

    By the way, Dwieb, for the reader’s benefit, tell them if functions satisfying Dirichlet conditions have Fourier Transforms.

    Your usage of Dirichlet conditions in the context of general Fourier transforms raised my interest, as it was like spotting a sledgehammer in a watchmaker’s workshop. Yes, you can use it sometimes, but it’s totally inappropriate in most cases.
    3.

    me: That is a very sloppy example of equivocation!

    Sal: In such case I’ll try to make my equivocations a little tighter next time. I’m surprised you’d objected, I figured such forms of argumentation are acceptable to you since I suspect you believe in Darwinian evolution (which is based on equivocation and double speak). Am I wrong to suppose you believe in Darwinian evolution?

    As a general rule (in fact, as a categorical imperative), you should apply your own standard to your actions, not what you perceive/dream/imagine to be the standard of others. If your standard allows for sloppy or tight equivocations, then I’ve to ask: Are you a YEC? (Just kidding, I know you are…)
    4. What do you call an element of L¹? I call it a function. And so, x1 and x2 are the same function – when we are talking about integration (as you phrased it somewhere else: the S-sign, not the Σ). For an engineer, that’s often a surprise – for a mathematician, not so much…

  159. Dieb wrote:

    Yes, you can use it sometimes, but it’s totally inappropriate in most cases.

    In most cases?

    By the same reasoning, would you then say it is inappropriate to apply Dirichlet conditions to Fourier Series?

    Also, are the Dirichlet conditions more restrictive than the Jordan test or Dini’s test with respect to Fourier Series? I recall you said:

    the Fourier integral theorem as stated by Apostol does not necessarily satisfy the Dirichlet conditions, as the Jordan’s test is a generalization of these conditions (every function satisfying the Dirichlet conditions is – locally – of finite variation, the converse isn’t true).

    [note: do you see the corner you are boxing yourself into? Whichever way you answer, you know it will lead to some embarrassment to your claims about me not being able to distinguish between a Fourier Transform and a Fourier Series.]

    Where did I insist that Dirichlet conditions are necessary conditions? The link I provided to the wiki article states them as sufficient conditions, not necessary conditions.

    I said:

    If we have an arbitrary function x(t) which obey the Dirichelet Conditions, the Fourier transform of that function is defined as

    Does that mean that every function needs to obey Dirichlet conditions to have a Fourier transform? NO. NO. NO. You misread what I wrote, and used your misunderstanding as an argument to suggest that I didn’t know the difference between a Fourier Transform and a Fourier Series.

    Consider the context of the discussion, and the class of functions I was considering — solutions to the Schrodinger Equation in certain contexts. From my physics textbook by Tipler and Llewellyn:

    For future reference, we may summarize the conditions that the wave function Psi(x) must meet in order to be acceptable:

    1. Psi(x) must exist and satisfy the Schrodinger equation

    2. Psi(x) and dPsi(x)/dx must be continuous

    3. Psi(x) and dPsi(x)/dx must be finite

    4. Psi(x) and dPsi(x)/dx must be single-valued

    5. Psi(x) approaches 0 fast enough as x approaches + or − infinity so that the normalization integral remains bounded

    Modern Physics by
    Tipler and Llewellyn
    page 249

    It appears Psi(x) obeys the Dirichlet conditions with respect to x. That was the context of the discussion.

    Dieb wrote:

    And so, x1 and x2 are the same function – when we are talking about integration

    They are not the same function; they merely result in the same integral and the same Fourier Transform. There would be no need to put them in the same class in L1 if they were the same function. It’s painful to watch a fine mind like yours try to save face when a simple admission of a mistake would suffice.

    But notice I stated the inverse transform as well as the transform at YoungCosmos. For this to be true, one actually needs something stronger than Dirichlet conditions, like Holder continuity. [I realize now my professor at GMU made a passing remark, which was in error, which made me presume Dirichlet conditions are sufficient for signal reconstruction via the inverse transform. The topic he discussed was the phenomenon observed by Josiah Gibbs and Albert Michelson here.]

    All this to say, your arguments that x1 and x2 are the same function fail in light of the fact that I described the inverse transform as well, not to mention I was exploring the restrictions on Psi(x). There appear to be functions which would be in the same class in L1 but not result in correct reconstruction under an inverse transform. So you are clearly in error, since the context of my discussions included inverse transforms. :-)

    PS
    lighten up Dieb, you Darwinists are so humorless

    Consider the context of the discussion, and the class of functions I was considering — solutions to the Schrodinger Equation in certain contexts. From my physics textbook by Tipler and Llewellyn:

    For future reference, we may summarize the conditions that the wave function Psi(x) must meet in order to be acceptable:

    1. Psi(x) must exist and satisfy the Schrodinger equation

    2. Psi(x) and dPsi(x)/dx must be continuous

    3. Psi(x) and dPsi(x)/dx must be finite

    4. Psi(x) and dPsi(x)/dx must be single-valued

    5. Psi(x) approaches 0 fast enough as x approaches + or − infinity so that the normalization integral remains bounded

    Modern Physics by
    Tipler and Llewellyn
    page 249

    It appears Psi(x) obeys the Dirichlet conditions with respect to x. That was the context of the discussion.

    Ummm, no? In the Wikipedia article you quoted, the Dirichlet conditions are stated as:
    * f(x) must have a finite number of extrema in any given interval
    * f(x) must have a finite number of discontinuities in any given interval
    * f(x) must be absolutely integrable over a period.
    * f(x) must be bounded

    Correct me if I’m wrong (heck, you’ll correct even when I’m right…), but isn’t it possible that Ψ violates condition no. 3, as Ψ ∈ L² per normalization integral? Therefore, the Dirichlet conditions wouldn’t be appropriate “in the context of the discussion…”

  161. -Sal
    are you still monitoring this thread? :-)

  162. Correct me if I’m wrong (heck, you’ll correct even when I’m right…), but isn’t it possible that Ψ violates condition no. 3, as Ψ ∈ L² per normalization integral? Therefore, the Dirichlet conditions wouldn’t be appropriate “in the context of the discussion…”

    That’s why I put it up to get your editorial feedback. :-) I don’t know.

    But recall Psi(x) must have NO discontinuities, and dPsi(x)/dx must be continuous. We can’t have any of the more extreme behaviors which motivated the generalization of Riemann integrals to Lebesgue integrals.

    So I don’t know. Thank you for your editorial comments.

    Dirichlet is not the appropriate set of conditions for inverse Fourier transforms and series, as I had supposed in December 2007 when I put my post up at YoungCosmos.

    My physics professor at Johns Hopkins made passing reference to wave packet formation and its likeness to Fourier transforms. Some of the math looked strikingly similar (including the use of the modulation theorem)….

    An appropriate condition for inverse transforms is that the functions be Holder continuous. I do not know if Holder continuity is both necessary and sufficient; it is sufficient, but I do not know if it is necessary.

    I’m presuming Psi(x) is Holder Continuous, but I actually don’t know. :-) That’s why you’ll tell me, won’t you, because you’re eager to point out my mistakes….

  163. Sal-
    thanks for your surprisingly modest post. I’m afraid I have to clarify a few points…
    1. DiEb:

    Correct me if I’m wrong (heck, you’ll correct even when I’m right…), but isn’t it possible that Ψ violates condition no. 3, as Ψ ∈ L² per normalization integral? Therefore, the Dirichlet conditions wouldn’t be appropriate “in the context of the discussion…”

    Sal:

    That’s why I put it up to get your editorial feed back. :-) I don’t know.

    But recall Psi(x) must have NO discontinuities, and dPsi(x)/dx must be continuous. We can’t have any of the more extreme behaviors which motivated the generalization of Riemann integrals to Lebesgue integrals.

    So I don’t know. Thank you for your editorial comments.

    My question was rhetorical…

    2.

    Dirichlet is not the appropriate set of conditions for inverse Fourier transforms and series, as I had supposed in December 2007 when I put my post up at YoungCosmos.

    To repeat it: using Dirichlet conditions in the context of general Fourier transforms isn’t appropriate. Using them for Fourier series is all fine and dandy: they’ll provide you with the existence of an inverse transform, at least outside of a null-set. But you know that I tend to neglect null-sets, as does your professor at the GMU, I presume (post #159).
    3.

    An appropriate condition for inverse transforms is that the functions be Hölder continuous. Hölder continuity is sufficient; I do not know if it is also necessary.

    Sal, in post #142, you quoted Apostol:

    These integrals, which are in many ways analogous to Fourier series, are known as Fourier integrals, and the theorem which gives sufficient conditions for representing a function by such an integral is known as the Fourier integral theorem.

    So, if you read on in Apostol’s text, you’ll find some sufficient conditions for the existence of an inverse transform (outside the null-set of points of discontinuity :-) )… I presented those in post #144, and in post #147, I gave an example of a function that doesn’t satisfy the Dirichlet conditions and isn’t Hölder-continuous, but nevertheless has an inverse transform (outside the null-set of points of discontinuity :-) ).
    So, what’s necessary and sufficient? On L², the Fourier transformation is an isometric isomorphism…
    4.

    I’m presuming Psi(x) is Hölder continuous, but I actually don’t know.

    Differentiability trumps Hölder-continuity. But the real question is: what about the second (weak) derivative (is it in L²?), as the appropriate spaces for the Schrödinger equation are Sobolev spaces :-)
    n.b.: on this board, I prefer L¹ and L² to other spaces, as I can only invoke the superscripts ¹ and ² and have not found a way to write a script-S… :-)
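    DiEb’s point 3, that the Fourier transformation is an isometric isomorphism on L², has an easy-to-check discrete analogue: with the unitary normalization, the discrete Fourier transform preserves the l² norm of a vector. A minimal numerical sketch (the random test vector, its length, and the seed are illustrative choices of mine):

```python
import numpy as np

# Discrete analogue of the L^2 isometry: with the orthonormal ("unitary")
# scaling, the DFT preserves the l^2 norm of any vector (Parseval/Plancherel).
rng = np.random.default_rng(0)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)

X = np.fft.fft(x, norm="ortho")  # unitary normalization

energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2)
print(energy_time, energy_freq)  # equal up to rounding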

  164. oops – bounded differentiability trumps Hölder-continuity…

  165. Using Dirichlet conditions in the context of general Fourier transforms isn’t appropriate. Using them for Fourier series is all fine and dandy: they’ll provide you with the existence of an inverse transform, at least outside of a null-set.

    I don’t believe that it is customary to speak of an inverse transform when one is speaking of a Fourier SERIES! :-)

    What I presume you meant was that the series converges at all points except possibly on a null-set.

    BUT, does there exist a function x(t) satisfying the Dirichlet conditions, with transform X(f), where the inverse transform of X(f) yields a function that is not in the same equivalence class as x(t) in L¹? If not, one might complain that Dirichlet is too austere for Fourier transforms.

    But then I’ll counter by asking: will Jordan’s test or Dini’s test for Fourier series lead to convergence (yes, according to Apostol)? So in that sense Dirichlet is too austere for Fourier series as well.

  166. Correct me if I’m wrong (heck, you’ll correct me even when I’m right…), but isn’t it possible that Ψ violates condition no. 3, as Ψ ∈ L² per the normalization integral?

    It is possible in principle that there exists a Psi(x) that is not absolutely integrable over the whole real line but for which Psi*(x)Psi(x) is integrable.

    But I don’t know that there exists a set of physically realizable boundary conditions which would imply such a Psi(x) solution to Schrödinger’s equation!

    However, it appears that condition 3 (absolute integrability of Psi(x)) will ensure that a normalization integral exists. So that seemed reasonable to me….is this correct?
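    A standard example of the possibility conceded above is f(x) = sin(x)/x: it is square-integrable over the real line but not absolutely integrable. A rough numerical sketch (the cutoffs and grids are illustrative choices of mine, not from the thread):

```python
import numpy as np

# f(x) = sin(x)/x is in L^2(R) but not in L^1(R): as the interval (-R, R)
# grows, the integral of |f| keeps growing (roughly like log R), while the
# integral of |f|^2 approaches its finite limit, pi.
l1_vals, l2_vals = [], []
for R in (10.0, 100.0, 1000.0):
    x = np.linspace(1e-6, R, 2_000_000)  # f is even: integrate x > 0 and double
    dx = x[1] - x[0]
    f = np.sin(x) / x
    l1_vals.append(2 * np.sum(np.abs(f)) * dx)  # grows without bound
    l2_vals.append(2 * np.sum(f ** 2) * dx)     # approaches pi
print(l1_vals)
print(l2_vals)
```

    Each tenfold increase of R adds roughly the same amount to the L¹ sum, while the L² sum barely moves, which is the numerical signature of logarithmic divergence versus convergence.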

  167. What I presume you meant was that the series converges at all points except possibly on a null-set.

    What I wanted to say: if a function satisfies the Dirichlet conditions – or Jordan’s test – then it has at most a countable number of discontinuities, i.e., the points of discontinuity form a null-set (with respect to λ). At these points, the series will converge to the mean of the left-hand and right-hand limits of the function, as stated so nicely in the text of Apostol.
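    That convergence-to-the-mean behavior at a jump is easy to observe numerically. The sketch below (an illustrative example of mine, not taken from Apostol) evaluates a large partial sum of the Fourier series of the indicator function of (1, π) on (−π, π) at its jump point x = 1, where the one-sided limits are 0 and 1:

```python
import numpy as np

# Fourier coefficients of f = indicator of (1, pi) on (-pi, pi):
#   a_n = (1/pi) * int_1^pi cos(nx) dx,  b_n = (1/pi) * int_1^pi sin(nx) dx.
N = 5000
n = np.arange(1, N + 1)

a0 = (np.pi - 1) / np.pi
an = -np.sin(n) / (n * np.pi)
bn = (np.cos(n) - np.cos(n * np.pi)) / (n * np.pi)

x0 = 1.0  # the jump point: f(1-) = 0, f(1+) = 1
sN = a0 / 2 + np.sum(an * np.cos(n * x0) + bn * np.sin(n * x0))
print(sN)  # close to the mean of the one-sided limits, 0.5
```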

    It is possible in principle that there exists a Psi(x) that is not absolutely integrable over the whole real line but for which Psi*(x)Psi(x) is integrable.

    yes.

    But I don’t know that there exists a set of physically realizable boundary conditions which would imply such a Psi(x) solution to Schrödinger’s equation!

    “physically realizable boundary conditions”? That I don’t know either – I’m a mathematician, not a physicist.

    However, it appears that condition 3 (absolute integrability of P(x)) will ensure that a normalization integral exists. So that seemed reasonable to me….is this correct?

    Condition 3 in which enumeration? In post #159, you give condition 3 as

    3. Ψ(x) and dΨ(x)/dx must be finite

    I don’t see anything there about absolute integrability.

  168. Condition 3 in which enumeration? In post #159, you give condition 3 as

    3. ?(x) and d?(x)/dx must be finite

    I don’t see anything there about absolute integrability…

    I thought you meant condition 3 from the list of 4 Dirichlet conditions. I misinterpreted your remarks.

    as stated so nicely in the text of Apostol.

    Do you have Apostol’s book? Do you like it?

    “physically realizable boundary conditions”? That I don’t know either – I’m a mathematician, not a physicist.

    I’ll point you to a simple example in the Britney Spears Guide to Finite-Barrier Quantum Wells. See equation (3)…

    One can see the solutions to the Schrödinger equation (when they are defined from -infinity to +infinity). In this case the solutions are functions damped by a decaying exponential, and we have two well-defined boundary conditions imposed by 2 physical boundaries. We could in principle create n boundaries, for n boundary conditions…..

    It seems it would be impossible (in a universe with finite resources) to construct an infinite number of physical boundaries spaced apart in such a way that we get a function Psi(x) that is not absolutely integrable. I believe that if we have a finite set of boundaries, each a finite distance from the others, we’ll eventually end up with functions damped by decaying exponentials on the tails going to +/- infinity. Does that seem correct?
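    Such exponentially damped tails are indeed harmless for normalization: a tail of the form exp(−κ|x|) contributes exactly ∫₀^∞ exp(−2κx) dx = 1/(2κ), a finite amount, to the normalization integral. A quick numerical check (κ and the grid are arbitrary illustrative choices of mine):

```python
import numpy as np

# Tail contribution of an exponentially damped bound-state solution:
# |psi(x)|^2 ~ exp(-2*kappa*x) beyond the last barrier, and its integral
# over (0, inf) is exactly 1/(2*kappa), hence finite.
kappa = 0.7
x = np.linspace(0.0, 50.0, 1_000_000)
dx = x[1] - x[0]
tail = np.exp(-2 * kappa * x)      # |psi|^2 in the classically forbidden region
numeric = np.sum(tail) * dx        # truncating at x = 50 loses only ~exp(-70)
print(numeric, 1 / (2 * kappa))    # both about 0.714
```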

  169. Sal,
    so, we’re talking about the Dirichlet conditions as stated in post #160.

    However, it appears that condition 3 (absolute integrability of Psi(x)) will ensure that a normalization integral exists. So that seemed reasonable to me….is this correct?

    Even for a finite period, condition 3 (“f(x) must be absolutely integrable over a period”) isn’t sufficient for the existence of a normalization integral; you’d need condition 4 (“f(x) must be bounded”), too.
    But, as I stated before, you wouldn’t invoke the Dirichlet conditions in the context of L², as there is a beautiful Fourier transform on this space.
    I never used Apostol’s “Modern Analysis” before. If someone is interested in mathematics, I’d usually point him to books for which more current editions exist.

  170. DiEB,

    Can you tell if R. Santilli is a real mathematician? He claims to have taught at some good schools.

    Thanks.

    Salvador

  171. Yes, that is the one. What do you think? Legit or crank? Have you heard of him in the course of your work?

    See:
    Hadronic Mathematics I

    Hadronic Mathematics II

    Many thanks. I figured whatever the outcome of your evaluation you might find his ideas amusing. I’m not familiar with Lie-admissible Algebras at all…

    I hear the term Lie-algebra all the time in connection with advanced physics, but I’ve yet to run into it in formal study…

    Thank you again, and thanks for responding to my questions here at UD.

  172. Legit or crank?

    As so often, the case is not so clear cut. He has obviously done some legitimate work earlier, while his International Committee for Scientific Ethics and Accountability shouts “CRANK”.
    He isn’t referred to in the books with which I was introduced to Lie-theory – and that seems to be part of his problem :-)

  173. scordova wrote:

    “That is a nit-pick that is also an erroneous nit-pick.

    See the discussion of Black-Scholes from the perspective of Brownian motion and Statistical Mechanics Here.

    That discussion includes thermodynamics and heat flow.

    Equation 10 is the famous Black-Scholes equation. Its solutions are widely adopted for financial analysis by traders, fund managers, economists, and so on. The Equation can be further transformed into standard form of the heat diffusion equation from Physics (Thermodynamics) ”

    For the record, placing information in these comments which helps scientifically untrained folks with scientific distinctions, such as the one I wrote, is not nitpicking; in my view it raises the profile of these discussions by demonstrating that there are professional folks in the sciences posting here. Scordova does not need to take this as a put-down. After going to the site of the link he provides, I see that an economist equates heat flow with thermodynamics. Not helpful. My Holman thermo text from 1970 makes no mention of Fourier’s law, and the decent Wikipedia article on thermodynamics makes no mention of heat flow: http://en.wikipedia.org/wiki/T.....d_branches

  174. There are two very serious errors in this post. 1. Equating “natural variation” with mutation. 2. Ignoring time. No matter how low the probability of an outcome, given enough trials the outcome is all but sure to occur.

    Related to the willingness to make such obvious errors, the core problem with the ID paradigm is that it has no explanatory power. It is fundamentally an argument from ignorance. “I thought really hard about this and I claim natural laws cannot account for it.” The history of science is one long laugh at such claims.

    Note that this has nothing to do with theism. One could claim for example that the very existence of natural laws, or that the very persistence of the universe, reflects the continuous involvement of God in the universe.

Leave a Reply