Poker Entropy and the Theory of Compensation

The American Journal of Physics article by Daniel Styer, which was offered as a “concise refutation” of my Applied Mathematics Letters article by the blogger whose letter apparently triggered the withdrawal of that article, is possibly the dumbest work ever published by a major physics journal. To demonstrate how absurd the logic in this article is, I wrote a little satire (here) which extends Styer’s attempt to quantitatively demonstrate that the decrease in the entropy of the universe due to biological evolution is easily “compensated” by the increase in the entropy of the cosmic microwave background, to the game of poker.

I submitted this satire to the American Journal of Physics this morning, just to see what reason they would give for not wanting to correct the errors in the Styer piece. As cynical as I have become, I still was not prepared for this answer, which I received a few hours later:

I do not see any educational value in your manuscript. Because it is well established in the physics community that there is no conflict between the second law of thermodynamics and evolution, we can consider manuscripts which help students understand why. However, papers that promote views that are contrary to accepted understanding in physics should be sent to research journals not to AJP.

In other words, we will print anything that supports the accepted view, no matter how stupid, and won’t consider anything that challenges it, no matter how logical.

Any suggestions as to which “research journals” might consider papers which “promote views that are contrary to accepted understanding?”

20 Responses to Poker Entropy and the Theory of Compensation

  1. Dr. Sewell, your poker satire reminds me of this article:

    Evolutionismist wins millions in casino! (Also quickly escorted to city limits)
    http://satirizingscientism.blo.....asino.html

  2. Dr. Sewell,
    I had an important question regarding your original paper and this letter. I recognize that in your original paper, your key claim was that there are different types of entropy (say, thermal entropy, carbon entropy, poker entropy, etc.) which cannot necessarily be interconverted. Because of this claim, you argue that the extreme increase in the complexity and order of life on earth cannot necessarily be compensated by a decrease in the order (or increase in the entropy) of the cosmic radiation background.

    I would like to get your thoughts on a simple counterexample. Imagine a coffee cup floating in free space. The coffee would spontaneously cool and even freeze, thereby significantly increasing its order and decreasing its entropy. How is this possible given the 2nd law of thermodynamics? The textbook answer is that infrared radiation is emitted by the coffee, thereby increasing the entropy of the cosmic radiation background. So the total entropy of the universe increases even though the local entropy of the coffee cup decreases. In other words, the increase in entropy of the cosmic radiation background compensates for the decrease in the entropy of the coffee. So this example seems to show that thermal entropy can be “converted” into radiation entropy.

    Three questions:
    1. Do you agree that in this example, the loss of thermal entropy in the coffee is indeed compensated by the increase in entropy of the cosmic radiation?

    2. Do you agree that this is an example of the “conversion” of “thermal entropy” into “radiation entropy”?

    3. If such conversion is allowed, then how do you know that other forms of entropy (say carbon-entropy and radiation entropy) are not interconvertible?

    Thanks,
    Neil
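The coffee-cup example in the comment above can be made concrete with a short numerical sketch. All figures here are illustrative assumptions (0.3 kg of water-like coffee cooling radiatively from 330 K to 270 K into the 2.7 K microwave background), not numbers from the comment, and phase-change (freezing) entropy is ignored for simplicity:

```python
import math

# Illustrative assumptions (not from the comment): a cup of coffee
# radiating in free space.
m = 0.3                        # kg of coffee (assumed)
c = 4186.0                     # J/(kg*K), specific heat of liquid water
T_hot, T_cold = 330.0, 270.0   # K, initial and final temperatures (assumed)
T_bg = 2.7                     # K, cosmic microwave background temperature

# Entropy change of the coffee as it cools: dS = integral of m*c*dT/T
dS_coffee = m * c * math.log(T_cold / T_hot)   # negative: coffee becomes more ordered

# Heat radiated away while cooling
Q = m * c * (T_hot - T_cold)

# Lower bound on the entropy gained by the radiation field, if the
# emitted photons eventually thermalize with the 2.7 K background.
dS_radiation = Q / T_bg

dS_total = dS_coffee + dS_radiation
print(f"coffee:    {dS_coffee:+10.1f} J/K")
print(f"radiation: {dS_radiation:+10.1f} J/K")
print(f"total:     {dS_total:+10.1f} J/K")
```

Under these assumed numbers the coffee loses roughly 250 J/K of entropy while the radiation field gains at least ~28,000 J/K, so the total entropy of the universe increases; that is the sense in which the textbook answer says the radiation “compensates” for the local decrease.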

  3. In a multiverse, we should not be at all surprised if 7 appears 150 times in a row.

  4. However, papers that promote views that are contrary to accepted understanding in physics should be sent to research journals not to AJP.

    There’s a word for this: Group-think.

  5. Hi,
    My name is Dr. Neil Shenvi, and I posted the previous comment #2 at 10:14 am, which is still awaiting moderation. Would you mind removing that comment (and this one!)? I have emailed Dr. Sewell privately asking him my questions, and I do not want to publicize these issues until I have consulted him personally. I believe his paper is fundamentally flawed, but as a fellow evangelical Christian, I don’t think it necessary to make this issue public until I have talked to him privately. Hopefully, he will either be able to convince me of my error or he will recognize that he himself is wrong and will make a statement to that effect.
    -Neil

  6. ‘In a multiverse, we should not be at all surprised if 7 appears 150 times in a row.’

    Nor should we be surprised by Boltzmann’s Brain:

    BRUCE GORDON: Hawking’s irrational arguments – October 2010
    Excerpt: For instance, we find multiverse cosmologists debating the “Boltzmann Brain” problem: In the most “reasonable” models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science.
    http://www.washingtontimes.com.....arguments/

  7. Dr. Sewell, since the following has to do, somewhat, with entropy, I think you may find it interesting. First to note the ‘irreconcilable problem’ that mathematicians have in unifying General Relativity and Quantum Mechanics;

    Quantum Mechanics and Relativity – The Collapse Of Physics? – video
    http://www.metacafe.com/watch/6597379/

    ,,,Though the physicists/mathematicians in the preceding video feel they are at a dead end in reconciling General Relativity with Quantum Mechanics, I would like to put forth the case that Jesus Christ, Himself, as strange as it may sound, is the most parsimonious solution to the number one problem in science today: the unification of Quantum Mechanics (QM) and General Relativity (GR) into a ‘theory of everything’.
    As noted in the video, the unification of QM and GR into a ‘theory of everything’ has been a notoriously difficult problem for physicists and mathematicians to solve. In fact, Einstein himself spent many of the last years of his life vainly searching for a solution to the QM-GR split. Moreover, the subsequent years of persistent search, by many of the world’s leading physicists and mathematicians, have not yielded any plausible solution to the problem that does not involve highly speculative, ‘verification-less’ appeals to string-theoretic multiverses, M-theories, quantum gravity, etc. The problem shows no sign of ever abating,,,

    Quantum Mechanics Not In Jeopardy: Physicists Confirm Decades-Old Key Principle Experimentally – July 2010
    Excerpt: the research group led by Prof. Gregor Weihs from the University of Innsbruck and the University of Waterloo has confirmed the accuracy of Born’s law in a triple-slit experiment (as opposed to the double slit experiment). “The existence of third-order interference terms would have tremendous theoretical repercussions – it would shake quantum mechanics to the core,” says Weihs. The impetus for this experiment was the suggestion made by physicists to generalize either quantum mechanics or gravitation – the two pillars of modern physics – to achieve unification, thereby arriving at a one all-encompassing theory. “Our experiment thwarts these efforts once again,” explains Gregor Weihs. (of note: Born’s Law is an axiom that dictates that quantum interference can only occur between pairs of probabilities, not triplet or higher order probabilities. If they would have detected higher order interference patterns this would have potentially allowed a reformulation of quantum mechanics that is compatible with, or even incorporates, gravitation.)
    http://www.sciencedaily.com/re.....142640.htm

    Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law:
    Peter Woit, a PhD. in theoretical physics and a lecturer in mathematics at Columbia, points out—again and again—that string theory, despite its two decades of dominance, is just a hunch aspiring to be a theory. It hasn’t predicted anything, as theories are required to do, and its practitioners have become so desperate, says Woit, that they’re willing to redefine what doing science means in order to justify their labors.
    http://www.amazon.com/Not-Even.....0465092756

    ‘What is referred to as M-theory isn’t even a theory. It’s a collection of ideas, hopes, aspirations. It’s not even a theory and I think the book is a bit misleading in that respect. It gives you the impression that here is this new theory which is going to explain everything. It is nothing of the sort. It is not even a theory and certainly has no observational (evidence),,, I think the book suffers rather more strongly than many (other books). It’s not a uncommon thing in popular descriptions of science to latch onto some idea, particularly things to do with string theory, which have absolutely no support from observations.,,, They are very far from any kind of observational (testability). Yes, they (the ideas of M-theory) are hardly science.” – Roger Penrose – former close colleague of Stephen Hawking – in critique of Hawking’s new book ‘The Grand Design’ the exact quote in the following video clip:

    Roger Penrose Debunks Stephen Hawking’s New Book ‘The Grand Design’ – video
    http://www.metacafe.com/watch/5278793/

    ,,,The main problem, mathematically, for the split, between GR and QM, seems to arise from the inability of either theory to successfully deal with the ‘zero/infinity’ conflict that arises in different places of each framework;,,,

    THE MYSTERIOUS ZERO/INFINITY
    Excerpt: What the two theories have in common – and what they clash over – is zero.”,, “The infinite zero of a black hole — mass crammed into zero space, curving space infinitely — punches a hole in the smooth rubber sheet. The equations of general relativity cannot deal with the sharpness of zero. In a black hole, space and time are meaningless.”,, “Quantum mechanics has a similar problem, a problem related to the zero-point energy. The laws of quantum mechanics treat particles such as the electron as points; that is, they take up no space at all. The electron is a zero-dimensional object,,, According to the rules of quantum mechanics, the zero-dimensional electron has infinite mass and infinite charge.
    http://www.fmbr.org/editoral/e....._mar02.htm

    ,,,One of the things I find interesting about the preceding zero/infinity mystery, of QM and GR, is that the ‘infinity’ of the 4-Dimensional space-time of General Relativity is related to black holes in the universe. The reason this is interesting for me is because black holes are now verified to be, by far, the largest contributors of ‘entropic decay’ in the universe;,,,,

    Entropy of the Universe – Hugh Ross – May 2010
    Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe’s entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated.
    http://www.reasons.org/entropy-universe

    Moreover, Black Hole singularities are completely opposite the singularity of the Big Bang in terms of the ordered physics of entropic thermodynamics. In other words, Black Holes are singularities of destruction and disorder rather than singularities of creation and order.

    Roger Penrose – How Special Was The Big Bang?
    “But why was the big bang so precisely organized, whereas the big crunch (or the singularities in black holes) would be expected to be totally chaotic? It would appear that this question can be phrased in terms of the behaviour of the WEYL part of the space-time curvature at space-time singularities. What we appear to find is that there is a constraint WEYL = 0 (or something very like this) at initial space-time singularities-but not at final singularities-and this seems to be what confines the Creator’s choice to this very tiny region of phase space.”

    ,,,Moreover, besides entropy being the primary reason why the universe, without ‘supernatural intervention’, is steadfastly heading for ‘entropic heat death’,,,

    The Future of the Universe
    Excerpt: After all the black holes have evaporated, (and after all the ordinary matter made of protons has disintegrated, if protons are unstable), the universe will be nearly empty. Photons, neutrinos, electrons and positrons will fly from place to place, hardly ever encountering each other. It will be cold, and dark, and there is no known process which will ever change things. — Not a happy ending.
    http://spiff.rit.edu/classes/p.....uture.html

    ,,,entropy is also the primary reason why we will all grow old and eventually die,,,

    80 years in 40 seconds – video
    http://www.youtube.com/watch?v=i9wToWdXaQg

    ,,,Thus ‘Death’, itself, of the universe and of us, seems to be semi-directly linked to the fact that this ‘inaccessible infinity of destruction’ is found in black holes. At least it seems readily apparent that black holes are forever an ‘inaccessible infinity of destruction’ as far as the endeavors of mortal man are concerned. Yet Quantum Mechanics offers its own unique infinity that can, in principle, counterbalance the ‘destructive infinity’ of Black holes (as they tried to accomplish in the video). Yet the problem that QM has in overcoming the entropic decay of the universe, besides the problem of a ‘repeating infinity’ mentioned by Michio Kaku at about the 7:00 minute mark of the video, is, as mentioned previously, this,,,

    “Quantum mechanics has a similar problem, a problem related to the zero-point energy. The laws of quantum mechanics treat particles such as the electron as points; that is, they take up no space at all. The electron is a zero-dimensional object,,, According to the rules of quantum mechanics, the zero-dimensional electron has infinite mass and infinite charge.”

    ,,,thus it seems readily apparent that QM requires a ‘space’ within the 4-D space-time of General Relativity, separate from the zero-point infinity of Black holes, in which to ‘pour its infinity’. That is, QM needs this space separate from the Black Holes if the destructive, ‘Death Causing’, entropic infinities of Black Holes were ever to be successfully overcome by Quantum Mechanics, and if physics were ever to be ‘unified’ into a ‘theory of everything’. And indeed, subtle, yet strong, hints that this ‘unification’ is possible are now available,,,,

    Scientific Evidence That Mind Effects Matter – Random Number Generators – video
    http://www.metacafe.com/watch/4198007

    ,,,I once asked an evolutionist, after showing him the preceding experiment, “Since you ultimately believe that the ‘god of random chance’ produced everything we see around us, what in the world is my mind doing pushing your god around?”,,,

  8. further note: ,,,The following is particularly interesting,,,

    “Most people think that the matter is empty, but for internal self consistency of quantum mechanics and relativity theory, there is required to be the equivalent of 10 to the 94 grams of mass energy, each gram being E=MC2 kind of energy. Now, that’s a huge number, but what does it mean practically? Practically, if I can assume that the universe is flat, and more and more astronomical data is showing that it’s pretty darn flat, if I can assume that, then if I take the volume or take the vacuum within a single hydrogen atom, that’s about 10 to the minus 23 cubic centimeters. If I take that amount of vacuum and I take the latent energy in that, there is a trillion times more energy there than in all of the mass of all of the stars and all of the planets out to 20 billion light-years. That’s big, that’s big. And if consciousness allows you to control even a small fraction of that, creating a big bang is no problem.” – Dr. William Tiller, former professor at Stanford University in the Department of Materials Science & Engineering
    http://www.beyondtheordinary.n.....ller.shtml

    ,,,The following offers a ‘hint’ as well,,,, though Dr. Dembski, in the following quote, does not directly address the zero/infinity conflict of QM and GR, he does offer interesting insight that, ‘serendipitously’, parallels the problem we find for reconciling QM and GR;

    The End Of Christianity – Finding a Good God in an Evil World – Pg.31
    William Dembski PhD. Mathematics
    Excerpt: “In mathematics there are two ways to go to infinity. One is to grow large without measure. The other is to form a fraction in which the denominator goes to zero. The Cross is a path of humility in which the infinite God becomes finite and then contracts to zero, only to resurrect and thereby unite a finite humanity within a newfound infinity.”
    http://www.designinference.com.....of_xty.pdf

    ,,,Moreover, unlike Quantum Gravity, String Theory and M-Theory, there actually is physical evidence that lends strong support to the position that the ‘Zero/Infinity conflict’, we find between General Relativity and Quantum Mechanics, was successfully dealt with by Jesus Christ:,,,

    General Relativity, Quantum Mechanics, Entropy and The Shroud Of Turin – video
    http://www.metacafe.com/w/5070355

    Turin Shroud Enters 3D Age – Pictures, Articles and Videos
    https://docs.google.com/document/pub?id=1gDY4CJkoFedewMG94gdUk1Z1jexestdy5fh87RwWAfg

    A Quantum Hologram of Christ’s Resurrection? by Chuck Missler
    Excerpt: “You can read the science of the Shroud, such as total lack of gravity, lack of entropy (without gravitational collapse), no time, no space—it conforms to no known law of physics.” The phenomenon of the image brings us to a true event horizon, a moment when all of the laws of physics change drastically. Dame Piczek created a one-fourth size sculpture of the man in the Shroud. When viewed from the side, it appears as if the man is suspended in mid air (see graphic, below), indicating that the image defies previously accepted science.
    http://www.khouse.org/articles/2008/847

    ,,,Thus I firmly believe that the evidence we have in hand clearly indicates that God’s crowning achievement for this universe was not when He created this universe, but that God’s crowning achievement for this universe was when He Himself inhabited the human body He had purposely created the whole universe for, to sanctify human beings unto Himself through the death and resurrection of his “Son” Jesus Christ. This is truly something which should fill anyone who reads this with awe. The wonder that science and ‘religion’ would intersect so dramatically is of no small consequence.

    ================

    Hebrews 2:14-15
    “Since we, God’s children, are human beings – made of flesh and blood – He became flesh and blood too by being born in human form; for only as a human being could He die and in dying break the power of the devil who had the power of death. Only in that way could He deliver those who through fear of death have been living all their lives as slaves to constant dread.”

    Matthew 28:18
    And Jesus came up and spoke to them, saying, “All authority has been given to Me in heaven and upon earth.”

    further note:

    If scientists want to find the source for the supernatural light which made the “3D – photographic negative” image on the Shroud of Turin, I suggest they look to the thousands of documented Near-Death Experiences (NDE’s) in Judeo-Christian cultures. It is in their testimonies that you will find mention of an indescribably bright ‘Light’ or ‘Being of Light’ who is always described as being of a much brighter intensity of light than the people had ever seen before. All people who have been in the presence of ‘The Being of Light’ while having a deep NDE have no doubt whatsoever that the ‘The Being of Light’ they were in the presence of is none other than ‘The Lord God Almighty’ of heaven and earth.

    In The Presence Of Almighty God – The NDE of Mickey Robinson – video
    http://www.metacafe.com/watch/4045544

    The Scientific Evidence for Near Death Experiences – Dr Jeffery Long – Melvin Morse M.D. – video
    http://www.metacafe.com/watch/4454627

    The Extremely Monitored NDE of Pam Reynolds – video
    http://www.metacafe.com/watch/4045560

    There are a few more detailed notes, in the first part of the following site, on the spiritual/material split between GR and QM and Jesus Christ’s reconciliation of the two frameworks:

    Intelligent Design – The Anthropic Hypothesis
    http://lettherebelight-77.blog.....is_19.html

  9. Dr. Sewell:

    I’m not familiar with Styer’s paper, but is he arguing that the increase in entropy elsewhere (background radiation) makes it more probable that a decrease in entropy (progressive evolution) could occur elsewhere? Or is he making a more limited point: that the overall entropy of the universe is increasing, despite evolution taking place in a small corner of the cosmos?

  10. Mung: “In a multiverse, we should not be at all surprised if 7 appears 150 times in a row.”

    Mung, I’m often unsure whether you are just toying with us, but I’ll take the bait.

    By “multiverse” I presume you don’t mean that there is an actual something that is a multiverse, but rather that there are multiple universes, each with its own set of properties. In the latter case, it makes no difference whether there are other universes or not: in our universe, based on the laws we have and what we know, we should indeed be exceedingly surprised to see a 7 rolled 150 times in a row — so much so that we would be justified in ruling out chance as the explanation.

  11. Mung,

    In a multiverse, we should not be at all surprised if 7 appears 150 times in a row.

    In a multiverse, can separate universes interact?

  12. Granville,

    I can easily explain the decrease in entropy produced by playing Rachmaninoff’s Rhapsody on a Theme of Paganini by pointing out that there is an equivalent, balancing increase in entropy — the heat produced by the vibration of the piano strings.

    Any fool with any sense of basic logic can therefore figure out that in an open system, with heat energy available to do useful work, it is inevitable that someone will compose and perform Rachmaninoff’s Rhapsody.

    How can this not be obvious?

  13. Professor Sewell,

    To summarize my response to your Poker paper: most of your ridicule of Styer and Bunn’s analysis seems to stem from a misunderstanding of their use of the connection between thermodynamics and probability (while confusing, their methods seem to be valid as far as I can see). You also object to their use of entropy compensation, and while I’ll agree that they both make some mistakes, I think Bunn’s analysis is pretty close to correct (I’ll post my own analysis of Earth’s net entropy flux for comparison).

    About improbability:
    Styer and Bunn are both using the probabilities that various events would have under the Boltzmann distribution as an easy way to calculate their associated entropy deficits. The Boltzmann distribution describes probabilities for a system at thermodynamic equilibrium (not just ideal gases; it’s fairly widely applicable); systems that are not at equilibrium do not obey the Boltzmann distribution. The Earth is not at equilibrium, so the Boltzmann probabilities do not correspond to the actual probabilities of events on Earth, and mistaking them for real probabilities will cause all manner of confusion.

    To illustrate the difference between Boltzmann vs. real probabilities, consider the probability that a rock will be significantly warmer on one side than the other. The probability of this occurring at thermodynamic equilibrium (Boltzmann probability) is vanishingly small. But consider a rock lying in a desert somewhere: in the morning, the sun shines on the top (and east) side, heating that more than the bottom (and west) side. This rock has a very high (actual) probability of being in an extremely low-(Boltzmann-)probability state.

    So, if they don’t correspond to reality, what are Styer and Bunn doing with them? They’re using them as a shortcut to calculate how much less entropy the system has than it would at equilibrium. To oversimplify quite a bit: if the system is in an extremely Boltzmann-improbable state, that means it’s not at equilibrium; the improbability gives a measure of how far it is from equilibrium, and hence how much entropy is missing.

    It looks nonsensical if you’re not familiar with the technique they’re using (and if you try to do it with non-Boltzmann probabilities or treat Boltzmann probabilities as descriptions of reality, as you do in your parody, it is nonsensical). But (at least as far as I can see) it’s actually a good way to do the analysis. It allows them to compute the entropy deficit due to living organisms without having to deal with a lot of messy (& basically irrelevant) things like the entropy deficit due to nonuniform temperature (poles vs. equator), salt distribution (oceans & salt lakes vs. fresh water), etc etc etc etc…

    (Hmm, actually I guess I could make a case the other way: that they shouldn’t have ignored the entropy deficits due to temperature, salt, etc because the Earth’s entropy flux has to be sufficient to explain all of them. But I’m pretty sure that wouldn’t change the result.)
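The shortcut described above can be sketched numerically via Boltzmann’s relation S = k ln W: a state whose probability under the equilibrium distribution is p sits roughly ΔS = −k_B ln p below the equilibrium entropy. A minimal sketch (the probability exponent below is a placeholder chosen purely for illustration, not a figure from Styer’s or Bunn’s papers):

```python
import math

K_B = 1.380649e-23  # J/K, Boltzmann constant (exact SI value)

def entropy_deficit_from_log10p(log10_p: float) -> float:
    """Entropy deficit (J/K) of a state whose probability under the
    equilibrium (Boltzmann) distribution is p = 10**log10_p, via
    delta_S = -k_B * ln(p).  Taking log10(p) as the argument avoids
    floating-point underflow for astronomically small probabilities."""
    return -K_B * log10_p * math.log(10.0)

# Placeholder exponent, purely for illustration: a state with
# Boltzmann probability 10^(-1,000,000).
dS = entropy_deficit_from_log10p(-1.0e6)
print(f"entropy deficit: {dS:.3e} J/K")
```

Even this absurd improbability corresponds to an entropy deficit of only about 3e-17 J/K, because k_B is so small; that is why the seemingly enormous improbabilities in these analyses translate into quite modest entropy deficits.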

    The principle of compensation:
    Compensation happens. Local entropy decreases happen all the time, always coupled with compensating entropy increases elsewhere. Decreases in one form of entropy happen all the time, similarly coupled with increases in another form. This is very standard, basic, thermodynamics.

    That said, you’re not entirely wrong: a local entropy decrease will never be compensated by a causally unrelated entropy increase elsewhere. There are a number of ways of distinguishing legitimate instances of compensation. The most common is to consider what would happen if the system’s surroundings were replaced with hypothetical, idealized surroundings, that interact with the system just like its real surroundings, but don’t have any unnecessary entropy increases. If there’s an entropy decrease in this (hypothetical) case, you can say that you have a second-law violation even if the actual surroundings have all kinds of stray entropy increases going on.

    My favorite way of handling this is to look at the entropy flux into/out of the system (essentially, the method you mention in the video at 12:29 to 13:05). If you do everything right, it comes out the same as the first method, but with a bit less hassle.

    So I would argue that to figure out how much entropy decrease the second law allows on Earth, we should just calculate Earth’s entropy efflux and subtract the influx. Let’s see how everyone did.

    I can’t really make sense of Styer’s analysis here. He calculates the entropy decrease of the sun due to the portion of its light that hits Earth, the Earth’s entropy throughput(?), and the entropy increase of the cosmic microwave background due to Earth’s reemitted sunlight, none of which are directly relevant (although the one he concentrates on — what he calls the Earth’s throughput — is close to the relevant value; see below).

    Bunn’s analysis is much better. He sets up the relevant calculation (eq. 2) as the entropy of the sun’s light when it’s thermalized on Earth (which he assumes is the same as the entropy it carries away when it leaves), minus the entropy loss of the sun’s radiation field to Earth (essentially the entropy flux into Earth), and then discards the latter as too small to matter (quite reasonable IMHO). My only real objection here is that he assumes the entropy of thermal radiation (both from sun -> Earth and Earth -> deep space) is the same as the heat it “came from”:

    In this system no entropy is produced by emission of radiation from the Sun, because this process is a flow of energy from the Sun to its radiation field at the same temperature. The same applies to radiation emitted by the Earth. Entropy production occurs only when radiation from the Sun is absorbed on the Earth, because this absorption represents energy flow between parts of the system at different temperatures.

    This is incorrect, as thermal emission (without matching incoming radiation) is thermodynamically irreversible, and hence creates entropy, so the thermal radiation carries more entropy (both to and away from Earth) than he assumes. He also ignores the fact that some sunlight reflects off Earth without being thermalized to Earth’s temperature.

    I should also point out that Bunn explicitly does not make the mistake you’re so worried about, of assuming entropy increases somewhere else necessarily allow entropy decreases:

    In this estimate we did not include any entropy increase due to thermalization of the radiant energy emitted by the Earth. If we assume that this radiation eventually thermalizes with the cosmic background (CMB) radiation in deep space, then an additional, much larger entropy increase results: (dS/dt)_CMB = P/T_CMB = 4 × 10^16 (J/K)/s. We may not include this entropy production in accounting for evolution, though. One reason is that this thermalization probably never occurs: the mean free path of a photon in intergalactic space is larger than the observable Universe and is probably infinite.[5] In any case, even if thermalization does occur, it happens far in the future and at great distances from Earth and so is not available to drive evolution on Earth. For this reason we may ignore the existence of distant thermalizing matter in defining the system to which we apply the second law. The argument in Sec. IV, which concludes that inequality (1) is satisfied, would only be strengthened if this extra entropy were included.[6]

    For completeness, I’ll add my own analysis of Earth’s net entropy flux. This is from a usenet posting I wrote almost exactly 10 years ago (the date on the file is June 24, 2001), with a slightly incorrect albedo (I couldn’t find a good source, so I was conservative and took the highest of the ones I did find):

    Really quick oversimplified summary: heat flows carry entropy with them. If a quantity Q of heat flows at (absolute) temperature T, it carries entropy S=Q/T. Heat flows from the Sun to the Earth at a temperature of 6,000 Kelvin; about the same amount of heat flows from the Earth to deep space at around 290 Kelvin. Do the math.

    …Well, actually, don’t do that math, because what I just said isn’t entirely accurate. The “heat flows” involved aren’t happening under sufficiently near-equilibrium conditions to have a well-defined temperature in the right sense for the equation I gave to apply. The radiation from the Sun is pretty close to a 6,000K blackbody spectrum, and blackbody radiation carries entropy S=4E/3T, so you can get a good idea of the entropy carried by sunlight from that. The entropy carried by radiation leaving Earth is much harder to analyse: there’s some reflected sunlight (that no longer matches a blackbody spectrum), and a lot of thermal emission following a wide assortment of non-blackbody spectra. But I think I can get a pretty safe lower bound on it…

    So, let’s take a stab at doing the (right) math. The solar constant at Earth’s orbit is 135.30 mW/cm^2 = 1353 W/m^2 (all data is from the _CRC Handbook of Chemistry and Physics_, 57th edition; this value is from page F-200). Earth’s cross section is 1.27e14 m^2 (Pi*R^2, where R = 6371 km; page F-175). Earth’s total insolation is then solar constant * cross section = 1.73e17 Watts. At 6000K, using S=4E/3T, that comes to 3.83e13 W/K (or if you prefer, 3.83e13 J/K per second) of entropy received by Earth from the Sun.

    As for the outgoing entropy… since I’m going for a lower bound, I’ll ignore the contribution from reflected sunlight, and just count the entropy of thermal radiation from Earth. Since Earth isn’t a decent blackbody, I can’t even calculate that properly, but I claim that the entropy must be at least E/300K (based on the idea that most of the radiation is produced spontaneously at temperatures below 300 Kelvin — if that’s not enough information for you to figure out my reasoning, I’ll try to explain later). Earth’s albedo is around .36, meaning around 36% of the sunlight reflects off and 64% (1.1e17 W) is absorbed; for present purposes I’ll assume the same amount is emitted (although I understand it’s actually a little higher). Divide that by 300K and we get at least 3.7e14 W/K (or 3.7e14 J/K per second) leaving Earth. That’s almost 10 times the amount we receive from the Sun, and the actual figure is probably noticeably higher than what I calculated here.

    If you want the net entropy flux for Earth, you should also count matter flows (meteorites and cosmic rays incoming, helium outgoing, neutrinos in then right back out, etc), but I suspect those are negligible compared to the flux due to light. If you’ll go along with me on that, we can treat 3.7e14 W/K – 3.83e13 W/K = 3.3e14 W/K as a plausible lower bound on Earth’s net entropy efflux.

    Plugging that into the second law tells us that Earth’s entropy could be decreasing by up to 3.3e14 J/K per second (<infomercial>or more! </infomercial>). How fast (or even if) Earth’s entropy is actually decreasing depends on how much entropy is produced by irreversible processes on (and in) Earth. If it’s produced faster than it leaves, it’ll build up and Earth’s entropy will increase. If production lags behind exports, Earth’s entropy will decrease. The second law won’t tell you which of these is happening; all it’ll tell you is that while entropy can be produced, it can never be destroyed.
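    The arithmetic in the preceding paragraphs can be reproduced in a few lines (a sketch using only the numbers quoted above — CRC solar constant, Earth radius, and the 0.36 albedo):

```python
import math

# Back-of-envelope version of the entropy-flux estimate above.
solar_const = 1353.0          # W/m^2 at Earth's orbit (CRC 57th ed.)
R = 6.371e6                   # m, Earth's radius
albedo = 0.36                 # fraction of sunlight reflected

cross_section = math.pi * R**2             # m^2
insolation = solar_const * cross_section   # W, total power intercepted

S_in = 4 * insolation / (3 * 6000)         # W/K, blackbody S = 4E/3T at 6000 K
absorbed = (1 - albedo) * insolation       # W, power absorbed (and re-emitted)
S_out = absorbed / 300                     # W/K, lower bound using E/300K
net_efflux = S_out - S_in                  # W/K, lower bound on net export

print(f"in {S_in:.2e}  out {S_out:.2e}  net {net_efflux:.2e} W/K")
```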

    Despite my objections to Styer and Bunn’s calculations, we all get pretty much the same answer for how fast Earth’s entropy could decrease:
    Styer: 4.2e14 J/K per second
    Bunn: 4e14 J/K per second
    Davisson: more than 3.7e14 J/K per second
    I suspect the actual figure is higher than any of these. If I get a chance, I’ll see if I can find someone who’s done a more detailed analysis.

    (I’ll also try to get back to you about “Poker entropy” — the short summary is that even though you wrote it as parody, it’s actually fairly close to correct. Also, there’s a good reason to use the Boltzmann constant instead of something else.)
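    In the meantime, for anyone curious what a poker probability even looks like in thermal units: the standard move is Boltzmann’s S = k_B ln W, with W the number of equally likely arrangements. Here’s a quick sketch of that generic textbook conversion (this is not Styer’s or Dr. Sewell’s specific calculation, just the formula applied to a deck of cards):

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant

# S = k_B * ln(W); we work with ln(W) directly since 52! overflows a float.
ln_deck_arrangements = math.lgamma(53)        # lgamma(53) = ln(52!)
S_deck = k_B * ln_deck_arrangements
print(f"shuffled deck: S ~ {S_deck:.2e} J/K")  # ~2e-21 J/K, utterly negligible

# Same idea for a single 5-card hand out of C(52,5) possibilities
hands = math.comb(52, 5)                      # 2,598,960 hands
S_hand = k_B * math.log(hands)
print(f"random 5-card hand: S ~ {S_hand:.2e} J/K")
```

    The point of the exercise is the magnitude: even the full deck’s “entropy” is twenty-one orders of magnitude below a joule per kelvin.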

  14. Gentlemen:

    Basic problem:

    || A, Th –> d’Q –> B, Tc ||

    Clausius worked out the entropy relationships for an isolated system based on heat flow from one zone to another. The overall entropy will tend to rise because Tc is less than Th — linked to the availability of ways to arrange energy and mass at micro levels — so when we add up d’Q/T [which is the value of dS, the entropy change], A’s loss will be more than compensated for by B’s rise.
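    Numerically, the relationship in the diagram is just two d’Q/T terms (a minimal sketch; Q, Th and Tc below are made-up illustrative values, not from any particular system):

```python
# Clausius bookkeeping for heat Q flowing from hot body A to cold body B.
Q = 100.0    # J transferred (illustrative)
Th = 400.0   # K, hot body A
Tc = 300.0   # K, cold body B

dS_A = -Q / Th          # A loses entropy
dS_B = +Q / Tc          # B gains more, since Tc < Th
dS_total = dS_A + dS_B  # always > 0 whenever Th > Tc

print(f"dS_A = {dS_A:.4f}, dS_B = {dS_B:.4f}, total = {dS_total:.4f} J/K")
```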

    In the case of a coffee cup or a star or planet radiating into space, of course, the radiation field is taking up the heat outflow.

    But this is all a misdirection. Look more closely at B: as it takes in energy, its entropy . . . RISES.

    Raw energy injection tends strongly to increase a disorder metric.

    If B is now an energy conversion device, B’, it will in turn need an exhaust to a heat sink, which preserves the basic relationship.

    Now, some energy conversion devices are spontaneously created, e.g. a hurricane, but what happens when we have those that are replete with functionally specific, complex information, FSCI? (In short, we are now back to the issue of the origin of complex functional organisation, not mere order; FSCO. Given a highly contingent set of possibilities, islands of function are so utterly isolated in the space of possibilities, that the quantum state, Planck time resources of our solar system or of the observed cosmos are grossly inadequate. And, the energy conversion devices in the living cell, just like those in our autos and power plants or homes, are rich in FSCO/I. The only empirically warranted, observed source for FSCO/I is design. Indeed, it is arguably a reliable sign of design.)

    The mere inflow and outflow of energy does not adequately explain the origin of FSCO/I.

    And that is what was to be explained, as Dr Sewell pointed out over and over again. Namely, an extremely unlikely outcome for an isolated system does not suddenly become likely on opening up a system to energy or matter flows, unless something else is present that makes the outcomes very much more likely. Chloroplasts make energy and food production from sunlight very likely indeed, but that is exactly an instance of a system that is FSCO/I rich. A windmill explains grinding of corn in a mill, but the origin of the mill is not explained on the mere presence of energy and mass flows. So also, with the energy using information processing of a computer, or the movements of a pick-place robot — which bears more than a passing resemblance to the action of tRNA in the ribosome.

    I suggest a glance here as a bit of backdrop [onlookers may also find Appendix 1 of my always-linked, through my handle, relevant].

    GEM of TKI

  15.
    Granville Sewell

    Neil,
    I never said different types of entropy can never be interconverted, what I said is that “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering (or leaving) that makes it not extremely improbable.” Do you have a counterexample for this tautology?

    Gordon,
    I was going to ask you if you thought my application of the Boltzmann formula to poker hands was legitimate, and if not, why Styer’s application is more legitimate than mine, but was astonished to find at the end that you do think it is meaningful to convert probabilities of poker hands to units of thermal entropy (Joules/degree), so I don’t know what else to say to you.

    It is becoming obvious that the strategy of my critics is to try to make things more complicated and confusing than they are. My whole point was that you cannot say, as Styer and Bunn do, sure, the probability of evolution is astronomically small, but that is not a problem because of what is happening elsewhere. All I did is point out that in other cases where entropy is decreasing in open systems, it is not because something macroscopically describable is happening which is extremely improbable from the microscopic point of view, but just because something is entering or leaving the system which makes this not extremely improbable.

    If anyone has an example (other than biological evolution) where natural causes have done something macroscopically describable which is extremely improbable from the microscopic point of view, I will try to respond, otherwise not.

    And if you want to argue that the influx of solar energy makes the formation of computers and the Internet not really extremely improbable, I don’t have anything to counter except that it is quite obvious to me that it is.

  16. F/N: It seems I need to again remind readers of the remarks by Wicken and Orgel in the 1970s, to fix in our minds the context for discussion. I will also clip a remark by Thaxton et al in TMLO ch 7:

    ________________

    Wicken, 1979: >>‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)] >>

    Orgel, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] >>

    Thaxton et al, TMLO ch 7, 1985: >> While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely, the formation of protein and DNA from their precursors.

    It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . [Emphasis added. Cf summary in the peer-reviewed journal of the American Scientific Affiliation, "Thermodynamics and the Origin of Life," in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB: as the journal's online issues will show, this is not necessarily a "friendly audience" for design thinkers.] >>
    ____________

    I will note that the boiling off and freezing of hot water in a cup in vacuo, insofar as the crystallisation is concerned, rests on the mechanical necessity of crystal formation; i.e. this order is not the same as organisation.

  17.

    Sewell:

    Gordon,
    I was going to ask you if you thought my application of the Boltzmann formula to poker hands was legitimate, and if not, why Styer’s application is more legitimate than mine, but was astonished to find at the end that you do think it is meaningful to convert probabilities of poker hands to units of thermal entropy (Joules/degree), so I don’t know what else to say to you.

    It’s not as simple as that, but in view of what you say next we should probably avoid getting distracted by something that’s basically irrelevant to the main point.

    It is becoming obvious that the strategy of my critics is to try to make things more complicated and confusing than they are.

    I’m not intentionally trying to make things more confusing than need be; my main concern is that I see an awful lot of thoroughly bogus thermodynamics thrown around (by both sides) in the evolution debate, and I’d really like to see the debate cleaned up to the point where it has some connection to reality. Unfortunately, I also tend to find obscure technical topics fascinating, and sometimes get distracted by interesting (to me) digressions. It’s not a strategy, it’s a personality quirk, and I’ll try to suppress it (but the application of the thermodynamics of computation to poker decks is really cool and, … sorry, I’ll shut up now).

    My whole point was that you cannot say, as Styer and Bunn do, sure, the probability of evolution is astronomically small, but that is not a problem because of what is happening elsewhere.

    This is not what Styer and Bunn are saying, as I explained in my earlier comment. First, because they aren’t talking about actual probabilities but hypothetical probabilities that only apply at equilibrium. Second, because while Styer does (sort of) seem to think entropy increases elsewhere can automatically compensate for an entropy decrease on Earth, Bunn does not.

    All I did is point out that in other cases where entropy is decreasing in open systems, it is not because something macroscopically describable is happening which is extremely improbable from the microscopic point of view, but just because something is entering or leaving the system which makes this not extremely improbable.

    Styer, Bunn, and I all based our analyses on the energy entering & leaving Earth. While Styer dances with the mistake you’re fixated on, neither Bunn nor I do, and we all get pretty much the same result.

    Styer sort-of made “the compensation mistake”: he said “Although the entropy of the universe increases with time, the entropy of any part of the universe can decrease with time, so long as that decrease is compensated by an even larger increase in some other part of the universe”, and calculated the entropy change of the sun & cosmic microwave background (CMBR) related to the energy flowing through Earth. But he didn’t include them in his figure for the max. entropy decrease on Earth; he based that figure only on the entropy flowing through Earth itself. This is actually wrong — he should’ve looked at the difference between entropy flowing into & out of Earth, not just the amount flowing through, but this is a quite different mistake.

    Bunn did not make “the compensation mistake”: he used a (somewhat sloppy) version of the first approach I described last time: considering the entropy change if the Earth was surrounded with idealized surroundings that don’t include any unnecessary entropy increases. Unfortunately, he doesn’t realize that the emission of thermal radiation produces entropy, so he doesn’t eliminate this from his surroundings. Fortunately, it also means he doesn’t include this entropy increase in his calculations, so this mistake is mostly self-cancelling. BTW, he also explicitly explained why the entropy increase in the CMBR is not relevant to entropy changes on Earth.

    I did not make “the compensation mistake”: I explicitly calculate the difference between the entropy flux into and out of the Earth (well, a lower bound on it anyway). I completely ignore entropy changes in the sun, CMBR, alpha centauri, etc. In fact, my procedure is exactly the one you seem to approve of in your video at 12:29 to 13:05 (is this the J. Dixon, Thermodynamics I: An Introduction to Energy book you reference in your paper?)

    I did the analysis with a method you seem to approve of, and got essentially the same results as Styer and Bunn. So what’s the big problem?

    If anyone has an example (other than biological evolution) where natural causes have done something macroscopically describable which is extremely improbable from the microscopic point of view, I will try to respond, otherwise not.

    I’m not sure what you mean by “improbable from the microscopic point of view”; if you mean improbable under the Boltzmann distribution (which is what Styer and Bunn are talking about), any system not at equilibrium is an example (and there are lots of those). In my last comment, I mentioned three:
    * A rock being warmer on top than bottom due to sunlight (wildly improbable according to Boltzmann, but it happens all the time).
    * The Earth itself is much warmer at the equator than the poles (same thing on a larger scale).
    * The Earth’s distribution of salt is uneven because the hydrologic cycle (driven by sunlight) collects and concentrates it in the oceans and salt lakes.

    You want more examples? No problem:
    * Wind: what’s the probability that a large quantity of air should all be headed in the same direction at the same time? If the direction each molecule moves is random (as Boltzmann requires), the probability is negligible.
    * Snowflakes: according to Boltzmann, an ice crystal is most likely to be in the shape that minimizes its free energy: a rough sphere. Branched stars have much higher surface area, hence higher surface energy, and (because Boltzmann probabilities drop exponentially with energy) much much lower probability.
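    To see just how improbable-under-Boltzmann the snowflake example is, put rough numbers into the Boltzmann factor exp(−ΔE/k_B·T). The ΔE below is my own order-of-magnitude guess at the excess surface energy of a branched crystal, purely for illustration:

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 263.0            # K, about -10 C
dE = 1e-9            # J, assumed excess surface energy (illustrative guess)

# exp(-dE / (k_B * T)) underflows to 0.0 in floating point,
# so report the base-10 logarithm of the probability ratio instead.
log10_ratio = -dE / (k_B * T) / math.log(10)
print(f"P(branched)/P(sphere) ~ 10^({log10_ratio:.3g})")
```

    Even with a tiny one-nanojoule energy excess, the Boltzmann probability ratio is around 10 to the minus hundred billion — yet snowflakes fall by the trillions.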

    And if you want to argue that the influx of solar energy makes the formation of computers and the Internet not really extremely improbable, I don’t have anything to counter except that it is quite obvious to me that it is.

    “It’s obvious” isn’t a very good argument. It’s obvious that living organisms’ growth, reproduction, and homeostasis violate the second law of thermo — but it’s not true. It’s obvious that oscillating chemical reactions violate the second law — but again it’s not true. It’s obvious that a block of iron is a solid object, not mostly empty space. It’s obvious that a particle cannot also be a wave. It’s obvious that heavy objects fall faster than light ones, and that the Earth is flat and immobile. It’s obvious that it’s getting late and I should stop writing and go to bed.

    Ok, maybe that last one’s actually true. Good night all!

  18. Gordon, to prove your point, merely pour energy, meteorites, etc. into any system you want and demonstrate the generation of functional information above 500 bits. Something tells me you are going to have extreme problems:

    Stirring the Soup – May 2009
    “essentially, the scientists have succeeded in creating a couple of letters of the biological alphabet (in a “thermodynamically uphill” environment). What they need to do now is create the remaining letters, and then show how these letters were able to attach themselves together to form long chains of RNA, and arrange themselves in a specific order to encode information for creating specific proteins, and instructions to assemble the proteins into cells, tissues, organs, systems, and finally, complete phenotypes.”
    Uncommon Descent – C Bass:
    http://www.uncommondescent.com...../#comments

    “The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”
    Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894

    of note: The 10^12 bits of information number for a bacterium is derived from entropic considerations, which is, due to the tightly integrated relationship between information and entropy, considered the most accurate measure of the transcendent information present in a ‘simple’ life form. For calculations please see the following site:

    Molecular Biophysics – Information theory. Relation between information and entropy:
    https://docs.google.com/document/pub?id=18hO1bteXTPOqQtd2H12PI5wFFoTjwg8uBAU5N0nEQIE
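    For what it’s worth, the information-to-entropy conversion being invoked here is the standard one, S = N·k_B·ln 2 for N bits; a quick sketch with the quoted 10^12-bit figure:

```python
import math

k_B = 1.380649e-23    # J/K, Boltzmann constant
N_bits = 1e12         # bits, the Sagan-derived figure quoted above

# Thermodynamic entropy equivalent of N bits of information
S = N_bits * k_B * math.log(2)   # J/K
print(f"S ~ {S:.1e} J/K")        # about 1e-11 J/K
```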

    Abiogenic Origin of Life: A Theory in Crisis – Arthur V. Chadwick, Ph.D.
    Excerpt: The synthesis of proteins and nucleic acids from small molecule precursors represents one of the most difficult challenges to the model of prebiological evolution. There are many different problems confronted by any proposal. Polymerization is a reaction in which water is a product. Thus it will only be favored in the absence of water. The presence of precursors in an ocean of water favors depolymerization of any molecules that might be formed. Careful experiments done in an aqueous solution with very high concentrations of amino acids demonstrate the impossibility of significant polymerization in this environment. A thermodynamic analysis of a mixture of protein and amino acids in an ocean containing a 1 molar solution of each amino acid (100,000,000 times higher concentration than we inferred to be present in the prebiological ocean) indicates the concentration of a protein containing just 100 peptide bonds (101 amino acids) at equilibrium would be 10^-338 molar. Just to make this number meaningful, our universe may have a volume somewhere in the neighborhood of 10^85 liters. At 10^-338 molar, we would need an ocean with a volume equal to 10^229 universes (100, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000) just to find a single molecule of any protein with 100 peptide bonds. So we must look elsewhere for a mechanism to produce polymers. It will not happen in the ocean.
    http://origins.swau.edu/papers.....fault.html

    As well Gordon, have you ever noticed that pouring more energy into a system actually increases the rate of entropic decay within the system???

    Evolution Vs. Thermodynamics – Open System Refutation – Thomas Kindell – video
    http://www.youtube.com/watch?v=HoQ-iokM7p0

  19.
    Granville Sewell

    Gordon,

    In none of the examples you give is anything extremely improbable happening, given the forces, initial conditions, etc acting. If I flip a “fair” coin a billion times and it comes up heads every time, that is extremely improbable. If I flip a “loaded” coin and it comes up heads a billion times in a row, that may not be extremely improbable. In calculating probability, you have to take into account everything that is influencing the system!

  20.
    Gordon Davisson

    Professor Sewell:

    In none of the examples you give is anything extremely improbable happening, given the forces, initial conditions, etc acting. [...] In calculating probability, you have to take into account everything that is influencing the system!

    That’s why I asked what you meant by “improbable from the microscopic point of view” — the examples I gave are all highly improbable under the Boltzmann distribution, but not improbable under real conditions. Going back to your earlier question:

    If anyone has an example (other than biological evolution) where natural causes have done something macroscopically describable which is extremely improbable from the microscopic point of view, I will try to respond, otherwise not.

    If you’re using actual probabilities (not Boltzmann), I know of no examples. Of course, using actual probabilities, biological evolution doesn’t require anything extremely improbable either. Evolution is one of the things “influencing the system”, and makes things like complex life reasonably probable. How probable? We don’t really know, because the processes involved are far too complex to fully analyse. And you don’t really know, either, no matter how confident you are in your intuitions.

    BTW, I have a question about your argument that I haven’t been able to figure out from your various writings, and that I think is very important: When you say “… it seems clear that what is entering through the boundary cannot explain the increase in order observed here”, do you mean that there isn’t enough entropy leaving Earth for the entropy decrease happening here, or do you mean it’s insufficient in some other way? I’d appreciate it if you could clarify this.

Leave a Reply