
Other Types of Entropy

If you look at university physics texts which discuss the second law, you will find examples of “entropy” increases cited such as books burning, wine glasses breaking, bombs exploding, rabbits dying, automobiles crashing, buildings being demolished, and tornadoes tearing through a town (I have actually seen each of these cited). According to Sal, all of these “creationist” text writers are confused, because in most or all of these cases, “entropy” is actually decreasing. When an albatross dies, or a tornado destroys a 747, entropy is actually decreasing, he says. Of course, Sal is talking about “thermal” entropy, since the only formulation of the second law he recognizes as valid is the early Clausius formulation, which deals with thermal entropy alone.

Well, no one is arguing that these examples result in thermal entropy increases; they are examples of “entropy” (disorder) increases of a more general nature. The reason tornadoes can turn a 747 into rubble and not vice versa is that, of all the configurations atoms could take, only a very small percentage could fly passengers safely across the country, and a very large percentage could not. Thus we can argue that the original 747 has lower “entropy” (more order) than the demolished machine. Another very confused creationist, Isaac Asimov, even wrote, in the Smithsonian Magazine:

We have to work hard to straighten a room, but left to itself, it becomes a mess again very quickly and very easily…How difficult to maintain houses, and machinery, and our own bodies in perfect working order; how easy to let them deteriorate. In fact, all we have to do is nothing, and everything deteriorates, collapses, breaks down, wears out—all by itself—and that is what the second law is all about.

There are many formulations of the second law; the later ones recognize a more general principle. For example, Kenneth Ford, in Classical and Modern Physics, writes:

There are a variety of ways in which the second law of thermodynamics can be stated, and we have encountered two of them so far: (1) For an isolated system, the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability; (2) For an isolated system, the direction of spontaneous change is from order to disorder.

Sal says the second law has nothing to do with order or disorder; that is certainly not true of these more general, later, formulations.

OK, so Sal doesn’t like these examples, since they are difficult to quantify; he is not alone. Thermodynamics texts, as opposed to general physics texts, tend to shy away from them for that reason. Goodness knows, if we watch a video of a tornado tearing through a town, it is so difficult to quantify what we are seeing that we can never be sure if the video is running forward or backward, or if entropy is increasing or decreasing. But there are other types of “entropy” which are as quantifiable as thermal entropy. For example, the “X-entropy,” which measures disorder in the distribution of any diffusing component X, is defined by essentially the same equations as are used to define thermal entropy, which measures disorder in the distribution of heat as it diffuses, so X-entropy is certainly equally quantifiable. And X-entropy has little or nothing to do with thermal entropy (it doesn’t even have the same units); one can increase while the other decreases in a given system. So why do people like Styer, Bunn, and Sal insist on treating all types of entropy as thermal entropy, and attempt to express the entropy associated with evolution, or the entropy of a 747, in units of Joules/degree Kelvin?
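
To make the quantification concrete, here is a minimal numerical sketch (a toy discretized illustration only, not the exact continuum equations used in the technical papers): an “X-entropy” computed from the concentration distribution of a diffusing component X behaves just like thermal entropy does for diffusing heat, increasing step by step as X spreads out.

```python
# Toy sketch: an "X-entropy" for a diffusing component X, computed as
# -sum(c_i * ln c_i) over the normalized concentration distribution.
# Under simple diffusion (each cell relaxing toward its neighbours) this
# quantity increases monotonically, just as thermal entropy does for heat.
import math

def x_entropy(conc):
    total = sum(conc)
    ps = [c / total for c in conc if c > 0]
    return -sum(p * math.log(p) for p in ps)

def diffuse(conc, d=0.2):
    n = len(conc)
    return [conc[i] + d * (conc[(i - 1) % n] + conc[(i + 1) % n] - 2 * conc[i])
            for i in range(n)]

conc = [1.0] + [0.0] * 19        # all of X initially concentrated in one cell
for step in range(5):
    print(f"step {step}: X-entropy = {x_entropy(conc):.3f}")
    conc = diffuse(conc)
```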

If you insist on limiting the second law to applications involving thermal entropy, and insist that the only entropy is thermal entropy, then Sal is right that the second law has little to say about the emergence of life on Earth. But it is not just the “creationists” who apply it much more generally; many violent opponents of ID (including Asimov, Dawkins, Styer and Bunn) agree that this emergence does represent a decrease in “entropy” in the more general sense; they just argue that this decrease is compensated by increases outside our open system, an argument so widely used that I created the video below, Evolution is a Natural Process Running Backward, a few months ago to address it.

[Embedded video: “Evolution is a Natural Process Running Backward” (YouTube)]


46 Responses to Other Types of Entropy

  1. Dr. Sewell, I have a question. Is it acceptable for me to use following paper to show how (unbelievably) far out of thermodynamic equilibrium a ‘simple’ cell is?:

    Molecular Biophysics – Information theory. Relation between information and entropy – Setlow-Pollard, Ed. Addison Wesley
    Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures.
    http://www.astroscu.unam.mx/~a.....ecular.htm
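
    (A quick back-of-the-envelope check of that conversion, in Python. Note: the stated ~4 x 10^12 bits only comes out if the per-cell entropy figure is read as 9.3 x 10^-12 cal/deg, i.e. for a single cell; the exponent in the excerpt as pasted appears to have lost its minus sign.)

    ```python
    # Rough check of the Setlow-Pollard conversion H = S / (k ln 2), assuming the
    # per-cell entropy is 9.3e-12 cal/K (negative exponent assumed; see note above).
    import math

    S_cal_per_K = 9.3e-12          # entropy of one bacterial cell, cal/K (assumed)
    S_J_per_K = S_cal_per_K * 4.2  # convert to J/K with the 4.2 J/cal from the excerpt
    k = 1.38e-23                   # Boltzmann constant, J/K

    H_bits = S_J_per_K / (k * math.log(2))
    print(f"{H_bits:.1e} bits")    # ~4e12 bits, matching the excerpt
    ```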

    Related notes:

    “a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widenier Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong

    “The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”
    Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894

  2. Thanks, Dr. Sewell.

    It seems to me that, indeed, much of the confusion results from different uses of terminology.

    Question: I’m curious how the ‘open system’ proponents could argue that “this decrease is compensated by increases outside our open system.” Are they really arguing that the amount of information has decreased outside our system, or do they fall back on energy input (i.e., back to thermal considerations) when they make that claim? In other words, do they recognize a non-thermal form of entropy, but when trying to provide a naturalistic explanation for decreases in entropy in our open system do they then fall back to talking about energy/thermal entropy?

  3. How does a room become messy if no one enters it? I know it will become dusty, but messy?

  4. imo, Salvador has contributed to the increased entropy at Uncommon Descent.

  5. Does Shannon entropy pertain to the thermal component of information?

    The amount of heat lost divided by the amount of heat generated by information transfer. Or even the amount of heat lost divided by the amount of heat generated by thinking about Shannon information.

  6. Here’s an interesting paper (pre-print) that begins to address the link between thermodynamics, information theory, and complexity.

    http://arxiv.org/pdf/1110.4217v1.pdf

  7. Rod Swenson, anyone?

    1. Swenson, R. (1988). Emergence and the principle of maximum entropy production: Multi-level system theory, evolution, and non-equilibrium thermodynamics. Proceedings of the 32nd Annual Meeting of the International Society for General Systems Research, 32.

    2. Swenson, R. (1989a). Emergent evolution and the global attractor: The evolutionary epistemology of entropy production maximization. Proceedings of the 33rd Annual Meeting of The International Society for the Systems Sciences, P. Leddington (ed)., 33(3), 46-53.

    3. Swenson, R. (1989b). Gauss-in-a-box: Nailing down the first principles of action. Perceiving Acting Workshop Review (Technical Report of the Center for the Ecological Study of Perception and Action) 5, 60-63.

    4. Swenson, R. (1991a). End-directed physics and evolutionary ordering: Obviating the problem of the population of one. In The Cybernetics of Complex Systems: Self-Organization, Evolution, and Social Change, F. Geyer (ed.), 41-60. Salinas, CA: Intersystems Publications.

    5. Swenson, R. (1991b). Order, evolution, and natural law: Fundamental relations in complex system theory. In Cybernetics and Applied Systems, C. Negoita (ed.), 125-148. New York: Marcel Dekker Inc.

    6. Swenson, R. and Turvey, M.T. (1991). Thermodynamic reasons for perception-action cycles. Ecological Psychology, 3(4), 317-348. Translated and reprinted in Perspectives on Affordances, in M. Sasaki (ed.). Tokyo: University of Tokyo Press, 1998 (in Japanese).

    7. Swenson, R. (1997). Autocatakinetics, evolution, and the law of maximum entropy production: A principled foundation toward the study of human ecology. Advances in Human Ecology, 6, 1-46

    8. Swenson, R. (1998a). Thermodynamics, evolution, and behavior. In The Handbook of Comparative Psychology, G. Greenberg and M. Haraway (eds.), Garland Publishing, New York.

    9. Swenson, R. (1998c). Spontaneous order, evolution, and autocatakinetics: The nomological basis for the emergence of meaning. In Evolutionary Systems, G. van de Vijver, S. Salthe, and M. Delpos (eds.). Dordrecht, The Netherlands: Kluwer.

    10. Swenson, R. (1999). Epistemic ordering and the development of space-time: Intentionality as a universal entailment. Semiotica, 127(1-4), 181-222.

  8. Joe,

    Back ways around. Heat is energy moving between bodies in various ways due to temperature difference. Temp is an index of avg random energy per accessible degree of freedom of microparticles in a body.

    in short, we are looking at something connected to loss of access to info about specific microstate given a macrostate.

    KF

  9. A few notes: neo-Darwinian evolution has no evidence that material processes can generate functional prescriptive information above that which is already present in a parent species, thus ‘breaking the thermodynamic barrier’ (passing ‘the fitness test’), whereas Intelligent Design does have ‘proof of principle’ that information ‘from a mind’ can ‘locally’ violate the second law and generate the potential energy necessary for doing so:

    Maxwell’s demon demonstration turns information into energy – November 2010
    Excerpt: Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
    http://www.physorg.com/news/20.....nergy.html

    Further notes:

    Landauer’s principle
    Of Note: “any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of an (specific) amount (at least kT ln 2) of heat.,,, Landauer’s Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008).
    http://en.wikipedia.org/wiki/L....._principle
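
    (For scale, the kT ln 2 bound works out to a tiny amount of heat per erased bit at room temperature; a quick sketch:)

    ```python
    # Minimum heat released per erased bit at room temperature, per Landauer's bound.
    import math

    k = 1.380649e-23                 # Boltzmann constant, J/K
    T = 300.0                        # room temperature, K
    E_bit = k * T * math.log(2)
    print(f"{E_bit:.2e} J per bit")  # ~2.9e-21 J
    ```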

    It should be noted that Rolf Landauer maintained that information in a computer was ‘physical’. When he said ‘physical’, he held that information in a computer was merely an ‘emergent’ property of the material basis of a computer, and thus he held that the information programmed into a computer was not really ‘real’. Landauer held this ‘materialistic’ position in spite of an objection from Roger Penrose that information is indeed real and has its own independent existence separate from a computer. Landauer held this ‘materialistic’ position since he held that it ALWAYS takes energy to erase information from a computer, and therefore information must be ‘merely physical’ (merely emergent). Yet now the validity of that fairly narrowly focused objection from Landauer, to the reality of ‘transcendent information’ encoded within the computer, has been brought into question.

    Scientists show how to erase information without using energy – January 2011
    Excerpt: Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all. Instead, the cost of erasure can be paid in terms of another conserved quantity, such as spin angular momentum.,,, “Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it is physical has a broader context than that.”, Vaccaro explained.
    http://www.physorg.com/news/20.....nergy.html

    This following research, which followed fairly closely on the heels of the preceding, provides far more solid falsification of Rolf Landauer’s contention that information encoded in a computer is merely physical (merely ‘emergent’ from a material basis), since he believed it always required energy to erase it:

    Quantum knowledge cools computers: New understanding of entropy – June 2011
    Excerpt: No heat, even a cooling effect;
    In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy.
    Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
    http://www.sciencedaily.com/re.....134300.htm

    Moreover, I’ve recently noticed that there are two very different sources of randomness in quantum mechanics and classical mechanics. Randomness in quantum mechanics is driven by, of all things, a ‘free will’ assumption:

    Can quantum theory be improved? – July 23, 2012
    Excerpt: Being correct 50% of the time when calling heads or tails on a coin toss won’t impress anyone. So when quantum theory predicts that an entangled particle will reach one of two detectors with just a 50% probability, many physicists have naturally sought better predictions. The predictive power of quantum theory is, in this case, equal to a random guess. Building on nearly a century of investigative work on this topic, a team of physicists has recently performed an experiment whose results show that, despite its imperfections, quantum theory still seems to be the optimal way to predict measurement outcomes.,
    However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (*conscious observation) parameters can be chosen independently (free choice, free will, assumption) of the other parameters of the theory.,,,
    ,, the experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power, even when the predictions are completely random.
    http://phys.org/news/2012-07-quantum-theory.html

    i.e. it is found that a required assumption of ‘free will’ in quantum mechanics is what necessarily drives the completely random (non-deterministic) aspect of quantum mechanics. Moreover, it was shown in the paper that one cannot ever improve the predictive power of quantum mechanics by removing free will (conscious observation) as a starting assumption in quantum mechanics!

    Whereas randomness in classical mechanics is found to be driven by, of all things, gravity:

    Evolution is a Fact, Just Like Gravity is a Fact! UhOh! – January 2010
    Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged.
    http://www.uncommondescent.com.....fact-uhoh/

    Entropy of the Universe – Hugh Ross – May 2010
    Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe’s entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated.
    http://www.reasons.org/entropy-universe

    Though it is hard for me to follow the math in all these debates on thermodynamics, it seems fairly obvious to me that, number 1, these very different sources of randomness are very important to consider. Number 2, it seems fairly obvious that consciousness (from somewhere!) must have played a major role in the ‘quantum falsification’ of Landauer’s contention that ‘information was physical’, i.e. emergent from a material basis. To requote the paper:

    “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer

  10. I guess the primary question that needs to be answered would be: what is the source of the “more than complete knowledge” in quantum entanglement that leads to deletion of the data being accompanied by removal of heat from the computer? If it (more than complete knowledge) is attained independently, separate from specific human knowledge gathered of specific microstates of the computer bits, then would this not provide another argument for God from consciousness?

  11. Joe #3:

    How does a room become messy if no one enters it? I know it will become dusty, but messy?

    Also, without humans and animals, natural forces will destroy the room, given enough time. It is the destructive power of entropy, as Dr. Sewell clearly explained. Evolutionists trust entropy for the creation of life, but they are like men who ride a crocodile to get across a river.

  12. GS:

    The reason tornadoes can turn a 747 into rubble and not vice versa is, of all the configurations atoms could take, only a very small percentage could transport passengers safely across the country, and a very large percentage could not.

    This entire discussion here at UD about entropy seems to just go around in circles. Alas.

    Granville Sewell’s comment above points out a “directionality” to entropy that we all intuitively understand.

    I think, for the sake of simplicity (and, hopefully, clarity) we should take a very simplistic view of what entropy is: it’s basically directionless-ness. It is a ‘loss’ of ‘direction’.

    But, as it turns out, it is not a total loss of ‘direction’.
    Here’s what I mean.

    When, classically, we deal with entropy, we’re dealing with heat; and heat is considered as a measure of energy. However, energy is also equal to ‘work’, and ‘work’ is equal to Force x distance. But force, being a vector, has a direction. Energy, OTOH, has no direction: it is a “scalar” quantity. So, as ‘work’ is being done, any thermal energy it generates appears to be ‘directionless.’ (It can go in any one of millions of directions depending on the total number of degrees of freedom available.)

    I think this is where the confusion arises: viz., when we speak of entropy, specifically the 2ndLoT, we assume a total loss of directionality. But, it’s not a total loss.

    Let’s look at Clausius’ definition of entropy. Sal states it as:

    No cyclic process is possible whose sole outcome is transfer of heat from a cooler body to a hotter body

    This statement presupposes: (1) that the bodies are either separate from one another, or, that there are separate areas of one body–a ‘hotter’ and a ‘cooler’ area, and (2) that the ‘flow’ of energy/heat goes in only ONE ‘direction’: from hot to cold. Flow itself is likewise a vector quantity and, hence, itself implies a ‘direction’.

    What we seem to go round and round on here at UD is this notion of ‘directionality,’ and it seems that what we lose sight of is this minimal amount of ‘directionality’ that entropy requires.
    (N.B. Again, this irreducible ‘directionality’ that entropy requires can be lost sight of even in a classical thermodynamic equation like dU = TdS - pdV. On the r.h.s., the first term amounts to dQ, which is the already present ‘internal energy’ of the system. On the l.h.s., dU is the change in ‘internal energy’. Finally, the second term on the r.h.s. is pressure x volume change. What is perhaps hidden here is that pressure equals Force/Area. And, again, force is a vector quantity, and, so, has ‘direction.’ To change the heat content of something, ‘work’ has to be done, and pdV is what accomplishes it. Now, if you heat up a pressure cooker, the pressure will always be at right angles to the container’s walls–i.e., it is a NON-RANDOM orientation of forces!)

    This is the critical point: an entropy change always has a direction. This is exactly what the 2LofT tells us, but, we usually lose track of it. Nevertheless, it’s there.

    Let’s look at an example that Sal uses in the second part of his recent post on entropy.

    He tells us that the change in entropy, S, is equal to the integral (sum) over the initial and final values of dQ/T. (N.B. the ‘initial’ value of dQ/T is always higher than the final value of dQ/T, revealing again, this hidden directionality. Normally, in any given mathematical expression, there is no need for the initial value to be ‘less’ than the final–they just need to be different)

    But let’s move on.
    Sal then adds:

    Perhaps to make the formula more accessible, let us suppose we have a 1000 watt heater running for 100 seconds that contributes to the boiling of water (already at 373.2 K). What is the entropy contribution due to this burst of energy from the heater?
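
    (For the record, the arithmetic Sal is asking for is straightforward; a quick sketch:)

    ```python
    # Entropy added to the boiling water by the heater: dS = dQ / T at constant T.
    P = 1000.0                   # heater power, W
    t = 100.0                    # running time, s
    T = 373.2                    # boiling-water temperature, K

    dQ = P * t                   # heat delivered, J
    dS = dQ / T                  # entropy contribution, J/K
    print(f"dS = {dS:.0f} J/K")  # ~268 J/K
    ```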

    Well, let’s turn this around (change its ‘direction’): what would happen if we had a 1000 watt (electric) heater surrounded by water that was boiling, and then we added steam to the system that was generated by a nuclear reactor. Would this produce electricity? Of course not! This energy-producing system only works in one direction!

    Nevertheless, it is possible to take the steam that a nuclear reactor produces and then produce electricity! But this is an entirely different process/system, and, it is a process/system that is given its ‘directionality’ via intelligent design of engineers.

    And, here we have it. This is the nub of the issue, I believe. Darwinists want to convince us that if ‘steam’ is available (analogously, the energy of the sun), then ‘electricity’ can be produced (analogously, the ‘direction’ of entropy can be reversed).

    But, in reality, electricity can be produced from steam only if intelligent agency is involved. Or, put another way, only intelligent agents can provide a ‘direction’ to forces that can counteract the ‘direction’ that entropy wants to take. RANDOM forces cannot do this. ‘Directionality’ is required; and ‘directionality’ is tantamount to the ‘intentionality’ of intelligent agents.

    The problem with Darwinian evolution is that it is driven by RANDOM, not directed, changes, which are powerless to overcome what Sanford calls “genetic entropy”. Dawkins tells us that NS can provide this direction, but fails to furnish any truly convincing arguments that it can. And, in chapter 3 of The Blind Watchmaker, he finally, at the end of the chapter, has to postulate ‘directionality’ to overcome the obstacle he faces, though without ever telling us where this ‘directionality’ comes from. ‘Directionality’ is the Achilles heel of Darwinism.

    At UD, we see random, ‘non-directional’, mutations occurring, and we see this as introducing disorder into the system (such disorder being understood as ‘entropy’). And we say that the only way that ‘order’ can be restored, or a new ‘order’ established, is through ‘directed’ mutations, a ‘directed’ action. And we say that only intelligent agents are capable of such directed actions. And “open systems” don’t mean that ‘steam’ can produce ‘electricity’ other than through intelligent intervention.

    The 2LoT, with its implicit ‘directionality’, remains a problem for Darwinism–despite Sal’s pleadings. And, as Granville Sewell stated, “. . . tornadoes can turn a 747 into rubble and not vice versa . . .”

  13. As pointed out in post #1, from a thermodynamic perspective, life is shown to be in severe ‘disequilibrium’:

    “a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widenier Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong

    And yet ‘mysteriously’, in fairly short order when an organism dies, the molecules of the organism will quickly return to thermodynamic equilibrium:

    Fruit and Vegetable Decomposition, Time-lapse – video
    http://www.youtube.com/watch?v=c0En-_BVbGc

    Yet what is so special about ‘life’ that, when ‘life’ is present, it prevents this relentless march to thermodynamic equilibrium? Dr Talbott makes a strong case here that it is ‘information processing’ that prevents the march to thermodynamic equilibrium:

    The Unbearable Wholeness of Beings – Steve Talbott
    Excerpt: Virtually the same collection of molecules exists in the canine cells during the moments immediately before and after death. But after the fateful transition no one will any longer think of genes as being regulated, nor will anyone refer to normal or proper chromosome functioning. No molecules will be said to guide other molecules to specific targets, and no molecules will be carrying signals, which is just as well because there will be no structures recognizing signals. Code, information, and communication, in their biological sense, will have disappeared from the scientist’s vocabulary.
    http://www.thenewatlantis.com/.....-of-beings

    And Dr. McIntosh, a Professor of Thermodynamics and Combustion Theory at the University of Leeds, proposes here that ‘information’ is the entity that is itself constraining the local thermodynamics (of living organisms) to be in ordered disequilibrium:

    Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH
    Excerpt: This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
    http://www.uncommondescent.com.....tors-note/

    Well, is it, as Dr. McIntosh holds, ‘transcendent information’ that is constraining life to be so far out of thermodynamic equilibrium? Yes! Quantum Entanglement/Information is now found on a massive scale in molecular biology. Here it is in DNA:

    Quantum Information/Entanglement In DNA – Elisabeth Rieper – short video
    http://www.metacafe.com/watch/5936605/

    DNA Can Discern Between Two Quantum States, Research Shows – June 2011
    Excerpt: — DNA — can discern between quantum states known as spin. – The researchers fabricated self-assembling, single layers of DNA attached to a gold substrate. They then exposed the DNA to mixed groups of electrons with both directions of spin. Indeed, the team’s results surpassed expectations: The biological molecules reacted strongly with the electrons carrying one of those spins, and hardly at all with the others. The longer the molecule, the more efficient it was at choosing electrons with the desired spin, while single strands and damaged bits of DNA did not exhibit this property.
    http://www.sciencedaily.com/re.....104014.htm

    Does DNA Have Telepathic Properties?-A Galaxy Insight – 2009
    Excerpt: The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible.
    http://www.dailygalaxy.com/my_.....ave-t.html

    And here quantum information/entanglement is confirmed to be in proteins:

    Coherent Intrachain energy migration at room temperature – Elisabetta Collini & Gregory Scholes – University of Toronto – Science, 323, (2009), pp. 369-73
    Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state.
    http://www.scimednet.org/quant.....d-protein/

    Testing quantum entanglement in protein
    Excerpt: The authors remark that this reverses the previous orthodoxy, which held that quantum effects could not exist in biological systems because of the amount of noise in these systems.,,, Environmental noise here drives a persistent and cyclic generation of new entanglement.,,, In summary, the authors say that they have demonstrated that entanglement can recur even in a hot noisy environment. In biological systems this can be related to changes in the conformation of macromolecules.
    http://www.quantum-mind.co.uk/.....-c288.html

  14. And thus, since ‘environmental noise’ no longer ‘drives a persistent and cyclic generation of new entanglement’ upon the death of an organism, where does this ‘conserved quantum information’ go upon death?

    Quantum no-hiding theorem experimentally confirmed for first time – March 2011
    Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed.
    http://www.physorg.com/news/20.....tally.html

    Quantum no-deleting theorem
    Excerpt: A stronger version of the no-cloning theorem and the no-deleting theorem provide permanence to quantum information. To create a copy one must import the information from some part of the universe and to delete a state one needs to export it to another part of the universe where it will continue to exist.
    http://en.wikipedia.org/wiki/Q.....onsequence

    I believe the preceding evidence to be, finally, after many years of relentless mockery of Theists by atheistic materialists, strong supporting evidence for the Theistic contention of an ‘eternal soul’ that lives past the death of our temporal material bodies:

    Being the skunk at an atheist convention – Stuart Hameroff
    Excerpt: When metabolic requirements for quantum coherence in brain microtubules are lost (e.g. death, near-death), quantum information pertaining to that individual may persist and remain entangled in Planck scale geometry.
    http://www.quantumconsciousness.org/skunk.htm

    Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff – video (notes in description)
    http://vimeo.com/29895068

    Quantum Entangled Consciousness (Permanence of Quantum Information)- Life After Death – Stuart Hameroff – video
    https://vimeo.com/39982578

    As Pam Reynolds, a Near Death Experiencer commented in this following video:

    The Day I Died. NDE + Consciousness Documentary – video
    http://www.youtube.com/watch?v=zD9jigzzuas

    I think death is an illusion. I think death is a really nasty, bad lie. I don’t see any truth in the word death at all – Pam Reynolds Lowery (1956 – May 22, 2010)
    http://christopherlovejoy.com/.....eally-are/

  15. ‘Entropy’ is wrongly called ‘disorder’.

    Personally, I think that is a (perhaps subconscious) tool used by atheists/skeptics to try to convince themselves and others that IF there is such a thing as disorder in the universe, then there can be no absolute intelligence (god) behind it; which would follow, since there would then be god + something outside of god, and therefore god would not be absolute.

    But there is nothing that does not follow the law of cause and effect.

    I will quote from Prof. Frank Lambert:

    Entropy change is the measure of dispersal of energy: how much or how widely spread out it is in a particular process (at a Specific Temperature)

    I am an admirer of Professor Lambert; the now very old man has been responsible for removing from textbooks references to entropy as disorder.
    To quote him:

    “What are the dimensions of “disorder”? Malarkeys per minute or some such nonsense?
    The scientific dimensions of entropy change are joules/Kelvin.”


  16. mazda,

    but there is nothing that does not follow the law of cause and effect.

    Are you asserting that there is a cause behind the timing of when individual atoms of carbon-14 decay? Just as an example. Or the occurrences of quantum tunnelling, for another. Or the varying gap between prime numbers?

  17. Mike Elzinga says:

    Remember, entropy is simply a name given to a mathematical expression involving the quantity of heat and the temperature.

    And what Granville is saying is that it can also be applied to other things, like information, hence Shannon entropy, which does not involve a quantity of heat and the temperature. Unless of course we are talking about superconductors (zero resistance).

    It is the logarithm of the number of energy microstates consistent with the macroscopic state of a thermodynamic system.

    That means by observing some phenomena we should be able to piece it all back together to see what it was.

  18. It is the destructive power of entropy, as Dr. Sewell clearly explained.

    Entropy has no power to do anything. It can neither build nor destroy.

  19. From the OP:

    But there are other types of “entropy” which are as quantifiable as thermal entropy, the “X-entropy” which measures disorder in the distribution of any diffusing component X is defined by essentially the same equations as are used to define thermal entropy, which measures disorder in the distribution of heat as it diffuses, and X-entropy is certainly equally quantifiable.

    But this provides no basis for thinking that the second law applies to “X-entropy”. As far as I can see, you’ve provided only 4 reasons for thinking that the second law applies to these “X-entropies”, none valid:

    * They’re called “entropy”. But you can call anything “entropy”, and that doesn’t imply any connection at all to the second law.

    * They’re quantifiable. While I would argue that being quantifiable is necessary for the second law to apply, it’s hardly sufficient.

    * They have similar mathematical form to thermal entropy (in at least some circumstances). This is suggestive, but hardly a real reason to think the second law applies to them.

    * Under some circumstances (diffusion through a solid, as analyzed in your paper) they always increase (unless there’s a flux through the boundaries of the system). Again, this is suggestive, but hardly a real argument. In the first place, showing that something holds under some circumstances doesn’t show that it holds under all circumstances. In the second place, even if something does hold under all circumstances, it doesn’t necessarily have anything to do with the second law.

    While examples are not sufficient to establish a universal law, counterexamples are sufficient to refute a (claimed) universal law. As I’ve pointed out before, there are circumstances where X-entropies decrease despite there being no X-flux through the boundaries of the system. The example I gave in the linked article — carbon settling to the bottom of a jar — is hardly the only possible counterexample. In fact, if you’d taken gravity into account in your analysis you would’ve found that heavier substances (*) tend to concentrate at the bottom, lighter ones at the top; in both cases, the X-entropy would undergo a spontaneous decrease without any X-flux, violating equation 5 of your paper.

    (* actually, heavier vs. lighter isn’t quite right — substances that increase the overall density tend to settle, while those that decrease overall density will tend to rise.)

    Do these counterexamples show violations of the second law? Of course not, because the second law (the real second law) does not apply to these X-entropies.

  20. Entropy has no power to do anything. It can neither build nor destroy.

    Hey that’s very similar to natural selection and neo-darwinian processes. So similar only common descent can explain it.

  21. Entropy has no power to do anything. It can neither build nor destroy.

    It depends on what one means by “power”. We can say that “entropy destroys” just as we can say that “ignorance damages”. Both are “negative” per se (ignorance is “non-knowledge” while entropy is “non-order” and “non-information”). They have no power if by “power” we mean something positive and constructive only. But this “non-knowledge” and “non-order” have results, negative and destructive ones. With “power” (in the negative sense) I had in mind exactly these results.

  22. PaV:

    This is the critical point: an entropy change always has a direction. This is exactly what the 2LofT tells us, but, we usually lose track of it. Nevertheless, it’s there.

    “… the answer to the question of “Why” the process occurs in one direction is probabilistic.”

    – Arieh Ben-Naim

  23. PaV:

    He tells us that the change in entropy, S, is equal to the integral (sum) over the initial and final values of dQ/T. (N.B. the ‘initial’ value of dQ/T is always higher than the final value of dQ/T, revealing again, this hidden directionality.

    Potential nitpick.

    Why do you refer to S as a change in entropy?

    What was the entropy before, and how did you measure it?

  24. niwrad, Dr. Sewell was describing the destructive power of a tornado.

  25. kf:

    in short, we are looking at something connected to loss of access to info about specific microstate given a macrostate.

    hi kf,

    Have your views on the relationship between information and entropy changed? Regards

  26. Joe:

    And what Granville is saying is that [entropy, a mathematical expression] can also be applied to other things, like information, hence Shannon entropy, which does not involve a quantity of heat and the temperature.

    Actually, it’s the other way around, as kf says.

    It’s Shannon entropy that is the more general of the two.

  27. From the OP:

    If you look at university physics texts which discuss the second law, you will find examples of “entropy” increases cited such as books burning, wine glasses breaking, bombs exploding, rabbits dying, automobiles crashing, buildings being demolished, and tornadoes tearing through a town (I have actually seen each of these cited).

    “…there is a great deal of confusion between the description of what happens in a spontaneous process, and the interpretation of entropy.

    For example, in an expansion process of an ideal gas as described in Chapter 7, it is qualitatively correct to describe what happens by saying that:

    1. The system has become more disordered.
    2. The particles have spread from a smaller to a larger volume.
    3. The total energy of the system has spread into a larger volume.
    4. Part of the information we had on the location of the particles was lost in the process.

    All of these and many more (in particular the mixing in a mixing process depicted in Fig. 1.4) are valid descriptions of what happens in the spontaneous process. The tendency, in most textbooks on thermodynamics, is to use one of these descriptors of what happens in a spontaneous process, to either describe, or to interpret, or even define entropy.

    Clearly, in order to identify any of these descriptors with entropy, one must first show that the descriptor is a valid one for any spontaneous processes. Second, one must also show that one can relate quantitatively the change in one of these descriptors to changes in entropy.

    Unfortunately, this cannot be done for any of the listed descriptors.”

    http://www.ariehbennaim.com/books/entropyd.html

  28. Folks:

    Entropy has several linked expressions and contexts. The dS >= d’Q/T expression is just one. Statistical thermodynamics gave us two expressions, one due to Boltzmann [S = k log W] and the other to Gibbs, which is a weighted average of the missing info required to specify a microstate given a macrostate compatible with a set of microstates:

    S = - k [SUM on i] p_i log p_i

    Now it turns out that – log p_i is a “natural” metric of info, hence the discussions ever since Shannon put forth his theory and identified avg info per symbol.
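
    (A tiny numerical illustration of the parallel, mine rather than anything in the cited literature: the same weighted-sum formula read once as Shannon’s H and once as Gibbs’ S.)

    ```python
    # Same weighted sum, two readings: Shannon's H (bits per symbol, log base 2)
    # and Gibbs' S (J/K, natural log times Boltzmann's constant).
    import math

    p = [0.5, 0.25, 0.125, 0.125]                 # toy probability distribution over states
    k = 1.38e-23                                  # Boltzmann constant, J/K

    H = -sum(pi * math.log2(pi) for pi in p)      # Shannon entropy: 1.75 bits/symbol
    S = -k * sum(pi * math.log(pi) for pi in p)   # Gibbs entropy: ~1.7e-23 J/K
    print(H, S)
    ```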

    Jaynes et al have identified this much as I have described, entropy in effect being a metric of additional — thus missing — info to specify a microstate given macrostate. Or, we could use more traditional terms about degrees of microscopic freedom consistent with a macrostate. Also, since we are missing that info, when we use a system in say a heat engine, we have to treat the system as random across the set of microstates — a heat source not a mechanical source. (Contrast how efficient a wind turbine can be to a heat engine using air at the same temp as hot temp working fluid.)

    Relevance to FSCO/I is that cellular life is an example of macro-identifiable function that locks down state possibilities to a narrow pool of possibilities rather than a much broader config space. Island of function.

    A low entropy, high information state.

    One that is not to be “simply” accounted for by making appeals to open systems and flows of energy and or matter. Tornados reliably rip apart aircraft, they don’t assemble them from parts. For reasons closely connected to the statistical-informational view of entropy.

    The loose usage of “entropy” to denote the info in N symbols of average info per symbol H, is potentially misleading. Please reserve entropy for H, as per Shannon.

    KF

    PS: Mung, I have not fundamentally altered my views in any recent time.

  29. But you can call anything “entropy”

    Just ask Clausius!

  30. F/N: Since it is a very useful intro at basic mathematical level and goes at Amazon for about US$ 10, L K Nash’s Elements of Statistical Thermodynamics is a book I suggest for a basic read. Work through especially the discussion of the Boltzmann distribution in Ch 1. Harry S Robertson’s Statistical Thermophysics is also useful on the Informational school of thought, but is a stiffer read, and is much more expensive. Go look up in a Library. KF

  31. kf,

    Please reserve entropy for H, as per Shannon.

    Excellent point. One that I had intended to make (and may still) after a re-read of Sal’s initial post (OP in a different thread).

    http://ada.evergreen.edu/~arun.....weaver.pdf

  32. I agree with Gordon Davisson in that if it (the entropy du jour) does not apply to thermodynamics, then the 2nd law of thermodynamics does not apply to it.

    However I still maintain that the fact that there is a 2nd law of thermodynamics is evidence for Intelligent Design.

  33. kf @29,

    Don’t you mean that H is enthalpy and S is entropy?

  34. Dr Sewell, you may find the remarks here helpful. KF

  35. DGW:

    I spoke about the Shannon entropy — average info per symbol, usually symbolised by H. (To make things even more confusing, it seems Boltzmann used H for entropy; I do not know what Gibbs used. The modern symbol for entropy in thermodynamic contexts is S.)

    The loose usage where the term “entropy” in information is extended to include the info in a message of N symbols, clouds the issue and the proper meaning of H. Notice, how I have given a concrete example of Shannon’s own usage in 1950/51 that makes the matter crystal clear:

    The entropy is a statistical parameter which measures, in a certain sense, how much information is produced on the average for each letter of a text in the language. If the language is translated into binary digits (0 or 1) in the most efficient way, the entropy is the average number of binary digits required per letter of the original language. The redundancy, on the other hand, measures the amount of constraint imposed on a text in the language due to its statistical structure, e.g., in English the high frequency of the letter E, the strong tendency of H to follow T or of V to follow Q. It was estimated that when statistical effects extending over not more than eight letters are considered the entropy is roughly 2.3 bits per letter, the redundancy about 50 per cent.

    As can be seen here, it is average info per symbol, aka Shannon Entropy, which is directly related to the Gibbs entropy metric.

    The Gibbs entropy metric turns out to be a measure of the average MISSING information to specify the microstate actually taken up by a system, where what we know is the macrostate specified by lab observable conditions and constraints. That is why I have now taken to speaking in terms of MmIG: the macro-micro info gap.

    This then implies that configuration is a relevant aspect of entropy, thus information.

    Going further, when we deal with something that exhibits FSCO/I, the observable function sharply constrains possible configs, i.e. we are in a low entropy, high KNOWN information context.

    The only empirically warranted way — and this is backed up by the needle in the haystack challenge — to put something complex enough to be FSCO/I into that state is IDOW — intelligently directed organising work. AKA, Design.

    All of this brings us back to the significance of FSCO/I as an empirically reliable, tested sign of design.

    KF

  36. Mung:

    Why do you refer to S as a change in entropy?

    Sal’s formula had “delta” S; I didn’t want to bother looking up the code for the delta symbol.

  37. F/N:

    Re entropy:

    It is the logarithm of the number of energy microstates consistent with the macroscopic state of a thermodynamic system.

    This is intended to sum up the Boltzmann expression S = k log W in words. W includes, however, position and momentum degrees of freedom of microparticles (up to six per particle, three each . . . gets us into phase space) in the system in view. W is the number of ways mass and energy (which is in moving mass and can be partly stored in position with a vibrating particle, etc.) can be distributed consistent with the lab-observable macrostate.
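
    (A tiny worked case, in the spirit of Nash’s examples rather than anything above: counting W directly for a toy system and plugging it into S = k log W.)

    ```python
    # Toy macrostate: q = 4 identical energy quanta shared among n = 3 oscillators.
    # The number of ways to distribute them is W = C(q + n - 1, n - 1).
    import math

    k = 1.38e-23                       # Boltzmann constant, J/K
    q, n = 4, 3
    W = math.comb(q + n - 1, n - 1)    # 15 microstates consistent with this macrostate
    S = k * math.log(W)
    print(W, S)                        # 15, ~3.7e-23 J/K
    ```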

    Gibbs’ expression generalises this.

    In so doing of course the bridge to Shannon’s entropy opens up. Cf here.

    KF

  38. PaV:

    Sal’s formula had “delta” S; I didn’t want to bother looking up the code for the delta symbol.

    Right you are. I was looking at a quote of something written by Clausius rather than the actual equation.

    So what does a change in entropy really mean? What is it *really* that is changing?

  39. p.s. And if it is the *same* thing that is changing in each case, what does it mean to say that there are “other types of entropy”?

  40. Mung,

    Cf here for a case in point (melting of ice); and the OP will help.

    Entropy tends to disorder systems by degrading — I would say diffusing, but that term already has a meaning of its own here — concentrations of energy and loosening constraints on configurations. That last part is where it eats up info, hashing it.

    As the OP shows there is a little switcheroo involved in Shannon vs Gibbs. Shannon’s metric is avg info communicated per symbol which looks a lot like the other.

    After Jaynes et al were through with it, it looks like Gibbs entropy is looking at avg MISSING info on specific microstate compatible with a macrostate given by lab observable factors.

    Where design issues come in is that if we observe a relevant life-functional state, that is macro-observable (in the relevant sense; looking at a few items of micron scale is not like looking at 10^18 – 10^26 molecules at sub-nanometre scale) AND sharply constrains configs. Indeed we can bring to bear knowledge of genomes, protein synthesis etc. that MUST be going on. Compare that with a Humpty Dumpty exercise of pricking open a cell and decanting it into a mini test tube. No function, and the parts diffuse all over the place through Brownian motion — extra-large molecules. Never to be seen back together again.

    My nanobots-microjets thought exercise is about putting together a tiny functional system in the teeth of diffusion. The lesson is that there is a need for intelligently directed organising work [IDOW] that — as S is a state variable — can be broken in two parts: dS_clump to bring parts together anyhow, and dS_config, to put in functional order. Per state function:

    dS_tot = dS_clump + dS_config

    And, for the parts, the entropy falls drastically twice as you clump and config; when it is over, the flyable microjet is in a tightly constrained island of function in the space of possible configs of parts. As a crude estimate let’s have 100 * 100 * 100 = 10^6 1-micron location cells vs a 1-cm cube with (10^4)^3 = 10^12 cells for diffused parts. So we would have 1 in 10^6 cells with something in it.

    To get them all next to one another is a task. Similarly, to configure the right part next to the right part in a 3-d array of nodes and arcs, where parts also need to be properly oriented gives a huge search space again.

    The chance based search mechanism is obvious: diffusion, leading to overwhelming dominance of spread-out states. (Ever done the ink drop in a beaker exercise?)

    Even when clumped, non-functional states will dominate over functionally configured ones.

    And, the IDOW to assemble 10^6 parts correctly is well beyond 1,000 bits of info. Just the LIST, much less the nodes and arcs net, shows that.
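
    (A rough count, taking the numbers above at face value: just specifying which 10^6 of the 10^12 location cells are occupied already dwarfs the 1,000-bit threshold.)

    ```python
    # Bits needed just to say which 1e6 of the 1e12 one-micron cells hold a part:
    # log2 C(M, N) ~= N * (log2(M/N) + log2(e)) when M >> N (Stirling approximation).
    import math

    M = 10**12                    # location cells in the volume
    N = 10**6                     # parts to be clumped
    bits = N * (math.log2(M / N) + math.log2(math.e))
    print(f"~{bits:.1e} bits")    # ~2e7 bits, far beyond 1,000 bits
    ```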

    There is no good reason on the gamut of the observed cosmos to expect such a system to assemble spontaneously, and that holds even for sub-assemblies. (Sir Fred Hoyle’s 747 in a junkyard is overkill; just to put together a D’Arsonval moving coil instrument on its dashboard or the instrument panel clock — with a nod to Paley — would be more than enough. As another example, imagine the task of our tornado assembling a self-replicating watch.)

    Nope, absent something being built into the laws of the cosmos that forces assembly of something like a cell based living form in ladderlike fashion from molecular components in Darwin’s pond or the like, just diffusion alone, much less hydrolysis reactions and the need for gated encapsulation, puts the usual OOL scenarios on the death watch list.

    Of course, such a front loaded cosmos would SCREAM design.

    So, entropy turns out to be very relevant, once we see the issue of configurations and macro-micro info gaps.

    As for types of entropy, I suspect that we are really discussing applications to specific contexts. However there are also different metrics depending on how sophisticated a mathematical framework you want. Clausius –> Boltzmann –> Gibbs –> von Neumann –> Shannon and onwards from there. (And yes, von Neumann crops up here again, that joke about the nest of Martians in and around the old Austro-Hungarian Empire has just enough bite to make it pinch.)

    KF

  41. For those who might be interested, I’ve found discussions of entropy in the following textbook to be helpful:

    Thermodynamics & Statistical Mechanics: An intermediate level course, by Richard Fitzpatrick, Associate Professor of Physics, University of Texas at Austin. Freely available in PDF format here.

    And I second Mung’s recommendation of Arieh Ben-Naim‘s Entropy Demystified. I’m not competent to judge its scientific merits, but the content is accessible and straightforward.

  42. Entropy has no power to do anything. It can neither build nor destroy.

    Whence then the phrase “entropic force”, which is commonplace in scientific literature?

    Several posters have remarked on the confusion surrounding the topic of entropy: terminology, concepts, etc. This is a case in point.

  43. Hi KD

    Glanced at the book; it seems okay, though I would have strongly preferred that he did not use unusual symbols for probability (the cup-cap, if he insisted, would have been good). As it is, the pipe, as in P(A|B), has a standard meaning — conditional probability — that becomes confusing.

    In addition the premise of equal a priori probabilities in this context will come in for all sorts of objections. (I am not saying they are good, just that there will be endless demands for proof of a postulate of the theory justified in the end by its empirical success . . . )

    The derivation of Boltzmann entropy is workable, though Nash’s development is far more intuitive. Also, please, for decades now it has been Kelvins, “degrees” having been deprecated.

    L K Nash’s Elements of Statistical Thermodynamics remains a classic teaching tool, and Robertson’s Statistical Thermophysics lays out the informational approach that Ben Naim is using to some extent.

    KF

  44. PS: “Entropic force” probably loosely describes the strong tendency to move to predominant clusters of microstates.

  45. It is fascinating to see how, as the theory of thermodynamics progressed, the focus of interest shifted from what it is possible for a system to do, to what it is possible for an observer to know about the system.

    – Jeremy Campbell, Grammatical Man
