Home » Biophysics, ID Foundations, Intelligent Design, Physics, Science » 2nd Law of Thermodynamics — an argument Creationists and ID Proponents should NOT use

2nd Law of Thermodynamics — an argument Creationists and ID Proponents should NOT use

ID proponents and creationists should not use the 2nd Law of Thermodynamics to support ID. Appropriate for Independence Day in the USA is my declaration of independence from, and disavowal of, 2nd Law arguments in support of ID and creation theory. Any student of statistical mechanics and thermodynamics will likely find Granville Sewell’s argument, and similar arguments, inconsistent with the textbook understanding of these subjects, and wrong on many levels. With regrets for my dissent from colleagues (like Granville Sewell) and friends in the ID and creationist communities, I offer this essay. I do so because saying nothing would be a disservice to the ID and creationist community of which I am a part.

 [Granville Sewell responds to Sal Cordova here.]

I’ve said it before, and I’ll say it again: I don’t think Granville Sewell’s 2nd law arguments are correct. An author of the founding book of ID, Mystery of Life’s Origin, agrees with me:

“Strictly speaking, the earth is an open system, and thus the Second Law of Thermodynamics cannot be used to preclude a naturalistic origin of life.”

Walter Bradley, Thermodynamics and the Origin of Life

To begin, it must be noted that there are several versions of the 2nd Law, a consequence of the evolution of thermodynamic theory from classical thermodynamics to modern statistical mechanics. Here are textbook definitions of the 2nd Law of Thermodynamics, starting with the more straightforward version, the “Clausius Postulate”:

No cyclic process is possible whose sole outcome is transfer of heat from a cooler body to a hotter body

and the more modern but equivalent “Kelvin-Planck Postulate”:

No cyclic process is possible whose sole outcome is extraction of heat from a single source maintained at constant temperature and its complete conversion into mechanical work

How then can such statements be distorted into defending Intelligent Design? I argue ID does not follow from these postulates and ID proponents and creationists do not serve their cause well by making appeals to the 2nd law.

I will give illustrations first from classical thermodynamics and then from the more modern versions of statistical thermodynamics.

The notion of “entropy” was inspired by the 2nd law. In classical thermodynamics, order wasn’t even mentioned as part of the definition of entropy. I also note that some physicists dislike the use of the term “order” to describe entropy:

Let us dispense with at least one popular myth: “Entropy is disorder” is a common enough assertion, but commonality does not make it right. Entropy is not “disorder”, although the two can be related to one another. For a good lesson on the traps and pitfalls of trying to assert what entropy is, see Insight into entropy by Daniel F. Styer, American Journal of Physics 68(12): 1090-1096 (December 2000). Styer uses liquid crystals to illustrate examples of increased entropy accompanying increased “order”, quite impossible in the entropy is disorder worldview. And also keep in mind that “order” is a subjective term, and as such it is subject to the whims of interpretation. This too mitigates against the idea that entropy and “disorder” are always the same, a fact well illustrated by Canadian physicist Doug Craigen, in his online essay “Entropy, God and Evolution”.

What is Entropy? by Tim Thompson

From classical thermodynamics, consider the heating and cooling of a brick. If you heat the brick it gains entropy, and if you let it cool it loses entropy. Thus entropy can spontaneously be reduced in local objects even if entropy in the universe is increasing.

Consider the hot brick with a heat capacity of C. The change in entropy ΔS is defined in terms of the initial hot temperature TH and the final cold temperature TM:

ΔS = ∫ C dT/T = C ln(TM/TH)

Supposing the hot temperature TH is higher than the final cold temperature TM, then ΔS will be NEGATIVE, thus a spontaneous reduction of entropy in the hot brick results!

The following weblink shows the rather simple calculation of how a cold brick, when put in contact with a hot brick, spontaneously reduces the entropy of the hot brick even though the joint entropy of the two bricks increases. See: Massachusetts Institute of Technology: Calculation of Entropy Change in Some Basic Processes

So it is true that even if universal entropy is increasing on average, local reductions of entropy spontaneously happen all the time.
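To make the brick arithmetic concrete, here is a minimal numerical sketch of a hot and a cold brick with equal heat capacities coming to equilibrium. The heat capacity and temperatures are illustrative values I am assuming, not taken from the MIT page:

```python
import math

def delta_S(C, T_initial, T_final):
    """Entropy change of a body with constant heat capacity C heated or
    cooled from T_initial to T_final (kelvin): integral of C dT / T,
    which evaluates to C ln(T_final / T_initial)."""
    return C * math.log(T_final / T_initial)

C = 1000.0                        # J/K, assumed heat capacity of each brick
T_hot, T_cold = 400.0, 300.0      # initial temperatures, K
T_final = (T_hot + T_cold) / 2    # equal capacities, so they meet at the mean

dS_hot = delta_S(C, T_hot, T_final)    # negative: the hot brick LOSES entropy
dS_cold = delta_S(C, T_cold, T_final)  # positive: the cold brick gains more
dS_total = dS_hot + dS_cold            # positive: the 2nd law holds jointly

print(f"hot brick:  {dS_hot:+.1f} J/K")
print(f"cold brick: {dS_cold:+.1f} J/K")
print(f"total:      {dS_total:+.1f} J/K")
```

The hot brick’s entropy spontaneously drops (about −133 J/K with these numbers) while the joint entropy of the pair still rises (about +21 J/K), which is the whole point of the example.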

Now one may argue that I have used only notions of thermal entropy, not the larger notion of entropy as defined by later advances in statistical mechanics and information theory. But even granting that, I’ve provided a counterexample to claims that entropy cannot spontaneously be reduced. Any 1st-semester student of thermodynamics will make the calculation I just made, and thus it ought to be obvious to him that nature is rich with examples of entropy spontaneously being reduced!

But to humor those who want a more statistical flavor to entropy rather than classical notions of entropy, I will provide examples. But first a little history. The discipline of classical thermodynamics was driven in part by the desire to understand the conversion of heat into mechanical work. Steam engines were quite the topic of interest….

Later, there was a desire to describe thermodynamics in terms of classical (Newtonian-Lagrangian-Hamiltonian) mechanics, whereby heat and entropy are merely statistical properties of large numbers of moving particles. Thus the goal was to demonstrate that thermodynamics was merely an extension of Newtonian mechanics applied to large sets of particles. This more or less worked when Josiah Gibbs published his landmark treatise Elementary Principles in Statistical Mechanics in 1902, but it then had to be amended in light of quantum mechanics.

The development of statistical mechanics led to the extension of entropy to include statistical properties of particles. This has possibly led to confusion over what entropy really means. Boltzmann tied the classical notions of entropy (in terms of heat and temperature) to the statistical properties of particles. This was formally stated by Planck for the first time, but the equation is nonetheless known as “Boltzmann’s entropy formula”:

S = kB ln W

where W (sometimes written Ω) is the number of microstates (a microstate is roughly the positions and momenta of the particles in classical mechanics; its meaning is more nuanced in quantum mechanics). So one can see that the notion of “entropy” has evolved in the physics literature over time….

To give a flavor for why this extension of entropy is important, I’ll give an illustration with colored marbles of an increase in the statistical notion of entropy even when no heat is involved (as it is in classical thermodynamics). Consider a box with a partition in the middle. On the left side are all blue marbles; on the right side are all red marbles. In a sense one can clearly see the arrangement is highly ordered, since marbles of the same color are segregated. Now suppose we remove the partition and shake the box so that the red and blue marbles mix. The process has caused the “entropy” of the system to increase, and only with some difficulty can the original ordering be restored. Notice that we can do this little exercise with no reference to temperature and heat, as is done in classical thermodynamics. It was for situations like this that the notion of entropy had to be extended beyond notions of heat and temperature. And in such cases the term “thermodynamics” seems a little forced, even though entropy is involved. No such problem exists if we simply generalize to the larger framework of statistical mechanics, which encompasses parts of classical thermodynamics.
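The marble picture can be made quantitative with Boltzmann’s S = k ln W. Here is a toy sketch; the 10 marbles of each color and the simple left/right slot model are my assumptions for illustration:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

# Toy model: 20 slots in a row, 10 red and 10 blue marbles.
# Segregated macrostate: all reds left, all blues right -> one arrangement.
W_segregated = 1
# Mixed macrostate: any choice of 10 slots for the red marbles counts.
W_mixed = math.comb(20, 10)  # 184,756 arrangements

# Entropy jump on removing the partition and shaking, via S = k ln W:
dS = k_B * (math.log(W_mixed) - math.log(W_segregated))
print(W_mixed, dS)  # entropy increases, yet no heat was involved anywhere
```

The increase is tiny in joules per kelvin (k is very small), but the point is qualitative: the mixed macrostate has vastly more microstates than the segregated one.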

The marble illustration is analogous to the mixing of two kinds of distinguishable gases (like carbon dioxide and nitrogen). The situation is similar to the marble illustration: it doesn’t involve heat, but it does involve an increase in entropy. Though it is not necessary to go into the exact meaning of the equation, for the sake of completeness I post it here. Notice there is no heat term “Q” for this sort of entropy increase:

ΔSmix = −nR Σi xi ln xi

where R is the gas constant, n the total number of moles, xi the mole fraction of component i, and ΔSmix the change in entropy due to mixing.
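Evaluated for an equimolar two-gas mixture (one mole of each, an illustrative choice), the mixing formula gives a positive entropy change with no Q in sight:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def delta_S_mix(moles):
    """Ideal-gas entropy of mixing: dS = -n R * sum(x_i ln x_i),
    where n is the total number of moles and x_i are mole fractions."""
    n = sum(moles)
    return -n * R * sum((m / n) * math.log(m / n) for m in moles)

# One mole each of two distinguishable gases (e.g. CO2 and N2):
dS = delta_S_mix([1.0, 1.0])
print(dS)  # 2 R ln 2, roughly +11.5 J/K
```

Because every mole fraction xi is below 1, each ln xi term is negative, so ΔSmix is always positive: mixing distinguishable gases always raises entropy.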

But here is an important question: can mixed gases, unlike mixed marbles, spontaneously separate into localized compartments? That is, if mixed red and blue marbles won’t spontaneously order themselves back into compartments of all blue and all red (and thus reduce entropy), why should we expect gases to do so? This would seem impossible for marbles (short of a computer or intelligent agent doing the sorting), but it is a piece of cake for nature, even though there are zillions of gas particles mixed together. The solution is simple. If the mixture is brought to a temperature below −78.5 Celsius (where carbon dioxide at atmospheric pressure condenses out as dry ice) but above −195.8 Celsius (the boiling point of nitrogen), the carbon dioxide will condense but the nitrogen will not. Thus the two species spontaneously separate, order spontaneously re-emerges, and the entropy of the local system spontaneously decreases!

Conclusion: ID proponents and creationists should not use the 2nd Law to defend their claims. If an ID-friendly dabbler in basic thermodynamics can raise the objections I’ve raised, how much more will professionals in physics, chemistry, and information theory? To ID proponents and creationists who want to argue that the 2nd Law supports their claims but have no background in these topics, I would strongly recommend further study of statistical mechanics and thermodynamics before taking sides on the issue. I think more scientific education will cast doubt on evolutionism, but I don’t think more education will make one think that 2nd Law arguments are good arguments in favor of ID and the creation hypothesis.

UPDATE:
Dr. Sewell has been kind enough to respond. He pointed out my oversight of not linking to his papers. My public apologies to him. His response contains links to his papers.
Response to scordova

UPDATE:
At the request of Patrick at The Skeptical Zone, I am providing a link to his post, which links to other discussions of Dr. Sewell’s work. I have not read these other critiques, and they may well not attempt much civility. But because The Skeptical Zone has been kind enough to publish my writings, I have some obligation to reciprocate. Again, I have not read those critiques. What I have written was purely in response to the Evolution News and Views postings on the topic of thermodynamics.

Patrick’s Links to other critiques


45 Responses to 2nd Law of Thermodynamics — an argument Creationists and ID Proponents should NOT use

  1.

    Thank you for debunking Dr Sewell.
    He continues to put out gibberish about how evolution is related to the second law and entropy, while studiously avoiding a coherent definition of either.

    The second law can be stated by first defining a Stable State:
    A system is in a Stable State if it is, at most, hopelessly improbable that it can attain another state, different by a finite amount, without a finite and permanent change in the environment.

    The phrase “hopelessly improbable” accounts for the remote probability that the entropy of an isolated system will spontaneously decrease.

    Using that definition, the first and second laws are given by the Creationist Law of Stable Equilibrium: “In the absence of supernatural forces, a bounded system can attain one and only one stable state”.

    The “one and only one” is the first law

    It applies to all systems: black holes, quantum relativistic, Newtonian, systems without matter, the works.

    In regard to entropy, its definition is straightforward and needs no integrals.

    Entropy: a property of a system equal to the “Lost Work” divided by the temperature of the reservoir used to define the lost work.

    Lost Work: a property of a system equal to the difference between the Energy and the “Available Energy”.

    Available Energy: a property of a system equal to the maximum work that can be extracted from the system while coming to equilibrium with a heat reservoir.

    Although Entropy is not necessarily statistical, Boltzmann’s Entropy S = k ln W falls under this definition.

  2. There further needs to be a distinction between ORDER and ORGANIZATION. Microsoft Windows, or any binary computer program, is highly DIS-ordered in the classical sense, but highly organized in the conceptual sense. To the extent that entropy deals with “disorder” but is not relevant to organization, it is not an appropriate argument for ID.

    A ZIP file will exhibit levels of disorder comparable to random coin flips, but this does not imply a ZIP file is disorganized. More attention should be paid to these nuances.

    The problem for evolution and the origin of life is one of ORGANIZATION not ORDER. Hence, 2nd law arguments are not appropriate. Possibly arguments that borrow from statistics (which is part of statistical mechanics) and information science are appropriate.

  3. Regarding the following quote and the concept it expresses:

    “Strictly speaking, the earth is an open system, and thus the Second Law of Thermodynamics cannot be used to preclude a naturalistic origin of life.”

    I’ve heard this argument before, but it seems to have a deep flaw. The Earth may well be an “open system”, but the UNIVERSE is not. And life didn’t just originate on the Earth, but rather on the Earth WITHIN this universe.

    As it happens, I don’t bother with 2nd law arguments much myself… I actually think there are more “accessible” arguments to be made. But I do think 2nd law arguments are valid.

  4. A few things to ponder:
    By definition, a closed system should be considered evidence of ID.
    That a decrease in localized entropy happens in nature, and that things renew in nature, is evidence of ID.
    The laws of nature themselves are a product of ID by the One designer/creator Hashem (G-d).
    Entropy/the 2nd law in an open system (the universe, and the earth to some extent as it interacts with the sun) is an argument against the old-universe fable: the longer a continuous life-sustaining atmosphere persists in any location (in our knowledge base, that being earth), the lower the odds, and the greater the miracle.
    See the recent complex creation,
    pearlman cta
    author

  5. Though my opinion on this is probably worthless, I will nonetheless go on record as disagreeing with you, scordova, and as agreeing with Dr. Sewell.

    Namely you cite this example:

    “Nitrogen will not, and thus the two species will spontaneously separate and order spontaneously re-emerges and entropy of the local system spontaneously reduces!”

    And Yet,

    Three subsets of sequence complexity and their relevance to biopolymeric information – Abel, Trevors
    Excerpt: Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC)… Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC)…

    Testable hypotheses about FSC

    What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses:

    Null hypothesis #1
    Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.

    Null hypothesis #2
    Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function.

    Null hypothesis #3
    Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function.

    Null hypothesis #4
    Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time.

    We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified.
    http://www.tbiomed.com/content/2/1/29

    Scordova, in your example purporting to show why Dr. Sewell is wrong, I still see no movement towards viable functional sequence complexity (functional information), and it seems very reasonable to me that you have in fact moved away from viable functional sequence complexity. i.e., In regards to functional information, I would hold that the entropy increased when you moved from RSC to OSC in your example. Thus entropy, the second law, holds as valid as far as the generation of complex functional information is concerned, which is the context in which Dr. Sewell used it!

    further note:

    “Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…”
    Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]

    “Bertalanffy (1968) called the relation between irreversible thermodynamics and information theory one of the most fundamental unsolved problems in biology.”
    Charles J. Smith – Biosystems, Vol.1, p259.

    “Gain in entropy always means loss of information, and nothing more.”
    Gilbert Newton Lewis – preeminent Chemist of the first half of last century

    “Klimontovich’s S-theorem, an analogue of Boltzmann’s entropy for open systems, explains why the further an open system gets from the equilibrium, the less entropy becomes. So entropy-wise, in open systems there is nothing wrong about the Second Law. S-theorem demonstrates that spontaneous emergence of regular structures in a continuum is possible.,,, The hard bit though is emergence of cybernetic control (which is assumed by self-organisation theories and which has not been observed anywhere yet). In contrast to the assumptions, observations suggest that between Regularity and Cybernetic Systems there is a vast Cut which cannot be crossed spontaneously. In practice, it can be crossed by intelligent integration and guidance of systems through a sequence of states towards better utility. No observations exist that would warrant a guess that apart from intelligence it can be done by anything else.”
    Eugene S – UD Blogger
    http://www.uncommondescent.com.....ent-418185

  6. I’m really sorry if my English is not good enough. I hope you can understand my thoughts anyway.
    I think, Mr Scordova, you made a mistake. That is quite obvious if you have a second look at your gas example. The separation effect of the two gases is due to the presence of the gravitational field, which you introduced without noting it. You would not be able to construct your little experiment in a setting where gravitation is absent. So in fact your gases are not separating because of the reduction in temperature but because of the presence of a directed gravitational field. Because the condensing CO2 molecules are heavier than the N2 molecules, they sink in the direction of the gravitational field. The field was present throughout the experiment but only shows its presence and force when temperatures are reduced. This sort of directed field is exactly what Mr Sewell proposes must enter the boundary of non-isolated systems to get order.
    Imagine you created a strong gravitational field of the right strength in your first scenario (where the gases are still mixed), for instance by putting the whole apparatus in a centrifuge. Voila! The separation effect will show up as well, but nobody would propose that no special ordering force had entered the boundary of the non-isolated system.
    It’s the same as if you introduced a sieve into your marble container, where the clumping red marbles (clumping happens because of temperature reduction) can’t pass the sieve but the blue ones can. Without a doubt the blue ones will collect at the bottom of the container.
    I’m sorry, but the arguments of Mr Sewell are in my opinion still valid.

  7. Hi SC:

    Pardon, but I do not agree.

    I suggest, you read here, here [esp chs 7 & 8], and also here. In this last, please note the discussion on parts of a micro-jet moving about by forces of diffusion in a vat.

    These links will give my reasons in outline.

    I particularly note that, by the nature of the case, functionally specific, complex organised states will be deeply isolated in the space of possible configs, and will be maximally elusive to blind searches. In short, on the accessible atomic resources and time, spontaneous origin of such states will be maximally implausible, much as with monkeys at keyboards typing significant passages from Shakespeare or the like. This is closely related to the analytical grounds of the statistical form of the 2nd law.

    And, as I note here, while indeed there is a distinction between order and organisation — the latter being a special type of order that is information-rich, there is also a bridge between thermodynamics, entropy and information. citing Wiki speaking against interest:

    At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

    But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).

    Last but not least, I think an excerpt from Thaxton et al, at the end of Ch 7 of TMLO, will I believe, help to further frame discussion:

    While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely, the formation of protein and DNA from their precursors.

    It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . .

    I trust this will prove helpful.

    KF

  8. Just a little test, because my comment doesn’t show up. It’s the first time I’m commenting.

  9. I’m not going to take sides in this but will suggest that the first equation in the post be corrected to include differential dT . I actually found myself wondering, because of the omission, if the OP has analysis background.

  10. I trust this will prove helpful.

    KF

    I agree with most of what is in Mystery of Life’s Origin, and I quoted one of its authors’ (Walter Bradley’s) thoughts on the matter of the second law.

    But the 2nd law proceeds from the microstates, not the other way around! The first formulations of the 2nd law had no model whatsoever of microstates. Hence, the Clausius postulate is inappropriate for describing the modern notions of entropy (like mixing entropy, or things like configuration and organization).

    A DNA strand like AATG….., in the sense of its effect on thermodynamic entropy, is not really different from another DNA strand of the same length but with different characters like TTTG……; yet in terms of organization, this could make all the difference between life and death. The Clausius postulate (the first formal statement of the 2nd Law) has little if anything to say about such matters; that’s why we should not use the 2nd law.

    Anyway, I posted this because I think there is disagreement over the matter in the ID community, and I wanted others in the ID community who hold my reservations to know they aren’t alone in their doubts.

    If I have these reservations, I’m sure other ID sympathizers will share them.

    But thank you for your comments. This could be a very educational discussion, and such input is valuable for clarifying the issues.

    In the book you mention, use was made of the notion of “Configuration Entropy”, which measures the level of disorganization (not disorder). This is not at all a part of the Clausius postulate.

    If one has to rely on a particular version of the 2nd Law, rather than making a claim that can be justified by all widely accepted versions, then one is entering difficult territory. I wish to let my ID comrades know they will find it challenging to defend ID with such arguments, because their opponents will simply keep throwing up the Clausius postulate and asking, “how does the Clausius postulate disprove the naturalistic origin of life?”

    The Clausius postulate:

    No cyclic process is possible whose sole outcome is transfer of heat from a cooler body to a hotter body

    Here is a more accessible way to argue against naturalistic origins of life, courtesy of Jonathan Wells (who was an undergraduate physics major before getting his doctorate in biology):

    So here is an interesting proposition by Dr. Jonathan Wells and aptly named “The Humpty Dumpty” solution. Take a sterile medium and in this medium insert a single-celled animal. Then simply rupture the outer membrane of the animal (or cell) and let all the contents of the animal freely float in the solution. Now, in this environment you have all the available components (according to reductionists) necessary for life. In fact, you have many times more available organic compounds than have been ever produced in any experiment that attempts to simulate the early formation of life as did the Miller-Urey experiment. All the components for life are right there waiting for you. What sane biologist in Ann Arbor or anywhere would say that you can create life from this “primordial soup?” The old nursery rhyme is true: you can’t put Humpty Dumpty together again.

    That’s the way to argue against naturalistic origins without using the Clausius postulate.

  11. I’m not going to take sides in this but will suggest that the first equation in the post be corrected to include differential dT . I actually found myself wondering, because of the omission, if the OP has analysis background.

    I see a dT, as in CdT. So you misread: you didn’t see it, and your attempted ad hominem isn’t appreciated. Further, it had no basis, since the CdT obviously contains a dT!

  12. Chance Ratcliff

    Assuming there is no 2nd law violation, what law does the spontaneous generation of functional complexity violate, exactly? It takes energy to purposefully order and organize arrangements of matter into information. Taking for example magnetic letters on a board, distributed haphazardly, suppose I wanted to arrange the letters into a specific phrase. I would either need to input the phrase manually, or leave it to law and chance. What law says that I’m required to impart information into the arrangement of letters that need to be organized specifically, if I don’t wish to wait?

  13. I think that the best argument from the 2nd law relates not to biological evolution but to the low entropy in the early universe. Oxford physicist Roger Penrose has computed that the initial conditions of the universe were extremely unlikely: about 1 chance in 10 to the power of 10 to the power of 123. This clearly points to design, much more than any argument related to bio-evolution. It is exceedingly surprising that the entire universe is not so dominated by black holes that no life whatsoever could exist…

  14. SC:

    Without going into a debate, you may find the discussion of a diffusion model from Yavorski and Pinsky, in my always linked, app I, helpful. Practising physicists usually do not make over much about the specifics of history. The macro picture is rooted in the micro one.

    And the key to why entropy increases in Clausius’ first example has to do with micro-level issues and moving to clusters of configs of higher relative statistical weight if free to do so.

    BTW, if a system is opened up to receive energy, as that same example shows, entropy tends to INCREASE.

    KF

  15. Dr. Sewell has been kind enough to respond. I also updated the original post to provide links to his papers. My sincere apologies for the oversight of not linking to them earlier.

    See:
    http://www.uncommondescent.com.....-scordova/

  16. If one reads Sewell, one inevitably comes to the conclusion that a perfectly-mixed suspension of oil and water can never, ever spontaneously separate into two perfectly-separated phases. The numbers are simply too prohibitive. (Readers here should be able to figure this out – just calculate the probability that 10^23 water molecules and 10^23 oil molecules can spontaneously find their ways to opposite and separate layers of, say, a cruet. The numbers make Dembski’s UPB look positively inevitable.)
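    For scale, the back-of-envelope calculation this comment invites can be sketched in a few lines of Python. This is a deliberately crude model (my own simplification, not the commenter's): each molecule independently has a 1/2 chance of landing in its "own" layer.

```python
from math import log10

# Crude model: each of N water molecules and N oil molecules
# independently ends up on its "correct" side with probability 1/2.
N = 1e23
log10_p = 2 * N * log10(0.5)  # log10 of the joint probability

print(log10_p)  # roughly -6e22, i.e. 1 chance in 10^(6*10^22)
```

    Dembski’s universal probability bound is 1 in 10^150; on this toy model the spontaneous separation sits unimaginably far beyond it, which is the commenter’s point: raw improbability alone cannot be the criterion, since oil and water separate anyway, driven by intermolecular forces.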

    Sewell’s real problem is that he believes that chemistry is not a factor. Because of this, he is ignorant of the fact that life, and evolution, exist because of the second law, not in spite of it. The oil/water example nicely illustrates the fundamental underlying chemistry.

    A disclaimer – if you believe Sewell, don’t try the oil-water experiment. We wouldn’t want to violate the second law and have to deal with the catastrophic consequences.

  17. Kudos to you Sal for being willing to be upfront in correcting a common error that, unfortunately, runs deep in the minds of a number of ID proponents. We should never accept a view simply because it is mainstream – be it in the mainstream non-teleological community or in the mainstream ID community.

  18. Assuming there is no 2nd law violation, what law does the spontaneous generation of functional complexity violate, exactly?

    Walter Bradley list 5 of the major laws of physics:

    1. classical mechanics
    2. electrodynamics
    3. statistical mechanics (and thermodynamics)
    4. quantum mechanics
    5. relativity

    It would be hard to say only one such law in isolation supports ID or precludes naturalistic origins of life.

    Bradley points out that ID is not so much saying that physical laws make designs impossible except through intelligence, but rather that only a narrow set of conditions (which only intelligence can provide) acting through those laws allows designs to be achieved.

    It takes energy to purposefully order and organize arrangements of matter into information.

    Energy is a necessary but not sufficient condition. An atomic bomb has enough energy in principle to build a city, but well….

  19. Chance Ratcliff

    Thanks for the response, scordova.

    Perhaps it’s easier to suggest that none of the five laws precludes ID, nor gives any support to naturalistic origins. ;-)

    It’s clear that the tornado running backwards violates some axiomatic principle. I couldn’t say why none of the major laws of physics suggests an explanation, given that the impropriety of such an occurrence is immediately obvious. The default conclusion would seem to be that tornadoes which assemble rubble into structures violate no known physical laws.

  20. Dr. Sewell and I have come to a rare point of agreement, and I will highlight the point with which I agree from the above link to his response to me:

    Obviously the origin and evolution of life do not violate the second law as stated in the early formulations you quote, but there are many formulations of this law, some more general than others.

    And what formulations have I quoted? Clausius and Kelvin-Planck. Which formulations are the most prominent? According to the Wikipedia article on the 2nd law:

    The second law of thermodynamics may be expressed in many specific ways,[4] the most prominent classical statements[3] being the statement by Rudolph Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carathéodory (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. They have been shown to be equivalent.

    So the formulations I provided constitute 2 of the 3 major ones in use. According to Wikipedia, the 2 I quoted are equivalent to the 3rd formulation, by Carathéodory. I have not been able to verify this for myself, but if so, then the origin and evolution of life do not violate any of the three major formulations of the second law of thermodynamics. This is consistent with what Walter Bradley said.

    Reference:
    http://en.wikipedia.org/wiki/S.....modynamics

    If ID proponents wish to use 2nd law arguments, they’ll have to work from formulations of the law that are not the major ones. People have a freedom of choice in this matter, but that’s not the route I would take or advise others to take.

    I’d like to thank Dr. Sewell for responding.

  21. Hey Groovamos,

    Are you still having problems understanding that the symbolic phrase

    CdT

    includes the exact differential dT? You’re making it seem you have little comprehension of basic math notation.

    Do you not understand CdT means the heat capacity multiplied by the exact differential dT? :-)

    If I inserted an extra dT as you suggested, then that equation would be wrong. Are you going to confess your error, or will I have to keep repeating it until you recant and admit you’re the one with analytical problems, since apparently you can’t discern that “CdT” means “C” multiplied by “dT”?

    Can you explain to the UD readers how you made such an error? How were you proposing to amend the equation? Were you going to suggest using CdTdT? :-)

    Oh I know, you might be thinking of the Java naming convention where different words are delimited by capitalization….yeah right.

    Unfortunately for you, I pointed out that C is heat capacity, so you have no excuse to say you read it as “Cd” multiplied by “T”. Not to mention that this equation will be found in thermodynamics texts, and that it was sourced from the linked MIT website.

    So are you going to fess up to your egregious error, or will I have to keep reminding readers about it? :-)

    I mean, it’s unbelievable you couldn’t notice that the integrand CdT/T has the term “dT” right there to the right of “C”. How could you miss it? Do you not understand basic 1st-semester calculus?

    How can you claim you didn’t see it, when you supposedly scrutinized the equation closely enough to claim that “dT” wasn’t in it, even though it clearly was? The “dT” sits only millimeters to the right of the “C” in “CdT”; that is awfully hard to miss.

  22. F/N: It is clear that the thread has unfortunately largely served as an occasion for tangential, irrelevant and distractive objections.

    I again draw attention to the key issues in 6 above. The spontaneous formation of a metabolising, von Neumann algorithmic self-replicator based entity has nothing to do with oil-water separation, which is due to sharp differences in inter-molecular forces because water is a polar molecule. It has everything to do with why diffusion does not spontaneously undo itself, why random text does not spontaneously form long passages of coherent meaning and function of 73 or more ASCII characters, etc.

    All of these have been highlighted for a long time and have been ducked, dodged or obfuscated.

    I suggest that the key clips cited at 6 and the 1984 discussion in TMLO chs 7 – 9 be used for a point of departure for serious discussion, taking in what Sewell actually has argued. If that is not taken up, it should serve to show to the astute onlooker that we are unfortunately back at the all too familiar design objector rhetorical tactic of red herrings and strawmen.

    KF

  23. F/N 2: The following from my always linked, app 1, uses a model of diffusion to give a bridge between microscopic and classical views, that is relevant:

    _________

    >> 4] Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e the ones with higher statistical weights. So “[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state.” [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system above is readily understood: importing d’Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B’s entropy swamps the fall in A’s entropy. 
Moreover, given that FSCI-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e W], we instead move to entropy, through s = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.) >>

    _________
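    The counting in the Yavorski and Pinsky example above can be checked directly; here is a minimal Python sketch (my illustration, reproducing the numbers quoted in the excerpt):

```python
from math import comb, log

# Ten white and ten black balls in two rows of ten cells.
# A macrostate "k white balls in the top row" can be realised in
# comb(10, k) * comb(10, 10 - k) ways: choose positions for the
# white balls in each row.
def weight(k):
    return comb(10, k) * comb(10, 10 - k)

print(weight(10))  # all 10 whites on top: 1 way
print(weight(5))   # 5-5 split: 63504 ways
print(weight(6))   # 6-4 split: 44100 ways (another 44100 for 4-6)

# Boltzmann's s = k_B ln W then assigns the evenly mixed macrostate
# a far higher entropy than the fully separated one.
k_B = 1.380649e-23  # Boltzmann constant, J/K
print(k_B * log(weight(5)))
```

    The 63,504 and 44,100 figures quoted in the excerpt fall straight out of the binomial counts, and the statistical weights make the drift toward the evenly distributed macrostates quantitative.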

    The link to the pivotal issue on the table, explaining the rise of functionally specific complex organisation to build first life, and thence novel body plans, should be clear. Such things are so isolated in the field of possible configs that they are not credibly observable on the gamut of the observed cosmos or solar system, at least by blind chance and mechanical necessity. But FSCO/I is readily and routinely observed as the product of intelligence.

    Dr Sewell is right. A tornado spontaneously assembling a house from its components is not a credible observation, on the principles of the second law. The same holds for the vastly more complex living cell.

    And before you go off yelling Hoyle Fallacy, I suggest you look at the task of spontaneously forming a universal constructor with a von Neumann self-replicator controlled by a coded tape. For that is a minimal requirement for metabolising, self-replicating life. Where also: no self-replication, no evolution by differential performance and locking in of genetic improvements. Recall, first life looks like 100K plus bits of genetic info, and novel body plans 10 – 100 mn bits, where just 500 – 1,000 bits swamps the atomic resources of our solar system and observed cosmos.

    The thought exercise on forming a microjet in a vat by diffusion in app 1 here, should also be instructive.

  24. I think there is a fundamental quality to the 2nd Law that is overlooked. Namely, that discussions of meaningful entropy change (of which “life” is one) must involve “barriers.”

    For example, Sal’s first model involved two bricks, one hot and one cold. But how could these two bricks become hot or cold unless they’re separated from one another? In fact, Sal’s example has the two bricks coming together in the end, wherein the two exchange and reach an equilibrium.

    This is the fundamental point: disequilibrium, of which many go to make up “life”, can only come about by a barrier/partition/separation of some sort.

    In Sal’s second example, consisting of marbles, what does he do to “increase” entropy? He removes a partition! And so, to get a “decrease” of entropy, a barrier will be needed.

    This is where the 2nd Law shows that “life” cannot come about via random forces. Why? Because too many barriers are needed. A cell has a membrane; it has its organelles, it has its own genetic system in place that must work, in some measure, apart from the rest of the cell (think of transcription). How did these barriers arise? How are they maintained? Where did the energy come from that maintained these barriers?

    This is a fundamental way of seeing how the 2nd Law militates against OOL sans ID.

    Sono in Italia; allora, ciao!

  25. Can you please tell me why I am still in moderation after one day?

  26. hi PaV:

    I think the key thing is to appreciate that thermodynamic “barriers” like this are not hard “it is forbidden” prohibitions; instead, a certain kind of observation becomes so probabilistically implausible on reasonably accessible resources that it is maximally unlikely.

    That is, it is a practical — as opposed to in-principle — impossibility.

    It is strictly possible for a tornado to assemble a house out of components, but it is so implausible per the relevant clusters of accessible possibilities, that it is unobservable on the gamut of our observed cosmos.

    KF

  27. Sal felt so proud of his “coming out” that he had to announce it on an evo blog.

    But Sal does have a point – we don’t need this argument, because materialism, and by extension evolutionism, are totally void of substance. And that means we can invoke the late Christopher Hitchens:

    “That which can be asserted without evidence, can be dismissed without evidence.”

  28. Joe: Would it surprise you to learn that it was thermodynamics issues — with TBO’s TMLO playing a key part — that brought me to look at design theory in the first place? KF

  29. KF- I understand and believe you. Accumulations of random events do not construct multi-part machinery.

    I’m just saying that we don’t need it because there isn’t anything to refute, yet.

  30. Chance Ratcliff

    Yes I’m left wondering which mechanism of evolution, exactly, is compatible with the 2nd law.

  31. Chance Ratcliff

    * is or isn’t compatible ^^

  32. Greetings, Sal:
    I’m curious if you might agree that the Boltzmann-Gibbs version of the second law would perhaps be easier to apply to the “reverse video” argument. I use it in my biology classes at Cornell when discussing the second law and found that students very quickly understand the concept of microstates and how they relate to entropy.

    Allen!

    How are you? I knew you got into some hot water for teaching an ID class a few years back. I can now appreciate what it feels like to be a dissenting voice among my own colleagues. I feel your pain… :-)

    With respect to your question, I don’t think I personally would teach it in that way.

    Here are my reasons.

    There are various kinds of entropy; thermal entropy is only one form. Further, a few physicists get indigestion at the thought that thermal entropy means more disorder. Some forms of entropy might be described as disorder, other forms not.

    Thermal entropy is in principle easier to measure if one has:

    1. thermometers
    2. means of estimating mass or other physical properties like volume and pressure, etc.

    Thermal entropy is measured in Joules/Kelvin. A Joule is 1/3600000 kilowatt hours.

    But how are other forms of entropy measured? That is a real problem. Rob Sheldon points out:

    My (Nobel nominated) college professor used to ask a rhetorical question in his thermo class, “what is the entropy change of a cow after the butcher put a 22 calibre bullet through the brain?” Yet this miniscule entropy change is supposed to tell us the difference between a “living” and “dead” cow. From a physics view point, there is almost no change in disorder, yet from a biological viewpoint it is all the difference in the world. Physics just doesn’t know how to measure this, and doesn’t even know if they ever can measure this quantity.

    A student of thermodynamics will be perplexed. We know disorder has been introduced, yet we can’t state how much the total entropy has increased. We can only measure the thermal entropy change (with things like thermometers and weight scales, etc.), but there is no objective measure of the real mechanical damage that has been done. It cannot be stated in terms of Joules/Kelvin, and that is a problem if one wishes to invoke physics….

    Suppose you take a computer from inside a warm house out into the cold on a winter day and then crush it with a sledgehammer. While you are doing that, the cold winter air is cooling the computer and thus lowering its thermal entropy. The result is that invoking thermodynamics doesn’t really help our understanding of the deterioration of mechanical order, especially when, formally speaking, thermal entropy can be going down while mechanical disorganization is going up. A physicist can estimate the change in thermal entropy of the smashed computer using a thermometer to a couple of significant figures, but he can’t give you the increase in entropy from the mechanical disorganization introduced by the sledgehammer. The problem is that the ORDER and ORGANIZATION of such macroscopic objects is in the eye of the beholder. This subjective quality of what we perceive as organized vs. disorganized has relevance to design arguments, but I digress…..

    This inability to measure and quantify mechanical disorder makes me cringe at the thought of using the 2nd law to describe the ravages of tornadoes. One can use the 2nd law to describe the change in thermal entropy; beyond that, we get into some serious controversy, imho.

    Greetings, Sal:
    I’m curious if you might agree that the Boltzmann-Gibbs version of the second law would perhaps be easier to apply to the “reverse video” argument. I use it in my biology classes at Cornell when discussing the second law and found that students very quickly understand the concept of microstates and how they relate to entropy.

    Some will disagree with me (like Mike Elzinga), but in physics, engineering, and biology there are notions of entropy other than thermal that are formally measured. I will also make the bold statement that the problem of subjectivity is painfully creeping in. Non-thermal entropy can be measured to various significant figures in Joules/Kelvin, but it is in the eye of the beholder. Let me list the notions that I’m aware of:

    1. Thermal Entropy
    2. Mixing Entropy
    3. Configurational Entropy (like the configuration of a protein)

    Configurational entropy plays a role in ID, but it depends on what one wishes to label as ordered. Why? It is in the eye of the beholder when a protein (much less an organ) becomes functional or partially functional (recall the eyes of blind cave fish!). What one man says is ordered, another man can say is disordered (just like the arguments over non-coding DNA!)…..

    I gave an example of mixing entropy. The marbles were an illustration only, but the actual entropy of mixing with gases can be measured or inferred with great accuracy to significant figures in terms of Joules/Kelvin. This is one case where change in entropy can be measured without any recourse to things like thermometers.

    But even then, there is a subjective element creeping in because what we deem as ORDERED is somewhat in the eye of the beholder. Here is a quote from a wiki article:

    This insight suggests that the idea of thermodynamic state and entropy are somewhat subjective.
    http://en.wikipedia.org/wiki/Gibbs_paradox

    What is the problem? Perhaps the marble illustration will help. The red and blue marble mixing illustrates an increase in entropy (not thermal entropy but mixing entropy). But what if I then soaked the box of marbles in dye so that all the marbles became purple and thus indistinguishable? Has the mixing entropy suddenly disappeared? The same result could happen if we merely wore glasses that made us color blind.

    Or to further illustrate, let’s start out with the two compartments of red and blue marbles but also with unique numbers on them. We start shaking the box and mixing the marbles. We can say the entropy increased. But if we started out the experiment with color blind glasses we might be tempted to say that entropy has not increased unless we observe the fact that each marble is pretty much in a different location from where it started. We can do this because we can identify the individual marbles, but if there were no identifying marks (or if we ignored the identifying marks) on the marbles we’d be saying the entropy has not changed! Thus our statements about the change in entropy are driven in part by our ability to or willingness to measure it. This feels not so wholesome….

    On the one hand we could say in principle the entropy has increased, but since we can’t distinguish the particles, we might say it hasn’t. If there is no utility (or ability) in actually distinguishing the particles, then we arbitrarily say there is no change in entropy. And this is the Gibbs Paradox….
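    The Gibbs paradox can be made quantitative with the ideal entropy of mixing; here is a hedged Python sketch (the two-gas numbers are my own example, not from the thread):

```python
from math import log

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(n_total, fractions):
    """Ideal entropy of mixing: dS = -n R sum(x ln x) over species."""
    return -n_total * R * sum(x * log(x) for x in fractions if x > 0)

# Mixing 1 mol of "red" gas with 1 mol of "blue" gas:
print(mixing_entropy(2.0, [0.5, 0.5]))  # about 11.5 J/K

# Dye every marble purple, i.e. treat it all as ONE species, and the
# very same final state carries zero computed mixing entropy:
print(mixing_entropy(2.0, [1.0]) == 0.0)  # True
```

    The formula itself shows the subjectivity at issue: whether the mixing entropy is 11.5 J/K or zero depends entirely on whether the observer chooses (or is able) to distinguish the species.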

    The point being, the non-thermal forms of entropy are subjective, and once we get to macroscopic objects like bacteria, cows, cars, etc., the subjectivity of what constitutes organized versus disorganized becomes painfully evident. Unlike thermal entropy, which can be determined with thermometers, the non-thermal entropies (especially when measuring macroscopic organization like cars) are in the eye of the beholder. Measurement almost becomes meaningless except perhaps in terms of its utility to the observer!

    With that on the table, back to your question:

    Greetings, Sal:
    I’m curious if you might agree that the Boltzmann-Gibbs version of the second law would perhaps be easier to apply to the “reverse video” argument. I use it in my biology classes at Cornell when discussing the second law and found that students very quickly understand the concept of microstates and how they relate to entropy.

    With thermal entropy, the tornado analogy is not appropriate for describing the microstates. When heat is applied to a pile of dirt, its thermal entropy goes up, but it is dubious to say the microstates become more disordered in the way a town becomes disordered when a tornado goes through it. Many physicists will not feel comfortable saying a thermal entropy increase corresponds to more disorder in the microstates. The entropy in this case corresponds merely to the number of microstates that can achieve the macroscopic properties (like temperature) of the object. It’s a bit esoteric, but suffice to say, disorder is a very forced way to describe the situation. I wouldn’t feel comfortable using the term for thermal entropy.

    Contrast this to an increase in mixing entropy or configurational entropy (such as a protein disintegrating), where position matters. It would be fair to say that the microstates become more disordered. I feel more comfortable with the notion of “disorder” here….

    The problem is the 2nd Law in engineering practice deals mostly with thermal entropy.

    The more foundational discipline of statistical mechanics deals with thermal, mixing, and configurational entropy. Invocation of the 2nd law for non-thermal entropies is very forced, imho, and that is the current point of contention in this thread.

    I may not have answered your question, but I hope it provides insights into the subject matter.

  33. Allen,

    As a follow-up, I want to go into why thermal entropy should not be associated with our notions of disorder. It is an abuse of language, imho. I will use an analogy, and it will make far more sense if you dissociate the word “entropy” from “disorder”.

    Consider that we have 5 people, and I give you 10 $1 bills to distribute among them. There are a limited number of ways you could do it:

    1. Give all $10 to one person and $0 to the other 4
    2. Give $2 to each of the 5 people
    3. Give $6 to 1 person and $1 to each of the other 4

    etc.

    The amount of money that each person has is his microstate and the way the money is distributed among the population is the microstate of the population.

    There will be only a limited number of ways that the money can be distributed, and this number corresponds to the “money entropy” of the system. Notice “disorder” is not a very good way to describe this simple counting of the number of ways we can distribute the wealth!

    Now if we had more money to give, like say $100, there will be even more ways to spread the wealth. We can give:

    1. $100 to one person, $0 to the other 4
    2. $96 to one person, $1 to each of the other 4
    3. $92 to one person, $2 to each of the other 4
    etc.

    Clearly there are more ways to spread the wealth when there is more money to give. This increase in the number of ways to spread the wealth can be said to correspond to higher “money entropy”. The word “disorder” is not really appropriate to describe the increase of “money entropy”.
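    The two counts in the money analogy can be made exact with the stars-and-bars formula; here is a small Python sketch (my addition, not part of the original comment):

```python
from math import comb

def ways_to_distribute(dollars, people):
    """Ways to split `dollars` indivisible $1 bills among `people`
    distinguishable recipients: the stars-and-bars count
    C(dollars + people - 1, people - 1)."""
    return comb(dollars + people - 1, people - 1)

print(ways_to_distribute(10, 5))   # $10 among 5 people: 1001 ways
print(ways_to_distribute(100, 5))  # $100 among 5 people: 4598126 ways
```

    More money (energy) means vastly more ways to spread the wealth, hence higher “money entropy”, exactly as argued; no appeal to “disorder” is needed anywhere in the count.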

    How does this relate to thermal entropy? Let the money correspond to the total energy of the system. Thus instead of dollars we are talking Joules or kilowatt-hours…

    The molecules are the people. The amount of kinetic energy that each molecule has is its energy microstate. Instead of dollars, the microstate of each molecule is characterized in Joules or kilowatt-hours (well, electron volts are a more appropriate unit)…

    Entropy is simply a measure of the ways that the total energy (like wealth) can be distributed. In quantum mechanics, where energy comes in discrete packets (like $1 bills are discrete) the analogy fits well and is actually easier to see. It’s not so easy using classical mechanics which makes the counting analogy much more difficult, but Gibbs figured ways around this (since there was no mature Quantum theory in his day).

    The kinetic energy of a molecule in classical mechanics is

    1/2 m v^2

    suffice to say there is a corresponding quantum description of energy, but the nice thing is that in quantum mechanics, the energy is in nice discrete amounts, which makes the counting of microstates nicer.

    Hopefully this shows why “disorder” is not a good description of the notion of thermal entropy.

    Mike Elzinga provided a means of testing understanding of these concepts. I fumbled some of my calculations to answer his questions about entropy in 16 molecules. His example illustrates the procedures for calculating entropy from the energy microstates of the atoms.

    See here for the discussion which illustrates the calculation (and some of my mistakes and subsequent corrections through others):
    The Skeptical Zone: 2nd Law, comment 14657

    That said, I’ve mentioned I think “disorder” may be appropriate to describe non-thermal entropies. This obviously will be a source of confusion!

  34. Yes I’m left wondering which mechanism of evolution, exactly, is compatible with the 2nd law.

    * is or isn’t compatible ^^

    The proper way to frame the argument is not whether evolution is compatible or incompatible with physical laws, but rather whether life’s emergence is consistent with the supposed mechanisms of the origin of life and evolution.

    Why is this distinction important? Because living systems transcend physical laws, just as software by definition transcends physical law and hardware.

    If we suppose evolutionary mechanisms are based on physical law, but then demonstrate that life’s essential features are independent of physical law, then your question becomes moot.

    How, you may ask, can this be done? The outline was provided in a little-known work that appeared in Cell Biology International.

    http://www.uncommondescent.com.....am-design/

    Natural mechanisms are all highly self-ordering. Reams of data can be reduced to very simple compression algorithms called the laws of physics and chemistry. No natural mechanism reducible to law can explain the high information content of genomes. This is a mathematical truism, not a matter subject to overturning by future empirical data. The cause-and-effect necessity described by natural law manifests a probability approaching 1.0. Shannon uncertainty is a probability function (−log2 p). When the probability of law-governed events approaches 1.0, the Shannon uncertainty content becomes minuscule (−log2 1.0 = 0 uncertainty). There is simply not enough Shannon uncertainty in cause-and-effect determinism and its reductionistic laws to retain instructions for life. Prescriptive information (instruction) can only be explained by algorithmic programming. Such DNA programming requires extraordinary bit measurements, often extending into megabytes and even gigabytes. That kind of uncertainty reflects freedom from law-like constraints.
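    The Shannon point in this passage is easy to illustrate numerically; here is a minimal Python sketch (the DNA example is my own, assuming a uniform 4-letter alphabet):

```python
from math import log2

def surprisal(p):
    """Shannon self-information (uncertainty) of an outcome with
    probability p, in bits: -log2(p)."""
    return -log2(p)

# A deterministic, law-like outcome (p = 1) carries no uncertainty:
print(surprisal(1.0) == 0.0)   # True

# A fair coin flip carries exactly one bit:
print(surprisal(0.5))          # 1.0

# 1000 independent bases drawn uniformly from {A, C, G, T}:
print(1000 * surprisal(0.25))  # 2000.0 bits
```

    The contrast is the argument in miniature: outcomes forced by law (p near 1) carry almost no bits, while genome-sized instruction sets require enormous numbers of them.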

  35. Chance Ratcliff

    “…rather whether life’s emergence is consistent with the supposed mechanisms of origin of life and evolution.”

    I suppose I largely agree. My point was rhetorical: evolution, and the origin of life for that matter, lack a verifiable natural mechanism which doesn’t itself rely upon the prior configuration of living systems – unless one includes RM+NS, which shows no sign of getting the job done. The supposed mechanisms are not forthcoming, which makes for difficult comparisons. Add to that the current academic environment, which has defined non-material causes for material effects as outside of scientific investigation, hence outside of verifiable knowledge.

  36. Sal et al:

    Kindly, cf my remarks here.

    KF

  37. Sal, what is your background in:

    1. Mathematics
    2. Thermodynamics

  38. kf,

    Have you read anything at all by Arieh Ben-Naim?

    http://www.ariehbennaim.com/

  39. Somebody is trying to popularise entropy! More power to him. (BTW, to help students see what was going on behind introductory stat mech, I once tried an exercise with students using beads and small-cube ice trays, to communicate how the same sort of randomness emerges. As the number of beads grows relative to the number of cells, we can see a reverse-J turn gradually into a bell.)

  40. F/N: I should explain: beads = lumps of energy, and cells = atoms, with the ground state being 0. Put in beads, shake up, and count trays with 0, 1, 2, 3 etc. beads. The at-random distributions emerge quite naturally. With few beads, 0 dominates and a reverse J occurs. As the number of beads goes up, we get to a point where the modal number of beads is more than 0.

    I found this little exercise more helpful than a raft of abstruse, blindly memorised mathematics. It then helped students see what was going on behind the usual hieroglyphics that they struggle with, across two alphabets and odd forms. For instance, the integral sign is an elongated S for sum. The delta used for a small increment is a Greek letter; so is the Sigma used in summation. And students often struggle to follow all those beloved subscripted indices etc. Not to mention the lore of algebra, much less calculus and differential equation theorems.

    Then, mix in the approximations and frank virtuoso trickery — this is NOT a compliment — in too many derivations. Then, there is the tendency to skip and not even hint at key steps in derivations. Top it off with the silly convention of laying out naked strings of formulae without explanation. (I grew up in the old-fashioned geometric system, where steps of reasoning were explained: Given xyz = ABS, then ADB –> BC, by so-and-so’s principle, etc.) We have not touched on the even weirder symbols of logic!
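    The bead-and-ice-tray exercise is straightforward to mimic in software; here is a hedged Python sketch of the same idea (the bead, cell, and trial counts are my choices, not from the classroom version):

```python
import random
from collections import Counter

def shake(n_beads, n_cells, trials=2000, seed=1):
    """Drop n_beads (energy lumps) into n_cells (atoms) uniformly at
    random, repeatedly, and tally how often a cell ends up holding
    0, 1, 2, ... beads across all trials."""
    rng = random.Random(seed)
    occupancy = Counter()
    for _ in range(trials):
        cells = [0] * n_cells
        for _ in range(n_beads):
            cells[rng.randrange(n_cells)] += 1
        occupancy.update(cells)
    return occupancy

# Few beads: counts fall away from 0, the "reverse J".
print(sorted(shake(4, 16).items()))

# Many beads: the modal occupancy moves off 0, toward a bell shape.
print(sorted(shake(64, 16).items()))
```

    With 4 beads in 16 cells most cells are empty, so the histogram is a reverse J; with 64 beads the mode sits near the mean of 4 beads per cell, just as in the classroom exercise.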

  41. F/N 2: Oh yes, I forgot. The ice trays also allowed a model of diffusion to be explored, which is central to understanding the reason entropy and information are so closely linked.

    Start with a painted line down the middle, and have some beads on one side, A. B has none to begin with.

    Shake up — just a tad — and inspect, repeatedly. You can trace the strong tendency of beads to even out on both sides, with the same basic distribution.

    In short, we see a strong trend of migration from clusters of microstates that have low statistical weights in the config space, to those that have high weights. And not the reverse. We see time’s arrow, and we see why entropy is associated with disorder.

    Of course, it soon becomes evident, too, that the reason is that we have a config space, with vastly more dispersed than clumped together possibilities.
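    The painted-line demo maps onto the classic Ehrenfest urn model. Here is a minimal sketch (my own illustration, assuming one randomly chosen bead crosses the line per gentle shake):

```python
import random

def diffuse(n_beads=50, steps=400, seed=1):
    """Two-sided Ehrenfest-style model of the ice-tray demo: all beads
    start on side A; each gentle shake sends one randomly chosen bead
    across the painted line. Returns the count on side A over time."""
    rng = random.Random(seed)
    in_a = n_beads                 # every bead starts on side A
    history = [in_a]
    for _ in range(steps):
        if rng.randrange(n_beads) < in_a:
            in_a -= 1              # a side-A bead hops over to B
        else:
            in_a += 1              # a side-B bead hops back to A
        history.append(in_a)
    return history

h = diffuse()
# Early counts sit near 50; later counts hover around the even 25/25
# split, with small fluctuations about it.
```

    The run shows the strong one-way trend the comment describes: the all-on-one-side start is a low-weight cluster of microstates, and the trajectory drifts to the high-weight, evened-out region and stays near it.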

    Then, imagine that we have to get a given pattern of distributed specific parts into proper alignment for a function to emerge. Say, a triangle of cells with the right colour in the proper place (or a neighbouring colour: position 1 must be red, orange or pink; position 2 yellow or green; position 3 blue, light blue or purple; etc., where once the triangle pattern is there in any config of trays, upside down, right-handed or left-handed is acceptable). This is strictly possible, but so isolated in config space relative to scattered, unaligned states that we have no right to expect such to emerge.

    The concept, islands of function in a space of mostly non-functional possibilities, naturally emerges.
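    A back-of-envelope calculation (with hypothetical numbers of my choosing, not drawn from the comment) shows how quickly such a "functional" target shrinks as a fraction of the whole config space:

```python
def functional_fraction(k_pinned, allowed=3, colours=8):
    """Fraction of all equally likely colourings satisfying k_pinned
    positional constraints, each admitting `allowed` of `colours`
    possible colours; unconstrained cells may show anything."""
    return (allowed / colours) ** k_pinned

# Three pinned cells (the toy triangle): about 5% of configs qualify.
f3 = functional_fraction(3)
# Twenty pinned cells: roughly 3 in a billion -- an isolated island.
f20 = functional_fraction(20)
```

    The fraction falls geometrically in the number of constrained positions, which is the sense in which functional configurations become deeply isolated as the required pattern grows.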

    Then, we can step up to the parts moving about by Brownian motion in a vat thought exercise.

    We soon see why it is maximally improbable and utterly implausible for first life to emerge as a metabolic entity with a code-based algorithmic self-replication facility by thermodynamic forces and chemistry, in a warm little pond etc.

    The FSCO/I involved is best explained as a result of design.

    Design is in the door and sitting at the table from the very root of the tree of life.

    So, why not call on it as a powerful means of explanation at later levels, where we see signs for which the only empirically observed and analytically warranted explanation is design?

    BTW, if one appeals to sidelined chunks of junk dna varying at random until voila, function emerges and is picked up, this means unconstrained variation. A random walk search for islands of function.

    And, once we have parts that must be right and in the right relative alignment, we have islands of function to face.

    KF

  42. Hey Groovamos,

    Have you been able to figure out where that “dT” is in the first equation? You know, the one you claim is missing but is still right there under your nose. This ain’t like the hunt for the God Particle you know. You just need the ability to see. If you don’t have that, then you shouldn’t be driving.

    Acknowledge, please. :-)


  44. Thanks, Sal, for the thoughts. I still need to digest your essay and all the detailed comments.

    First, I’ve never quite understood Sewell’s position in the past. That is almost certainly a lack of effort and time on my part, and I’m willing to say at this point that he might have some cogent thoughts on the topic. I need to dig deeper when I get some time.

    Second, however, something in your essay caught my eye:

    “Strictly speaking, the earth is an open system, and thus the Second Law of Thermodynamics cannot be used to preclude a naturalistic origin of life.”

    With apologies to Bradley, this is nonsense. The whole issue of open/closed systems is a complete and utter red herring to the question of naturalistic origins. Whether a system is “closed” or “open” is purely one of convenient semantics, without any substance whatsoever.

    Specifically, we could say that the Earth is an “open” system, because it gets energy from the Sun. So what? I can also assert that the Earth/Sun system is a “closed” system. You want to include comets, asteroids, other planets, cosmic rays from deep space, etc.? Fine, I’ll just define my system more broadly and declare that it is a “closed” system, with all the available material resources of the universe of course being the ultimate closed system. The entire discussion is pointless, and anyone asserting that “evolution can happen because Earth is an open system” is spouting nonsense. I realize Bradley did not make that specific statement, but I’ve heard it many times and it rests on the same misunderstanding.

    There are two possibilities:

    1. If Bradley is making a very technical point that the second law of thermodynamics is simply irrelevant to the formation of information-rich biological structures from abiotic precursors, then fine, we can have a discussion about what the second law really is and what it really says and whether it applies to information and the like. (I suspect that this might be an area where Sewell’s arguments are weakest.)

    2. In contrast, if Bradley is asserting (as he seems to be) that even if life can’t arise in a “closed” system it could arise in an “open” system, then he is playing non-substantive semantic games and perpetuating nonsense.

  45. After a year, Groovamos is still pathologically incapable of admitting his mistakes.

    He still posts drivel at UD:
    http://www.uncommondescent.com.....ent-461286

    Hey Groovamos, have you figured out that you messed up?

    Hey Groovamos,

    Have you been able to figure out where that “dT” is in the first equation?

    Acknowledge, please.

    Hahaha!

Leave a Reply