
FYI-FTR: The flawed open system thermodynamic entropy compensation argument


Over the past few days, ideological objectors to the design inference have been pushing the deeply flawed but superficially persuasive open-system compensation argument: that, so long as mass and/or energy is flowing in and/or out, thermodynamics poses no problem for OOL or the origin of major body-plan innovations.

But, as there is a pivotal link between entropy and information once we duly factor in the microscopic, statistical view of matter, that cannot be right.

FYI-FTR, I clip Sewell’s crucial point in reply:

. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid [–> i.e. diffuses] is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

This should be simple common sense, but common sense is at steep discount today.

Let me add some further thoughts from my online briefing note:

________________

>> 1] TMLO: In 1984, this well-received work provided the breakthrough critical review on the origin of life that led to the modern design school of thought in science. The three online chapters, as just linked, should be carefully read to understand why design thinkers think that the origin of FSCI in biology is a significant and unmet challenge to neo-darwinian thought. (Cf also Klyce’s relatively serious and balanced assessment, from a panspermia advocate. Sewell’s remarks here are also worth reading. So is Sarfati’s discussion of Dawkins’ Mt Improbable.)

2] But open systems can increase their order: This is the “standard” dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned is:

a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system — one that allows neither energy nor matter to flow in or out — is instructive, given the “closed” subsystems [i.e. allowing energy to pass in or out] in it. Pardon the substitute for a real diagram, for now:

Isol System:

| | (A, at T_hot) –> d’Q, heat –> (B, at T_cold) | |

b] Now, we introduce entropy change dS >/= d’Q/T . . . “Eqn” A.1

c] So, dSa >/= -d’Q/Th, and dSb >/= +d’Q/Tc, where Th > Tc

d] That is, for the isolated system as a whole, dStot = dSa + dSb >/= d’Q/Tc – d’Q/Th > 0, as Th > Tc . . . “Eqn” A.2

e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY.
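For illustration, here is a minimal numeric sketch of “Eqns” A.1 and A.2 (the figures d’Q = 100 J, Th = 500 K and Tc = 300 K are assumed for the example, not taken from any particular system):

```python
# Illustrative entropy bookkeeping for the two-block Clausius example.
# Assumed figures: d'Q = 100 J passes from A (T_h = 500 K) to B (T_c = 300 K).
dQ, T_h, T_c = 100.0, 500.0, 300.0

dS_a = -dQ / T_h   # entropy given up by the hot body A
dS_b = +dQ / T_c   # entropy gained by the cold body B

print(f"dS_a = {dS_a:+.3f} J/K")
print(f"dS_b = {dS_b:+.3f} J/K")
print(f"dS_tot >= {dS_a + dS_b:+.3f} J/K")   # positive, since T_h > T_c
```

B’s entropy rises precisely because it imports raw energy, which is the point just made at e].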

f] The key point is that when raw energy enters a body, it tends to make its entropy rise. This can be envisioned on a simple model of a gas-filled box with piston-ends at the left and the right:

================================================

||::::::::::::::::::::::::::::::::::::::::::||===

================================================

1: Consider a box as above, filled with tiny, perfectly hard marbles [so collisions will be elastic], scattered like the raisins in a Christmas pudding (pardon how the textual elements give the impression of a regular grid; think of them as scattered more or less haphazardly, as would happen in a cake).

2: Now, let the marbles all be at rest to begin with.

3: Then, imagine that a layer of them up against the leftmost wall were given a sudden, quite, quite hard push to the right [the left and right ends are pistons].

4: Simply on Newtonian physics, the moving balls would begin to collide with the marbles to their right, and in this model perfectly elastically. So, as they hit, the other marbles would be set in motion in succession. A wave of motion would begin, rippling from left to right.

5: As the glancing angles on collision will vary at random, both the marbles that are hit and the original marbles would soon begin to bounce in all sorts of directions. Then, they would also deflect off the walls, bouncing back into the body of the box and off other marbles, causing the motion to continue indefinitely.

6: Soon, the marbles will be continually moving in all sorts of directions, with varying speeds, forming what is called the Maxwell-Boltzmann distribution: a bell-shaped curve for each velocity component, and a skewed hump for the overall speeds.

7: And, this pattern would emerge independent of the specific initial arrangement or how we impart motion to it, i.e. this is an attractor in the phase space: once the marbles are set in motion somehow, and move around and interact, they will soon enough settle into the M-B pattern. E.g. the same would happen if a small charge of explosive were set off in the middle of the box, pushing out the balls there into the rest, and so on. And once the M-B pattern sets in, it will strongly tend to continue. (That is, the process is ergodic.)

8: A pressure would be exerted on the walls of the box by the average force per unit area from collisions of marbles bouncing off the walls, and this would be increased by pushing in the left or right walls (which would do work to push in against the pressure, naturally increasing the speed of the marbles, just as a ball has its speed increased when it is hit by a bat moving the other way, whether in cricket or baseball). Pressure rises if volume goes down due to compression. (Also, the volume of a gas body is not fixed.)

9: Temperature emerges as a measure of the average random kinetic energy of the marbles in any given direction, left, right, to us or away from us. Compressing the model gas does work on it, so the internal energy rises, as the average random kinetic energy per degree of freedom rises. Compression will tend to raise temperature. (We could actually deduce the classical — empirical — P, V, T gas laws [and variants] from this sort of model.)

10: Thus, from the implications of classical, Newtonian physics, we soon see the hard little marbles moving at random, and how that randomness gives rise to gas-like behaviour. It also shows how there is a natural tendency for systems to move from more orderly to more disorderly states, i.e. we see the outlines of the second law of thermodynamics.
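As a rough check on points 1 – 10, here is a toy “energy exchange” simulation. It is a deliberate simplification (random pairwise re-splitting of kinetic energy stands in for the detailed marble collisions, and all parameters are assumed), but it shows the drift from a perfectly uniform initial state to the skewed, Boltzmann-like spread of energies described above:

```python
import random

random.seed(1)
N, steps = 2000, 400_000
E = [1.0] * N                      # every marble starts with identical kinetic energy

for _ in range(steps):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    total = E[i] + E[j]
    r = random.random()            # each "collision" re-splits the pair's energy at random
    E[i], E[j] = r * total, (1.0 - r) * total

mean = sum(E) / N                  # total energy is conserved, so the mean stays at 1.0
below = sum(1 for e in E if e < mean) / N
print(f"mean energy = {mean:.3f}, fraction below the mean = {below:.2f}")
# An exponential (Boltzmann-like) spread predicts about 0.63 of particles below
# the mean, versus 0.00 for the perfectly uniform starting state.
```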

 11: Is the motion really random? First, we define randomness in the relevant sense:

In probability and statistics, a random process is a repeating process whose outcomes follow no describable deterministic pattern, but follow a probability distribution, such that the relative probability of the occurrence of each outcome can be approximated or calculated. For example, the rolling of a fair six-sided die in neutral conditions may be said to produce random results, because one cannot know, before a roll, what number will show up. However, the probability of rolling any one of the six rollable numbers can be calculated

12: This can be seen by the extension of the thought experiment of imagining a large collection of more or less identically set up boxes, each given the same push at the same time, as closely as we can make it. At first, the marbles in the boxes will behave very much alike, but soon, they will begin to diverge as to path. The same overall pattern of M-B statistics will happen, but each box will soon be going its own way. That is, the distribution pattern is the same but the specific behaviour in each case will be dramatically different.

 13: Q: Why?

14: A: This is because tiny, tiny differences between the boxes, the differences in the vibrating atoms in the walls and pistons, and tiny irregularities too small to notice in the walls and pistons, will make small differences in initial and intervening states — perfectly smooth boxes and pistons are an unattainable ideal. Since the system is extremely nonlinear, such small differences will be amplified, making the behaviour diverge as time unfolds. A chaotic system is not predictable in the long term. So, while we can deduce a probabilistic distribution, we cannot predict the behaviour in detail across time. Laplace’s demon, who hoped to predict the future of the universe from the covering laws and the initial conditions, is out of a job.
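Point 14 can be illustrated with a much simpler stand-in for the marble box (an analogy only, not the box itself): the chaotic logistic map x -> 4x(1 - x). Two runs that start a mere 10^-12 apart soon differ by order one, which is the sensitive dependence being described:

```python
# Two trajectories of the chaotic logistic map, started almost identically.
x, y = 0.3, 0.3 + 1e-12

for n in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.3e}")
# The gap roughly doubles each step until it saturates at order 1: long-term
# detail is unpredictable even though the rule itself is fully deterministic.
```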

15: To see diffusion in action, imagine that at the beginning, the balls in the right half were red, and those in the left half were black. After a little while, as they bounce and move, the balls would naturally mix up, and it would be very unlikely indeed — though logically possible — for them to spontaneously un-mix, as the number of possible combinations of position, speed and direction where the balls are mixed up is vastly more than those where they are all red to the right and all black to the left, or something similar.

(This can be calculated, by breaking the box up into tiny little cells such that each would have at most one ball in it, and analysing each cell on occupancy, colour, location, speed and direction of motion. Thus, we have defined a phase or state space, going beyond a mere configuration space that just looks at locations.)
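The counting gestured at in that parenthesis can be sketched quickly if we simplify to positions only (so this under-counts the full phase-space imbalance, which also tracks speeds and directions):

```python
from math import comb

# 10 red and 10 black balls over 20 cells: how many arrangements are there,
# and how many of them are the "un-mixed" all-red-right / all-black-left state?
total = comb(20, 10)       # choose which 10 cells hold the red balls
print(f"arrangements: {total}, un-mixed fraction: {1 / total:.2e}")

# The imbalance grows explosively with size: 50 red and 50 black in 100 cells.
print(f"100-cell case: 1 un-mixed arrangement in {float(comb(100, 50)):.3e}")
```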

16: So, from the orderly arrangement of laws and patterns of initial motion, we see how randomness emerges through the sensitive dependence of the behaviour on initial and intervening conditions. There would be no specific, traceable deterministic pattern that one could follow or predict for the behaviour of the marbles, though we could work out an overall statistical distribution, and could identify overall parameters such as volume, pressure and temperature.

17: For osmosis, let us imagine that the balls are of different sizes, and that we have two neighbouring boxes with a porous wall between them, but only the smaller marbles can pass through the holes. If the smaller marbles were initially on, say, the left side, soon they would begin to pass through to the right, until they were evenly distributed, so that on average as many small balls would pass left as were passing right, i.e. we see dynamic equilibrium. [This extends to evaporation and the vapour pressure of a liquid, once we add in that the balls have a short-range attraction that at even shorter ranges turns into a sharp repulsion, i.e. they are hard.]
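A minimal sketch of that drift to dynamic equilibrium (all numbers assumed): start with every small marble on the left of the porous wall, and let each one hop through it with a small, fixed probability per step, in either direction:

```python
import random

random.seed(2)
left, right, p_hop = 400, 0, 0.05   # assumed counts and hop probability

for step in range(1, 201):
    to_right = sum(random.random() < p_hop for _ in range(left))
    to_left = sum(random.random() < p_hop for _ in range(right))
    left, right = left - to_right + to_left, right - to_left + to_right
    if step in (1, 10, 50, 100, 200):
        print(f"step {step:3d}: left = {left:3d}, right = {right:3d}")
# The counts settle near 200/200, with equal average traffic in both directions:
# dynamic equilibrium, as described in point 17.
```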

18: For a solid, imagine that the balls in the original box are now connected through springs in a cubical grid. The initial push will now set the balls to vibrating back and forth, and the same pattern of distributed vibrations will emerge, as one ball pulls on its neighbours in the 3-D array. (For a liquid, allow about 3% of holes in the grid, and let the balls slide over one another, making new connections, some of them distorted. The combination that defines a liquid, fixed volume but no fixed shape, will emerge. The push on the liquid will have much the same effect as for the solid, except that it will also lead to flows.)

19: Randomness is thus credibly real, and naturally results from work done on, or energy injected into, a body composed of microparticles, even in a classical Newtonian world, whether the body is a gas, solid or liquid. Raw injection of energy into a body tends to increase its disorder, and this is typically expressed in its temperature rising.

 20: Quantum theory adds to the picture, but the above is enough to model a lot of what we see as we look at bulk and transport properties of collections of micro-particles.

21: Indeed, even viscosity comes out naturally: if there are boxes stacked top and bottom that are sliding left or right relative to one another, and suddenly the intervening walls are removed, the gas-balls would tend to diffuse up and down from one stream tube to another, so their drift velocities will tend to even out. The slower moving stream tubes exert a dragging effect on the faster moving ones.

 22: And many other phenomena can be similarly explained and applied, based on laws and processes that we can test and validate, and their consequences in simplified but relevant models of the real world.

23: When we see such a close match, especially when quantum principles are added in, it gives us high confidence that we are looking at a map of reality. Not the reality itself, but a useful map. And, that map tells us that thanks to sensitive dependence on initial conditions, randomness will be a natural part of the micro-world, and that when energy is added to a body its randomness tends to increase, i.e. we see the principle of entropy, and why simply opening up a body to receive energy is not going to account for the emergence of functional internal organisation.

24: For, organised states will be deeply isolated in the set of possible configurations. Indeed, if we put a measure on possible configurations in terms of, say, binary digits [bits], then with just 1,000 two-state elements there are already 1.07*10^301 possible configs. The whole observed universe, with its 10^80 or so atoms each searching at one state per Planck time across its thermodynamically credible lifespan — about 50 mn times the 13.7 BY said to have elapsed since the big bang — could not go through more than about 10^150 states. That is, the whole cosmos could not search more than a negligible fraction of the space. The haystack could be positively riddled with needles, but at that rate we have not had any serious search at all.
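The arithmetic in point 24 is easy to check directly (the 10^150-state search bound is the figure used above):

```python
from math import log10

configs = 2 ** 1000      # configurations of 1,000 two-state elements
searched = 10 ** 150     # generous bound on states the observed cosmos could sample

print(f"possible configs ~ 10^{log10(configs):.1f}")                     # ~ 10^301.0
print(f"fraction sampled ~ 10^{log10(searched) - log10(configs):.0f}")   # ~ 10^-151
```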

25: That is, there is a dominant distribution, not a detailed plan a la Laplace’s (finite) Demon, who could predict the long term path of the world from its initial conditions given sufficient calculating power and time.

26: But equally, since short term interventions that are subtle can have significant effects, there is room for intelligent and sophisticated intervention; e.g. through a Maxwell’s Demon who can spot faster moving and slower moving molecules and open/shut a shutter to set one side hotter and the other colder in a partitioned box. Because he has to take active steps to learn which molecules are moving faster or slower in the desired direction, Brillouin showed that he remains within the second law of thermodynamics. So, plainly, for the injection of energy to instead predictably and consistently do something useful, it needs to be coupled to an energy conversion device.

g] When such energy conversion devices, as in the cell, exhibit FSCI, the question of their origin becomes material, and in that context, their spontaneous origin is strictly logically possible but — from the above — negligibly different from zero probability on the gamut of the observed cosmos. (And, kindly note: the cell is an energy importer with an internal energy converter. That is, the appropriate entity in the model is B, and onward B’ below. Presumably as well, the prebiotic soup would have been energy importing, and so materialistic chemical evolutionary scenarios have the challenge to credibly account for the origin of the FSCI-rich energy converting mechanisms in the cell relative to Monod’s “chance + necessity” [cf also Plato’s remarks] only.)

h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink that the overall entropy of the system is increased. Illustratively, for heat engines — and (in light of exchanges with email correspondents circa March 2008) let us note: a good slice of classical thermodynamics arose in the context of studying, idealising and generalising from steam engines [which exhibit organised, functional complexity, i.e. FSCI; they are of course artifacts of intelligent design and also exhibit step-by-step problem-solving processes (even including “do-always” looping!)]:

| | (A, heat source: Th): d’Qi –> (B’, heat engine, Te): –> d’W [work done on, say, D] + d’Qo –> (C, sink at Tc) | |

i] A’s entropy: dSa >/= – d’Qi/Th

 j] C’s entropy: dSc >/= + d’Qo/Tc

k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law — unsurprisingly, given the studies on steam engines that lie at its roots — holds for heat engines.
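Again, a quick numeric sketch of the heat-engine bookkeeping in h] to k] (all figures assumed for the example): let B’ draw d’Qi = 1,000 J from A at Th = 600 K, deliver d’W = 200 J of work to D, and exhaust d’Qo = 800 J to C at Tc = 300 K:

```python
Qi, W, Th, Tc = 1000.0, 200.0, 600.0, 300.0   # assumed figures
Qo = Qi - W                                    # waste heat exhausted to the sink C

dS_A = -Qi / Th    # entropy given up by the source A
dS_C = +Qo / Tc    # entropy dumped into the sink C

print(f"dS_A = {dS_A:+.3f} J/K, dS_C = {dS_C:+.3f} J/K, net >= {dS_A + dS_C:+.3f} J/K")
print(f"efficiency = {W / Qi:.0%}, Carnot limit = {1 - Tc / Th:.0%}")
# The sink's entropy gain more than compensates the source's loss, so the second
# law holds even if the engine B' itself undergoes no entropy rise.
```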

l] However, B, since it now couples energy into work and exhausts waste heat, does not necessarily undergo a rise in entropy on having imported d’Qi. [The problem is to explain the origin of the heat engine — or more generally, energy converter — that does this, if it exhibits FSCI.]

 m] There is also a material difference between the sort of heat engine [an instance of the energy conversion device mentioned] that forms spontaneously as in a hurricane [directly driven by boundary conditions in a convective system on the planetary scale, i.e. an example of order], and the sort of complex, organised, algorithm-implementing energy conversion device found in living cells [the DNA-RNA-Ribosome-Enzyme system, which exhibits massive FSCI].

n] In short, the decisive problem is the [im]plausibility of the ORIGIN of such a FSCI-based energy converter through causal mechanisms traceable only to chance conditions and undirected [non-purposive] natural forces. This problem yields a conundrum for chem evo scenarios, such that inference to agency as the probable cause of such FSCI — on the direct import of the many cases where we do directly know the causal story of FSCI — becomes the better explanation. As TBO say, in bridging from a survey of the basic thermodynamics of living systems in Ch. 7 to the more focussed discussion in Chs. 8 – 9:

While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely, the formation of protein and DNA from their precursors.

It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . .

 3] So far we have worked out of a more or less classical view of the subject. But, to explore such a question further, we need to look more deeply at the microscopic level. Happily, there is a link from macroscopic thermodynamic concepts to the microscopic, molecular view of matter, as worked out by Boltzmann and others, leading to the key equation:

s = k ln W . . . Eqn.A.3

That is, the entropy of a specified macrostate [in effect, macroscopic description or specification] is a constant times a log measure of the number of ways matter and energy can be distributed at the micro-level consistent with that state [i.e. the number of associated microstates; aka “the statistical weight of the macrostate,” aka “thermodynamic probability”]. The point is that there are, as a rule, a great many ways for energy and matter to be arranged at micro level relative to a given observable macro-state. That is, there is a “loss of information” issue here on going from a specific microstate to a macro-level description with which many microstates may be equally compatible. Thence, we can see that if we do not know the microstates specifically enough, we have to more or less treat the micro-distributions of matter and energy as random, leading to acting as though they are disordered . . . .

4] Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e. the ones with higher statistical weights. So “[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state.” [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics.

Thus, too, the behaviour of the Clausius isolated system above is readily understood: importing d’Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B’s entropy swamps the fall in A’s entropy. Moreover, given that FSCI-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe.

(Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e. W], we instead move to entropy, through s = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e. we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e. we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius and Boltzmann, of course correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate that the classical observation that entropy naturally tends to increase is readily apparent.)
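The Yavorski and Pinski counts, and the step to s = k ln W, can be verified in a few lines (standard combinatorics; the only inputs are the ball counts given above):

```python
from math import comb, log

w_5_5 = comb(10, 5) * comb(10, 5)     # 5 white + 5 black per row: 252 * 252 = 63,504
w_6_4 = comb(10, 6) * comb(10, 4)     # each 6-4 split: 210 * 210 = 44,100
w_10_0 = comb(10, 10) * comb(10, 0)   # all ten whites in the top row: exactly 1 way
print(w_5_5, w_6_4, w_10_0)

# Statistical weights become (tiny, for only 20 balls) entropies via s = k ln W:
k = 1.380649e-23                      # Boltzmann constant, J/K
print(f"s(5-5) - s(10-0) = {k * (log(w_5_5) - log(w_10_0)):.2e} J/K")
```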

5] The above sort of thinking has also led to the rise of a school of thought in Physics — note, much spoken against in some quarters, but I think they clearly have a point — that ties information and thermodynamics together. Robertson presents their case; in summary:

 . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability should be seen as, in part, an index of ignorance] . . . .

 [deriving informational entropy . . . ]

 S({pi}) = – C [SUM over i] pi*ln pi, [. . . “my” Eqn A.4]

 [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the “Holy Grail” of statistical thermodynamics]. . . .[pp.3 – 6]

 S, called the information entropy, . . . correspond[s] to the thermodynamic entropy, with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context [p. 7] . . . .

Jaynes’ [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [p. 36.]

[Robertson, Statistical Thermophysics, Prentice Hall, 1993. (NB: Sorry for the math and the use of text for symbolism. However, it should be clear enough that Robertson first summarises how Shannon derived his informational entropy [though Robertson uses s rather than the usual H for that information theory variable, average information per symbol], then ties it to entropy in the thermodynamic sense using another relation that is tied to the Boltzmann relationship above. This context gives us a basis for looking at the issues that surface in prebiotic soup or similar models as we try to move from relatively easy to form monomers to the more energy- and information-rich, far more complex biofunctional molecules.)]>>
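To make Robertson’s construction concrete, here is a small sketch (the four energy levels and the temperatures are assumed for illustration only): Boltzmann probabilities pi = exp(-beta*ei)/Z are formed from a partition function Z, and then fed into Eqn A.4 with C = k:

```python
from math import exp, log

k = 1.380649e-23                              # Boltzmann constant, J/K
levels = [0.0, 1.0e-21, 2.0e-21, 3.0e-21]     # assumed energy levels e_i, in joules

def entropy(T):
    beta = 1.0 / (k * T)
    Z = sum(exp(-beta * e) for e in levels)    # partition function
    p = [exp(-beta * e) / Z for e in levels]   # Boltzmann probabilities
    return -k * sum(pi * log(pi) for pi in p)  # Eqn A.4 with C = k

for T in (50.0, 300.0, 30000.0):
    print(f"T = {T:7.0f} K : S = {entropy(T):.3e} J/K")
print(f"k ln 4 (uniform limit) = {k * log(len(levels)):.3e} J/K")
# As T rises the p_i flatten out and S climbs toward k ln W (here W = 4),
# tying the informational form back to the Boltzmann relation s = k ln W above.
```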

Also, we can note on the link between informational and thermodynamic entropy:

>> we may average the information per symbol in [a] communication system thusly (giving in terms of -H to make the additive relationships clearer):

– H = p1 log p1 + p2 log p2 + . . . + pn log pn

or, H = – SUM [pi log pi] . . . Eqn 5H,

the average information per symbol transmitted [usually measured as bits/symbol] is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information):

At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).

. . . . as [Robertson, in Statistical Thermophysics,] astutely observes on pp. vii – viii:

 . . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .>>
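For completeness, Eqn 5H itself is easy to compute; the four-symbol source below is an assumed example, not drawn from the text:

```python
from math import log2

def H(probs):
    # Average information per symbol, in bits/symbol (Eqn 5H).
    return -sum(p * log2(p) for p in probs if p > 0)

print(f"skewed source : {H([0.7, 0.1, 0.1, 0.1]):.3f} bits/symbol")
print(f"uniform source: {H([0.25, 0.25, 0.25, 0.25]):.3f} bits/symbol")
# The uniform case gives log2(4) = 2 bits/symbol: exactly the number of yes/no
# questions needed to pin down one of four equally likely symbols, matching the
# "minimum number of yes/no questions" reading quoted above.
```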

___________

In short, it is time for re-thinking the compensation argument. END