
Specified Entropy — a suggested convention for discussion of ID concepts

In terms of textbook thermodynamics, a functioning Lamborghini has more thermal entropy than that same Lamborghini with its engine and other vital parts removed. But intuitively we view such a destruction of a functioning Lamborghini as an increase in entropy and not a decrease of entropy. Something about this example seems downright wrong…

To fix this enigma, and to make the notion of entropy line up to our intuitions, I’m suggesting that the notion of “specified entropy” be used to describe the increase in disorganization. I derive this coined phrase from Bill Dembski’s notions of specified information. In the case of the Lamborghini getting its vital parts removed, the specified entropy goes up by exactly the amount that the specified information goes down.

This isn’t a radical change in terms of ID literature, but may help to convey what is really meant by ID proponents when they say entropy is going up. What they really mean is not thermal entropy, but specified entropy.

As I mentioned in the comments of another thread, I tutor college and high school students in math, physics, and chemistry — many of them I met in church circles, and many are ID friendly. It would bother my conscience if I said things that grate against what they are learning. Here is a simple example.

QUESTION: According to textbook chemistry, physics, and engineering, which system has more entropy, a simple virus or an adult human?

ANSWER: The adult human, if by entropy one means thermal entropy; and again the adult human, if by entropy one means Shannon entropy.

I invite any UD reader to agree or disagree with my answer and state from principles of physics the reasons why they agree or disagree. State your case just as you would when explaining to a college chemistry, physics, or engineering student. Given all the discussion of entropy on the net, surely, it’s not unreasonable to pose a question that a college science student might consider. If the ID community wishes to help the next generation of ID friendly science students, these are the sort of basic science questions that are fair game.

So paradoxically, if an intelligent designer evolved a human from a simple virus, the designer had to add MORE entropy to the human design than what was in the virus. So if a science student asked me, “which has more entropy, a virus or a human?” I’d say the human.

Here is another question.

QUESTION: According to textbook college science, which state of the rat has more entropy, when it is a warm and living rat or when it is a dead and frozen rat near absolute zero?

ANSWER: The rat when it was warm and living had more entropy than when the rat was dead and frozen near absolute zero.

This seems downright wrong. What is the problem? The problem is that the word “entropy” is being equivocated. Are we talking about thermal entropy or specified entropy (the ID-related notion)? Depending on which definition one is working from, one will get a different answer.

Unless we adopt some convention for clarifying what type of entropy is being discussed, confusion will reign. One could mistakenly reason:

A human has more entropy than a virus, a living warm rat has more entropy than a frozen dead rat, and since the 2nd law says entropy is increasing, the 2nd law helps a virus with low entropy evolve into a human with high entropy, and dead things with low entropy evolve into living things with higher entropy

This would be the wrong conclusion, but the reason it is wrong is the notion of entropy is being equivocated, not to mention system boundaries are being redrawn on the fly — not exactly a wholesome way of analyzing systems.

By contrasting the notion of “specified entropy” (measured in bits) against the notion of “thermal entropy” (measured in Joules/Kelvin), at least some of the equivocations and confusions might be counteracted. This surely isn’t the end of the matter, but we have to start somewhere. What do you want me to tell my students? Here is your chance to serve the ID community by providing science-based answers to science students, otherwise these villains might get a hold of their minds.


26 Responses to Specified Entropy — a suggested convention for discussion of ID concepts

  1. For the reader’s benefit, here is a quote from some textbook chem:

    Increase number of particles = increased entropy
    Increase temperature = increased entropy

    http://alevelchem.com/aqa_a_le.....351/05.htm

    In the case of the virus versus human, the human has trillions more particles than the virus, hence a human has more entropy. It’s no contest; we don’t even need exact figures to make that inference.

    In terms of the living rat having more entropy than the dead rat, the simple fact the dead rat is frozen near absolute zero says the frozen dead rat has less entropy than the warm living rat.

    But entropy in this context is thermal entropy not specified entropy.

  2. scordova: “I’m suggesting that the notion of “specified entropy” be used to describe the increase in disorganization.”

    Kudos to you. I am glad that you finally seem to be moving a little nearer to Prof. Sewell’s position on this. Your notion of “specified entropy” is how entropy is meant in statistical mechanics (how Prof. Sewell always means it, i.e. in its information sense).

    So we have in all irreversible processes an asymmetrical situation: increase in [specified] entropy destroys organization, while decrease in entropy doesn’t create organization. The final result is always that, without intelligent intervention, in nature things go unavoidably towards destruction. Things don’t evolve (in the sense that evolutionists mean: evolution = spontaneous construction).

  3. SC:

    The issue pivots on a boundary-system question.

    And the proper context for the question is one in which indubitably the only relevant forces would be those of thermal agitation etc and associated chemical kinetics: Darwin’s warm little pond. This, because it is known that there are information rich systems that do constructive work, and the only ones we have seen the origin of, are contrivances, i.e. designs. Also, as this is the root of the proposed tree of life. No roots, nothing else thereafter.

    As I noted in the thread, we go around the lamp post on the dark and stormy night because that is where we can see clearest, and that gives us some ideas about where we cannot see so clearly.

    Notice, that is actually how Sewell begins also, as I already cited in the thread you are evidently responding to and as summarised nicely:

    . . . The second law is all about probability [--> that is, the relative statistical weights of accessible microstates consistent with noted macrostates], it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible [--> consistent with macrostates] arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur. [--> this is an OOl context]

    The discovery that life on Earth developed through evolutionary “steps,” coupled with the observation that mutations and natural selection — like other natural forces — can cause (minor) change, is widely accepted in the scientific world as proof that natural selection — alone among all natural forces — can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article ["A Mathematician's View of Evolution," The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone [--> without creative directing intelligence] could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 [--> that is the probabilities barriers are not credibly surmountable absent intelligence] . . . .

    What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. [--> note the issue of system boundaries and what crosses] As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…. If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special.

    So, the issue on the table is OOL on a barren planet and the question is to account for functionally specific configs of atoms and molecules organised in the systems of life, where such configs are deeply isolated in massively beyond astronomical config spaces, without designing intelligence. We already know designing intelligence creates FSCO/I, the issue is do blind chance and blind mechanical necessity do the same in our observation, and how does that relate to the implications of grossly disparate relative statistical weights of relevant clusters of microstates issues raised by thermodynamic reasoning once one moves beyond lab scale observations to the micro-scale. Where also entropy is a metric of numbers of accessible microstates consistent with given macro-conditions a la s = k log w etc.
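The relation s = k log w cited above can be made concrete. A minimal sketch (the microstate count below is a toy figure chosen for illustration, not a real system): the same multiplicity W yields a thermal entropy in J/K via Boltzmann's constant, or an informational measure in bits via log2.

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(w):
    """Thermal entropy S = k ln W for W equally likely microstates (J/K)."""
    return K_B * math.log(w)

def entropy_in_bits(w):
    """The same multiplicity expressed informationally: log2(W),
    i.e. the number of yes/no questions needed to pin down the microstate."""
    return math.log2(w)

# Toy example: a system with 2^100 accessible microstates
w = 2 ** 100
print(boltzmann_entropy(w))  # a tiny number of J/K
print(entropy_in_bits(w))    # 100.0 bits
```

The two functions differ only by a constant factor (k ln 2 per bit), which is why the informational and thermodynamic views can be put side by side at all.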

    And where in recent decades, we have been led to look at the informational view of thermodynamics, following Jaynes et al down to Robertson et al. And where it is a reasonable way to view entropy that it is:

    . . . in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.

    Freezing a rat or asking how much thermal entropy is in a man vs a virus, does not begin to address that question.

    Where I have noted that the rat is a fine tuned FSCO/I rich system, and freezing it takes it out of the range of that fine tuning, which kills it. That has little to do with how you get to a rat starting from a barren planet orbiting a young star, without intelligent creative action.

    Similarly, the gross thermal entropy in a man is more than that in a virus, but that has little or nothing to do with the issue that is properly on the table: getting to man and virus, from a barren planet orbiting a young sun, without intelligent action.

    I don’t doubt that this sort of talking point has been used as a side track leading into a rhetorical ambush or two. I don’t doubt that it is rhetorically effective, but then a LOT of mistaken, misleading or outright deceptive arguments are.

    That still leaves on the table the actual matter that lies there at the root of the tree of life: explain OOL and particularly FSCO/I in cell based life, on empirical observations and reasonable generalisations therefrom, without design and without violating the issue of statistical weight balances of clusters of microstates consistent with the relevant conditions at macro-level.

    And, finally, as I recall, Sewell long since spoke in terms of components of entropy, using IIRC the term X-entropy. Where we note that the classical view of entropy does use components and changes in entropy, not an overall ground-up value. Cf. steam tables as a simple example. That is what the expression dS >= d’q/T leads to, as suitably integrated in a system.

    The relevant situation, as is evident from the highlights in the clip from Sewell above, is one where we need to give a good account for FSCO/I on blind chance and necessity in realistic conditions on a barren planet or the like, without requiring materialistic miracles.

    KF

  4. So we have in all irreversible processes an asymmetrical situation: increase in [specified] entropy destroys organization, while decrease in entropy doesn’t create organization. The final result is always that, without intelligent intervention, in nature things go unavoidably towards destruction.

    Exactly! Thank you. Thermal entropy does not capture this insight, Shannon entropy doesn’t capture this insight, and algorithmic (Kolmogorov) entropy doesn’t capture it either. Thus the ID community needs the notion of specified entropy, because no other notion of entropy captures the problem.

    Random mindless processes can increase all the other forms of entropy, and that doesn’t help the ID community. Yet we know intuitively that when something dies, or when a design breaks down, some measure of disorganization has to go up.

    If we use the other 3 entropies, we invite endless difficulty, whereas the one thing we know that does increase is the disorganization, but up till now we haven’t really had a phrase for it.

    So from this point on, at least provisionally, I’ll refer to the concept by that name.

  5. KF,

    Jaynes’s work was important, but merely connecting thermodynamics to information still does not lead to the desired result of a design inference. Gange hand-waved the issue, but it was a start.

    Take the rat example: according to Jaynes, when the rat freezes, its information content should go up, because we now have substantially less uncertainty about which microstate the system is in; yet it is clear the specified information went down. Gange didn’t make the connection between the generalized information of Jaynes and the specified information of life. The two notions were left disconnected.

    Thankfully, simple probability arguments will suffice. We don’t have to use thermodynamics to show that if we take a wrecking ball to a Lamborghini, its specified entropy will go up.

    Bradley, Thaxton, and Olsen used the phrase “configurational entropy” in their book. That is close to the notion of specified entropy, but the term can cause difficulty for the ID movement. It’s better for the ID movement to own the definition of specified entropy, because otherwise it will lead to confusion.

    I asked a very sharp physicist about the convention of material scientists like Bradley using “configurational entropy”, and here was his response:
    http://theskepticalzone.com/wp.....ment-15460

    We might be able to use Jaynes’s approach to analyze processes happening at constant temperature and in closed systems for starters, but once you start lowering temperatures, the information content goes up, and we’re right back where we started, fighting the problem of definitions. I think it is fixable, and this essay was intended as part of the necessary work to lay out workable formalisms.

    Sal

  6. scordova

    Thanks. I appreciate very much your efforts in trying to get an agreement, on a topic I too consider extremely important for ID.

    You say:

    So paradoxically, if an intelligent designer evolved a human from a simple virus, the designer had to add MORE entropy to the human design than what was in the virus. So if a science student asked me, “which has more entropy, a virus or a human?” I’d say the human.

    Ok, I understand what you mean here. I have only a doubt about the terminology. Why say “add MORE entropy”? The designer has to add more organization. Consequently, to destroy such organization, it would take more entropy. A human has more organization than a virus, and yes, it takes more entropy to destroy a human than a virus.

    Yours seems to me too “elliptical” an expression, that’s all.

    Ok, I understand what you mean here. I have only a doubt about the terminology. Why say “add MORE entropy”? The designer has to add more organization. Consequently, to destroy such organization, it would take more entropy. A human has more organization than a virus, and yes, it takes more entropy to destroy a human than a virus.

    Yours seems to me too “elliptical” an expression, that’s all.

    Good comment, and I will attempt to clarify.

    If you are Craig Venter or some other intelligent agency trying to evolve a virus into a human, you need to:

    1. decrease its Specified Entropy
    2. increase its Shannon entropy
    3. increase its thermal entropy
    4. increase its algorithmic (Kolmogorov) entropy

    Note that some of the other entropies must go up in order for the specified entropy to go down. There is a simple reason for this: more particles means increased Shannon entropy, increased thermal entropy, and increased algorithmic (Kolmogorov) entropy.

    From the chem textbook cited earlier:

    Increase number of particles = increased entropy
    Increase temperature = increased entropy

    http://alevelchem.com/aqa_a_le…..351/05.htm

    That’s why a human has more thermal entropy than a virus even though a human has less specified entropy than a virus. The human simply has more parts, and more parts means higher thermal and Shannon entropy.

  8. scordova

    From the chem textbook cited earlier:
    Increase number of particles = increased entropy
    Increase temperature = increased entropy

    That’s why a human has more thermal entropy than a virus even though a human has less specified entropy than a virus. The human simply has more parts, and more parts means higher thermal and Shannon entropy.

    Interesting distinguo. Yes, in a sense entropy increases with multiplicity. But there is an important thing to consider: what matters is the degree of unification/organization such multiplicity has. It is not multiplicity per se that carries entropy; it is its lower or higher degree of unification/organization. Example: 80 kg of rock has more entropy than an 80 kg human, because the degree of unification/organization of a human body is higher than the rock’s. In both cases we have almost the same number of particles. Nevertheless the former has more entropy than the latter.

    So, it is not sufficient to consider the number of particles, rather what I called degree of unification/organization of them. I suppose that has something to do with your Specified Entropy, which is lower in a human than in a virus.

  9. So, it is not sufficient to consider the number of particles, rather what I called degree of unification/organization of them. I suppose that has something to do with your Specified Entropy, which is lower in a human than in a virus.

    Yes. Exactly.

    Without making the distinction of what entropy we are talking about there will be confusion.

    Now the Shannon entropy which relates to information is easy to see. 1 DNA base can take on 4 possible states, so each DNA base has 2 bits of information:

    log2 (4) = 2 bits

    Now if the DNA in a human cell has 3.5 gigabases, then it has 2 * 3.5 = 7.0 gigabits of Shannon entropy, or Shannon information.
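That arithmetic can be sketched in a few lines (the 3.5-gigabase figure is the one used above, and this counts raw channel capacity, not meaningful information):

```python
import math

def bits_per_symbol(alphabet_size):
    """Maximum Shannon entropy per symbol for a uniformly used alphabet."""
    return math.log2(alphabet_size)

DNA_BASES = 4          # A, C, G, T
GENOME_BASES = 3.5e9   # ~3.5 gigabases, the figure used in the text

per_base = bits_per_symbol(DNA_BASES)   # log2(4) = 2.0 bits per base
total_bits = per_base * GENOME_BASES    # 7.0e9 bits = 7 gigabits
print(per_base, total_bits)
```

Note this is the maximum per-symbol entropy; real genomes have biases and redundancies that lower the effective figure, which is exactly why capacity and specified content must be kept distinct.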

    Shannon information says nothing of the quality of the information. If a CD can store 700 megabytes of information, it actually has 700 megabytes of Shannon entropy, or Shannon uncertainty, or Shannon information, regardless of what those bytes say.

    A CD can have 700 megabytes of garbage or specified complexity, it will always have 700 megabytes of Shannon entropy unless we physically destroy the CD.

    The more complex an organism is, the more Shannon entropy it must have since the amount of information it holds must also increase with complexity.

    When we take a 20 gig disk drive (20 gigs of Shannon entropy) and scramble the information on it, we are destroying any organization that might have been there, and the drive will have maximum specified entropy and essentially zero specified information.

    Now if we write meaningful code onto the disk, we have lowered its specified entropy and increased the amount of specified information on the disk.
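The disk picture above can be illustrated with a rough sketch. Compressed length is only a crude stand-in for algorithmic (Kolmogorov) entropy, and the byte strings below are toy data, but it shows how structured and scrambled contents differ even when their raw Shannon capacity is identical:

```python
import random
import zlib

random.seed(0)  # deterministic "scrambling" for reproducibility

# Structured data: a highly ordered, repetitive byte pattern.
structured = b"specified information " * 500
# Scrambled data: random bytes of exactly the same length.
scrambled = bytes(random.getrandbits(8) for _ in range(len(structured)))

# Both occupy the same raw capacity (same size on the medium)...
print(len(structured), len(scrambled))

# ...but compressed length (a rough proxy for algorithmic entropy)
# separates them: the ordered pattern shrinks dramatically, while
# random bytes are essentially incompressible.
print(len(zlib.compress(structured)))
print(len(zlib.compress(scrambled)))
```

The design point here is that a capacity measure cannot distinguish the two states of the drive; only a measure sensitive to structure can.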

    So again, a complex living organism compared to a simple virus, relatively speaking has:

    1. low specified entropy
    2. high thermal entropy
    3. high Shannon entropy
    4. high Kolmogorov entropy

    Or equivalently:

    1. high specified information
    2. high thermal entropy
    3. high Shannon entropy
    4. high Kolmogorov entropy

    I prefer to say high specified information rather than low specified entropy. I can more easily state the absence or deterioration of specified information, but to help the ID discussion along, if one wants some sort of entropy that describes the lack of design or the deterioration of design, “specified entropy” might be the way to conceptualize it.

  10. Yes, I have been trying to say this for quite some time now, though not very well. When Granville was arguing with some people, he made this point: “If an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering (or leaving) which makes it NOT extremely improbable.”

    I tried to argue that the type of entropy he was talking about may not yet have scientific measurements or perfect scientific rigor, but it is something that we can detect intuitively. Just as we could detect light before we could really measure or rigorously define it. Thanks Sal for this idea.

  11. Sal @1:

    Increase number of particles = increased entropy
    Increase temperature = increased entropy

    Sigh.

    I suppose we could say that a pile of sand with more particles has more “entropy” than one with fewer particles, based on some vague notion that more of something unorganized is “more” than less of something unorganized. All very circular and utterly useless.

    But unless we are willing to define entropy as a measurement of the number of elementary particles (an absurd definition that is both unhelpful and confusing), it simply does not follow that more particles = more entropy.

    To pit the two quotes against each other, we can have (A) x number of particles at 32 degrees, and (B) x+y number of particles at temperature 32-z. Now which has more entropy? We haven’t the faintest idea, because simply having more or less particles, or more or less temperature is utterly beside the point.

    Neither of the quoted statements from the text makes any sense out of context of a particular system being measured. And neither addresses in the slightest what we are interested in: specified complexity.

  12. Sigh.

    I suppose we could say that a pile of sand with more particles has more “entropy” than one with fewer particles, based on some vague notion that more of something unorganized is “more” than less of something unorganized. All very circular and utterly useless.

    There is good reason this is the convention adopted by chemists, physicists, and engineers.

    Consider 1000 moles of liquid water versus 1 mole of liquid water at the freezing point of 273 Kelvin.

    The 1000 moles of water has 1000 times more thermal entropy than the 1 mole of liquid water; that means one has to remove 1000 times more entropy to change the 1000 moles of liquid water into ice than to change the 1 mole into ice.
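A rough sketch of that proportionality, using the textbook enthalpy of fusion of water (about 6010 J/mol; the exact figure varies slightly by source):

```python
H_FUS = 6010.0     # J/mol, approximate enthalpy of fusion of water
T_FREEZE = 273.15  # K, freezing point at 1 atm

def freezing_entropy_removed(moles):
    """Entropy (J/K) that must be removed to freeze `moles` of liquid
    water at its freezing point: dS = dH_fus / T per mole."""
    return moles * H_FUS / T_FREEZE

one = freezing_entropy_removed(1)        # about 22 J/K per mole
thousand = freezing_entropy_removed(1000)
print(one, thousand)  # the 1000-mole figure is exactly 1000x larger
```

The scaling is the point: thermal entropy is extensive, so multiplying the amount of substance multiplies the entropy that must cross the boundary.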

    Understandably, the convention of increasing the entropy number relative to the number of particles might be disconcerting to ID proponents, but Clausius and others (including creationists) involved in formulating thermodynamics weren’t interested in using Entropy to argue for ID, they were trying to build steam engines and understand chemistry and physics.

    Unlike evolutionary biology, the notions in chemistry and physics are not inherently hostile to ID. The conventions and definitions evolved in a way to make chemistry, physics, and engineering more effective.

    I would not be serving my students well if I passed along the following sentiment:

    Sigh.

    I suppose we could say that a pile of sand with more particles has more “entropy” than one with fewer particles, based on some vague notion that more of something unorganized is “more” than less of something unorganized. All very circular and utterly useless.

    They will have to calculate entropy change for their professors, and they will have to understand that more particles means more thermal entropy.

    More importantly than just making the grade, understanding thermal entropy in this way helps people do good science and engineering.

    The ID community can choose to recognize the conventions used in industry and adjust accordingly or they can complain and try to redefine long-used notions like entropy the way that suits them. I’ve suggested perhaps it would be simply better to coin words and phrases for our specialized concepts and thus the ID community owns the definitions.

    Some may have wondered why I seem so anal about definitions and conventions. Part of it is due to the ID community using terms that already have a certain meaning in industry. This causes confrontations we don’t need to be having.

  13. Sal, I’m not worried that you are being anal about definitions. That is OK. It is that the definitions being proposed — and the examples being offered — are removed from the issues at hand.

    Let’s take the “more particles” quote again. We could say that having more particles gives us more entropy than if we have less particles. Fine. That works if we define entropy with respect to the number of particles or a relationship that depends on the number of particles. That becomes circular and provides no meaningful insight, but whatever.

    It is about as interesting (and utterly as irrelevant) as saying “More particles = more mass.” Real valuable insight that. :)

    The question surrounding ID has nothing to do with that.

    Look at it this way: the only reason you are claiming that a human has more “entropy” than a virus is that the human is made up of more particles. But that doesn’t even begin to address the issue at hand. All you’ve done in that case is note that a human has more particles.

    By the same logic, a truck load of sand has more entropy than a human. And a whole beach has even more. Fine, whatever. We can pretend we are measuring something meaningful and call it “entropy” when in reality we are just counting particles and then substituting for our count of particles another fancy word — “entropy.”

    The comparison you need to make is not a small amount of particles (virus) to a larger amount of particles (human), but a comparison between two sets of an identical number of particles: say, a human and a pile of sand that has the same number of particles as a human.

    Now, under the quoted definition, they have the same entropy. Fine. And what does that tell us about the difference between the two? Precisely nothing. And yet there is a fantastical difference between the two — a difference that is recognized in the form of complex functional specified information. The number of particles, or the temperature, or the mass, or some other tangentially related measuring stick is irrelevant to the central distinction between the two. Worse, if we use this “entropy” as a tool to pretend we are measuring the difference between, say, a live rat and a dead rat, or a virus and a human, we are deluding ourselves.

    —–

    Look, at one level I am not sure we are that far apart on some of this. I hear you saying that we need to tie down definitions and that we need to come up with a better way to think about entropy. I’m all in favor of that. Indeed, I suspect you have spent more time than I thinking about Sewell’s approach, for example.

    I’m just a little surprised by what I see as sloppy examples or quotes taken out of context that serve to give the impression that there is some kind of fundamental issue with ID or ID terminology. I see it as hand wringing over a nonexistent issue.

    Again, I’m not here to support or reject something like Sewell’s particular use of terminology. Obviously he has caused something of a stir. However, from what I’ve seen thus far, his critics are harping on definitional nits, without considering his larger substantive argument.

    But maybe that is your point . . .

  14. SC:

    I must again draw attention to Sewell’s directly stated and/or implied context, with particular reference to OOL.

    What probability amplifier renders the access to functionally specific complex organised configs of atoms and molecules reasonably plausible, in absence of design?

    Remember, in that context, the only relevant forces and factors are strongly thermodynamic.

    Until there is a solid and fair answer to this, the observed pattern of exchanges and dismissals of Sewell amounts to little more than shifting the subject to a different context to attack and dismiss the man. Which has been pretty obvious in the case of far too many retorts I have seen, including the PT one raised by GD in my original thread of discussion.

    Now, the rat example can be seen in the context that heating it up sufficiently will ALSO kill it. So, doing +A and -A will both achieve the fundamentally same destructive result, though by different specific effects.

    That is, the rat’s biochemistry works in a given temp band.

    Heat up or cool down enough and it will die, as the chemistry breaks down.

    Simple thermal entropy calculation is largely irrelevant to the cause of death and moreso to the origin of the FSCO/I to be explained.

    Irreversibility is even less relevant as essentially all real world thermodynamic processes are irreversible; we often use entropy’s state function character to do a theoretical quasi-static equilibrium process calc between the relevant states and take the result.

    The Shannon metric of info-carrying capacity is not on the table; functionally specific complex info is. A DVD that says Windows XP but is garbled may have more Shannon info than a properly coded one. But that is beside the point.

    Maybe you were not watching when we had to deal with the MG attempt by Patrick May et al, but one upshot was to produce a simplification of the 2005 Dembski CSI metric applicable to this, Chi_500 = I*S – 500, bits beyond the solar system threshold. I is a relevant info metric [one may use Shannon or something like Durston et al.'s fits measure that brings to bear redundancies observed in life forms], and S a dummy variable that is 1 on having an objective reason to see functional specificity, 0 otherwise. (And yes, that means doing the homework to identify whether or not you are sitting on one of Dembski’s zones T in fields of wider possibilities W. Functionally specific coding is a classic case, as in DNA and proteins derived therefrom.)

    I did use an earlier simple metric that does much the same job but this one is directly connected to Dembski’s work.

    I notice, it has been subjected to the same tactics Sewell has been, and that tells me that we are not dealing with something that is a matter of the merits (which is the sort of thing that has resulted in threats against my family, now culminating in attempts to reveal my personal residential address, which is target painting pure, ugly and simple — I am not interested in playing nicey-nice with people who are threatening my uninvolved family like that, or who by going along with such are enablers . . . ).

    Going back to the point being made on the nature of entropy, the informational view is a serious one, one that should not be simply brushed aside as though it is not saying anything worth bothering to listen to.

As you know, we routinely partition entropy calcs, implicitly ruling out components that we do not deem directly relevant to a case. For instance, nuclear related forces and factors — which would immediately and decisively drown out anything we are doing with chemical, thermal, magnetic, electrical and optical or the like effects [being about 6 orders of magnitude beyond and often with a degree of imprecision that would make the things we are interested in vanish in the noise . . . ] — are routinely left out. But as Hiroshima, Nagasaki and nuke reactors routinely show, such are thermally relevant. What happens is that the contexts and couplings (or lack thereof), save under exceptional circumstances, allow us to routinely ignore these factors.

    Likewise, there is no doubt that thermal factors are relevant to anything to do with micro-scale arrangements of mass and energy. However, we can see that in relevant systems, certain arrangements are stabilised against thermal agitation effects and related phenomena — noting maybe a radiation triggered generation of ions and free radicals in a cell, leading to destructive reactions that destabilise it leading to radiation sickness and death as an “exception” that brings to bear nuke forces . . . — and can be similarly isolated from the context as there is a loose enough coupling. (Where also, such radiation and/or chemically induced random variation, at a much lower level of incidence, is a known major source of genetic mutations. Such chance mutations in turn being the supposed main source of incremental creative information generation used in neo-Darwinian and related theories. Differential reproductive success leading to pop shifts being not an info source but an info remover through lack of reproductive success leading to elimination of the failed varieties.)

    We can then use the state function additivity of entropy to address the arrangements of mass and energy at micro-level in these highly informational systems.

Only, we don’t usually do it under a thermodynamic rubric; we do it under the heading: information-bearing macromolecules and their role in biochemistry and the life of the cell. That is, we are applying the old implicit ceteris paribus step of setting other things equal, common in economics and implicit in a lot of scientific work otherwise.

    However, as cooking [or a high enough, long enough sustained fever] shows, thermal agitation can affect the process. The same would hold for cooling down a rat sufficiently to kill it.

    Let me cite the just linked Wiki article on cet par:

    A ceteris paribus assumption is often fundamental to the predictive purpose of scientific inquiry. In order to formulate scientific laws, it is usually necessary to rule out factors which interfere with examining a specific causal relationship. Under scientific experiments, the ceteris paribus assumption is realized when a scientist controls for all of the independent variables other than the one under study, so that the effect of a single independent variable on the dependent variable can be isolated. By holding all the other relevant factors constant, a scientist is able to focus on the unique effects of a given factor in a complex causal situation.

    Such assumptions are also relevant to the descriptive purpose of modeling a theory. In such circumstances, analysts such as physicists, economists, and behavioral psychologists apply simplifying assumptions in order to devise or explain an analytical framework that does not necessarily prove cause and effect but is still useful for describing fundamental concepts within a realm of inquiry . . .

    Cet par, we deal with the micro-state clusters relevant to the dynamics of cell based life, which are informational and functionally specific.

    We see too that the informational view can be extended to the analysis of the underlying processes of thermodynamics. Again citing that Informational Entropy Wiki article as a useful first level 101:

    At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. [--> Just as, nuke level forces are again right off the chart relative to normal chemical, optical, magnetic and thermal interactions . . . ]

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
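The "minimum number of yes/no questions" reading of the reduced Gibbs entropy just quoted is easy to illustrate; a toy Python sketch (the microstate counts are purely illustrative):

```python
import math

def questions_to_specify(W: int) -> float:
    """For a macrostate compatible with W equally likely microstates, the
    entropy in bits is log2(W): the minimum number of yes/no questions
    needed to pin down the exact microstate given the macrostate."""
    return math.log2(W)

print(questions_to_specify(8))        # 3.0 questions suffice for 8 microstates
print(questions_to_specify(2**100))   # 100.0: the space DOUBLES per extra bit
```

The same doubling-per-bit behaviour is what drives the search-space figures discussed below for Darwin's pond.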

    Thus, we see the direct relevance to Darwin’s warm little pond or the like.

    1 –> Open system, subject to mass inflows and outflows and energy inflows and outflows.

    2 –> Likely to pick up various chemicals from the environment, and subject to light etc.

    3 –> Absence of coupling mechanisms and systems that turn light, heat etc into shaft work that performs constructive tasks, yes.

    4 –> Presence of brownian motion etc and through molecular random motions, diffusion etc, yes.

    5 –> Presumed presence of monomers relevant to life, granted per argument.

    6 –> Presumed viable duration of same granted per argument.

7 –> Problem: chirality, as there is no sustained effective thermal difference between L- and D-handed forms, so we normally see racemic forms. Import: apart from Glycine, 1 bit of info per monomer, where life-relevant proteins are about 200 – 300 monomers long in the chain. Result: 2 – 3 typical proteins are already at the solar system threshold.

8 –> Problem: cross-reactions and formation of tars etc. A major issue, but ignored per argument. Though, the issues of encapsulation, appropriate cell membranes and smart gating are implicated here.

9 –> Focus: proteins and D/RNA to code for them. Life forms require hundreds of diverse, mutually fitted proteins in correct functional organisation, at minimum involving genomes of 100,000 – 1,000,000 bases, read as an equivalent number of bits. This is well beyond the blind search capacity of the solar system and the observed cosmos.

    10 –> Presumed source of info: chance variations and chemistry, driven by thermodynamic forces and related reaction kinetics to form the relevant co-ordinated cluster of macromolecules exhibiting in themselves large amounts of FSCO/I.

    11 –> Say, 1,000 protein molecules of typical length at 4.32 functionally specific bits per character, say 1/5 of these being essential, others allowing for substitution while folding to function. 216,000 bits, plus for chirality (5% allowance for Glycine, 20 different monomers) with 1,000 proteins avg 250 monomers, 237,000 bits.

    12 –> 453,000 bits, order of mag consistent with a genome length of order as discussed. the solar system-cosmos scale threshold of blind search is 500 – 1,000 bits. Where also the search space DOUBLES per additional bit.

13 –> 453,000 bits implies a search space of about 3.87*10^136,366, vastly beyond astronomical blind search capacity.

14 –> However, in bytes that is 56,625. Such a file size is well within reach of an intelligent source.

15 –> The process also involves, in effect, undoing diffusion to clump and properly organise the cell. The scope of that challenge is also well beyond astronomical blind search capacity.
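The arithmetic in 11 –> to 14 –> can be recomputed in a few lines of Python, using only the figures already given in the list:

```python
import math

# Figures from steps 11 and 12 above:
proteins, avg_len = 1_000, 250
bits_per_residue = math.log2(20)            # ~4.32 bits for 20 amino acid options
sequence_bits = proteins * avg_len * bits_per_residue / 5   # 1 in 5 positions essential
chirality_bits = proteins * avg_len * 0.95  # ~1 bit/monomer, 5% Glycine allowance
print(round(sequence_bits), round(chirality_bits))   # ~216,000 and ~237,500 bits

total_bits = 453_000                        # step 12's rounded total
print(total_bits * math.log10(2))           # ~136,366.6, i.e. 2^453,000 ~ 3.9*10^136,366
print(total_bits / 8)                       # 56,625.0 bytes: a modest file size
```

The log10 conversion is the only step beyond what the list states: log10 of 2^N is N times log10(2).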

    _________

    We need answers to this, not distractions, dismissals, target-painting threats and hand waving. Remember, OOL is the root of the tree of life, and without the root on credible blind chance mechanisms, nothing else is feasible.

    Where, if design is at the root of the tree of life, there is no good reason to exclude it from empirical evidence anchored comparative difficulties based inferences to best explanation thereafter across the span of the tree of life to us.

    KF

  15. Sal, I’m not worried that you are being anal about definitions. That is OK. It is that the definitions being proposed — and the examples being offered — are removed from the issues at hand.

The issue at hand is the appropriateness or inappropriateness of various kinds of entropy for characterizing design. I went to some lengths to describe how, in general, an object that has substantially more particles will have more thermal entropy than an object with fewer, i.e. a human versus a virus.

Let’s take the “more particles” quote again. We could say that having more particles gives us more entropy than if we have fewer particles. Fine. That works if we define entropy with respect to the number of particles or a relationship that depends on the number of particles. That becomes circular and provides no meaningful insight, but whatever.

    It is about as interesting (and utterly as irrelevant) as saying “More particles = more mass.” Real valuable insight that. :)

    That’s not the view of physicists, chemists, and engineers who actually work with thermal entropy. The number of particles in a system is pretty important.

    The question surrounding ID has nothing to do with that.

I didn’t say thermal entropy (as used in Chemistry and construction of steam engines) should be used to characterize design. In fact, I’m pointing out that thermal entropy as defined by Clausius, Boltzmann, etc. is inappropriate for characterizing design, or have you not figured that out yet! :roll:

  16. I’m just a little surprised by what I see as sloppy examples or quotes taken out of context that serve to give the impression that there is some kind of fundamental issue with ID or ID terminology. I see it as hand wringing over a nonexistent issue.

    No, you’re not comprehending why this issue is raised at all. The examples are not sloppy and the quotes were not out of context … i.e. more moles of the same substance under the same conditions imply more thermal entropy, period!

If we are dealing with different substances, some adjustments need to be made, but in the case of human versus virus, because humans are trillions of times larger than a virus, it’s rather pointless to get into the minutiae of the exact substances involved….

    that serve to give the impression that there is some kind of fundamental issue with ID or ID terminology.

Criticizing ID wasn’t the point of the discussion; criticizing the use of thermal entropy and the Kelvin-Planck law to defend ID was the point of the discussion.

    I see it as hand wringing over a nonexistent issue.

    It is non-existent to someone who’ll say this:

    Sigh.

I suppose we could say that more particles in a pile of sand have more “entropy” than fewer particles of sand, based on some vague notion that more of something unorganized is “more” than less of something unorganized. All very circular and utterly useless.

But it is not non-existent to people who recognize that relating entropy to the number of particles is not circular or useless. A giant refrigerator has more impact than a puny one because it has more particles.

Apparently you don’t appreciate the importance of the number of particles in a thermodynamic system; otherwise you would not be saying it’s utterly useless to relate total thermal entropy to the number of particles in a thermodynamic system.

A notion of thermal entropy that couldn’t distinguish the impact of a giant coal-powered electrical plant from that of a flashlight generator isn’t a very useful notion of thermal entropy. Hence, thermal entropy must relate to the number of particles in some way.

  17. SC: Kindly, note my remarks in light of the ceteris paribus principle at 14 above. There is a legitimacy in looking at the underlying molecular and energy picture in Darwin’s pond, then asking how to reasonably get to FSCO/I in cell based life on the thermodynamics constrained forces in such a situation. For, though there is a tendency to rule convenient datum lines, this is the root of the Darwinist tree of life. And, as we can apply the point of reasonably isolable components of the overall thermodynamic picture, let us do so. KF

  18. KF,

I think you’re misinterpreting Jaynes. As I’ve said before, the ID community has yet to really be able to apply Jaynes effectively.

    To clarify my point. Consider several litres of Hydrogen gas at room temperature. Now cool it till it liquefies and occupies only a small volume.

    Does the cooled, liquefied hydrogen provide more information (information as defined by Jaynes) or less? I say more.

But then, if cooling a system increases the Jaynes information, this is really no different from thermal entropy, which is of little help in characterizing design (i.e. thermal entropy can’t tell you a frozen rat is dead). Increasing or decreasing thermodynamic information is not the same as increasing or decreasing specified information. I think you’re conflating the two concepts (a concern I also have about Gange’s otherwise excellent book).
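To put a number on the cooling side of this, here is a rough Clausius-style Python sketch of the condensation step alone; the latent heat and boiling point are approximate textbook values for hydrogen, assumed here purely for illustration:

```python
# Clausius entropy change, Delta_S = -Q/T, when 1 gram of hydrogen condenses
# at its boiling point. Latent heat (~446 kJ/kg) and T_b (~20.3 K) are
# approximate textbook values, assumed for illustration only.
mass_kg = 0.001
latent_heat_J_per_kg = 446e3
T_boil_K = 20.3
delta_S = -(mass_kg * latent_heat_J_per_kg) / T_boil_K
print(round(delta_S, 2))   # roughly -22 J/K: thermal entropy drops on liquefaction
```

The sign is the whole point: by any thermal measure the cooled, liquefied gas has lower entropy, so on the Jaynes reading it takes fewer bits to specify its microstate, yet nothing about design or specification has changed.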

  19. SC:

    Thanks for thoughts.

However, first, I am not trying to interpret Jaynes; if you had said I began from Robertson, that would be much closer to the facts. (Cf my always linked note. I suggest you read a copy of his Statistical Thermophysics. The thing is, there are diverse schools on thermodynamics, and it looks like there is a cross-paradigm cultural/perceptual clash.)

A container of H or He cooled to near 0 K will indeed be in a state where the description length needed to specify the microstate given the macrostate is much shorter, leading to a lower entropy estimate per the count of yes/no questions needed to specify positions and momenta. Through the statistical approach interpreted in light of information theory, the apparent coincidence of average info per symbol is given a much deeper view.

    Let me clip a bit from my note as I discuss themes raised by Robertson:

    ____________

    >> . . . we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer):

    – H = p1 log p1 + p2 log p2 + . . . + pn log pn

    or, H = – SUM [pi log pi] . . . Eqn 5

    H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor [--> a British telecomms author] notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] . . . .

    [clip used above from Wiki]

    . . . as [Robertson] astutely observes on pp. vii – viii:

    . . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .

    . . . in more details [albeit clipped, summarised and ruthlessly compressed], (pp. 3 – 6, 7, 36, cf Appendix 1 below for a more detailed development . . . ):

    . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . .

    [deriving informational entropy, . . . ]

    H({pi}) = – C [SUM over i] pi*ln pi, [. . . "my" Eqn 6]

    [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . .

    [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . .

Jaynes’s [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]

    As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life’s Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then — again following Brillouin — identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously “plausible” primordial “soups.” In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale.

    By many orders of magnitude, we don’t get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus; in particular addressing information in its functional sense, as the third step in this preliminary analysis . . . >>
    _____________

    This is, in a nutshell, a good slice of where I am coming from.
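The Shannon average-information expression in the clip (Eqns 5/6) can also be computed directly; a minimal Python sketch of the two limiting cases Robertson describes:

```python
import math

def shannon_H(probs):
    """Average information per symbol in bits: H = -sum(p_i * log2 p_i)."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Complete information: one outcome certain, so H = 0 bits/symbol.
print(shannon_H([1.0, 0.0, 0.0, 0.0]))   # 0.0
# Least information: all outcomes equally likely, so H = log2(n).
print(shannon_H([0.25] * 4))             # 2.0
```

These are exactly the two extremes in the Robertson clip: a probability of unity means complete information, a flat distribution the least possible information about the outcome.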

    In addition, I underscore to you how our discussions of entropy take advantage of its state functional basis and additivity, so that we can apply cet par and focus on relevant aspects, bearing in mind how the molecular behaviour is influencing the situation.

    KF

  20. Sal, I’m willing to grant that we can count particles and say we are counting “entropy,” or measure temperature and say we are measuring “entropy.”

The problems with your examples are not so much that, but rather that they confuse rather than enlighten.

    Sure, the rat dies when it is frozen to a solid block. But to argue that that is “caused” by a decrease in entropy is just confusing things. The same rat will die when heated to a particular temperature. Now its death is “caused” by an increase in entropy? Furthermore, if I deprive the rat of oxygen it will die, even though it is at the same exact temperature, with no change in “entropy.” If I behead the rat it will die, even though it is at the same temperature and has the same exact number of particles and no change in “entropy.” There are myriad ways to kill a rat. All that tells us is that there are specified conditions required for an organism to live.

Furthermore, there are organisms with more or fewer particles, and organisms that live just fine, thank you very much, at higher or lower temperatures. Thus, it is simply not the case that higher or lower entropy (in the senses you are discussing) has anything whatsoever to do with living organisms. I might as well strangle the rat and then claim that a loss of reddish hue in his nose, ears and feet “caused” his death. A change in color is every bit as irrelevant as the change-in-temperature or change-in-particle-number “entropy” examples you are giving.

    Treating an increase or decrease in temperature or the number of molecules as the “cause” of anything meaningful about what really makes a series of molecules come together to form a living, breathing, rat just confuses the issue.

    —–

    I take it you agree that measuring the temperature or measuring the number of particles has very little to do with whether something is a jumbled mass of particles or a highly coordinated living organism. I take it you also agree that talking about “entropy” in terms of the temperature or the number of particles isn’t helpful to ID. Agreed. So I’m not sure we are in all that much disagreement.

  21. Thanks to everyone for their comments. The issue is not whether ID is true or not. I believe ID is true.

The issue in this discussion is the appropriateness or inappropriateness of using original definitions of thermal entropy as defined by Clausius and Boltzmann as a defense of ID.

The Lamborghini illustration was intended to show that using Clausius’s and Boltzmann’s notions of entropy won’t help the design argument, especially since the functioning Lamborghini had more thermal entropy than the one that was partially dismantled!

Hence I suggest that the ID community use a different notion of entropy, one more in line with what they are trying to prove. Boltzmann’s version of thermal entropy:

    S = k log w

    is not the tool the ID community should use to make its case.
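For concreteness, Boltzmann's expression yields minuscule values in everyday units even for astronomical microstate counts, which is part of why thermal entropy is the wrong scale for design arguments; a short Python sketch (the microstate count is purely illustrative):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)

def boltzmann_S(W: float) -> float:
    """Boltzmann's S = k log W (natural log), for W accessible microstates."""
    return k_B * math.log(W)

# Even 10^30 microstates gives an entropy far below 1 J/K, while everyday
# thermal processes shift entropy by whole J/K, i.e. unimaginably many bits:
print(boltzmann_S(1e30))   # ~9.5e-22 J/K
```

This mismatch of scales is the same point the Wiki clip upthread makes about S/kB dwarfing anything seen in data compression or signal processing.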

  22. I take it you agree that measuring the temperature or measuring the number of particles has very little to do with whether something is a jumbled mass of particles or a highly coordinated living organism. I take it you also agree that talking about “entropy” in terms of the temperature or the number of particles isn’t helpful to ID

Exactly. That’s why ID proponents need to stop using arguments based on thermal entropy when what they really mean is some sort of specified entropy. I suggested (not insisted on) a loose definition of specified entropy based on specified information.

  23. How much entropy is in a pure vacuum? Thus, it seems a Lamborghini with fewer parts than a fully functional one is approaching that value. :P

    Seems specified information is the difference.

SC: The concepts linked to the underlying molecular picture are crucial, as discussed. Also, Darwin’s pond or the like effectively appeals to thermal agitation to do organising work to get to first cell-based life. We need to explain why that fails, and why the compensation talking point fails. Note also the formulation for entropy used in detailed work. KF

  25. PS: Discussion of Maxwell’s Demon in the context of Szilard is a stock item for the informational school of thought on thermodynamics. Cf. Robertson.

  26. KF,

    Thanks for all your comments. I hope you and your family are well.

    Sal
