
Thoughts on the Second Law


A couple of days ago Dr. Granville Sewell posted a video (essentially a summary of his 2013 BIO-Complexity paper).  Unfortunately, he left comments off (as usual), which prevents any discussion, so I wanted to start a thread in case anyone wants to discuss this issue.

Let me say a couple of things and then throw it open for comments.

1. I typically do not argue for design (or against the blind, undirected materialist creation story) by referencing the Second Law.  I think there is too much misunderstanding surrounding the Second Law, and most discussions about the Second Law tend to generate more heat (pun intended) than light.  Dr. Sewell’s experience demonstrates, I think, that it is an uphill battle to argue from the Second Law.

2. However, I agree with Dr. Sewell that many advocates of materialistic evolution have tried to support their case by arguing that the Earth is an open system, so I think his efforts to debunk that nonsense are worthwhile, and I applaud him for the effort.  Personally, I am astounded that he has had to spend so much time on the issue, as the idea of life arising and evolution proceeding due to Earth being an open system is so completely off the mark and preposterous as to not even be worthy of much discussion.  Yet it raises its head from time to time.  Indeed, just two days ago on a thread here at UD, AVS made essentially this same argument.  Thus, despite having to wade into such preposterous territory, I appreciate Dr. Sewell valiantly pressing forward.

3. Further, whatever weaknesses the discussion of the Second Law may have, I believe Dr. Sewell makes a compelling case that the Second Law has been, and often is, understood in the field as relating to more than just thermal entropy.  He cites a number of examples and textbook cases of the Second Law being applied to a broader category of phenomena than just thermal flow, categories that could be applicable to designed objects.  This question about the range of applicability of the Second Law appears to be a large part of the battle.

Specifically, whenever someone suggests that evolution should be scrutinized in light of the Second Law, the discussion gets shut down because “Hey, the Second Law only applies to heat/energy, not information or construction of functional mechanical systems, etc.”  Yet, ironically, some of those same objectors will then refer to the “Earth is an open system, receiving heat and energy from the Sun” as an answer to the conundrum – thereby essentially invoking the Second Law to refute something to which they said the Second Law did not apply.

—–

I’m interested in others’ thoughts.

Can the Second Law be appropriately applied to broader categories, to more than just thermal entropy?  Can it be applied to information, to functional mechanical structures?

Is there an incoherence in saying the Second Law does not apply to OOL or evolution, but in the same breath invoking the “Earth is an open system” refrain?

What did others think of Dr. Sewell’s paper, and are there some avenues here that could be used productively to think about these issues?

Comments
How interesting is this? From the Lewis/Randall text:
After the extremely practical considerations in the preceding chapters, we now turn to a concept of which neither the practical significance nor the theoretical import can be fully comprehended without a brief excursion into the fundamental philosophy of science.
This is the leading paragraph in Chapter X: The Second Law Of Thermodynamics And The Concept Of Entropy.
Mung
April 6, 2014 at 09:00 PM PDT
Continuing from the Lewis/Randall text:
Even if we recognize the possible validity of such exceptions, and attempt to express the second law in a form which would meet such objections, it would still be difficult to make a really satisfactory statement until we have indicated the connection between the law of entropy and another fundamental generalization which is sometimes called the law of probability. p 122
Remember, this is from 1923! Long before Shannon/Weaver. But to cut to the chase:
Hence in this simple case we find a very simple relation between the entropy and the logarithm of the probability... (p 125)
Mung
April 4, 2014 at 09:53 PM PDT
continuing later:
There certainly can be no question as to the great difference in trend which exists between the living organism, and matter devoid of life. The trend of ordinary systems is toward simplification, toward a certain monotony of form and substance; while living organisms are characterized by continued differentiation, by the evolution of greater and greater complexity of physical and chemical structures. (p 121)
Salvador argues in another thread that organisms tend to evolve towards simpler forms and less differentiation, just like non-living matter. Who is right, and why?
Mung
April 4, 2014 at 09:11 PM PDT
I have a textbook on Thermodynamics from 1923 by G. N. Lewis and Merle Randall which has, as Chapter XI, "Entropy and Probability."
The second law of thermodynamics is not only a principle of wide-reaching scope and application, but also it is one which has never failed to satisfy the severest test of experiment. The numerous quantitative relations derived from this law have been subjected to more and more accurate experimental investigation without detection of the slightest inaccuracy. Nevertheless, if we submit the second law to a rigorous logical test, we are forced to admit that, as it is ordinarily stated, it cannot be universally true. (p 120)
In referring to Maxwell's demon they write:
Of course even in this hypothetical case one might maintain the law of entropy increase by asserting an increase of entropy within the demon more than sufficient to compensate for the decrease in question. Before conceding this point it might be well to know something more of the demon's metabolism. Indeed a suggestion of Helmholtz raises a serious scientific question of this character. He inquires whether micro-organisms may not possess the faculty of choice which characterizes the hypothetical demon of Maxwell.
The compensation argument has apparently been around for a while. :)
Mung
April 4, 2014 at 08:49 PM PDT
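The "simple relation between the entropy and the logarithm of the probability" that Mung quotes can be illustrated with a standard textbook fluctuation example; the 100-molecule gas case below is my own choice for the sketch, not Lewis and Randall's:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
N = 100            # number of gas molecules (assumed for the sketch)

# Probability that all N molecules are found, by chance, in the left half
# of their container: P = (1/2)^N.
log_P = N * math.log(0.5)

# The "simple relation": the entropy drop for that improbable arrangement
# is proportional to the logarithm of its probability, delta-S = k * ln(P).
delta_S = k * log_P          # = -N * k * ln(2), about -9.6e-22 J/K for N = 100

print(f"ln P    = {log_P:.1f}")
print(f"delta-S = {delta_S:.2e} J/K")
```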
Chris haynes #61:
For those who disagree [about the relationship between thermal vs information entropy], I need to say I get nothing from your long essays, because I don't believe you understand the terms you throw around. I urge you to do three things that Dr. Sewell hasn't done. 1) State the second law in terms that one can understand. 2) Define entropy precisely. 3) Explain what Btu's and degrees of temperature have to do with information. Based on my present understanding, here is what I would give: 1) Stable States Exist. A system is in a "Stable State" if it is hopelessly improbable that it can do measurable work without a finite and permanent change in its environment. 2) Entropy is a property of a system that is equal to the difference between i) the system's energy, and ii) the maximum amount of work the system can produce in coming to equilibrium with an indefinitely large tank of ice water.
I tend to take a far less formal approach to thermodynamics, but I think I can work with this. I do have to quibble a bit on #2, though: you need to divide by temperature (the temp of the ice bath in your definition), change the sign (so that more work = less entropy), and probably also set the zero-entropy point (I prefer to think in third-law-based absolute entropies, where zero entropy corresponds with an ordered state at a temperature of absolute zero).

With those caveats, let me take a stab at question 3: Consider a (theoretical) information-driven heat engine proposed by Charles H. Bennett (in section 5 of "The Thermodynamics of Computation -- A Review," originally published in International Journal of Theoretical Physics, vol. 21, no. 12, pp. 905-940, 1982). He imagines a heat engine that takes in blank data tape and heat, and produces work and tape full of random data. The principle is fairly general, but let's use a version in which each bit along the tape consists of a container with a single gas molecule in it, and a divider down the middle of the container. If the molecule is on the left side of the divider, it represents a zero; if it's on the right, it represents a one. The engine is fed a tape full of zeros, and what it does with each one is to put it in contact with a heat bath (in this case a large bath of ice water) at temperature T, replace the divider with a piston, allow the piston to move slowly to the right, and then withdraw the piston and replace the divider in the middle (trapping the gas molecule on a random side). While the piston moves to the right, the gas does (on average) k*T*ln(2) (where k is the Boltzmann constant) of work on it, and absorbs (on average) k*T*ln(2) of heat from the bath (via the walls of the container). Essentially, it's a single-molecule ideal gas undergoing reversible isothermal expansion. And while the results on a single bit will vary wildly (as usual, you get thermal fluctuations on the order of k*T, which is as big as the effect we're looking at), if you do this a large number of times, the average will tend to dominate, and things start acting more deterministic.

Now, apply this to your definition of entropy in #2: Suppose we can get work W_random from a random tape of length N as it comes into equilibrium with the ice water. If we convert a blank (all-zeroes) tape into a random tape and then bring *that* into equilibrium with the ice water, the work we get is W_blank = N*k*T*ln(2) + W_random... which implies that the blank data tape has N*k*ln(2) less entropy than the random tape. (Actually, that's just a lower bound; to show it's an exact result, you have to run the process in reverse as well -- which can be done, since Bennett's heat engine is reversible.) Essentially, each bit of Shannon-entropy in the data on the tape can be converted to (or from) k*ln(2) = 9.57e-24 J/K = 2.29e-24 cal/K = 9.07e-27 BTU/K of thermal entropy. That is the connection.

Now, let me try to relate this to some of the other topics under discussion. WRT the state-counting approach Sal Cordova is taking, this makes perfect sense: when the heat engine converts thermal entropy to Shannon-entropy, it's decreasing the number of thermally-distinct states the system might be in, but increasing the number of informationally-distinct states (by the same ratio), leaving the total number of states (and hence total entropy) unchanged.
Sal has also mentioned a possible link to algorithmic entropy (aka Kolmogorov complexity) (Bennett also discusses this in section 6 of the paper I cited). I've not found this approach convincing, although I haven't looked into it enough to have a serious opinion on the subject. Essentially, it looks to me like this approach resolves some of the definitional ambiguities with the Shannon approach -- but at the cost of introducing a different (and worse) set of problems with the algorithmic definition.

What about a relation to FSCI, CSI, etc.? Well, I don't think you can draw a real connection there. There are actually a couple of problems in the way here: The first is that entropy (in pretty much all forms -- thermal, Shannon, algorithmic, etc.) has to do (loosely) with order & disorder, not organization. The second is that it doesn't really have to do with order either, just disorder. (Actually, as Sal has argued, entropy doesn't quite correspond to disorder either, although I don't think it's as bad as he makes it out to be. Anyway it's much closer to that than any of the others, so I'll claim it's close enough for the current discussion.)

To clarify the difference between organization, order, and disorder, let me draw on David Abel and Jack Trevors' paper, "Three subsets of sequence complexity and their relevance to biopolymeric information" (published in Theoretical Biology and Medical Modelling 2005, 2:29). Actually, I'll mostly draw on their Figure 4, which tries to diagram the relationships between a number of different types of (genetic) sequence complexity -- random sequence complexity (RSC -- roughly corresponding to disorder), ordered (OSC), and functional (FSC -- roughly corresponding to organization). What I'm interested in here is the ordered-vs-random axis (horizontal on the graph), and the functional axis (Y2/vertical on the graph). I'll ignore the algorithmic compressibility axis (Y1 on the graph). Please take a look at the graph before continuing... I'll wait... Back? Good. Now, the point I want to make is that the connection between thermal and information entropy only relates to the horizontal (ordered-vs-random) axis, not the vertical (functional, or organizational) axis. The point of minimum entropy is at the left-side bottom of the graph, corresponding to pure order. The point of maximum entropy is at the right-side bottom of the graph, corresponding to pure randomness. The functional/ordered region is in between those, and will have intermediate entropy.

Let me give some examples to illustrate this. For consistency, I'll use Bennett-style data tapes, but you could use any other information-bearing medium (say, DNA sequences) and get essentially the same results. Consider three tapes, each 8,000 bits long:

Tape 1: a completely blank (all zeroes) tape.
Tape 2: a completely random tape.
Tape 3: a tape containing a 1,000-character essay in English (I'll assume UTF-8 character encoding, so 8,000 bits holds 1,000 characters).

Let's compute the entropy contributions from their information content in bits; if you want thermodynamic units, just multiply by k*ln(2). Their entropy contribution is going to be the base-2 logarithm of the number of possible sequences the tape might have. For tape 1, there is only one possible sequence, so the entropy is log2(1) = 0 bits. For tape 2, there are 2^8000 possible sequences, so the entropy is log2(2^8000) = 8000 bits. (I know, kind of obvious...)

Tape 3 is a little more complicated to analyze. There have been studies of the entropy of English text that put its Shannon-entropy content at around one bit per letter, so I'll estimate that there are around 2^1000 possible English essays of that length, and so the entropy is around log2(2^1000) = 1000 bits. As I said, both the minimum and maximum entropy correspond to a complete lack of organization. Organized information (generally) corresponds to intermediate entropy density.

But it's worse than that; let me add a fourth tape...

Tape 4: a tape containing a random sequence of 1,000 "A"s and "B"s (again, UTF-8 character encoding).

There are 2^1000 possible sequences consisting of just "A" and "B", so again the entropy is 1000 bits. Tape 4 has the same entropy as tape 3, despite having no organized and/or functional information at all. From a thermodynamic point of view, the contents of tapes 3 and 4 are equivalent because they have the same order/disorder content. The organization of their content is simply irrelevant here.

But it's even worse than that. Let me add a fifth tape, this time a longer one...

Tape 5: a tape containing a 10,000-character essay in English.

Tape 5 is like tape 3, just ten times as big. Because it's ten times bigger, it has ten times as much of everything: ten times the OSC, ten times the RSC, and ten times the FSC. And ten times the entropy contribution (by the same argument as in tape 3, there are around 2^10000 essays that length, so its entropy will be 10000 bits). Comparing tapes 3 and 5 indicates that, at least in this case, an increase in functional complexity actually corresponds to an increase in entropy. This is what I meant when I said that entropy doesn't have to do with order either, just disorder. (I think this is also essentially the same as Sal's argument that entropy must increase for design to emerge.)
Gordon Davisson
April 4, 2014 at 04:14 PM PDT
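A minimal numeric sketch of the tape bookkeeping Gordon Davisson describes above; the tape sizes, the one-bit-per-letter estimate for English, and the k*ln(2) conversion factor are taken from his comment, while the helper names and layout are illustrative only:

```python
import math

k = 1.380649e-23        # Boltzmann constant, J/K
ln2 = math.log(2)

# Entropy contribution of each tape, in bits: log2 of the number of
# sequences the tape could plausibly contain (figures from the comment).
tapes = {
    "tape 1: 8,000 blank bits":            0,      # log2(1) possible sequence
    "tape 2: 8,000 random bits":           8000,   # log2(2^8000)
    "tape 3: 1,000-char English essay":    1000,   # ~1 bit of Shannon entropy per letter
    "tape 4: 1,000 random 'A'/'B' chars":  1000,   # log2(2^1000)
    "tape 5: 10,000-char English essay":   10000,
}

for name, bits in tapes.items():
    # Each bit of Shannon entropy corresponds to k*ln(2) of thermal entropy.
    print(f"{name}: {bits:>5} bits  =  {bits * k * ln2:.2e} J/K")
```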
Joe, AVS, billmaz, gpuccio, UB et al.: I've just posted a new thread to discuss abiogenesis, as it deserves its own discussion, and maybe we can let this one focus more on the Second Law. Take a look and let me know your thoughts: https://uncommondescent.com/origin-of-life/in-the-beginning-were-the-particles-thoughts-on-abiogenesis/ Thanks!
Eric Anderson
April 4, 2014 at 11:53 AM PDT
gpuccio:
So, just to be simple, you are telling us that a tornado in a junkyard will not assemble a Boeing 747, even if sufficient energy is added to the system, and all the physical laws are respected?
It's only because all of the small components, like the screws and washers used to hold things together, keep getting blown away. If it weren't for that... ;)
Joe
April 4, 2014 at 10:56 AM PDT
F/N (attn Sal C et al): It is time, again, to highlight the significance of the informational approach to entropy, which makes sense of the order/disorder/organisation conundrum, and draws out the link between entropy and information.
KF, I showed the connection with straightforward math above. If one is willing to rework the base of the logarithms and normalize entropy by dividing by Boltzmann's constant, then, figuratively speaking,

S(normalized) = S(Clausius) = S(Boltzmann) = S(Shannon) = S(Dembski)

or

S(normalized) = Integral(dS) = log W = -log P

The connection was made possible by the Liouville theorem and a little luck, and then put on an even more rigorous foundation when statistical mechanics is framed in terms of quantum mechanics instead of classical mechanics. I even went through sample calculations. To make this connection possible one has to make a few other assumptions, like equiprobability of microstates, but the same basic insight is there.

Shannon's entropy is probably the most general because one can pick and choose the symbols, whereas for Boltzmann one is restricted to elementary particles like atoms. Shannon/Dembski can work with things like coin flips, but Boltzmann and Clausius can't. So the above equality holds if one is only considering energy or position-and-momentum microstates of molecules. Shannon and Dembski can consider other kinds of microstates, like coins and computer bits, in addition to molecules.

The reason the disciplines of thermodynamics and information theory seem so disparate is that counting atomic microstates even for small macroscopic systems is impossible (on the order of Avogadro's number squared factorial)! So instead we use thermometers and whatever necessary calorimetry to measure entropy for air conditioning systems. For example, my calculation for the number of microstates in the boiling of water with a 1000 watt heater for 100 seconds resulted in this increase in the number of microstates:

delta-W = e^(1.94*10^25) = 10^(8.43*10^24)

which dwarfs the UPB, and most ID literature doesn't even touch numbers that large. For readers wanting to get tortured with the details in more formal terms: Entropy examples connecting Clausius, Boltzmann, Dembski
scordova
April 4, 2014 at 06:13 AM PDT
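A back-of-the-envelope check on the boiling-water figure in the comment above; the 1000 W, the 100 s, and the resulting e^(1.94*10^25) = 10^(8.43*10^24) come from the comment, while the 373.15 K bath temperature is an assumption on my part:

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
Q = 1000.0 * 100.0    # heat delivered: 1000 W for 100 s, in joules
T = 373.15            # assumed temperature of boiling water, K

dS = Q / T                       # Clausius: dS = dq/T  (~268 J/K)
ln_ratio = dS / k                # Boltzmann: dS = k*ln(W2/W1), so ln(W2/W1) ~ 1.94e25
log10_ratio = ln_ratio / math.log(10)

print(f"delta-S      = {dS:.0f} J/K")
print(f"ln(W2/W1)    = {ln_ratio:.3g}")      # i.e. W2/W1 = e^(1.94e25)
print(f"log10(W2/W1) = {log10_ratio:.3g}")   # i.e. W2/W1 = 10^(8.43e24)
```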
GP: Always so good to hear from you. On the Hoyle rhetorical flourish, the problem starts at a much lower level, say the instruments on the instrument panel, or the clock. Which is where Paley was 200+ years ago: The FSCO/I of a clock has but one vera causa . . . design. And BTW, I notice the objectors never seriously address his Ch 2 discussion on a self replicating clock, seen as a FURTHER manifestation of the functionally specific complexity and contrivance that point to design. As in, an irreducibly complex von Neumann self replicator [vNSR] dependent on codes, algorithms and coded algorithm implementing machines, plus detailed blueprints, is plainly chock full of the FSCO/I that is per empirical and analytical reasons, a strong sign of design as only empirically warranted adequate cause. Over in the next thread I give part of why:
It is a bit sadly revealing to still see the “someone MUST win the lottery” fallacy. Lotteries are winnable because of a very carefully fine tuned design: just right balance of probabilities so that SOMEONE wins, but profits are made from the many who lose after paying the winner. Ironically, a lottery is a case of fine tuning tracing to design. But, as the proverbial search for a needle in a haystack challenge shows, it is rather easy to have a situation where the explosive exponential growth of a config space utterly overwhelms available search resources. For just 500 bits, were we to set up 10^57 strings of as many coins and flip them every 10^-14 s, with one each of the 10^57 atoms of our solar system watching, we would be searching a space of 3.27*10^150 possibilities, doubling for every additional bit. The search challenge, carried out for the 10^17 s that may be reasonably available for the lifespan to date, is such that all of that searching is as a straw sized sample to a cubical haystack 1,000 light years across, comparable to how thick our galaxy is at its central bulge. Such a sample with all but absolute certainty will come up with the bulk: straw. This is an unwinnable lottery. So, when we see a complex, locally fine tuned functionally specific entity we have abundant good reason to infer design as cause, not just on induction but on this sampling result. (Locally is used to bring out Leslie’s point that if in a local patch of wall a very isolated fly is swatted by a bullet, that other patches elsewhere may be carpeted so such a hit is unsurprising, is irrelevant. For the lone fly, that hit points to a marksman with a tack driver of a rifle and first class match grade ammunition matched to the rifle, which are collectively no mean fine tuning feat. Similarity to Robin Collins’ fine tuned cosmos bakery argument is not coincidental.) The no surprise fallacy pivots on ignoring the needle in haystack challenge. (And in the case of longstanding ID objector sites and their major denizens, refusal to do patent duties of care regarding warrant. As in, willfully selective hyperskepticism.)
And:
BTW, the exercise just described would easily exhaust the atomic resources of the observable cosmos. This points to one reason why 1 in 10^150 odds are a reasonable threshold for not empirically plausible. Thus the rule of thumb threshold that 500 – 1,000 bits of FSCO/I is beyond the credible reach of blind chance and mechanical necessity on any reasonable basis; it matters not if a 5th or 6th basic force or some strange interaction is identified, so long as it is blind the needle in haystack search challenge obtains. So, the hoped for blind watchmaker magic force that overwhelms these odds has an obvious interpretation: cosmological programming that writes C-chemistry, aqueous medium, D/RNA and protein using cell based life into the physics of the observed cosmos beyond what we already see from cosmology. That’s fine tuning for function driven by credible foresight on steroids. Design is the only credible explanation. But, it sticks cross-ways in the gullet of those committed to a priori materialism multiplied by a visceral hostility to the idea that there may be serious empirical pointers to design of the natural world. That such visceral hostility is all too real, is abundantly documented in ever so many ideological materialist fever swamp sites. In short, it looks like blinding anger at design and credible candidate cosmos designers is in the driving seat for a lot of what we are seeing.
Yes, I know, I know: for years objectors have been brushing this aside or studiously ignoring it. That has not changed its force one whit. KF
kairosfocus
April 4, 2014 at 05:08 AM PDT
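The arithmetic in the 500-bit illustration quoted above can be reproduced in a few lines; the atom count, flip rate, and time window are the figures given in the comment, and the script is only a sketch of that calculation:

```python
import math

bits = 500
config_space = 2 ** bits          # ~3.27e150 possible 500-coin configurations
atoms = 1e57                      # atoms in the solar system (figure from the comment)
flips_per_second = 1e14           # one flip of each 500-coin string every 1e-14 s
seconds = 1e17                    # time available (figure from the comment)

samples = atoms * flips_per_second * seconds   # total observations, ~1e88
fraction = samples / config_space              # share of the space actually examined

print(f"config space      ~ 10^{math.log10(config_space):.1f}")
print(f"samples available ~ 10^{math.log10(samples):.0f}")
print(f"fraction sampled  ~ 10^{math.log10(fraction):.0f}")
```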
PS: My earlier comment on Sewell's article and video is here; it includes Sewell's video.

PPS: Functionally specific organisation is not a particularly unusual phenomenon; it is there all around us, including in posts in this thread, which use strings of symbols to function in English language text based discussions. Similarly, computer software is like that. Likewise, the PCs used to read this exhibit the same, in the form of a nodes-and-joining-arcs 3-d structure, such as can be represented by a set of digitised blueprints. For reasons linked to the needle in haystack search challenge to find islands of function in vast config spaces, beyond the atomic resources of our solar system or observable universe to search any more than a tiny fraction of, there is good reason that the only empirically known cause of such FSCO/I is design. What we see above is an inversion of the vera causa principle that phenomena should be explained on causes known from observation to be adequate, because of imposition of ideological a priori materialism. Often this is disguised as a seemingly plausible principle of seeking a "natural" explanation. But when the empirically grounded explanation is not in accord with blind chance and mechanical necessity, but is consistent with what we do observe and experience, intelligent causes acting by design, we should be willing to face this frankly and fairly. This is the essential point of the intelligent design view.
kairosfocus
April 4, 2014 at 04:36 AM PDT
KF: Thank you for your good work! So, just to be simple, you are telling us that a tornado in a junkyard will not assemble a Boeing 747, even if sufficient energy is added to the system, and all the physical laws are respected? What a disappointment... :) (By the way, have you read the Wikipedia article for "Junkyard tornado"? Well, that's really a good example of junk, and no tornado will ever be able to get any sense from it!)
gpuccio
April 4, 2014 at 04:35 AM PDT
F/N (attn Sal C et al): It is time, again, to highlight the significance of the informational approach to entropy, which makes sense of the order/disorder/organisation conundrum, and draws out the link between entropy and information. (And yes, Virginia, there IS a legitimate link between entropy and info, from Jaynes et al; I will highlight Harry S. Robertson in his Statistical Thermophysics.) First, a Wiki clip from its article on informational entropy [as linked above but obviously not attended to], showing a baseline we need to address:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
In short, once we have an observable macrostate, there is an associated cluster of possible microstates tied to the number of possible ways energy and mass can be arranged at micro level compatible with the macro conditions. This includes the point that when a cluster is specifically functional [which patently will be observable], it locks us down to a cluster of configs that allow function. This means that from the field of initially abstractly possible states, we have been brought to a much tighter set of possibilities, thus such an organised functional state is information-rich. In a way that is tied to the statistical underpinnings of thermodynamics. Now, let me clip my note that is always linked through my handle, section A, where I clipped and summarised from Robertson: ____________ >>Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as [Robertson] astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
And, in more details, (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design . . . ): . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy, cf. discussions . . . ]
H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . .
[H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.] As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life's Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then -- again following Brillouin -- identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously "plausible" primordial "soups." In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale. By many orders of magnitude, we don't get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel . . . ], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus; in particular addressing information in its functional sense, as the third step in this preliminary analysis. As the third major step, we now turn to information technology, communication systems and computers, which provides a vital clarifying side-light from another view on how complex, specified information functions in information processing systems:
[In the context of computers] information is data -- i.e. digital representations of raw events, facts, numbers and letters, values of variables, etc. -- that have been put together in ways suitable for storing in special data structures [strings of characters, lists, tables, "trees" etc], and for processing and output in ways that are useful [i.e. functional]. . . . Information is distinguished from [a] data: raw events, signals, states etc represented digitally, and [b] knowledge: information that has been so verified that we can reasonably be warranted, in believing it to be true. [GEM, UWI FD12A Sci Med and Tech in Society Tutorial Note 7a, Nov 2005.]
That is, we have now made a step beyond mere capacity to carry or convey information [--> what Shannon info strictly is], to the function fulfilled by meaningful -- intelligible, difference making -- strings of symbols. In effect, we here introduce into the concept, "information," the meaningfulness, functionality (and indeed, perhaps even purposefulness) of messages -- the fact that they make a difference to the operation and/or structure of systems using such messages, thus to outcomes; thence, to relative or absolute success or failure of information-using systems in given environments. And, such outcome-affecting functionality is of course the underlying reason/explanation for the use of information in systems. [Cf. the recent peer-reviewed, scientific discussions here, and here by Abel and Trevors, in the context of the molecular nanotechnology of life.] Let us note as well that since in general analogue signals can be digitised [i.e. by some form of analogue-digital conversion], the discussion thus far is quite general in force. So, taking these three main points together, we can now see how information is conceptually and quantitatively defined, how it can be measured in bits, and how it is used in information processing systems; i.e., how it becomes functional. In short, we can now understand that: Functionally Specific, Complex Information [FSCI] is a characteristic of complicated messages that function in systems to help them practically solve problems faced by the systems in their environments. Also, in cases where we directly and independently know the source of such FSCI (and its accompanying functional organisation) it is, as a general rule, created by purposeful, organising intelligent agents. So, on empirical observation based induction, FSCI is a reliable sign of such design, e.g. the text of this web page, and billions of others all across the Internet. (Those who object to this, therefore face the burden of showing empirically that such FSCI does in fact -- on observation -- arise from blind chance and/or mechanical necessity without intelligent direction, selection, intervention or purpose.) Indeed, this FSCI perspective lies at the foundation of information theory:
(i) recognising signals as intentionally constructed messages transmitted in the face of the possibility of noise, (ii) where also, intelligently constructed signals have characteristics of purposeful specificity, controlled complexity and system- relevant functionality based on meaningful rules that distinguish them from meaningless noise; (iii) further noticing that signals exist in functioning generation- transfer and/or storage- destination systems that (iv) embrace co-ordinated transmitters, channels, receivers, sources and sinks.
That this is broadly recognised as true, can be seen from a surprising source, Dawkins, who is reported to have said in his The Blind Watchmaker (1987), p. 8:
Hitting upon the lucky number that opens the bank's safe [NB: cf. here the case in Brown's The Da Vinci Code] is the equivalent, in our analogy, of hurling scrap metal around at random and happening to assemble a Boeing 747. [NB: originally, this imagery is due to Sir Fred Hoyle, who used it to argue that life on earth bears characteristics that strongly suggest design. His suggestion: panspermia -- i.e. life drifted here, or else was planted here.] Of all the millions of unique and, with hindsight equally improbable, positions of the combination lock, only one opens the lock. Similarly, of all the millions of unique and, with hindsight equally improbable, arrangements of a heap of junk, only one (or very few) will fly. The uniqueness of the arrangement that flies, or that opens the safe, has nothing to do with hindsight. It is specified in advance. [Emphases and parenthetical note added, in tribute to the late Sir Fred Hoyle. (NB: This case also shows that we need not see boxes labelled "encoders/decoders" or "transmitters/receivers" and "channels" etc. for the model in Fig. 1 above to be applicable; i.e. the model is abstract rather than concrete: the critical issue is functional, complex information, not electronics.)]
Here, we see how the significance of FSCI naturally appears in the context of considering the physically and logically possible but vastly improbable creation of a jumbo jet by chance. Instantly, we see that mere random chance acting in a context of blind natural forces is a most unlikely explanation, even though the statistical behaviour of matter under random forces cannot rule it strictly out. But it is so plainly vastly improbable, that, having seen the message -- a flyable jumbo jet -- we then make a fairly easy and highly confident inference to its most likely origin: i.e. it is an intelligently designed artifact. For, the a posteriori probability of its having originated by chance is obviously minimal -- which we can intuitively recognise, and can in principle quantify. FSCI is also an observable, measurable quantity; contrary to what is imagined, implied or asserted by many objectors.>>
____________

So, while this is a technical topic and so relatively inaccessible (though that is part of why UD exists, to address technical issues in a way that is available to the ordinary person through being in a place s/he may actually look at . . . ), and as well there is a lot of distractive use of red herrings led out to strawmen soaked in ad hominems and set alight with incendiary rhetoric that clouds, confuses, poisons and polarises the atmosphere, there is a patent need to show the underlying reasonable physical picture. First, for its own sake, second, as it is in fact the underlying context that shows the unreasonableness of the blind watchmaker thesis. There is but one good, empirically and analytically warranted causal explanation for functionally specific complex organisation and/or associated information -- FSCO/I for short, i.e. design. And the link between functional organisation, information and entropy is a significant part of the picture that points out why that is so. KF
kairosfocus
April 4, 2014 at 04:23 AM PDT
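For readers who want to see Robertson's informational entropy (the H({pi}) expression clipped above) behave as described, here is a minimal sketch; the function name and the four-outcome examples are mine, and the constant C is left at 1 (natural-log units) unless set to 1/ln(2) for an answer in bits:

```python
import math

def info_entropy(probs, C=1.0):
    # H({p_i}) = -C * sum_i p_i * ln(p_i); terms with p_i = 0 contribute nothing.
    return -C * sum(p * math.log(p) for p in probs if p > 0)

certain = [1.0, 0.0, 0.0, 0.0]      # outcome known in advance: no missing information
uniform = [0.25] * 4                # maximal ignorance over four outcomes

print(info_entropy(certain))                    # 0.0
print(info_entropy(uniform))                    # ln 4 ~ 1.386 (nats)
print(info_entropy(uniform, C=1/math.log(2)))   # 2.0 bits
```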
I'm with billmaz on this. Science gave up way too soon on Stonehenge. Heck, it's only rocks, and mother nature makes rocks in abundance. So there isn't any reason why mother nature, given billions of years, couldn't have produced many Stonehenge-type formations. For that matter we can never say there was a murder, because we know that mother nature also can be involved with death. And she can produce fires, so forget about arson. Heck, we can now rid ourselves of many investigative venues because we have to wait for the future to unveil the truth. We are just rushing to judgment with our meager "knowledge". Obviously the we of today don't know anything, but the we of tomorrow will figure it all out. The science of today is meaningless and should just stay out of the way of the science of tomorrow. So let all of those alleged criminals out of prison and let time sort it all out. Thank you and good day. (removes tongue from cheek)
Joe
April 4, 2014 at 04:21 AM PDT
Bill, I'm glad you post here. Referencing your thoughtful comments at 121, particularly about fully exhausting the capacities of natural forces: it really should be considered that the real-world capacity of a physical system to organize matter into a living thing is wholly dependent on that system incorporating a local independence from physical determinism. This is very clearly accomplished by the way in which the system is organized, and the system simply could not function without this remarkable feature - which just so happens to be found nowhere in the physical world except during the translation of language, mathematics, and in the genome. In the end, this is the central identifying feature of the system - the capacity to create physical effects that are not determined by physical law.

Thus far, the alternative to adopting a coherent model of the system is to stand there (metaphorically, as science currently does) staring at the system producing those effects, and simply ignoring what is taking place before our very eyes. Instead, (almost as if a deep distraction is suddenly called for) we scratch our collective heads and say "We just don't know yet how this happened". It's a horrible waste of discovery.
Upright BiPed
April 3, 2014 at 09:51 PM PDT
Thank you, Eric, for your response. Your reference to the "current state of knowledge" is the key. Science is a process, as you well know. The "current state of knowledge" in the past was negligible, so man thought the sun was a god. Why do you believe that our current state of knowledge is the ultimate, final knowledge of mankind? Who knows what wonderful things we will discover in the future? Why do we have to give up and say "That's it, we can't explain it with our current state of knowledge, therefore 'God dunnit', or 'some intelligent force'"? In terms of knowledge, we are children. We just started to learn about our universe. Imagine our civilization a hundred thousand years from now, if we survive that long.

As to your third reference that 'we do know of a cause that can produce the kind of effects' I speak about, that's not a scientific answer. Yes, the easy answer is that an intelligent designer created it all. The tough question is to ask 'could it have happened without an intelligent designer, and how? Let's try to find out.' Let us see what we can discover in that realm first, since history tells us that many of our previously unanswered questions were eventually answered by logic and science. It is too early to say that science has failed. Even if science does succeed in answering that question, there are many other questions which will arise.

I do believe that there is an ultimate Mind that exists, but I don't want to get bogged down with details that will prove to be well described by science in the future, which will then be another arrow in the quiver of those who want to debunk God or an eternal Mind. The ultimate questions are larger: who and what are we, is there something beyond our materialistic world, do we exist as something other than our bodies, is there meaning in our existence? These little details about whether and how evolution works are just that, details, which can be easily explained if there is a God. Imagine, for instance, billions of galaxies, with billions and billions of planets, with billions of intelligent life forms. Are they any less God's creatures? Each adapted to their own environment. They are all 'created in His image.' There is no contradiction here. The method of creation is irrelevant. The environment is irrelevant. Their form is irrelevant. God is either the God of all, or He is no god. So, let's not quibble over details. Let's let ourselves be filled with the wonder of creation, whatever the mechanism He has devised.
billmaz
April 3, 2014 at 08:20 PM PDT
Thanks, billmaz. You are asking the right kinds of questions and you highlight some of the key challenges.
It’s just a challenge to exhaust all the known forces to explain it before I go hunting for an other-worldly one.
Which known forces would you be referring to? The known ones have been pretty well exhausted. Is there some previously-unknown force hoped for? Or perhaps some previously-unknown interaction that will hopefully bridge the gap?

And the inference is not: "Abiogenesis is hard, so deus ex machina." The inference is: (i) naturalistic abiogenesis fails on multiple accounts, based on the current state of knowledge; (ii) there are good scientific reasons to conclude it isn't possible given the resources of the known universe; and furthermore (iii) we do know of a cause that can produce the kinds of effects at issue (the kinds of things you note in your #121). Even then, we can't conclude that "God dunnit"; but, yes, we can draw a reasonable inference that some intelligent cause was responsible.
Eric Anderson
April 3, 2014 at 06:18 PM PDT
Nobody has figured out abiogenesis. Let's start with that. But it is also unscientific to immediately turn to deus ex machina to explain it. It is still a work in progress. The issue, as I see it, is not that certain molecules can spontaneously combine to form proteins, or RNA, but how did they "evolve" to actually correspond to information exchange? Which came first, the RNA or the proteins? And how did a code in the RNA come to correspond to a specific protein? And how the heck did all the other proteins evolve that are needed to translate the code from RNA (or later DNA) into proteins, without there being an evolutionary advantage in any of the intervening steps? Damn difficult questions, but that doesn't drive me to design yet. It's just a challenge to exhaust all the known forces to explain it before I go hunting for an other-worldly one.
billmaz
April 3, 2014 at 03:24 PM PDT
I want to use that to better understand what you call the Clausius view, and how it fits in.
The Clausius view dominates in practice because it is practical. You mostly need thermometers and some basic lab equipment. The Boltzmann view is important when we start dealing with things at the molecular level. The Clausius view emerged without any need to even assume atoms existed, and might well have been compatible with the caloric theory of heat, where heat was viewed as some sort of physical fluid that got transferred. The Clausius view is important in building heat pumps, refrigerators, steam engines.

I'm not quite sure, but I think the Boltzmann entropy was possibly conceived without fully realizing it was connected to Clausius entropy. It took a lot of miserably difficult math to figure out that two different calculations could arrive at the same quantity:

Integral (dS) = k log W, where dS = dq/T

Had that miserable math not been done by scientists like Josiah Gibbs, we probably could not connect thermodynamics to information theory and probability at all. It was made possible by the Liouville theorem, but ironically, that's sort of flawed because using the Liouville theorem assumes atoms behave classically, which isn't quite the case since they behave quantum mechanically! Gibbs got the right answer without exactly the right assumptions! LOL! His famous work, Elementary Principles in Statistical Mechanics, developed with especial reference to the rational foundation of thermodynamics, was sheer genius combined with a little luck that atoms behave just classically enough for his theory to work. By classically, I mean he modeled molecules like infinitesimal billiard balls and glossed over the contradiction that, by modeling them as infinitesimal points rather than finite-size spheres, they wouldn't ever collide with each other!
scordova
April 3, 2014 at 12:16 PM PDT
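A toy case where the two bookkeepings scordova contrasts give the same number, using the single-molecule isothermal expansion discussed earlier in this thread; the 300 K bath temperature is an arbitrary choice for the sketch:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed bath temperature, K

# One gas molecule expands reversibly and isothermally to twice its volume.
# Clausius bookkeeping: it absorbs q = k*T*ln(2) of heat from the bath.
q = k * T * math.log(2)
dS_clausius = q / T               # dS = dq / T

# Boltzmann bookkeeping: the number of accessible position states doubles.
dS_boltzmann = k * math.log(2)    # dS = k * ln(W2/W1), with W2/W1 = 2

print(dS_clausius, dS_boltzmann)  # both ~9.57e-24 J/K
```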
Dear AVS: Book your flight to Stockholm, Sweden. You'll get the Nobel Prize. You say, "I have outlined what I think is a plausible mechanism for the generation of the first living organisms." The next step is a piece of cake. Just demonstrate it. All you need would be in a two-bit chemistry lab. Go show these pompous clowns, from Oparin, to Miller and Urey, to Szostak, how to do what they've bungled for 90 years. As a Creationist, I acknowledge complete defeat. Good work!!!!
chris haynes
April 3, 2014 at 12:16 PM PDT
Misrepresentation?
AVS over several days: My point is that with this constant huge input of energy of the sun … that with a massive and constant source of energy, the early Earth was continually being pushed to a more ordered state, making the eventual formation of what we call life not only likely, but inevitable … as molecules become more complex, they gain properties that allow them to have certain functions … Replication in the earliest of organisms would simply be driven by the increasing instability of the membrane as it increases in size … bicelles have been shown to form spontaneously in water and can be forced to replicate simply by vibrating the water … but their lack of much order and constraint allows them to undergo molecular evolution at a rapid pace
Upright BiPed
April 3, 2014 at 10:01 AM PDT
Thank you for misrepresenting my argument, Upright. External energy inputs from the sun and the environment on Earth drive only a very low level of organization; after more complex molecules start to form, their interactions are what slowly allow slightly more and more organized systems to arise. After the generation of the first living cells, the external energy is not as important as what is going on internally in the cell. And don't worry, there won't be a next time.
AVS
April 3, 2014 at 09:26 AM PDT
AVS:
Experiments have been done that show more complex molecules can form from the more simple, including amino acids, protein lattices with catalytic activity, sugars, and simple amphipathic lipid molecules.
Yet nothing that demonstrates a living organism can arise from just matter, energy and their interactions. So thanks for demonstrating your position is pseudoscience BS.
Joe
April 3, 2014 at 09:13 AM PDT
...you guys would just sneer and say “oh it’s a just-so fairytale.”
Next time, lose the unnecessary condescending BS and try addressing the actual issues (as they actually exist in nature), and then we will see.
Upright BiPed
April 3, 2014 at 09:12 AM PDT
I'm sure you're right AVS. With a little sunlight and agitation, functional organization sets in, and living things arise that can replicate themselves. That whole information thingy - where scripts of prescriptive information get distributed from memory storage out to the cell and start doing stuff - all that gets tacked on later somehow. ;)
Upright BiPed
April 3, 2014 at 09:08 AM PDT
EA, your Mars example is a poor comparison. It is built on discovering and building new things. My outline, while I know it is not detailed, is built on things we have already discovered. Experiments have been done that show more complex molecules can form from the more simple, including amino acids, protein lattices with catalytic activity, sugars, and simple amphipathic lipid molecules.

Yes, Upright, we all know you were just asking a question that you know is extremely complex and that no one currently has an answer to. It's a typical UD tactic. You just word your questions differently. I very much realize that the rise of order and complex systems in the cell is an important question, but I never intended to answer that question when I originally posted. Even if I did try to think out a mechanism that answers your question, as EA has just demonstrated, you guys would just sneer and say "oh it's a just-so fairytale." Have a good one, guys.
AVS
April 3, 2014 at 08:38 AM PDT
cue the sales pitch
Upright BiPed
April 3, 2014 at 07:58 AM PDT
AVS @102:
I have outlined what I think is a plausible mechanism for the generation of the first living organisms
No, you haven't. You haven't outlined any plausible mechanism. You have just restated a vague, detail-free, unspecified, hypothetical just-so story. It isn't remotely plausible.

Let's say I go around telling people that I have a detailed explanation for how we can travel to other star systems in just a few days. When they ask for particulars I say: First we build a test ship to travel to Mars, then we discover new and stronger alloys for the ship's hull, then we invent a small and massively-powerful fusion power source, then we discover how to bend space-time so that we can travel faster than light, then we build a ship incorporating all that technology and use it to travel to other star systems. If people, quite reasonably, point out that I haven't provided any meaningful detail, that the end goal faces serious technological and basic conceptual hurdles, that all I've done is describe the basic science-fiction hypotheticals that have been around for a hundred years, it won't do to say: "Now you're moving the goalposts. I've given you a 'plausible mechanism' for getting to other star systems." I would be laughed off the stage.

Until you actually delve into the details of abiogenesis, until you actually start looking at what it takes to get from A to B to C, your fantasy story is just that -- pure fantasy. It has no basis in actual chemistry or physics; it has no basis in reality; it is nothing more than a materialistic creation miracle story. I invite you to take some time to actually look into the details of what would be required for abiogenesis. It is a fascinating intellectual journey and one well worth embarking on.
Eric Anderson
April 3, 2014 at 07:51 AM PDT
My questions to AVS were not intended to elicit any particularly meaningful answers from him. We all recognize that those answers don't exist in the materialist framework. However... after watching him willfully misrepresent ID proponents as a means to attack them, then follow on by being a wholly belligerent ass in virtually every comment he types out, I just wanted to demonstrate that not only does he not have any answers, he doesn't even realize the questions. I also wanted to demonstrate that the absolute personal certainty that underlies his unnecessary and bigoted belligerence is completely without empirical foundation. It ain't about the science. Of course, none of this will mediate AVS in the least. He is a complete thumper, and a clever one. Nothing will knock him off his course, particularly the science. Or honesty. Or logic. Or mystery. Or reality. Expect nothing else.
Upright BiPed
April 3, 2014 at 07:11 AM PDT
AVS couldn't outline how to get the first living cell via blind processes if its life depended on it.
Joe
April 3, 2014 at 05:52 AM PDT
AVS at 102:
I’m not sure why a certain amount or type of energy would be required for anything I have said.
Not all systems require the same source of energy. But say your hypothetical example needs just the sun as an energy source. The problem is that too much sun can lead to destruction. Homeostasis is a concern. Also, the issue of homochirality and its relationship to energy needs to be considered.
seventrees
April 3, 2014 at 02:43 AM PDT