
A Designed Object’s Entropy Must Increase for Its Design Complexity to Increase – Part 1

The common belief is that adding disorder to a designed object will destroy the design (like a tornado passing through a city, to paraphrase Hoyle). Now if increasing entropy implies increasing disorder, creationists will often reason that “increasing entropy of an object will tend to destroy its design”. This essay will argue mathematically that this popular notion among creationists is wrong.

The correct conception of these matters is far more nuanced and almost the opposite of (but not quite) what many creationists and IDists believe. Here is the more correct view of entropy’s relation to design (be it man-made or otherwise):

1. increasing entropy can increase the capacity for disorder, but it doesn’t necessitate disorder

2. increasing an object’s capacity for disorder doesn’t imply that the object will immediately become more disordered

3. increasing entropy in a physical object is a necessary (but not sufficient) condition for increasing the complexity of the design

4. contrary to popular belief, a complex design is a high entropy design, not a low entropy design. The complex organization of a complex design is made possible (and simultaneously improbable) by the high entropy the object contains.

5. without entropy there is no design

If there is one key point it is: Entropy makes design possible but simultaneously improbable. And that is the nuance that many on both sides of the ID/Creation/Evolution controversy seem to miss.

The notion of entropy is foundational to physics, engineering, information theory and ID. These essays are written to provide a discussion on the topic of entropy and its relationship to other concepts such as uncertainty, probability, microstates, and disorder. Much of what is said will go against popular understanding, but the aim is to make these topics clearer. Some of the math will be in a substantially simplified form, so apologies in advance to the formalists out there.

Entropy may refer to:

1. Thermodynamic (Statistical Mechanics) entropy – measured in Joules/Kelvin, dimensionless units, degrees of freedom, or (if need be) bits

2. Shannon entropy – measured in bits or dimensionless units

3. Algorithmic entropy or Kolmogorov complexity – also measured in bits, but dealing with the compactness of a representation. A file that can be compressed substantially has low algorithmic entropy, whereas a file that can’t be compressed has high algorithmic entropy (Kolmogorov complexity). Both Shannon entropy and algorithmic entropy lie within the realm of information theory, but by default, unless otherwise stated, most people take Shannon entropy to be the entropy of information theory.

4. disorder in the popular sense – no real units assigned, often not precise enough to be of scientific or engineering use. Unfortunately, the word “disorder” is used even in university science books; I will argue mathematically that it is a misleading way to conceptualize entropy.

The reason the word entropy is used in the disciplines of Thermodynamics, Statistical Mechanics and Information Theory is that there are strong mathematical analogies. The evolution of the notion of entropy began with Clausius who also coined the term for thermodynamics, then Boltzmann and Gibbs related Clausius’s notions of entropy to Newtonian (Classical) Mechanics, then Shannon took Boltzmann’s math and adapted it to information theory, and then Landauer brought things back full circle by tying thermodynamics to information theory.

How entropy became equated with disorder, I do not know, but the purpose of these essays is to walk through actual calculations of entropy and allow the reader to decide for himself whether disorder can be equated with entropy. My personal view is that Shannon entropy and Thermodynamic entropy cannot be equated with disorder, even though the lesser-known algorithmic entropy can. So in general entropy should not be equated with disorder. Further, the problem of organization (which goes beyond simple notions of order and entropy) needs a little more exploration. Organization sort of stands out as a quality that seems difficult to assign numbers to.

The calculations that follow illustrate how I arrived at some of my conclusions.

First I begin with calculating Shannon entropy for simple cases. Thermodynamic entropy will be covered in Part II.

Bill Dembski actually alludes to Shannon entropy in his latest offering, Conservation of Information Made Simple:

In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy).

William Dembski

To elaborate on what Bill said, if we have a fair coin, it can exist in two microstates: heads (call it microstate 1) or tails (call it microstate 2).

After a coin flip, the probability of the coin emerging in microstate 1 (heads) is 1/2. Similarly the probability of the coin emerging in microstate 2 (tails) is 1/2. So let me tediously summarize the facts:

Ω = Number of microstates of a 1-coin system = 2

x1 = microstate 1 = heads
x2 = microstate 2 = tails

P(x1) = P(microstate 1) = P(heads) = probability of heads = 1/2
P(x2) = P(microstate 2) = P(tails) = probability of tails = 1/2

Here is the process for calculating the Shannon Entropy of a 1-coin information system starting with Shannon’s famous formula:

$\large I=-\sum_{i=1}^n {p({x}_{i})\log_{2}p(x_{i})}$

$\large =-p({x}_{1})\log_{2}p(x_{1})-p({x}_{2})\log_{2}p(x_{2})$

$\large =-p(\text{heads})\log_{2}p(\text{heads})-p(\text{tails})\log_{2}p(\text{tails})$

$\large =-(\frac{1}{2})\log_{2}(\frac{1}{2})-(\frac{1}{2})\log_{2}(\frac{1}{2})$

$\large =\frac{1}{2}+\frac{1}{2}= 1 = 1 \text { bit}$

where I is the Shannon entropy (or measure of information).

This method seems a rather torturous way to calculate the Shannon entropy of a single coin. A simpler method exists when each microstate of the coin (heads or tails) is equiprobable, as the fundamental postulate of statistical mechanics assumes; in that case the number of bits is simply the logarithm of the number of microstates, just as in statistical mechanics.

$\large I=-\sum_{i=1}^n {p({x}_{i})\log_{2}p(x_{i})}$

$\large =\log_{2}\Omega =\log_{2}(2)=1=1 \text{ bit}$
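Both routes can be checked with a few lines of Python (a minimal sketch; the function name is my own):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits: I = -sum of p(x_i) * log2 p(x_i)."""
    return -sum(p * log2(p) for p in probs if p > 0)

# 1-coin system: two equiprobable microstates (heads, tails)
I_sum = shannon_entropy([0.5, 0.5])   # the torturous summation
I_log = log2(2)                       # shortcut: log2 of the microstate count
print(I_sum, I_log)                   # both give 1.0 bit
```

The two agree precisely because the microstates are equiprobable; for unequal probabilities only the summation form applies.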

Now compare this equation of the Shannon entropy in information theory

$\large I=\log_{2}\Omega$

to Boltzmann entropy from statistical mechanics and thermodynamics

$\large S=k_{b}\ln\Omega$

and the parallel is even more apparent in units where kb = 1

$\large S=\ln\Omega$

The similarities are not an accident. Shannon’s ideas of information theory are a descendant of Boltzmann’s ideas from statistical mechanics and thermodynamics.
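In fact the connection is just a change of logarithm base plus a constant: dividing Boltzmann’s entropy by $k_{b}\ln 2$ converts it to bits.

$\large S=k_{b}\ln\Omega=(k_{b}\ln 2)\log_{2}\Omega=(k_{b}\ln 2)\,I$

So one bit of Shannon entropy corresponds to $k_{b}\ln 2 \approx 9.57\times10^{-24}$ Joules/Kelvin.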

To explore Shannon entropy further, let us suppose we have a system of 3 distinct coins. The Shannon entropy gives the amount of information that will be gained by observing the collective state (microstate) of the 3 coins.

First we have to compute the number of microstates or ways the system of coins can be configured. I will lay them out specifically.

microstate 1 = H H H
microstate 2 = H H T
microstate 3 = H T H
microstate 4 = H T T
microstate 5 = T H H
microstate 6 = T H T
microstate 7 = T T H
microstate 8 = T T T

Ω = Number of microstates of a 3-coin system = 8

So there are 8 microstates or outcomes the system can realize. The Shannon entropy can be calculated in the torturous way:
$\small I=-\sum_{i=1}^n {p({x}_{i})\log_{2}p(x_{i})}$

$=-p(\text{hhh})\log_{2}p(\text{hhh}) -p(\text{hht})\log_{2}p(\text{hht}) -p(\text{hth})\log_{2}p(\text{hth}) -p(\text{htt})\log_{2}p(\text{htt}) -p(\text{thh})\log_{2}p(\text{thh}) -p(\text{tht})\log_{2}p(\text{tht}) -p(\text{tth})\log_{2}p(\text{tth}) -p(\text{ttt})\log_{2}p(\text{ttt})$

$= -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8}) -\frac{1}{8}\log_{2}(\frac{1}{8})$

$=3=3 \text{ bits}$

or simply taking the logarithm of the number of microstates:

$I=\log_{2}\Omega =\log_{2}8=3=3 \text{ bits}$

It can be shown that the Shannon entropy of a system of N distinct coins is N bits. That is, a system with 1 coin has 1 bit of Shannon entropy, a system with 2 coins has 2 bits of Shannon entropy, a system of 3 coins has 3 bits of Shannon entropy, etc.
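The N-coin claim can be verified by brute force, enumerating all 2^N microstates (a sketch under the fair-coin assumption; the function name is my own):

```python
from itertools import product
from math import log2

def coin_system_entropy(n_coins):
    """Enumerate every microstate of n fair coins and sum -p * log2(p)."""
    microstates = list(product("HT", repeat=n_coins))  # 2**n configurations
    p = 1 / len(microstates)                           # equiprobable microstates
    return -sum(p * log2(p) for _ in microstates)

for n in (1, 2, 3):
    print(n, coin_system_entropy(n))   # 1.0, 2.0, 3.0 bits
```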

Notice, the more microstates there are, the more uncertainty there is about which microstate the system will be found in. Equivalently, the more microstates there are, the more improbable any given microstate becomes. Hence entropy is sometimes described in terms of improbability, uncertainty, or unpredictability. But we must be careful here: uncertainty is not the same thing as disorder. That is a subtle but important distinction.

So what is the Shannon Entropy of a system of 500 distinct coins? Answer: 500 bits, which corresponds to Dembski’s Universal Probability Bound.

By way of extension, if we wanted to build an operating system like Windows-7, which requires gigabits of storage, the computer memory would have to hold gigabits of Shannon entropy. This illustrates the principle that more complex designs require larger Shannon entropy to support the design. It cannot be otherwise. Design requires the presence of entropy, not the absence of it.

Suppose we found that a system of 500 coins were all heads, what is the Shannon entropy of this 500-coin system? Answer: 500 bits. No matter what configuration the system is in, whether ordered (like all heads) or disordered, the Shannon entropy remains the same.

Now suppose a small tornado went through the room where the 500 coins resided (with all heads before the tornado), what is the Shannon entropy after the tornado? Same as before, 500-bits! What may arguably change is the algorithmic entropy (Kolmogorov complexity). The algorithmic entropy may go up, which simply means we can’t represent the configuration of the coins in a compact sort of way like saying “all heads” or in the Kleene notation as H*.

Amusingly, if in the aftermath of the tornado’s rampage the room got cooler, the thermodynamic entropy of the coins would actually go down! Hence the order or disorder of the coins is independent not only of the Shannon entropy but also of the thermodynamic entropy.

Let me summarize the before and after of the tornado going through the room with the 500 coins:

BEFORE : 500 coins all heads, Temperature 80 degrees
Shannon Entropy : 500 bits
Algorithmic Entropy (Kolmogorov complexity): low
Thermodynamic Entropy : some finite starting value

AFTER : 500 coins disordered
Shannon Entropy : 500 bits
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy : lower if the temperature is lower, higher if the temperature is higher
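The before/after can be sketched numerically, using compressed size as a crude stand-in for algorithmic entropy (Kolmogorov complexity is uncomputable in general; zlib, the seed, and the fair-coin tornado are all illustrative assumptions of mine):

```python
import random
import zlib

random.seed(0)                                 # reproducible "tornado"
before = "H" * 500                             # 500 coins, all heads
after = "".join(random.choice("HT") for _ in range(500))  # post-tornado mess

# Compressed size as a rough proxy for algorithmic entropy
k_before = len(zlib.compress(before.encode()))
k_after = len(zlib.compress(after.encode()))
print(k_before < k_after)   # True: "all heads" compresses far better

# The Shannon entropy of the fair-coin model is 500 bits in both cases;
# only the compressibility (algorithmic entropy) of the configuration changed.
```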

Now to help disentangle concepts a little further, consider three computer files:

File_A : 1 gigabit of binary numbers randomly generated
File_B : 1 gigabit of all 1's
File_C : 1 gigabit encrypted JPEG

Here are the characteristics of each file:

File_A : 1 gigabit of binary numbers randomly generated
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly disorganized
inference : not designed

File_B : 1 gigabit of all 1's
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : designed (with qualification, see note below)

File_C : 1 gigabit encrypted JPEG
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : extremely designed

Notice, one cannot ascribe high levels of improbable design based on the Shannon entropy or algorithmic entropy without some qualification. Existence of improbable design depends on the existence of high Shannon entropy, but is somewhat independent of algorithmic entropy. Further, to my knowledge, there is not really a metric for organization that is separate from Kolmogorov complexity, but this definition needs a little more exploration and is beyond my knowledge base.

Only in rare cases will high Shannon entropy and low algorithmic entropy (Kolmogorov complexity) result in a design inference. One such example is 500 coins all heads. The general method to infer design (including man-made designs) is to show that the object:

1. has High Shannon Entropy (high improbability)
2. conforms to an independent (non-postdictive) specification

In contrast to the design of coins being all heads where the Shannon entropy is high but the algorithmic entropy is low, in cases like software or encrypted JPEG files, the design exists in an object that has both high Shannon entropy and high algorithmic entropy. Hence, the issues of entropy are surely nuanced, but on balance entropy is good for design, not always bad for it. In fact, if an object evidences low Shannon entropy, we will not be able to infer design reliably.

The reader might be disturbed at my final conclusion in as much as it grates against popular notions of entropy and creationist notions of entropy. But well, I’m no stranger to this controversy. I explored Shannon entropy in this thread because it is conceptually easier than its ancestor concept of thermodynamic entropy.

In Part II (which will take a long time to write) I’ll explore thermodynamic entropy and its relationship (or lack thereof) to intelligent design. But in brief, a parallel situation often arises: the more complex a design, the higher its thermodynamic entropy. Why? The simple reason is that more complex designs involve more parts (molecules), and more molecules in general imply higher thermodynamic (as well as Shannon) entropy. So the question of Earth being an open system is a bit beside the point, since entropy is essential for intelligent designs to exist in the first place.

[UPDATE: the sequel to this thread is in Part 2]

Acknowledgements (both supporters and critics):

1. Elizabeth Liddle for hosting my discussions on the 2nd Law at TheSkepticalZone

2. physicist Olegt who offered generous amounts of time in plugging the holes in my knowledge, particularly regarding the Liouville Theorem and Configurational Entropy

3. retired physicist Mike Elzinga for his pedagogical examples and historic anecdotes. HT: the relationship of more weight to more entropy

4. An un-named theoretical physicist who spent many hours teaching his students the principles of Statistical Mechanics and Thermodynamics

5. physicists Andy Jones and Rob Sheldon

6. Neil Rickert for helping me with Latex

7. Several others that have gone unnamed

NOTE:
[UPDATE and correction: gpuccio was kind enough to point out that in the case of File_B, the design inference isn't necessarily warranted. It's possible an accident or programming error or some other reason could make all the bits 1. It would only be designed if that was the designer's intention.]

[UPDATE 9/7/2012]
Boltzmann

“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two of more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.” (Final paragraph of #87, p. 443.)

That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?

Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”

There is no basis in physical science for interpreting entropy change as involving order and disorder.

62 Responses to A Designed Object’s Entropy Must Increase for Its Design Complexity to Increase – Part 1

1. but on balance entropy is good for design, not always bad for it.

One problem, as with neo-Darwinists, you don’t have any physical example of ‘not always bad’. i.e. you have not one molecular machine or one functional protein coming about by purely material processes. But the IDists and creationists have countless examples of purely material processes degrading as such.

2. SC:

S = k*log W, per Boltzmann

where W is the number of ways that mass and/or energy at ultra-microscopic level may be arranged, consistent with a given Macroscopic [lab-level observable] state.

That constraint is crucial and brings out a key subtlety in the challenge to create functionally specific organisation on complex [multi-part] systems through forces of blind chance and mechanical necessity.

FSCO/I is generally deeply isolated in the space of raw configurational possibilities, and is not normally created by nature working freely. Nature, working freely, on the gamut of our solar system or of the observed cosmos, will blindly sample the space from some plausible, typically arbitrary initial condition, and thereafter it will undergo a partly blind random walk, and there may be mechanical dynamics at work that will impress a certain orderly motion, or the like.

(Think about molecules in a large parcel of air participating in wind and weather systems. The temperature is a metric of avg random energy per degree of freedom of relevant particles, usually translational, rotational and vibrational. At the same time, the body of air as a whole is drifting along in the wind that may reflect planetary scale convection.)

Passing on to Shannon’s entropy in the information context (and noting Jaynes et al on the informational view of thermodynamics that I do not see adequately reflected in your remarks above — there are schools of thought here, cf. my note here on), what Shannon was capturing is average info per symbol transmitted in the case of non equiprobable symbols; the normal state of codes. This turns out to link to the Gibbs formulation of entropy you cite. And, I strongly suggest you look at Harry S Robertson’s Statistical Thermophysics Ch 1 (Prentice) to see what it seems from appearances that your interlocutors have not been telling you. That is, there is a vigorous second school of thought within physics on stat thermo-d, that bridges to Shannon’s info theory.

Wikipedia bears witness to the impact of this school of thought:

At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).

So, when we see the value of H in terms of uncommunicated micro- level information based on lab observable state, we see that entropy, traditionally understood per stat mech [degrees of micro-level freedom], is measuring the macro-micro info-gap [MmIG], NOT the info we have in hand per macro-observation.

The subtlety this leads to is that when we see a living unicellular species of type x, providing we know the genome, through lab level observability, we know a lot about the specific molecular states from a lab level observation. The MmIG is a lot smaller, as there is a sharp constraint on possible molecular level configs, once we have a living organism in hand. When it dies, the active informationally directed maintenance of such ceases, and spontaneous changes take over. The highly empirically reliable result is well known: decay and breakdown to simpler component molecules.

We also know that in the period of historic observation and record — back to the days of early microscopy 350 years back, this is passed on from generation to generation by algorithmic processes. Such a system is in a programmed, highly constrained state governed by gated encapsulation, metabolic automata that manage an organised flow-through of energy and materials [much of this in the form of assembled smart polymers such as proteins] backed up by a von Neumann self-replicator [vNSR].

We can also infer on this pattern right back to the origins of cell based life, on the relevant macro-traces of such life.

So, how do we transition from Darwin’s warm pond with salts [or the equivalent] state, to the living cell state?

The dominant OOL school, under the methodological naturalism imposition, poses a claimed chem evo process of spontaneous cumulative change. This runs right into the problem of accessing deeply isolated configs spontaneously.

For, sampling theory and common sense alike tell us that pond state — due to the overwhelming bulk of configs and some very adverse chemical reaction equilibria overcome in living systems by gating, encapsulation and internal functional organisation that uses coded data and a steady flow of ATP energy battery molecules to drive algorithmic processes — will be dominant over spontaneous emergence at organised cell states (or any reasonable intermediates).

There is but one empirically confirmed means of getting to FSCO/I, namely design.

In short, on evidence, the info-gap between pond state and cell state, per the value of FSCO/I as sign, is best explained as being bridged by design that feeds in the missing info and through intelligently directed organising work [IDOW] creates in this case a self replicating micro-level molecular nanotech factory. That self replication also uses an information and organisation-rich vNSR, and allows a domination of the situation by a new order of entity, the living cell.

So, it is vital for us to understand at the outset of discussion that the entropy in a thermodynamic system is a metric of missing information on the microstate, given the number of microstate possibilities consistent with the macro-observable state. That is, entropy measures the MmIG.

Where also, the living cell is in a macro-observable state that initially and from generation to generation [via vNSR in algorithmically controlled action on coded information], locks down the number of possible states drastically relative to pond state. The debate on OOL, then is about whether it is a credible argument on observed evidence in the here and now, for pond state, via nature operating freely and without IDOW, to go to cell-state. (We know that IDOW routinely creates FSCO/I, a dominant characteristic of living cells.)

A common argument is that raw injection of energy suffices to bridge the info-gap without IDOW, as the energy flow and materials flows allow escape from “entropy increases in isolated systems.” What advocates of this do not usually disclose, is that raw injection of energy tends to go to heat, i.e. to dramatic rise in the number of possible configs, given the combinational possibilities of so many lumps of energy dispersed across so many mass-particles. That is, MmIG will strongly tend to RISE on heating. Where also, for instance, spontaneously ordered systems like hurricanes are not based on FSCO/I, but instead on the mechanical necessities of Coriolis forces acting on large masses of air moving under convection on a rotating spherical body.

(Cf my discussion here on, remember, I came to design theory by way of examination of thermodynamics-linked issues. We need to understand and visualise step by step what is going on behind the curtain of serried ranks of algebraic, symbolic expressions and forays into calculus and partial differential equations etc. Otherwise, we are liable to miss the forest for the trees. Or, the old Wizard of Oz can lead us astray.)

A good picture of the challenge was posed by Shapiro in Sci AM, in challenging the dominant genes first school of thought, in words that also apply to his own metabolism first thinking:

RNA’s building blocks, nucleotides, are complex substances as organic molecules go. They each contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern. Many alternative ways exist for making those connections, yielding thousands of plausible nucleotides that could readily join in place of the standard ones but that are not represented in RNA. That number is itself dwarfed by the hundreds of thousands to millions of stable organic molecules of similar size that are not nucleotides [--> and he goes on, with the issue of assembling component monomers into functional polymers and organising them into working structures lurking in the background] . . . .

[--> Then, he flourishes, on the notion of getting organisation without IDOW, merely on opening up the system:] The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.

Orgel’s reply, in a posthumous paper, is equally revealing on the escape-from-IDOW problem:

If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . Could a nonenzymatic “metabolic cycle” have made such compounds available in sufficient purity to facilitate the appearance of a replicating informational polymer?

It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield. Each proposed metabolic cycle, therefore, must be evaluated in terms of the efficiencies and specificities that would be required of its hypothetical catalysts in order for the cycle to persist. Then arguments based on experimental evidence or chemical plausibility can be used to assess the likelihood that a family of catalysts that is adequate for maintaining the cycle could have existed on the primitive Earth . . . .

Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [6]? The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . .

The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.

So, we have to pull back the curtain and make sure we first understand that the sense in which entropy is linked to information in a thermodynamics context is that we are measuring missing info on the micro-state given the macro-state. So, we should not allow the similarity of mathematics to lead us to think that IDOW is irrelevant to OOL, once a system is opened up to energy and mass flows.

In fact, given the delicacy and unfavourable kinetics and equilibria involved — notice all those catalysing enzymes and ATP energy battery molecules in life? — the challenge of IDOW is the elephant standing in the middle of the room that ever so many are desperate not to speak about.

KF

3.
gpuccio

Sal:

Great post!

a) Shannon entropy is the basis for what we usually call the “complexity” of a digital string.

b) Regarding the example in:

File_B : 1 gigabit of all 1's
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Organizational characteristics: highly organized
inference : designed

I would say that the inference of design is not necessarily warranted. According to the explanatory filter, in the presence of this kind of compressible order we must first ascertain that no deterministic effect is the cause of the apparent order. IOWs, many simple deterministic causes could explain a series of 1s, however long. Obviously, such a scenario would imply that the system that generates the string is not random, or that the probabilities of 0 and 1 are extremely different. I agree that, if we have assurance that the system is really random and the probabilities are as described, then a long series of 1s allows the design inference.

c) A truly pseudo-random string, which has no formal evidence of order (no compressibility), like the jpeg file, but still conveys very specific information, is certainly the best scenario for design inference. Indeed, as far as I know, no deterministic system can explain the emergence of that kind of object.
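As an aside, the contrast drawn in (b) and (c) between raw Shannon capacity and algorithmic compressibility is easy to demonstrate. Here is a minimal Python sketch, using DEFLATE compression (zlib) as a rough, computable stand-in for Kolmogorov complexity, which is uncomputable in general:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Bytes after DEFLATE compression: a crude, computable proxy
    for algorithmic (Kolmogorov) complexity."""
    return len(zlib.compress(data, 9))

all_ones = b"\xff" * 125_000       # ~1 megabit of all 1s
random_bits = os.urandom(125_000)  # ~1 megabit of random bits

# Both files have the same raw Shannon capacity (1,000,000 bits),
# but the ordered file compresses to almost nothing, while the
# random file is essentially incompressible.
print(compressed_size(all_ones))    # a few hundred bytes at most
print(compressed_size(random_bits)) # close to the original 125,000 bytes
```

The all-1s string is high in capacity but low in algorithmic entropy, which is exactly why the explanatory filter must first rule out simple deterministic causes for it.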

d) Regarding the problem of specification, I paste here what I posted yesterday in another thread, as I believe it is pertinent to the discussion here:

“I suppose much confusion derives from Shannon’s theory, which is not, and never has been, a theory about information, but is often considered as such.

Contemporary thought, in the full splendor of its dogmatic reductionism, has done its best to ignore the obvious connection between information and meaning. Everybody talks about information, but meaning is quite a forbidden word. As if the two things could be separated!

I have discussed for days here with darwinists just trying to have them admit that such a thing as “function” does exist. Another forbidden word.

And even IDists are often afraid to admit that meaning and function cannot even be defined if we do not refer to a conscious being. I have challenged everybody I know to give a definition, any definition, of meaning, function and intent without referring to conscious experience. How strange: the same concepts on which all our life, and I would say also all our science and knowledge, are based have become forbidden in modern thought. And consciousness itself, what we are, the final medium that cognizes everything, can scarcely be mentioned, if not to affirm that it is an unscientific concept, or even better a concept completely reducible to non-conscious aggregations of things (!!!).

The simple truth is: there is no cognition, no science, no knowledge, without the fundamental intuition of meaning. And that intuition is a conscious event, and nothing else.

There is no understanding of meaning in stones, rivers or computers. Only in conscious beings. And information is only a way to transfer meaning from one conscious being to another, through material systems that carry the meaning but have no understanding of it.

That’s what Shannon considered: what is necessary to transfer information through a material system. In that context, meaning is not relevant, because what we are measuring is only a law of transmission.

The same is true in part for ID. The measure of complexity is a Shannon measure, it has nothing to do with meaning. A random string can be as complex as a meaningful string.

But the concept of specification does relate to meaning, in one of its many aspects, for instance as function. The beautiful simplicity of ID theory is that it measures the complexity necessary to convey a specific meaning. That is simple and beautiful, because it connects the quantitative concept of Shannon complexity to the qualitative aspect of meaning and function.”

4. F/N: I have put the above comment up with a diagram here.

5. F/N 2: We should bear in mind that information arises when we move from an a priori state to an a posteriori one where with significant assurance we are in a state that is to some degree or other surprising. Let me clip my always linked note, here on:

let us now consider in a little more detail a situation where an apparent message is received. What does that mean? What does it imply about the origin of the message . . . or, is it just noise that “got lucky”?

If an apparent message is received, it means that something is working as an intelligible — i.e. functional — signal for the receiver. In effect, there is a standard way to make and send and recognise and use messages in some observable entity [e.g. a radio, a computer network, etc.], and there is now also some observed event, some variation in a physical parameter, that corresponds to it. [For instance, on this web page as displayed on your monitor, we have a pattern of dots of light and dark and colours on a computer screen, which correspond, more or less, to those of text in English.]

Information theory, as Fig A.1 illustrates, then observes that if we have a receiver, we credibly have first had a transmitter, and a channel through which the apparent message has come; a meaningful message that corresponds to certain codes or standard patterns of communication and/or intelligent action. [Here, for instance, through HTTP and TCP/IP, the original text for this web page has been passed from the server on which it is stored, across the Internet, to your machine, as a pattern of binary digits in packets. Your computer then received the bits through its modem, decoded the digits, and proceeded to display the resulting text on your screen as a complex, functional coded pattern of dots of light and colour. At each stage, integrated, goal-directed intelligent action is deeply involved, deriving from intelligent agents -- engineers and computer programmers. We here consider of course digital signals, but in principle anything can be reduced to such signals, so this does not affect the generality of our thoughts.]

Now, it is of course entirely possible, that the apparent message is “nothing but” a lucky burst of noise that somehow got through the Internet and reached your machine. That is, it is logically and physically possible [i.e. neither logic nor physics forbids it!] that every apparent message you have ever got across the Internet — including not just web pages but also even emails you have received — is nothing but chance and luck: there is no intelligent source that actually sent such a message as you have received; all is just lucky noise:

“LUCKY NOISE” SCENARIO: Imagine a world in which somehow all the “real” messages sent “actually” vanish into cyberspace and “lucky noise” rooted in the random behaviour of molecules etc, somehow substitutes just the messages that were intended — of course, including whenever engineers or technicians use test equipment to debug telecommunication and computer systems! Can you find a law of logic or physics that: [a] strictly forbids such a state of affairs from possibly existing; and, [b] allows you to strictly distinguish that from the “observed world” in which we think we live? That is, we are back to a Russell “five-minute-old-universe”-type paradox. Namely, we cannot empirically distinguish the world we think we live in from one that was instantly created five minutes ago with all the artifacts, food in our tummies, memories etc. that we experience. We solve such paradoxes by worldview level inference to best explanation, i.e. by insisting that unless there is overwhelming, direct evidence that leads us to that conclusion, we do not live in Plato’s Cave of deceptive shadows that we only imagine is reality, or that we are “really” just brains in vats stimulated by some mad scientist, or that we live in a Matrix-style world, or the like. (In turn, we can therefore see just how deeply embedded key faith-commitments are in our very rationality, thus all worldviews and reason-based enterprises, including science. Or, rephrasing for clarity: “faith” and “reason” are not opposites; rather, they are inextricably intertwined in the faith-points that lie at the core of all worldviews. Thus, resorting to selective hyperskepticism and objectionism to dismiss another’s faith-point [as noted above!] is at best self-referentially inconsistent; sometimes, even hypocritical and/or — worse yet — willfully deceitful. Instead, we should carefully work through the comparative difficulties across live options at worldview level, especially in discussing matters of fact.
And it is in that context of humble self consistency and critically aware, charitable open-mindedness that we can now reasonably proceed with this discussion.)

In short, none of us actually lives or can consistently live as though s/he seriously believes that: absent absolute proof to the contrary, we must believe that all is noise. [To see the force of this, consider an example posed by Richard Taylor. You are sitting in a railway carriage and seeing stones you believe to have been randomly arranged, spelling out: "WELCOME TO WALES." Would you believe the apparent message? Why or why not?]

Q: Why then do we believe in intelligent sources behind the web pages and email messages that we receive, etc., since we cannot ultimately absolutely prove that such is the case?

ANS: Because we believe the odds of such “lucky noise” happening by chance are so small, that we intuitively simply ignore it. That is, we all recognise that if an apparent message is contingent [it did not have to be as it is, or even to be at all], is functional within the context of communication, and is sufficiently complex that it is highly unlikely to have happened by chance, then it is much better to accept the explanation that it is what it appears to be — a message originating in an intelligent [though perhaps not wise!] source — than to revert to “chance” as the default assumption. Technically, we compare how close the received signal is to legitimate messages, and then decide that it is likely to be the “closest” such message. (All of this can be quantified, but this intuitive level discussion is enough for our purposes.)
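To put a rough number on the odds in question: if noise is modelled as uniform random bits, the chance of it reproducing a given N-bit message is 2^-N. A back-of-envelope Python sketch, using Taylor's "WELCOME TO WALES" example from above:

```python
from math import log10

# Probability that uniform random bits exactly reproduce a fixed
# N-bit message is 2^-N; even a short text makes this vanishingly small.
message = "WELCOME TO WALES"
n_bits = len(message.encode("ascii")) * 8  # 16 chars * 8 bits = 128 bits

log_p = -n_bits * log10(2)  # log10 of 2^-128
print(f"P(lucky noise) = 10^{log_p:.1f}")  # 10^-38.5
```

And 128 bits is a tiny message; the functional information at issue in cells runs to millions of bits, with correspondingly steeper odds.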

In short, we all intuitively and even routinely accept that: Functionally Specified, Complex Information, FSCI, is a signature of messages originating in intelligent sources.

Thus, if we then try to dismiss the study of such inferences to design as “unscientific,” when they may cut across our worldview preferences, we are plainly being grossly inconsistent.

Further to this, the common attempt to pre-empt the issue through the attempted secularist redefinition of science as in effect “what can be explained on the premise of evolutionary materialism – i.e. primordial matter-energy joined to cosmological- + chemical- + biological macro- + sociocultural- evolution, AKA ‘methodological naturalism’ ” [ISCID def'n: here] is itself yet another begging of the linked worldview level questions.

For in fact, the issue in the communication situation once an apparent message is in hand is: inference to (a) intelligent — as opposed to supernatural — agency [signal] vs. (b) chance-process [noise]. Moreover, at least since Cicero, we have recognised that the presence of functionally specified complexity in such an apparent message helps us make that decision. (Cf. also Meyer’s closely related discussion of the demarcation problem here.)

More broadly the decision faced once we see an apparent message, is first to decide its source across a trichotomy: (1) chance; (2) natural regularity rooted in mechanical necessity (or as Monod put it in his famous 1970 book, echoing Plato, simply: “necessity”); (3) intelligent agency. These are the three commonly observed causal forces/factors in our world of experience and observation. [Cf. abstract of a recent technical, peer-reviewed, scientific discussion here. Also, cf. Plato's remark in his The Laws, Bk X, excerpted below.]

Each of these forces stands at the same basic level as an explanation or cause, and so the proper question is to rule in/out relevant factors at work, not to decide before the fact that one or the other is not admissible as a “real” explanation.

This often confusing issue is best initially approached/understood through a concrete example . . .

A CASE STUDY ON CAUSAL FORCES/FACTORS — A Tumbling Die: Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.

But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!

This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious — as some are tempted to imagine or assert. [More details . . .] . . . .

The second major step is to refine our thoughts, through discussing the communication theory definition of information and its approach to measuring it. A good place to begin is with British communication theory expert F. R. Connor, who gives us an excellent “definition by discussion” of what information is:

From a human point of view the word ‘communication’ conveys the idea of one person talking or writing to another in words or messages . . . through the use of words derived from an alphabet [NB: he here means, a "vocabulary" of possible signals]. Not all words are used all the time and this implies that there is a minimum number which could enable communication to be possible. In order to communicate, it is necessary to transfer information to another person, or more objectively, between men or machines.

This naturally leads to the definition of the word ‘information’, and from a communication point of view it does not have its usual everyday meaning. Information is not what is actually in a message but what could constitute a message. The word could implies a statistical definition in that it involves some selection of the various possible messages. The important quantity is not the actual information content of the message but rather its possible information content.

This is the quantitative definition of information and so it is measured in terms of the number of selections that could be made. Hartley was the first to suggest a logarithmic unit . . . and this is given in terms of a message probability. [p. 79, Signals, Edward Arnold. 1972. Bold emphasis added. Apart from the justly classical status of Connor's series, his classic work dating from before the ID controversy arose is deliberately cited, to give us an indisputably objective benchmark.]

To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the “Shannon sense” – never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1, s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a “typical” long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M -> pj, and in the limit attains equality. We term pj the a priori — before the fact — probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori — after the fact — probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver:

I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1

This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that:

I total = Ii + Ij . . . Eqn 2

For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is:

I = log [1/pj] = – log pj . . . Eqn 3

This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so:

Itot = log [1/(pi*pj)] = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4

So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is – log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see “wueen” it is most likely to have been “queen.”)
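Eqns 3 and 4 are easy to check numerically; a short Python sketch:

```python
from math import log2

def surprisal(p: float) -> float:
    """Information in bits of a symbol with a priori probability p,
    for the noiseless case d_j = 1 (Eqn 3): I = -log2 p."""
    return -log2(p)

# Equiprobable binary symbols each carry exactly 1 bit
print(surprisal(0.5))  # 1.0

# Additivity (Eqn 4): information of independent symbols adds
pi, pj = 0.25, 0.5
assert abs(surprisal(pi * pj) - (surprisal(pi) + surprisal(pj))) < 1e-12

# Less probable symbols are more surprising, i.e. more informative
print(surprisal(0.1))  # ~3.32 bits
```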

Further to this, we may average the information per symbol in the communication system as follows (given in terms of -H to make the additive relationships clearer):

- H = p1 log p1 + p2 log p2 + . . . + pn log pn

or, H = – SUM [pi log pi] . . . Eqn 5

H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form . . . [--> previously discussed]
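Eqn 5 is equally simple to compute; a minimal Python sketch of H for a binary source:

```python
from math import log2

def shannon_entropy(probs) -> float:
    """Average information per symbol, H = -SUM p_i log2 p_i (Eqn 5),
    in bits per symbol; terms with p_i = 0 contribute nothing."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair binary source attains the maximum of 1 bit/symbol
print(shannon_entropy([0.5, 0.5]))  # 1.0

# A biased source carries less average information per symbol
print(shannon_entropy([0.9, 0.1]))  # ~0.469
```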

A baseline for discussion.

KF

6. It is interesting to note that in the building of better random number generators for computer programs, a better source of entropy is required:

Cryptographically secure pseudorandom number generator
Excerpt: From an information theoretic point of view, the amount of randomness, the entropy that can be generated is equal to the entropy provided by the system. But sometimes, in practical situations, more random numbers are needed than there is entropy available.
http://en.wikipedia.org/wiki/C....._generator
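The distinction the excerpt draws shows up directly in Python's standard library, where cryptographic-quality randomness is drawn from the operating system's entropy pool; a minimal sketch:

```python
import os
import secrets

# The OS maintains an entropy pool fed by hard-to-predict hardware
# event timings; both calls below ultimately draw from it.
seed = os.urandom(32)          # 256 bits of OS-supplied entropy
token = secrets.token_hex(16)  # CSPRNG output: 16 random bytes as hex

print(len(seed), len(token))   # 32 32
```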

And indeed we find:

Thermodynamics – 3.1 Entropy
Excerpt:
Entropy – A measure of the amount of randomness or disorder in a system.

And the maximum source of randomness in the universe is found to be:

Entropy of the Universe – Hugh Ross – May 2010
Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe’s entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated.
http://www.reasons.org/entropy-universe

Roger Penrose – How Special Was The Big Bang?
“But why was the big bang so precisely organized, whereas the big crunch (or the singularities in black holes) would be expected to be totally chaotic? It would appear that this question can be phrased in terms of the behaviour of the WEYL part of the space-time curvature at space-time singularities. What we appear to find is that there is a constraint WEYL = 0 (or something very like this) at initial space-time singularities-but not at final singularities-and this seems to be what confines the Creator’s choice to this very tiny region of phase space.”

There is also a very strong case to be made that the cosmological constant in General Relativity, the extremely finely tuned 1 in 10^120 expansion of space-time, drives, or is deeply connected to, entropy as measured by diffusion:

Big Rip
Excerpt: The Big Rip is a cosmological hypothesis first published in 2003, about the ultimate fate of the universe, in which the matter of universe, from stars and galaxies to atoms and subatomic particles, are progressively torn apart by the expansion of the universe at a certain time in the future. Theoretically, the scale factor of the universe becomes infinite at a finite time in the future.
http://en.wikipedia.org/wiki/Big_Rip

Thus, though neo-Darwinian atheists may claim that evolution is as well established as Gravity, the plain fact of the matter is that General Relativity itself, which is by far our best description of Gravity, testifies very strongly against the entire concept of ‘random’ Darwinian evolution.

Also of note: quantum mechanics, which is even stronger than General Relativity in terms of predictive power, has a very different ‘source for randomness’, which sets it diametrically opposed to the materialistic notion of randomness:

Can quantum theory be improved? – July 23, 2012
Excerpt: However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (conscious observation) parameters can be chosen independently (free choice, free will assumption) of the other parameters of the theory. . . . The experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power, even when the predictions are completely random.
http://phys.org/news/2012-07-quantum-theory.html

Needless to say, finding ‘free will conscious observation’ to be ‘built into’ quantum mechanics as a starting assumption, which is indeed the driving aspect of randomness in quantum mechanics, is VERY antithetical to the entire materialistic philosophy, which demands randomness as the driving force of creativity! Could these two different sources of randomness, in quantum mechanics and General Relativity, be one of the primary reasons for their failure to be unified?

Further notes: Boltzmann, as the following video alludes to,

BBC-Dangerous Knowledge

being a materialist, thought of randomness, entropy, as ‘unconstrained’, as would be expected of someone with a materialistic mindset. Yet Planck, a Christian theist, corrected that misconception of his:

The Austrian physicist Ludwig Boltzmann first linked entropy and probability in 1877. However, the equation as shown, involving a specific constant, was first written down by Max Planck, the father of quantum mechanics, in 1900. In his 1918 Nobel Prize lecture, Planck said: “This constant is often referred to as Boltzmann’s constant, although, to my knowledge, Boltzmann himself never introduced it – a peculiar state of affairs, which can be explained by the fact that Boltzmann, as appears from his occasional utterances, never gave thought to the possibility of carrying out an exact measurement of the constant. Nothing can better illustrate the positive and hectic pace of progress which the art of experimenters has made over the past twenty years, than the fact that since that time, not only one, but a great number of methods have been discovered for measuring the mass of a molecule with practically the same accuracy as that attained for a planet.”
http://www.daviddarling.info/e.....ation.html

Related notes:

“It from bit symbolizes the idea that every item of the physical world has at bottom – at a very deep bottom, in most instances – an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that things physical are information-theoretic in origin.”
John Archibald Wheeler

Zeilinger’s principle
Zeilinger’s principle states that any elementary system carries just one bit of information. This principle was put forward by Austrian physicist Anton Zeilinger in 1999 and subsequently developed by him to derive several aspects of quantum mechanics. Some have reasoned that this principle, in certain ways, links thermodynamics with information theory. [1]
http://www.eoht.info/page/Zeilinger%27s+principle

“Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…”
Tom Siegfried, Dallas Morning News, 5/14/90 – Quotes attributed to Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin in the article

In the beginning was the bit – New Scientist
Excerpt: Zeilinger’s principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron’s spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg’s uncertainty principle.

Is it possible to find the radius of an electron?
The honest answer would be, nobody knows yet. The current knowledge is that the electron seems to be a ‘point particle’ and has refused to show any signs of internal structure in all measurements. We have an upper limit on the radius of the electron, set by experiment, but that’s about it. By our current knowledge, it is an elementary particle with no internal structure, and thus no ‘size’.

7. F/N: Let’s do some boiling down, for summary discussion in light of the underlying matters above and in onward sources:

1: In communication situations, we are interested in information we have in hand, given certain identifiable signals (which may be digital or analogue, but can be treated as digital WLOG)

2: By contrast, in the thermodynamics situation, we are interested in the Macro-micro info gap [MmIG], i.e. the “missing info” on the ultra-microscopic state of a system, given the lab-observable state of the system.

3: In the former, the inference that we have a signal, not noise, is based on an implicit determination that noise is not credibly likely to be lucky enough to mimic the signal, given the scope of the space of possible configs, vs the scope of apparently intelligent signals.

4: So, we confidently and routinely make that inference to intelligent signal, not noise, on receiving an apparent signal of sufficient complexity, and indeed define a key information theory metric, the signal-to-noise power ratio, on the characteristic differences between the typical observable characteristics of signals and noise.

5: Thus, we routinely infer that signals involving FSCO/I are not improbable on intelligent action (intelligently directed organising work, IDOW), but that they are so maximally improbable on “lucky noise” that we assign what looks like a signal to real signal, and what looks like noise to noise, on a routine and uncontroversial basis.

6: In the context of spontaneous OOL etc, we are receiving a signal in the living cell, which is FSCO/I rich.

7: But because there is a dominant evo mat school of thought that assumes or infers that at OOL no intelligence existed or was available to direct organising work, it is presented as if it were essentially unquestionable knowledge that FSCO/I arose without IDOW.

8: In other words, despite never having observed FSCO/I arising in this way and despite the implications of the infinite monkeys/ needle in haystack type analysis, that such is essentially unobservable on the gamut of our solar system or the observed cosmos, this ideological inference is presented as if it were empirically well grounded knowledge.

9: This is unacceptable, for good reasons of avoiding question-begging.

10: By sharpest contrast, on the very same principles of inference to best current explanation of the past, in light of dynamics of cause and effect in the present that we can observe leaving characteristic signs (comparable to traces in deposits from the past or from remote reaches of space [astrophysics]), design theorists infer from the sign, FSCO/I, to its cause in the remote past being, per best explanation on empirical warranting grounds, design, or as I am specifying for this discussion: IDOW.

Let us see how this chain of reasoning is handled, here and elsewhere.

KF

8.
scordova

Sal:

Great post!

Thank you!

a) Shannon entropy is the basis for what we usually call the “complexity” of a digital string.

In Bill Dembski’s literature, yes. Some others will use a different metric for complexity, like algorithmic complexity. Phil Johnson and Stephen Meyer actually refer to algorithmic complexity if you read what they say carefully. In my previously less enlightened writings on the net I used algorithmic complexity.

The point is, this confusion needs a little bit of remedy. Rather than use the word “complexity”, it is easier to say which actual metric one is working from. CSI is really based on Shannon entropy, not algorithmic or thermodynamic entropy.

b) Regarding the example in:

File_B : 1 gigabit of all 1's
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Organizational characteristics: highly organized
inference : designed

I would say that the inference of design is not necessarily warranted.

Yes, thank you. I’ll have to revisit this example. It’s possible a programmer had the equivalent of stuck keys. I’ll update the post accordingly. That’s why I post stuff like this at UD, to help clean up my own thoughts.

9.
scordova

gpuccio,

In light of your very insightful criticism, I amended the OP as follows:

inference : designed (with qualification, see note below)

….
NOTE:
[UPDATE and correction: gpuccio was kind enough to point out that in the case of File_B, the design inference isn't necessarily warranted. It's possible an accident or programming error or some other reason could make all the bits 1. It would only be designed if that was the designer's intention.]

10.

Complete and utter nonsense. I assume you have absolutely no experience with the specification and development of new systems.

A baseball’s design is refined to eliminate every single ounce of weight or space that does not satisfy the requirements for a baseball.

An airliner’s design is refined to eliminate every single ounce of weight or space that does not satisfy the requirements for an airliner.

But the airliner is much more complex than the baseball and didn’t get that way by accident.

I assume that you assume that an entropic design is launched by its designers like a Mars probe but expected to change/evolve after launch (by increasing its entropy). But as far as we know, most biologic systems are remarkably stable in their designs (um, the oldest known bat fossils are practically identical to modern bats). In “The Edge of Evolution”, Behe in fact bases his argument against Evolution on the fact that there are measurably distinct levels of complexity in biologic systems, and that no known natural mechanism, most especially random degradation of the original design, will get you from a Level 2 system to a more complex Level 3 system.

11. 11
scordova

mahuna

I assume you have absolutely no experience with the specification and development of new systems.

Before becoming a financier I was an engineer. I have 3 undergraduate degrees, in electrical engineering, computer science, and mathematics, and a graduate engineering degree in applied physics. Of late I try to minimize mentioning it because there are so many things I don’t understand which I ought to with that level of academic exposure. I fumble through statistical mechanics and thermodynamics and even basic math. I have to solicit expertise on these matters, and I have to admit that I’m wrong many times or don’t know something, or misunderstand something — and willingness to admit mistakes or lack of understanding is a quality which I find lacking among many of my creationist brethren, and even worse among evolutionary biologists.

I worked on aerospace systems, digital telephony, unmanned aerial vehicles, air traffic control systems, security systems. I’ve written engineering specifications and carried them out. Thus

I assume you have absolutely no experience with the specification and development of new systems.

is utterly wrong and a fabrication of your own imagination.

Besides, my experience is irrelevant to this discussion. At issue are the ideas and calculations.

Do you have any comment on my calculations of Shannon entropy or the other entropy scores for the objects listed?

12. kf:

What advocates of this do not usually disclose, is that raw injection of energy tends to go to heat, i.e. to dramatic rise in the number of possible configs, given the combinational possibilities of so many lumps of energy dispersed across so many mass-particles. That is, MmIG will strongly tend to RISE on heating.

Interesting thought and worth considering. I think it is a useful point to bring up when addressing the “open system” red herring put forth by some OOL advocates, but at the end of the day it is really a rounding error on the awful probabilities that already exist. Thus, it probably makes sense to mention it in passing (“Adding energy without direction can actually make things worse.”) if someone is pushing the “just add energy” line of thought, but then keep the attention focused squarely on the heart of the matter.

13. Also, kf, the rejoinder by the “just add energy” advocate will be that the energy typically increases the reaction rate. Therefore, even if there are more states possible, the prebiotic soup can move through the states more quickly.

It is very difficult to analyze and compare the probabilities (the number of states and the increased reaction rates of various chemicals in the soup) and how they would be affected by adding energy. Perhaps impossible, without making all kinds of additional assumptions about the particular soup and the amount/type of energy, which assumptions would themselves be subject to debate.

Anyway, I think you make an interesting point. The more I think about it, however, the more I think it could lead to getting bogged down in the ‘add energy’ part of the discussion. Seems it might be better to stick with a strategy that forcefully states that the ‘add energy’ argument is a complete red herring and not honor the argument by getting into a discussion of whether adding energy would decrease or increase the already terrible odds with specific chemicals in specific situations.

Anyway, just thinking out loud here . . .

14. 14
scordova

Regarding the “Add Energy” argument. Set off a source equal in energy and power to an atomic bomb — the results are predictable in terms of the designs (or lack thereof) that will emerge in the aftermath.

That is an example where Entropy increases, but so does disorder.

The problem, as illustrated with the 500-coins, is that Shannon Entropy and Thermodynamic Entropy have some independence from the notions of disorder.

A designed system can have 500 bits of Shannon entropy, but so can an undesigned system. Having 500 bits of Shannon entropy says little (in and of itself) about whether something is designed. An independent specification is needed to identify a design; the entropy score is only a part.

We can have:

1. entropy rise and more disorder
2. entropy rise and more order
3. entropy rise and more disorganization
4. entropy rise and more organization
5. entropy rise and destroying design
6. entropy rise and creating design

We can’t make a general statement about what will happen to a design or a disordered system merely because the entropy rises. There are too many other variables to account for before we can say something useful.

15. EA:

When the equilibria are as unfavourable as they are, a faster reaction rate will favour breakdown, as is seen from how we refrigerate to preserve. In effect around room temp, activation processes double for every 8 K increase in temp.

And, the rate of state sampling used in the FSCI calc at 500 bits as revised is actually that for the fastest ionic reactions, not the slower rates appropriate to organic ones. For 1,000 bits, we are using Planck times which are faster than anything else physical. The limits are conservative.

KF
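The rule of thumb quoted here is easy to sketch numerically. This is a minimal illustration, not chemistry: the 8 K doubling interval is simply the figure from the comment, and the function name is my own.

```python
# Rule-of-thumb rate scaling quoted in the comment: activation-driven
# reaction rates roughly double for each 8 K rise near room temperature.
def rate_multiplier(delta_t_kelvin: float, doubling_interval: float = 8.0) -> float:
    return 2 ** (delta_t_kelvin / doubling_interval)

print(rate_multiplier(8))    # 2.0  -> one doubling
print(rate_multiplier(24))   # 8.0  -> three doublings
print(rate_multiplier(-16))  # 0.25 -> refrigeration slows breakdown fourfold
```

The point in the comment survives the sketch: the multiplier applies to breakdown reactions just as much as to constructive ones.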

16. F/N; Please note how I speak of a sampling theory result on a config space, which is independent of precise probability calculations; we have only a reasonable expectation to pick up the bulk of the distribution. Remember we are sampling on the order of one straw to a cubical hay bale 1,000 light years on the side, i.e. comparably thick to our Galaxy. KF

17. SC: Please note the Macro-micro info gap issue I have highlighted above. KF

18. OlegT helped you? Is this the same olegt that now quote-mines you for brownie points?

olegt’s quote-mine earns him 10 points (out of 10) on the low-integrity scale

19. 20
scordova

The fact that Oleg and Mike went beyond their natural dislike of creationists and were generous to teach me things is something I’m very appreciative of. I’m willing to endure their harsh comments about me because they have scientific knowledge that is worth learning and passing on to everyone.

20. OT:

Amazing — light filmed at 1,000,000,000,000 Frames/Second! – video (this is so fast that at 9:00 Minute mark of video the time dilation effect of relativity is caught on film)

21. 22
butifnot

Sal, something’s missing, don’t you think? Does it not ‘feel’ that when we get to thermo and information and design, there is *more*, which will not emerge from a basic rehash, which is where it looks like you’re at.

The bridge between thermo and ‘information’ is fascinating, but here is where it could become really interesting – [what if] actual information has material and non material components! Our accounting may, and may have to, meet this reality.

The difference in entropy of a ‘live’ brain and the same brain dead with a small .22 hole in it is said to be very small, but is it? Perhaps something is missing.

22. 23
scordova

Part two is now available:

Part II

23. 24
butifnot

Sal, the time is ripe for a bold new thermo-entropy synthesis! Practically the sum of human knowledge is available in an instant for free. A continuing and wider survey, far wide of materialists, is needed before this endeavor can (should) be launched to fruition.

Shannon’s concept of information is adequate to deal with the storage and transmission of data, but it fails when trying to understand the qualitative nature of information.

Theorem 3: Since Shannon’s definition of information relates exclusively to the statistical relationship of chains of symbols and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of symbols conveying a meaning.

In order to be able adequately to evaluate information and its processing in different systems, both animate and inanimate, we need to widen the concept of information considerably beyond the bounds of Shannon’s theory. Figure 4 illustrates how information can be represented as well as the five levels that are necessary for understanding its qualitative nature.
Level 1: statistics

Shannon’s information theory is well suited to an understanding of the statistical aspect of information. This theory makes it possible to give a quantitative description of those characteristics of languages that are based intrinsically on frequencies. However, whether a chain of symbols has a meaning is not taken into consideration. Also, the question of grammatical correctness is completely excluded at this level.

http://creation.com/informatio.....nd-biology

The distinction (good question) between data and information (and much else) must be addressed to get to thermo-design-info theory.

24. 25
scordova

Hi butifnot,

I don’t believe that evolutionists have proven their case.

There are fruitful ways to criticize OOL and Darwinism, I just think that creationists will hurt themselves using the 2nd Law and Entropy arguments (for the reasons outlined in these posts). They need to move on to arguments that are more solid.

What is persuasive to me are the cases of evolutionists leaving the Darwin camp or OOL camp:

Michael Denton
Jerry Fodor
Massimo Piattelli-Palmarini
Jack Trevors
Hubert Yockey
Richard Sternberg
Dean Kenyon
James Shapiro

etc.

Their arguments I find worthwhile. I don’t have any new theories to offer. Such an endeavor would be over my head anyway. I know too little to make much of a contribution to the debate beyond what you have seen at places like UD. Besides, blogs aren’t really for doing science, laboratories and libraries are better places for that. The internet is just for fun…

Sal

25. kf @16:

You make a good point about breakdown.

I’m just looking at the typical approach by abiogenesis proponents from a debating standpoint. I have rarely seen an abiogenesis proponent take careful stock of the many problems with their own preferred OOL scenario, including not only breakdown but also problems with interfering cross reactions, construction of polymers only on side chains, etc. The typical abiogenesis proponent, when willing to debate the topic, is almost wholly engrossed with the raw probabilistic resources — amount of matter in the universe, reaction rates, etc. Rarely do they consider the additional probabilistic hurdles that come with things like breakdown.

Indeed, one of the favorite debating tactics is to assert that because we don’t know all the probabilistic hurdles that need to be overcome, we can’t therefore draw any conclusion about the unlikelihood of abiogenesis taking place. Despite the obvious logical failure of such an argument, it is a favorite rhetorical tactic of, for example, Elizabeth Liddle. It is absurd, to say the least, but it underscores the mindset.

As a result, when we talk about increased energy, the only thing the abiogenesis proponent will generally allow into their head is the hopeful glimmer of faster reaction rates. That is all they are interested in — more opportunities for chance to do its magic. The other considerations — including things like interfering cross reactions and breakdown of nascent molecules — are typically shuffled aside or altogether forgotten. The unfortunate upshot is that pointing out problems with additional energy (like faster breakdown), typically, will fall on deaf ears.

That, coupled with the fact that any definitive answer on the point requires a detailed analysis of precisely which OOL scenario is being discussed, how dilute the solution is, what kind of environment is present, the operative temperature, the type of energy infused, etc., means that it is nearly impossible to convince the recalcitrant abiogenesis proponent that additional energy can in fact be worse. Thus, from a practical standpoint, we seem better off just focusing on the real issue — information — and noting that energy does nothing to help with that key aspect.

Anyway, way more than you wanted to hear. I’m glad you shared your thoughts on additional energy. I think you have something there worth considering, including a potential hurdle for the occasional abiogenesis proponent who is actually willing to think about things like breakdown.

26. Sal:

Besides, blogs aren’t really for doing science, laboratories and libraries are better places for that. The internet is just for fun…

Spoken like a true academic elitist!

27. 28
scordova

Trevors and Abel point out the necessity of Shannon entropy (uncertainty) to store information for life to replicate. Hence, they recognize that a sufficient amount of Shannon entropy is needed for life:

No natural mechanism of nature reducible to law can explain the high information content of genomes. This is a mathematical truism, not a matter subject to overturning by future empirical data. The cause-and-effect necessity described by natural law manifests a probability approaching 1.0. Shannon uncertainty is a probability function (-log2 p). When the probability of natural law events approaches 1.0, the Shannon uncertainty content becomes miniscule (-log2 p = -log2 1.0 = 0 uncertainty). There is simply not enough Shannon uncertainty in cause-and-effect determinism and its reductionistic laws to retain instructions for life.
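The Trevors and Abel point — that law-like (probability approaching 1.0) events carry no Shannon uncertainty — can be checked directly from the -log2 p formula. A minimal sketch; the function name is mine:

```python
import math

# Shannon self-information (surprisal): -log2(p), written as log2(1/p)
# so that p = 1 yields an exact, positive 0.0.
def surprisal_bits(p: float) -> float:
    return math.log2(1 / p)

print(surprisal_bits(1.0))    # 0.0 -> a certain, law-determined event is uninformative
print(surprisal_bits(0.5))    # 1.0 -> one fair coin toss
print(surprisal_bits(1 / 8))  # 3.0 -> three heads in a row
```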

28. 29
butifnot

Their arguments I find worthwhile. I don’t have any new theories to offer. Such an endeavor would be over my head anyway. I know too little to make much of a contribution to the debate beyond what you have seen at places like UD. Besides, blogs aren’t really for doing science, laboratories and libraries are better places for that. The internet is just for fun…

Sorry, I have to bring it down a notch. Just something that has been on my mind a long time

29. EA:

Notice, I consistently speak of sampling a distribution of possibilities in a config space, where the atomic resources of solar system or observed cosmos are such that only a very small fraction can be sampled. For 500 bits, we talk of a one straw size sample to a cubical haystack 1,000 LY on the side, about as thick as the galaxy.

With all but certainty, a blind, chance and necessity sample will be dominated by the bulk of the distribution. In short, it is maximally implausible that special zones will be sampled.

KF

PS: Have I been sufficiently clear in underscoring that in stat thermo-d the relevant info metric associated with entropy is a measure of the missing info to specify micro state given macro state?

30. 31
EndoplasmicMessenger

if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy

Surely you mean “if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain 32 bits or so of Shannon entropy”

31. 32
scordova

Surely you mean “if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain 32 bits or so of Shannon entropy”

Surely not.

32-bits (or 64 bits) refers to the number of bits available to address memory, not the actual amount of memory Windows-7 requires.

32 bits can address 2^32 bytes of memory or 4 gigabytes directly.

From the Windows website describing Vista (and the comment applies to other Windows operating systems)

One of the greatest advantages of using a 64-bit version of Windows Vista is the ability to access physical memory (RAM) that is above the 4-gigabyte (GB) range. This physical memory is not addressable by 32-bit versions of Windows Vista.

Windows x64 occupies about 16 gigabytes. A byte being 8 bits implies 16 gigabytes is 16*8 = 128 gigabits.

Thus the Shannon entropy required to represent windows-7 x64 is on the order of 128 gigabits.

Shannon entropy is the amount of information that can be represented, not the number of bits required to locate an address in memory.
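The arithmetic behind the distinction, using the round figures from the comment (the 16 GB footprint is the figure quoted above, not a measured value):

```python
# Address width and storage size are different quantities.
address_bits = 32
addressable_bytes = 2 ** address_bits            # 4,294,967,296 bytes
addressable_gib = addressable_bytes / 2 ** 30    # the 4-gigabyte 32-bit limit

# The installed OS footprint is a storage quantity, independent of address width.
windows_footprint_gib = 16                       # round figure from the comment
footprint_bits = windows_footprint_gib * 2 ** 30 * 8

print(addressable_gib)           # 4.0
print(footprint_bits / 2 ** 30)  # 128.0 -> on the order of 128 gigabits
```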

32. To elaborate on what Bill said, if we have a fair coin, it can exist in two microstates: heads (call it microstate 1) or tails (call it microstate 2).

I have to disagree with Bill. I have a coin in my pocket and it’s not in either the heads state or the tails state.

33. Entropy:

The notion of entropy is foundational to physics, engineering, information theory and ID. These essays are written to provide a discussion on the topic of entropy and its relationship to other concepts such as uncertainty, probability, microstates, and disorder. Much of what is said will go against popular understanding, but the aim is to make these topics clearer.

ok, so what is entropy?

First I begin with calculating Shannon entropy for simple cases.

ok, but first, what is “Shannon entropy”?

2. Shannon entropy – measured in bits or dimensionless units

Telling me it’s measured in bits doesn’t tell me what “it” is.

I is the Shannon entropy (or measure of information).

So “Shannon entropy” is a measure of information?

Hence, sometimes entropy is described in terms of improbability or uncertainty or unpredictability.

So Shannon entropy is a measure of what we don’t know? More like a measure of non-information?

34. 35
scordova

FROM MUNG:

No Sal, 500 pennies gets you 500 bits of copper plated zinc, not 500 bits of information (or Shannon entropy).

Contrast to Bill Dembski’s recent article:

FROM BILL DEBMSKI

In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy). This has the effect of transforming probabilities into bits and of allowing them to be added (like money) rather than multiplied (like probabilities). Thus, a probability of one-eighths, which corresponds to tossing three heads in a row with a fair coin, corresponds to three bits,

I just did a comparable calculation more elaborately, and you missed it. Instead of tossing a single coin 3 times, I had 3 coins tossed 1 time.

FROM WIKI

A single toss of a fair coin has an entropy of one bit. A series of two fair coin tosses has an entropy of two bits. The entropy rate for the coin is one bit per toss

I wrote the analogous situation, except instead of making multiple tosses of a single coin, I did the formula for single tosses of multiple coins. The Shannon entropy is analogous.

I wrote:

It can be shown that the Shannon entropy of a system of N distinct coins is equal to N bits. That is, a system with 1 coin has 1 bit of Shannon entropy, a system with 2 coins has 2 bits of Shannon entropy, a system of 3 coins has 3 bits of Shannon entropy, etc.
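The claim that N fair coins carry N bits can be verified numerically. A minimal sketch (the function names are mine):

```python
import math

def shannon_entropy_bits(probabilities):
    """H = -sum(p * log2 p) over a distribution, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A system of N independent fair coins has 2^N equiprobable microstates,
# so its Shannon entropy is N bits.
def entropy_of_n_coins(n):
    num_states = 2 ** n
    return shannon_entropy_bits([1 / num_states] * num_states)

print(entropy_of_n_coins(1))  # 1.0 -> one coin, one bit
print(entropy_of_n_coins(3))  # 3.0 -> matches the 1/8-probability, three-bit example
```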

35. 36
timothya

Sal posted this:

Now to help disentangle concepts a little further consider three computer files:

File_A : 1 gigabit of binary numbers randomly generated
File_B : 1 gigabit of all 1's
File_C : 1 gigabit encrypted JPEG

Here are the characteristics of each file:

File_A : 1 gigabit of binary numbers randomly generated
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly disorganized
inference : not designed

File_B : 1 gigabit of all 1's
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : designed (with qualification, see note below)

File_C : 1 gigabit encrypted JPEG
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : extremely designed

Please tell me that you are joking.

If you didn’t know in advance what the origin of File A and File C were, then you would have no useful evidence from the contents of the two files to decide that one was “highly disorganised” and the other was “highly organised”. Hint: the purpose of encryption is to make the contents of the file approach as closely as possible to a randomly generated string.

File B supports an inference of “highly organised”? How? Why? What if the ground state of the signal is just the continuous emission of something interpreted digitally as “ones” (or “zeroes”, for that matter)? Your argument appears to say that if a system transmits a constant signal, then it must be organised.
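The encryption point can be illustrated empirically. A rough sketch, using compressed length as a computable stand-in for algorithmic (Kolmogorov) complexity — which is itself uncomputable — with the gigabit files scaled down to 64 KiB; random bytes stand in for ciphertext, since good encryption is statistically close to random:

```python
import os
import zlib

# Compressed length as a crude proxy for Kolmogorov complexity.
def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

size = 1 << 16                 # 64 KiB stands in for the gigabit files

file_a = os.urandom(size)      # analogue of File_A (randomly generated)
file_b = b"\xff" * size        # analogue of File_B (all 1's)

print(compressed_len(file_a))  # near `size`: incompressible, like ciphertext
print(compressed_len(file_b))  # tiny: a short description suffices
```

On content alone the compressor cannot separate File_A from an encrypted File_C; both look incompressible, which is the point being made here.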

36. 37
timothya

Correction, I posted this:

Your argument appears to say that if a system transmits a constant signal, then it must be organised.

I meant to use the term from your post that the valid inference for File B was that the file contents were designed. Clearly a gigabit of “ones” is organised in the sense that it has an evident pattern.

37. 38
scordova

If you didn’t know in advance what the origin of File A and File C were, then you would have no useful evidence from the contents of the two files to decide that one was “highly disorganised” and the other was “highly organised”.

The fact that I knew File C was a JPEG suggests that I had some advance knowledge of the file being designed. And even if I didn’t know that in advance, the fact that it could be parsed and processed as a JPEG indicates that it is organized.

The fact that I specified in advance that FILE A was created by a random number generator ensures a high probability it will not be designed.

File B had to be restated with qualification as gpuccio pointed out.

The inference of design or lack thereof was based on advance prior knowledge, not some explanatory filter after the fact.

38. 39
timothya

If you have a means of distinguishing between File X (which contains a genuine random string) and File Y (which contains a pseudorandom string encoding a human-readable sentence), then fill your boots and publish the method.

The sound you can hear is that of computer security specialists the world over shifting uncomfortably in their seats. Or perhaps of computer security specialists laughing their faces off.

The point is this: if you want to infer “design” solely from the evidence (of the contents of the files, with no a priori knowledge of their provenance), then what is your method?

If you have a means of distinguishing between File X (which contains a genuine random string) and File Y (which contains a pseudorandom string encoding a human-readable sentence), then fill your boots and publish the method.

I would bet that both strings are the product of agency involvement as blind and undirected processes cannot construct a file.

40. 41
timothya

Waiting for Sal’s response, I noticed that he posted this:

The fact that I knew File C was a JPEG suggests that I had some advanced knowledge of the file being designed. And even if I didn’t know that in advance, the fact that it could be parsed and processed as a JPEG indicates that it is organized.

Exactly. You knew in advance that the file was JPEG-encoded. But even if you didn’t know in advance, the fact that a JPEG decoder could produce a meaningful image proves only that the message was encoded using the JPEG protocol. A magnificent feat of inference.

It might be interesting if you could prove that the message originated from a non-human source. Otherwise not.

But what if you only have the encoded string to work upon, and the JPEG codec generates an apparently random string as output? How do you tell whether the output signal is truly random or that it contains a human-readable message encoded using some other protocol?

If I understand your original post, you claim that design is detectable from the pattern of the encoded message, independent of its mode of encoding.

41. 42
timothya

Joe posted this:

I would bet that both strings are the product of agency involvement as blind and undirected processes cannot construct a file.

Forget the container and consider the thing contained (I mean, really, do I have to define every parameter of the discussion?). Scientists sensing signals from a pulsar store the results in a computer “file” via a series of truth-preserving transformations (light data to electronics to magnetic marks on a hard drive). Are you arguing that the stored data does not correlate reliably to the original sense data?

I’m saying that if you find a file on a computer then it’s a given that some agency put it there.

43. And timothya- I am still waiting for evidence that natural selection is non-random….

44. 45
timothya

Joe posted this:

I’m saying that if you find a file on a computer then it’s a given that some agency put it there.

Brilliant insight.

Users of computers generate artefacts that are stored in a form determined by the operating system of the computer that they are using (in turn determined by the human designers of the operating system involved). I would be a little surprised if it proved to be otherwise.

However, the reliable transformation of input data to stored data in computer storage doesn’t help Sal with his problem of how to assign “designedness” to an arbitrary string of input data.

He has to show that there is a reliable way to distinguish between a genuinely random string and a pseudorandom string that is hiding a human-readable message, when all he has to go on is the string itself, with no prior knowledge.

If he has such a method, I would be fascinated to know what it is.

45. 46
timothya

Joe posted this:

And timothya- I am still waiting for evidence that natural selection is non-random….

This thread seems to be focussed on the “how to identify designedness”, so perhaps we should stick to that subject.

46. timothya- there isn’t any evidence that natural selection is non-random- just so that we are clear.

47. 48
timothya

Joe

I am clear that you think so. You are in disagreement with almost every practising biologist in the world of science. But that is your choice.

In the meantime, can we focus on Sal’s proposal?

48. No timothya- I don’t think so. It is obvious. And not one of those biologists can produce any evidence that demonstrates otherwise.

49. TA:

Why not look over in the next thread 23 – 24 (with 16 in context as background)?

Kindly explain the behaviour of the black box that emits ordered vs random vs meaningful text strings of 502 bits:

|| BLACK BOX || –> 502 bit string

As in, explain to us, how emitting the string of ASCII characters for the first 72 or so letters of this post is not an excellent reason to infer to design as the material cause of the organised string. As in, intelligently directed organising work, which I will label for convenience, IDOW.

Can you justify a claim that lucky noise plus mechanical necessity adequately explains such an intelligible string, in the teeth of what sampling theory tells us on the likely outcome of samples on the gamut of the 10^57 atoms of the solar system for 10^17 s, at about 10^14 sa/s — comparable to fast chemical ionic reaction rates — relative to the space of possible configs of 500 bits. (As in 1 straw-size to a cubical hay bale of 1,000 LY on the side about as thick as our galaxy.)

As in, we have reason to infer on FSCO/I as an empirically reliable sign of design, no great surprise, never mind your recirculation of long since cogently answered objections.

(NB: This is what often happens when a single topic gets split up by using rapid succession of threads with comments. That is why I posted a reference thread, with a link back and no comments.)

KF
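The arithmetic behind these figures can be run directly (all three numbers are the ones quoted in the comment, not independent estimates):

```python
# Figures quoted in the comment above.
atoms = 10 ** 57                      # atoms in the solar system
seconds = 10 ** 17                    # duration considered
samples_per_atom_per_sec = 10 ** 14   # comparable to fast ionic reaction rates

total_samples = atoms * seconds * samples_per_atom_per_sec  # 10^88 samples
config_space = 2 ** 500               # ~3.3e150 possible 500-bit configs

fraction_sampled = total_samples / config_space
print(fraction_sampled)               # ~3e-63: the "one straw" to the haystack
```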

50. 51
scordova

But even if you didn’t know in advance, the fact that a JPEG decoder could produce a meaningful image proves only that the message was encoded using the JPEG protocol.

And JPEG encoders are intelligently designed, so the files generated are still products of intelligent design.

A magnificent feat of inference.

Indeed.

It might be interesting if you could prove that the message originated from a non-human source

Humans can make JPEGs, so no need to invoke non-human sources.

51. 52
scordova

But what if you only have the encoded string to work upon, and the JPEG codec generates an apparently random string as output? How do you tell whether the output signal is truly random or that it contains a human-readable message encoded using some other protocol?

You can’t tell if a string is truly the product of mindless, purposeless forces (random is your word), so you have to be agnostic about that. One must accept that one can make a false inference to randomness (such as when someone wants to be extremely stealthy and encrypts the data).

If it parses with another codec that is available to you, you have good reason to accept that the file is designed.

Beyond that, one might have other techniques, such as those the Norton Symantec team used to determine that Stuxnet was the product of an incredible level of intelligent design:

How Digital Detectives Deciphered Stuxnet

Several layers of masking obscured the zero-day exploit inside, requiring work to reach it, and the malware was huge — 500k bytes, as opposed to the usual 10k to 15k. Generally malware this large contained a space-hogging image file, such as a fake online banking page that popped up on infected computers to trick users into revealing their banking login credentials. But there was no image in Stuxnet, and no extraneous fat either. The code appeared to be a dense and efficient orchestra of data and commands.

….
Instead, Stuxnet stored its decrypted malicious DLL file only in memory as a kind of virtual file with a specially crafted name.

It then reprogrammed the Windows API — the interface between the operating system and the programs that run on top of it — so that every time a program tried to load a function from a library with that specially crafted name, it would pull it from memory instead of the hard drive. Stuxnet was essentially creating an entirely new breed of ghost file that would not be stored on the hard drive at all, and hence would be almost impossible to find.

O Murchu had never seen this technique in all his years of analyzing malware. “Even the complex threats that we see, the advanced threats we see, don’t do this,” he mused during a recent interview at Symantec’s office.

Clues were piling up that Stuxnet was highly professional, and O Murchu had only examined the first 5k of the 500k code. It was clear it was going to take a team to tackle it. The question was, should they tackle it?

….
But Symantec felt an obligation to solve the Stuxnet riddle for its customers. More than this, the code just seemed way too complex and sophisticated for mere espionage. It was a huge adrenaline-rush of a puzzle, and O Murchu wanted to crack it.

“Everything in it just made your hair stand up and go, this is something we need to look into,” he said.

….
As Chien and O Murchu mapped the geographical location of the infections, a strange pattern emerged. Out of the initial 38,000 infections, about 22,000 were in Iran. Indonesia was a distant second, with about 6,700 infections, followed by India with about 3,700 infections. The United States had fewer than 400. Only a small number of machines had Siemens Step 7 software installed – just 217 machines reporting in from Iran and 16 in the United States.

The infection numbers were way out of sync with previous patterns of worldwide infections — such as what occurred with the prolific Conficker worm — in which Iran never placed high, if at all, in infection stats. South Korea and the United States were always at the top of charts in massive outbreaks, which wasn’t a surprise since they had the highest numbers of internet users. But even in outbreaks centered in the Middle East or Central Asia, Iran never figured high in the numbers. It was clear the Islamic Republic was at the center of the Stuxnet infection.

The sophistication of the code, plus the fraudulent certificates, and now Iran at the center of the fallout made it look like Stuxnet could be the work of a government cyberarmy — maybe even a United States cyberarmy.

And that illustrates how a non-random string in a computer might be deduced as the product of some serious ID.

52. F/N: This from OP needs comment:

what is the Shannon Entropy of a system of 500 distinct coins? Answer: 500 bits, or the Universal Probability Bound.

By way of extension, if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy. This illustrates the principle that more complex designs require larger Shannon entropy to support the design. It cannot be otherwise. Design requires the presence of entropy, not absence of it.
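That 500-bit figure can be checked directly: a fair coin is a two-state, equiprobable system carrying 1 bit, and the entropies of independent subsystems add. A minimal sketch (Python; the function name is mine, not from the OP):

```python
import math

def shannon_entropy_bits(probs):
    """Shannon entropy H = -sum p * log2(p), in bits, skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One fair coin: two equiprobable microstates carry exactly 1 bit.
coin_H = shannon_entropy_bits([0.5, 0.5])

# 500 independent fair coins: entropies of independent systems add.
system_H = 500 * coin_H

print(coin_H)    # 1.0
print(system_H)  # 500.0
```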

Actually, in basic info theory, H strictly is a measure of average info content per element in a system or symbol in a message. Hence it is estimated as a weighted average of the information per relevant element.

This, I illustrated earlier from a Shannon 1950/1 paper, in comment 15 in the part 2 thread:

The entropy is a statistical parameter which measures, in a certain sense, how much information is produced on the average for each letter of a text in the language. If the language is translated into binary digits (0 or 1) in the most efficient way, the entropy is the average number of binary digits required per letter of the original language. The redundancy, on the other hand, measures the amount of constraint imposed on a text in the language due to its statistical structure, e.g., in English the high frequency of the letter E, the strong tendency of H to follow T or of V to follow Q. It was estimated that when statistical effects extending over not more than eight letters are considered the entropy is roughly 2.3 bits per letter, the redundancy about 50 per cent.

So, we see the context of usage here.

But what happens when you have a message of N elements?

In the case of a system of complexity N elements, then the cumulative, Shannon metric based information — notice how I am shifting terms to avoid ambiguity — is, logically, H + H + . . . H N times over, or N * H.
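The step from per-symbol H to cumulative N * H can be made concrete with a small sketch (Python; the biased binary source is my own illustrative choice, not from the thread):

```python
import math

def H_per_symbol(probs):
    """Average information per symbol, in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A biased binary source: p(0) = 0.9, p(1) = 0.1.
H = H_per_symbol([0.9, 0.1])   # roughly 0.469 bits/symbol

# Cumulative Shannon-metric information for a message of N symbols is N * H.
N = 1000
cumulative = N * H             # roughly 469 bits for the whole message

print(round(H, 3), round(cumulative, 1))
```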

And, as was repeatedly highlighted, in the case of the entropy of systems that are in clusters of microstates consistent with a macrostate, the thermodynamic entropy is usefully measured by and understood in terms of the macro-micro information gap (MmIG), not on a per-state or per-particle basis but a cumulative one: we know macro quantities, not the specific position and momentum of each particle from moment to moment, which given chaos theory we could not keep track of anyway.

A useful estimate per the Gibbs weighted-probability-sum entropy metric — which is reputedly where Shannon got the term he used in the first place, on a suggestion from von Neumann — is:

>>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate. >>

Where, Wiki gives a useful summary:

The macroscopic state of the system is defined by a distribution on the microstates that are accessible to a system in the course of its thermal fluctuations. So the entropy is defined over two different levels of description of the given system. The entropy is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if E_i is the energy of microstate i [--> Notice, summation is going to be over MICROSTATES . . . ], and p_i is its probability that it occurs during the system’s fluctuations, then the entropy of the system is

S_sys = – k_B [SUM over i's] p_i log p_i

Also, {- log p_i} is an information metric, I_i, i.e. the information we would learn on actually coming to know that the system is in microstate i. Thus, we are taking a scaled info metric on the probabilistically weighted summation of info in each microstate. Let us adjust:

S_sys = k_B [SUM over i's] p_i * I_i

This is the weighted average info per possible microstate, scaled by k_B. (Which of course is where the Joules per Kelvin come from.)
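As a sanity check, the Gibbs sum and the probability-weighted information sum above are the same quantity, just rearranged. A sketch (Python; the four-state toy distribution is invented purely for illustration):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Toy distribution over four microstates (illustrative only, not physical data).
p = [0.4, 0.3, 0.2, 0.1]

# Gibbs form: S = -k_B * SUM p_i ln p_i
S_gibbs = -k_B * sum(pi * math.log(pi) for pi in p)

# Information form: S = k_B * SUM p_i * I_i, with I_i = -ln p_i
I = [-math.log(pi) for pi in p]
S_info = k_B * sum(pi * Ii for pi, Ii in zip(p, I))

print(math.isclose(S_gibbs, S_info))  # True
```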

In effect the system is giving us a message, its macrostate, but that message is ambiguous over the specific microstate in it.

After a bit of mathematical huffing and puffing, we are seeing that the entropy is linked to the average info per possible microstate.

Where this is going is of course that when a system is in a state with many possible microstates, it has enormous freedom of being in possible configs, but if the macro signals lock us down to specific states in small clusters, we need to account for how it could be in such clusters, when under reasonable conditions and circumstances, it could be easily in states that are far less specific.

In turn that raises issues over IDOW.

Which then points onward to FSCO/I being a sign of intelligent design.

KF

53. PS: As I head out, I think an estimate of what it would take to describe the state of 1 cc of monoatomic ideal gas at 760 mm Hg and 0 degrees C, i.e. 2.687 * 10^19 particles with 6 degrees of positional and momentum freedom, would help us. Let us devote 32 bits — 16 bits to get 4 hex sig figs, plus a sign bit and 15 bits for the binary exponent — to each of the (x, y, z) and (P_x, P_y, P_z) co-ordinates in the phase space. We are talking about:

2.687 * 10^19 particles

x 32 bits per degree of freedom

x 6 degrees of freedom each
_____________

5.159 * 10^21 bits of info

That is, to describe the state of the system at a given instant, we would need 5.159 * 10^21 bits, or 644.9 * 10^18 bytes. That is how many yes/no questions, in the correct order, would have to be answered and processed every clock tick we update. And with 10^-14 s as a reasonable chemical reaction rate, we are seeing a huge amount of required processing to keep track. As to how that would be done, that is anybody’s guess.
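The arithmetic can be checked in a couple of lines, including the conversion from bits to bytes (a sketch in Python; the particle count is the one given above):

```python
particles = 2.687e19      # molecules in 1 cc of ideal gas at 0 C and 760 mm Hg
bits_per_coordinate = 32  # 16-bit mantissa, sign bit, 15-bit exponent
degrees_of_freedom = 6    # x, y, z, P_x, P_y, P_z

total_bits = particles * bits_per_coordinate * degrees_of_freedom
total_bytes = total_bits / 8

print(total_bits)   # about 5.159e21 bits
print(total_bytes)  # about 6.449e20 bytes, i.e. 600+ exabytes
```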

54. OOPS, 600 + Exa BYTES

55. As I have said above, the adoption of the term “entropy” for SMI was an unfortunate event, not because entropy is not SMI, but because SMI is not entropy!

*SMI – Shannon’s Measure of Information

http://www.worldscientific.com......1142/7694

56. From the OP:

How entropy became equated with disorder, I do not know …

Arieh Ben-Naim writes:

“It should be noted that Boltzmann himself was perhaps the first to use the “disorder” metaphor in his writing:

…are initially in a very ordered – therefore very improbable – state … when left to itself it rapidly proceeds to the disordered most probable state.
– Boltzmann (1964)

You should note that Boltzmann uses the terms “order” and “disorder” as qualitative descriptions of what goes on in the system. When he defines entropy, however, he uses either the number of states or probability.

Indeed, there are many examples where the term “disorder” can be applied to describe entropy. For instance, mixing two gases is well described as a process leading to a higher degree of disorder. However, there are many examples for which the disorder metaphor fails.”

57. scordova:

Boltzmann

“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.” (Final paragraph of #87, p. 443.)

That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?

Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”

There is no basis in physical science for interpreting entropy change as involving order and disorder.

58. …and when left to itself it rapidly proceeds to the most probable state.

There, I fixed it fer ya!

As a bonus you get the “directionality” of entropy.

Ordered and disordered gots nothing to do with it.

59. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask.

What is Entropy, really?

So, entropy is the answer to the age-old question, why me?

60. Mung:

One more time [cf. 56 above, which clips elsewhere . . . ], let me clip Shannon, 1950/1:

The entropy is a statistical parameter which measures, in a certain sense, how much information is produced on the average for each letter of a text in the language. If the language is translated into binary digits (0 or 1) in the most efficient way, the entropy is the average number of binary digits required per letter of the original language. The redundancy, on the other hand, measures the amount of constraint imposed on a text in the language due to its statistical structure, e.g., in English the high frequency of the letter E, the strong tendency of H to follow T or of V to follow Q. It was estimated that when statistical effects extending over not more than eight letters are considered the entropy is roughly 2.3 bits per letter, the redundancy about 50 per cent.

Going back to my longstanding, always linked note, which I have clipped several times over the past few days, here on is how we measure info and avg info per symbol:

To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the “Shannon sense” – never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a “typical” long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M –> pj, and in the limit attains equality. We term pj the a priori — before the fact — probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori — after the fact — probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver:

I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1

This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that:

I total = Ii + Ij . . . Eqn 2

For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is:

I = log [1/pj] = – log pj . . . Eqn 3

This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so:

Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4

So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is – log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see “wueen” it is most likely to have been “queen.”)
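Eqns 3 and 4 (the noiseless case, d_j = 1) can be illustrated directly. A minimal sketch (Python; the function name is mine):

```python
import math

def info_bits(p):
    """Information in a symbol of a priori probability p over a noiseless channel: I = -log2 p."""
    return -math.log2(p)

# Equiprobable binary symbols carry exactly 1 bit each (Eqn 3).
print(info_bits(0.5))  # 1.0

# Additivity (Eqn 4): for independent symbols the joint probability is pi * pj,
# so the information in the pair is the sum of the individual informations.
pi, pj = 0.25, 0.125
print(info_bits(pi * pj) == info_bits(pi) + info_bits(pj))  # True
```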

Further to this, we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer):

- H = p1 log p1 + p2 log p2 + . . . + pn log pn

or, H = – SUM [pi log pi] . . . Eqn 5

H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form . . .

What this last refers to is the Gibbs formulation of entropy for statistical mechanics, and its implications when the relationship between probability and information is brought to bear in light of the Macro-micro views of a body of matter. That is, when we have a body, we can characterise its state per lab-level thermodynamically significant variables, that are reflective of many possible ultramicroscopic states of constituent particles.

Thus, clipping again from my always linked discussion that uses Robertson’s Statistical Thermophysics, Ch. 1 [and do recall my strong recommendation that we all acquire and read L. K. Nash’s Elements of Statistical Thermodynamics as introductory reading]:

Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) . . . .

For, as he astutely observes on pp. vii - viii:

. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .

And, in more details, (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Samura here and here):

. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . .

[deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ]

H({pi}) = – C [SUM over i] pi*ln pi, [. . . "my" Eqn 6]

[--> This is essentially the same as Gibbs Entropy, once C is properly interpreted and the pi's relate to the probabilities of microstates consistent with the given lab-observable macrostate of a system at a given Temp, with a volume V, under pressure P, degree of magnetisation, etc etc . . . ]

[where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . .

[H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . .

Jaynes’ [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]

As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life’s Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then — again following Brillouin — identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously “plausible” primordial “soups.” In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale.

By many orders of magnitude, we don’t get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics . . .

Now, of course, as Wiki summarises, the classic formulation of the Gibbs entropy is:

The macroscopic state of the system is defined by a distribution on the microstates that are accessible to a system in the course of its thermal fluctuations. So the entropy is defined over two different levels of description of the given system. The entropy is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if E_i is the energy of microstate i, and p_i is its probability that it occurs during the system’s fluctuations, then the entropy of the system is:

S = -k_B * [sum_i] p_i * ln p_i

This definition remains valid even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates on which the sum is to be done is called a statistical ensemble. Each statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system’s exchanges with the outside, from an isolated system to a system that can exchange one more quantity with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article).

Neglecting correlations between the different possible states (or, more generally, neglecting statistical dependencies between states) will lead to an overestimate of the entropy[1]. These correlations occur in systems of interacting particles, that is, in all systems more complex than an ideal gas.

This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum mechanical case.

It has been shown that the Gibbs entropy is numerically equal to the experimental entropy[2], dS = delta_Q/T . . .

Looks to me that this is one time Wiki has it just about dead right. Let’s deduce a relationship that shows physical meaning in info terms, where (- log p_i) is an info metric, I_i, here for microstate i, and noting that a sum over i of p_i * log p_i is in effect a frequency/probability-weighted average, or the expected value, of the log p_i expression, and also moving away from natural logs (ln) to generic logs:

S_Gibbs = -k_B * [sum_i] p_i * log p_i

But, I_i = – log p_i

So, S_Gibbs = k_B * [sum_i] p_i * I_i

i.e. S_Gibbs is a constant times the average information required to specify the particular microstate of the system, given its macrostate: the MmIG (macro-micro info gap).

Or, as Wiki also says elsewhere:

At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).

So, immediately, the use of “entropy” in the Shannon context, to denote not H but N*H, where N is the number of symbols (thus, step by step states emitting those N symbols involved), is an error of loose reference.

Similarly, by exploiting parallels in formulation and insights into the macro-micro distinction in thermodynamics, we can develop a reasonable and empirically supportable physical account of how Shannon information is a component of the Gibbs entropy narrative. Where also Gibbs subsumes the Boltzmann formulation and onward links to the lab-measurable quantity. (Nash has a useful, relatively lucid — none of this topic is straightforward — discussion on that.)

Going beyond, once the bridge is there between information and entropy, it is there. It is not going away, regardless of how inconvenient it may be to some schools of thought.

We can easily see that, for example, information is expressed in the configuration of a string, Z, of elements z1 -z2 . . . zN in accordance with a given protocol of assignment rules and interpretation & action rules etc.

Where also, such is WLOG, as AutoCAD etc. show us: using the nodes-and-arcs representation and a list of structured strings that records it, essentially any object can be described in terms of a suitably configured string or collection of strings.

So now, we can see that string Z (with each zi possibly taking b discrete states) may represent an island of function that expresses functionally specific complex organisation and associated information. Because of the specificity needed to achieve and keep function, leading to a demand for matching, co-ordinated values of zi along the string, that string has relatively few of the b^N possibilities for N elements with b possible states being permissible. We are at isolated islands of specific function, i.e. cases E from a zone of function T in a space of possibilities W.

(BTW, once the space of b^N possibilities exceeds 2^500 (i.e. 500 bits) on the gamut of our solar system, or 2^1,000 (1,000 bits) on the gamut of our observable cosmos, that brings to bear all the needle in the haystack, monkeys at keyboards analysis that has been repeatedly brought forth to show why FSCO/I is a useful sign of IDOW — intelligently directed organising work — as empirically credible cause.)
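The bit measure behind those thresholds is just log2 of the configuration count, i.e. N * log2(b). A sketch (Python; the helper name and the example element counts are mine, chosen to sit on either side of the 500-bit line):

```python
import math

def config_space_bits(b, N):
    """Bit measure of a space of N elements with b states each: log2(b^N) = N * log2(b)."""
    return N * math.log2(b)

# 250 two-state elements: 250 bits, below the 500-bit solar-system threshold.
print(config_space_bits(2, 250) >= 500)    # False

# 125 elements with 128 states each (e.g. ASCII-like symbols): 875 bits, beyond it.
print(config_space_bits(128, 125) >= 500)  # True
```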

We see then that we have a complex string to deal with, with sharp restrictions on possible configs, that are evident from observable function, relative to the general space of W = b^N possibilities. Z is in a highly informational, tightly constrained state that comes from a special zone specifiable on macro-level observable function (without actually observing Z directly). That constraint on degrees of freedom, contingent on functional, complex organisation, is tantamount to saying that a highly informational state is a low entropy one, in the Gibbs sense.

Going back to the expression, comparatively speaking there is not a lot of MISSING micro-level info to be specified, i.e. simply by knowing the fact of complex specified information-rich function, we know that we are in a highly restricted special Zone T in W. This immediately applies to R/DNA and proteins, which of course use string structures. It also applies to the complex 3-D arrangement of components in the cell, which are organised in ways that foster function.

And of course it applies to the 747 in a flyable condition.

Such easily explains why a tornado passing through a junkyard in Seattle will not credibly assemble a 747 from parts it hits. It also explains why the raw energy and forces of a tornado that hits a formerly flyable 747, tearing it apart, would leave its resulting condition much less specified per function, with a predictable loss of function.

We will also see that this analysis assumes the functional possibilities of a mass of Al, but is focussed on the issue of functional config and gives it specific thermodynamics and information theory context. (Where also, algebraic modelling is a valid mathematical analysis.)