
The famous Feynman Lectures on Physics hosted free for all by Caltech (and taking a peek at entropy . . . )

Christmas is early this year.

Here are the famous Feynman Lectures on Physics (Vol II is forthcoming) hosted for free by Caltech.

A useful point of reference for one and all.

Just for fun, note this passage on entropy, irreversibility and the rise of disorder:

Where does irreversibility come from? It does not come from Newton’s laws . . . . We already know . . .  that the entropy is always increasing. If we have a hot thing and a cold thing, the heat goes from hot to cold. So the law of entropy is one such law . . . .

Suppose we have a box with a barrier in the middle. On one side is neon (“black” molecules), and on the other, argon (“white” molecules). Now we take out the barrier, and let them mix. How much has the entropy changed? It is possible to imagine that instead of the barrier we have a piston, with holes in it that let the whites through but not the blacks, and another kind of piston which is the other way around. If we move one piston to each end, we see that, for each gas, the problem is like the one we just solved. So we get an entropy change of Nkln2, which means that the entropy has increased by kln2 per molecule. The 2 has to do with the extra room that the molecule has, which is rather peculiar. It is not a property of the molecule itself, but of how much room the molecule has to run around in. This is a strange situation, where entropy increases but where everything has the same temperature and the same energy! The only thing that is changed is that the molecules are distributed differently. We well know that if we just pull the barrier out, everything will get mixed up after a long time, due to the collisions, the jiggling, the banging, and so on . . . .

Everyone knows that if we started with white and with black, separated, we would get a mixture within a few minutes. If we sat and looked at it for several more minutes, it would not separate again but would stay mixed. So we have an irreversibility which is based on reversible situations. But we also see the reason now. We started with an arrangement which is, in some sense, ordered. Due to the chaos of the collisions, it becomes disordered. It is the change from an ordered arrangement to a disordered arrangement which is the source of the irreversibility.

It is true that if we took a motion picture of this, and showed it backwards, we would see it gradually become ordered. Someone would say, “That is against the laws of physics!” So we would run the film over again, and we would look at every collision. Every one would be perfect, and every one would be obeying the laws of physics. The reason, of course, is that every molecule’s velocities are just right, so if the paths are all followed back, they get back to their original condition. But that is a very unlikely circumstance to have. If we start with the gas in no special arrangement, just whites and blacks, it will never get back.
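Feynman’s figure of Nk ln 2 per gas is easy to check numerically. Here is a minimal sketch (mine, not from the Lectures), assuming one mole of each ideal gas doubles its volume when the barrier is removed:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro's number, molecules/mol

# Each gas expands into twice its original volume when the barrier
# goes, so for each gas: dS = N * k_B * ln(V_final / V_initial).
N = N_A                               # one mole of each gas (assumed)
dS_per_gas = N * k_B * math.log(2)    # J/K for the neon; same for the argon
dS_total = 2 * dS_per_gas             # both gases contribute
dS_per_molecule = k_B * math.log(2)   # Feynman's "k ln 2 per molecule"

print(f"per gas:      {dS_per_gas:.3f} J/K")   # ~5.763 J/K
print(f"total mixing: {dS_total:.3f} J/K")     # ~11.526 J/K
print(f"per molecule: {dS_per_molecule:.3e} J/K")
```

Note that the increase appears even though temperature and energy are unchanged, exactly as the quote says: only the counting of available arrangements has changed.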

Sounds familiar? It should . . .

Here is Sewell again:

. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

The discovery that life on Earth developed through evolutionary “steps,” coupled with the observation that mutations and natural selection — like other natural forces — can cause (minor) change, is widely accepted in the scientific world as proof that natural selection — alone among all natural forces — can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article ["A Mathematician's View of Evolution," The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . .

What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…. If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special.

THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn’t, that atoms would rearrange themselves into spaceships and computers and TV sets . . . [NB: Emphases added. I have also substituted in isolated system terminology as GS uses a different terminology. Cf as well his other remarks here and here.]

Muy interesante. END

PS: Taking walks . . .


33 Responses to The famous Feynman Lectures on Physics hosted free for all by Caltech (and taking a peek at entropy . . . )

  1. Christmas is early — Caltech is hosting the Feynman Physics course free. (And, I take a peek at entropy with side-lights on the design inference.)

  2. Dr. Feynman’s strength was the way he simplified complicated concepts and explained them lucidly. Feynman diagrams are a cartoonish but incredibly powerful way of depicting and calculating various quantum interactions. It is fascinating to see his various lectures (just search YouTube).
    As for Sewell’s arguments, they have been refuted by many here. The article has further links inside.

  3. SR:

    Actually, Feynman’s course did not work particularly well as a primary intro to physics (cf. his introductory remarks), but it did work well as a high-level compressed survey. As I recall the story, attendance numbers stayed about the same, but only because the young students who dropped out were replaced by more advanced people coming in to take the survey.

    On Sewell: I know, I know, the much-trumpeted talking point is that he has been “refuted.” But that is not surprising — these are the same ilk seen here at UD in recent days who have so much trouble with actually self-evident points, where obviously no scientific case of significance can approach that level of absolute warrant — and the dismissive talking point is in fact ill-grounded on the merits.

    That is — though this is commonly ducked, dodged, twisted into pretzels and diverted from — the only empirically and analytically well-grounded conclusion on the origin of functionally specific complex organisation and associated information [FSCO/I], is that it is the product of designing intelligent purposeful action. (Here is a simple test: name an actually observed counter example, where blind chance and necessity were observed to create such FSCO/I de novo out of spontaneous events. An excellent case that needs to be addressed is the claimed spontaneous origin of life in some prebiotic soup or the like.)

    In particular the notion behind the compensation argument fails: as a rule the injection of raw energy into a system will cause its entropy to INCREASE, for the obvious reason of adding to the ways that matter and energy can be arranged at micro-levels. The practically certain result is that disorder will increase.

    When injected energy does organising, constructive work, that is because a prior arrangement of components couples the energy and mass inflows to achieve a targeted outcome, as a rule specified in organisation and/or directly stored information. For instance, that is how computers and car engines alike work.

    KF

  4. PS: Here is Felsenstein giving a typical attempted rebuttal (one linked by SR):

    My back yard has some very tough and capable weeds, with which we struggle. I know that if I take a few seeds from one of these weeds and plant them, in a few months there will be weed plants there, ones that have a great many of those same seeds on them.

    That is a local decrease in entropy, an increase in order. A few seeds are replaced by many, with stems and leaves too. How did this happen? Aside from some water, carbon dioxide and minerals, mostly it happened by sunlight striking the plants and driving photosynthesis. It’s not a mystery. But all we saw entering the plants was radiation!

    Let’s see; what are seeds, again? Complex organised systems with the information needed to construct a complex self-replicating, cell-based organism, which manifests how FSCO/I comes from prior relevant FSCO/I, coupling input energy and matter to achieve a programmed result.

    Is this a counter-example that has probative force?

    No.

    The question of where the FSCO/I originally came from is being massively begged.

    And indeed, this brings up Paley’s second example in his 1804 work, the thought exercise of a self replicating watch as evidence of design (which sadly but unsurprisingly does not usually appear in dismissive arguments against what Paley had to say):

    Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself — the thing is conceivable; that it contained within it a mechanism, a system of parts — a mold, for instance, or a complex adjustment of lathes, baffles, and other tools — evidently and separately calculated for this purpose . . . .

    The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done — for referring the construction of the watch to design and to supreme art . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch which was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair — the author of its contrivance, the cause of the relation of its parts to their use.

    It is always easier to knock over a strawman . . .

  5. Hi KF,
    Discussion about entropy is equivalent to discussion about the ‘Law of Conservation of Information’ and the creation of new CSI in open/closed systems — it is never-ending.
    About Feynman — I wouldn’t know about his classes, but looking at YouTube videos, he gathered quite a crowd. What I do know is that he was one of the greatest physicists.

  6. SR:

    Debate can be seemingly endless, that is why I called for concrete demonstration.

    We can easily show billions of cases of FSCO/I [including Paley's watch -- or for me, actually, a cell phone -- found in a field] caused by design or tracing thereto and either explicitly involving or implying information creation by intelligence. There simply are no cases of actual origin of FSCO/I by blind chance and mechanical necessity.

    And the reason for that is closely connected to Feynman’s underlying reasoning, that is, the relative statistical weight of macrostates. This is where we see the rise of islands of function deeply isolated in seas of non-functional arrangements of the same components, such that the blind search resources of the solar system or even the observed cosmos are overwhelmed.

    This does not start at Hoyle’s jumbo jet formed by a tornado hitting a junkyard. No, the watch on the pilot’s wrist or an indicating instrument based on the D’Arsonval galvanometer are more than enough to make the case.
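    The “search resources” point can be put in arithmetic form. The 500-bit configuration space and the round-number event bounds below are assumptions for illustration (the figures usually quoted in these discussions), not measured values:

```python
# Compare a 500-bit configuration space against a generous upper
# bound on atomic-scale events in the solar system's history.
# (All round figures below are illustrative assumptions.)
config_space = 2 ** 500        # ~3.3e150 distinct arrangements

atoms = 10 ** 57               # assumed atom count, solar system
seconds = 10 ** 17             # assumed age of cosmos, in seconds
rate = 10 ** 14                # assumed fast-chemistry events per second

max_events = atoms * seconds * rate          # 1e88 possible "searches"
fraction_sampled = max_events / config_space
print(f"fraction of space searchable: {fraction_sampled:.1e}")  # ~3.1e-63
```

    On these assumptions a blind search could sample only roughly one part in 10^62 of the space, which is the sense in which isolated islands of function are said to be beyond reach.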

    And, Paley’s thought exercise extension to a self-replicating watch in Ch 2 of his 1804 Nat Theol, is to be seriously respected and explicitly reckoned with.

    KF

    PS: And the summary of how the course actually played out on the ground is AFAIK accurate. I remember my shock on first opening this famous work in my Dept’l library, only to see Feynman’s admission right there, up front; though it was other sources that filled in the details. I still think the course is helpful, and I think similar efforts are worthwhile. Motion Mountain I like, never mind its oddities. I have long respected Nelson-Parker, Sears-Zemansky, Giancoli and Halliday-Resnick at a lower level. Abbot at O-Level standard shines out in my now distant memory. But of all, my favourite is the two-volume Physics by Yavorsky-Detlaf. And I treasure my wider collection of more advanced Russian physics and similar books, never mind odd points where atheism and Marxism creep in.

  7. SR: re. #2:

    As for Sewell arguments, it has been refuted by many here. The article has further links inside.

    The Panda’s Thumb article to which you linked does not refute Sewell’s argument. It attempts to refute an argument that Sewell did not make, namely, “that evolution can’t have occurred because it contradicts the Second Law.” Sewell’s argument is rather that the Second Law implies that the existence of airplanes, skyscrapers, computers, and living organisms on a previously barren planet whose only inputs are sunlight and meteor fragments is highly improbable, and that if one wishes to assert that they arose exclusively through the workings of the laws of physics and chemistry, then the Second Law demands that one show how their emergence is not highly improbable. This is what origin of life theories and the neo-Darwinian synthesis in all its myriad incarnations have spectacularly failed to do.

  8. Kairos Focus,

    I tutor science students in Chemistry, Physics, and Math.

    I would be cautious arguing the 2nd Law in favor of ID right now.

    Consider this example:

    Consider a living rat initially at room temperature. Cool the rat down to near absolute zero. Estimate the change in entropy, and state the number in units of Joule/Kelvin

    Here is a case where we killed the rat by removing almost all its entropy. A chemistry student ought to be able to estimate the change in entropy (delta-S) when the rat went from the living state to the dead state. In this case, lowering entropy killed the rat!
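    A back-of-the-envelope version of that estimate, treating the rat as roughly 0.3 kg of water, cooling it from body temperature to freezing and then freezing it solid (and neglecting everything below 273 K, where the third law makes heat capacities and further entropy changes shrink), might look like this. The mass and material are my assumptions, not a real calorimetric measurement:

```python
import math

m = 0.3          # kg, assumed rat mass (treated as mostly water)
c = 4184.0       # J/(kg*K), specific heat of liquid water
L_f = 334_000.0  # J/kg, latent heat of fusion of water

T_body, T_freeze = 310.0, 273.15   # K

# Cooling at (roughly) constant heat capacity: dS = m c ln(T2/T1)
dS_cooling = m * c * math.log(T_freeze / T_body)

# Freezing at constant temperature: dS = -m L_f / T
dS_freezing = -m * L_f / T_freeze

dS_total = dS_cooling + dS_freezing
print(f"cooling:  {dS_cooling:.0f} J/K")
print(f"freezing: {dS_freezing:.0f} J/K")
print(f"total:    {dS_total:.0f} J/K")  # several hundred J/K removed
```

    The point stands on any such estimate: a large entropy decrease, and a very dead rat.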

    The fact that entropy in the universe is increasing guarantees mammals can live on a warm planet without freezing to death. Entropy isn’t always bad.

    I can’t look my students in the eye and tell them with a straight face and a good conscience that I can prove the 2nd law precludes the evolution of life.

    I suspect that somewhere, using Jaynes’s version of the 2nd Law, there may be a pro-ID argument in the works, but the ID community isn’t there yet.

    Entropy, like temperature, is neither good nor bad; it’s a matter of having the right amounts!

  9. SC:

    Thanks for thoughts.

    I think if you look at what I have pointed out, you will see that my emphasis is not so much on the classical form of the law, but the underlying macro vs microstate view, and the resulting overwhelming direction of spontaneous change.

    If you freeze a rat to nigh on absolute zero you will reduce its entropy, but in fact that is by and large a side-step.

    The first home of what I am addressing is that proverbial warm pond or the equivalent. The pivotal issue is that a life-state is relatively speaking a macro-observable state of organisation which is rich in specific functional information. This exhibits FSCO/I, and the fact is, the diffused and more thermodynamically stable state of the component atoms and molecules is heavily against clumping of the right molecules and then organisation to function relevant to OOL.

    That is a big part of why OOL studies are in such a mess.

    In fact, on billions of cases plus the relevant implications of statistical weight, we are looking at every reason to see the sorts of scenarios that are too often portrayed overly optimistically to students and the public, as simply not credible.

    The only empirically warranted cause of FSCO/I is design.

    When we turn to the compensation type argument, first, the systems would be energy importing. Absent properly organised coupling, that normally INCREASES entropy and disorganisation. (Don’t overlook cooking here.)

    Next, when we come to issues on origin of major body plans, the issue again turns on deep isolation of islands of function, which is driven by the tightness of fit and work together requisites.

    The incrementalist account lacks empirical observational grounds and has no good answer for origin of FSCO/I by blind chance and mechanical necessity. And the notion of large spontaneous leaps is an appeal to chance miracles. Think, here, protein fold domains and domains with one or a few proteins, say as discussed by Axe, Gauger etc.

    That is my context, not narrow debates over heat flows.

    KF

  10. SC:

    Thanks for thoughts.

    You are welcome. But for a moment, put yourself in my place when approached by a student. The following story has some basis in a real e-mail exchange I had with a creationist chem student, but I amplify a few details to get the point across.

    student:

    Dear Sal,

    My chem professor and textbook said the 2nd law doesn’t preclude the evolution of life. I think he’s wrong and I should challenge him.

    my response

    Sal:

    Dear Student,

    I wouldn’t challenge the chem professor on the point. A human at absolute zero would be a dead human. The human needs entropy to be alive. The universe needs to keep increasing its entropy so that you can stay warm on the planet and live. The 2nd Law makes your life possible.

    Whether the 2nd law makes life possible and simultaneously improbable, I don’t know, but the 2nd law has fringe benefits, namely it makes your life possible.

    Consider a human that is about to die because he is freezing to death. To save his life you have to increase his entropy, not decrease it!

    Now estimate the total entropy of a 747 under standard conditions. What are the entropy numbers in joules/kelvin? Now have its wing ripped off so it can’t fly. What are the entropy numbers in joules/kelvin? Here are my calculations:

    http://www.uncommondescent.com.....se-part-2/

    For a functioning 747 its entropy was:
    6.94 x 10^7 joules/kelvin
    for the one that got hit by a tornado its entropy was:
    5.94 x 10^7 joules/kelvin

    Do my calculations agree with the way you are taught chemistry? If so, the 747 with higher entropy is the one that is more functional, so the problem of evolution isn’t about lowering entropy, and your professor is right. I’d go so far as to say that in many contexts, increasing entropy is necessary for more complex designs. You can perform similar calculations as I have done, as an exercise.
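    I can’t reproduce the linked calculation here, but the method (multiplying the amount of material by a standard molar entropy) can be sketched. The all-aluminum airframe and the mass below are my simplifying assumptions; they land in the same ballpark as the figures above:

```python
# Rough absolute entropy of an airframe, treated as pure aluminum
# at 298 K. Mass and composition are illustrative assumptions.
mass_kg = 180_000.0   # assumed operating empty weight of a 747
M_al = 0.02698        # kg/mol, molar mass of aluminum
S_molar = 28.3        # J/(mol*K), standard molar entropy of solid Al

moles = mass_kg / M_al
S_total = moles * S_molar
print(f"rough absolute entropy: {S_total:.2e} J/K")  # ~1.9e8 J/K

# Ripping off a wing barely changes this number: the wing's atoms
# carry their entropy with them. Function, not entropy, is what
# the tornado destroys.
```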

    I can’t in good conscience tell you to challenge your teacher or textbook on this point.

    If you want pro-ID arguments, I can give you some, but I wouldn’t know how to argue from the 2nd law. That’s probably not the best angle.

    Besides, don’t argue with your professor about ID, you’re there to learn chemistry.

    If you’re asked on an exam, state what you learned in class, because it is assumed the expected answer is the answer you learned in class, not some profession of faith.

    If I were taking a mythology class, I wouldn’t mind answering questions about the Greek gods even though I don’t believe in them.

    The fact that entropy is increasing in the universe makes life possible, and in that sense, since entropy increase makes life possible, it would seem it makes evolution possible. Whether it makes evolution probable is another story.

    If you think I’m wrong, you’re welcome to start from the Kelvin-Planck or Clausius postulates and demonstrate life can’t evolve based on those postulates. I’ve not been able to do so. I think to disprove evolution, you’ll have to appeal to a larger set of postulates and data.

    In sum, I’d accept your professor’s answer for now. You can always change your mind later. This is science, not the Nicene Creed.

    That said, there may be a pro-ID argument in Jaynes’s version of the 2nd law, but chemists don’t use that version; they use Clausius or Kelvin-Planck. Your professor was obviously using Kelvin-Planck, and on those grounds I think he is right.

    That’s what I feel I can, in good conscience, tell the next generation of ID students. If you can tell them otherwise, and do so in good conscience, I respect that; but I write this to point out that I can’t.

  11. Oh good grief, not this nonsense again. Sewell has been refuted so many times it’s not even funny. Let me take a look at kairosfocus’ argument (which is actually significantly different from any of Sewell’s):

    That is — though this is commonly ducked, dodged, twisted into pretzels and diverted from — the only empirically and analytically well-grounded conclusion on the origin of functionally specific complex organisation and associated information [FSCO/I], is that it is the product of designing intelligent purposeful action. (Here is a simple test: name an actually observed counter example, where blind chance and necessity were observed to create such FSCO/I de novo out of spontaneous events. An excellent case that needs to be addressed is the claimed spontaneous origin of life in some prebiotic soup or the like.)

    In particular the notion behind the compensation argument fails: as a rule the injection of raw energy into a system will cause its entropy to INCREASE, for the obvious reason of adding to the ways that matter and energy can be arranged at micro-levels. The practically certain result is that disorder will increase.

    First, compensation applies to entropy. It has nothing to do with FSCO/I (as you seem to think), and it doesn’t apply to probability (as Sewell seems to think). It applies only to entropy. That means if you try to refute it by talking about something other than entropy, you’re talking nonsense.

    Second, compensation doesn’t in any way, shape or form claim that adding energy (raw, cooked, whatever) into a system will cause an entropy decrease.

    What it does say is that the second law allows local entropy decreases provided they’re coupled to at-least-as-large entropy increases elsewhere. To illustrate this principle, here’s a very short thermodynamics quiz; just two simple yes-or-no questions:

    Question 1) Someone claims to have discovered a process that involves only two objects (call them A and B), and causes A’s entropy to decrease by a certain amount and B’s entropy to increase by a SMALLER amount. Does this claim imply a violation of the second law of thermodynamics?

    Question 2) Someone claims to have discovered a process that involves only two objects (call them A and B), and causes A’s entropy to decrease by a certain amount and B’s entropy to increase by a LARGER amount. Does this claim imply a violation of the second law of thermodynamics?

    Please everyone (not just KF), try to answer these questions before reading on.

    Ok, answers chosen? Here’s the scoring:

    On question 1: The correct answer is YES, this does imply a violation of the second law. If such a process existed, the process could be run with A and B isolated from everything else (remember, the process ONLY involves those two objects), and the result would be an entropy decrease in that isolated system (since the entropy decrease of A would outweigh the entropy increase of B). The second law forbids entropy decreases in isolated systems, so the existence of such a process would imply that the second law is wrong.

    If you didn’t answer YES to question 1, please go study some basic thermodynamics before trying to lecture anyone else on the subject.

    (Side note: the argument form I used above — showing that X violates the second law by assuming that X exists and showing that that implies the second law is false — is a very standard way of showing a conflict with the second law. If you see an argument that something violates the second law and you can’t put it in this form, there’s probably something wrong with the argument.)

    On question 2: The correct answer is NO, this does not imply a violation of the second law. There’s no logical way to get from this claimed process to an entropy decrease in an isolated system (or any other standard form of the second law), so there’s no conflict.

    If you answered YES to question 2, you’ve just claimed that heat flowing from a warm object to a cooler object violates the second law. You REALLY need to go study some basic thermo.

    Ok, did you answer YES to 1, and NO to 2? Congratulations, you just agreed with the real principle of compensation.
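    The question-2 pattern is just ordinary heat flow, and the bookkeeping takes only a few lines (the temperatures and heat amount below are arbitrary illustrative numbers):

```python
# Heat Q leaks from hot object A to cooler object B; assume both are
# big enough that their temperatures stay essentially constant.
Q = 1000.0       # J transferred
T_hot = 350.0    # K, object A
T_cold = 300.0   # K, object B

dS_A = -Q / T_hot    # A's entropy decreases
dS_B = Q / T_cold    # B's entropy increases by MORE
dS_total = dS_A + dS_B

print(f"dS_A = {dS_A:.3f} J/K")       # -2.857 J/K
print(f"dS_B = {dS_B:+.3f} J/K")      # +3.333 J/K
print(f"net  = {dS_total:+.3f} J/K")  # positive: second law satisfied
```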

  12. Actually, I should correct that last statement slightly: if you answered YES to 1 and NO to 2, you haven’t quite agreed to the real principle of compensation. The real principle also allows for an entropy decrease of exactly the same size as the compensating increase. This is a situation that’s allowed in theory, but doesn’t really happen in practice, so I’m not inclined to worry about it.

  13. GD:

    Pardon, but I must first respond to tone. Question-begging and strawman stereotyping are ever associated with sophomoric triumphalism. I suggest you pause and do a rethink on tone.

    Next, the informational school of thermodynamic thought [think Jaynes et al., and refer to Harry Robertson's Statistical Thermophysics for a quick survey], Thaxton et al., Sewell and the undersigned are closer than you want to think. The pivotal common point can be summed up, at an initial level, in a clip from Wiki’s article on Informational Entropy:

    At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

    But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
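    The “yes/no questions” reading of entropy in that clip is just Shannon entropy with base-2 logarithms. A minimal illustration (the distributions are arbitrary examples of mine):

```python
import math

def shannon_entropy_bits(probs):
    """Shannon entropy in bits: the minimum average number of yes/no
    questions needed to pin down which outcome occurred."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Eight equally likely microstates: exactly 3 yes/no questions.
H_uniform = shannon_entropy_bits([1/8] * 8)
print(H_uniform)  # 3.0

# A more constrained (lower-entropy) macrostate needs fewer questions
# on average: there is less missing information about the microstate.
H_biased = shannon_entropy_bits([0.7, 0.1, 0.1, 0.05, 0.05])
print(round(H_biased, 3))
```

    On the Jaynes view quoted above, multiplying such a bit count by k_B ln 2 converts missing information into thermodynamic entropy units (J/K).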

    Let us now look briefly at Darwin’s pond.

    The complex organic molecules of life tend to be endothermic and are often prone to decomposition through light or simply water molecules. Typically, homochirality is important, and it is information-rich, as well as being linked to folding and function of proteins. Also, the relevant molecules are subject to diffusional forces (as are reflected in Brownian motion). We could go on, but already we see highly material statistical thermodynamics-linked challenges to the spontaneous creation, clumping and functionally specific organisation relevant to cell based life.

    For, such life must be:

    1: encapsulated, to protect the internal environment from the cross-interferences as outlined

    2: intelligently gated, to pass the right molecules in/out at the right times — this is a coupling vs raw, uncontrolled interaction issue

    3: a self-assembling, self-repairing metabolic automaton that draws in energy and materials in the right forms at the right times, and appropriately tears down no longer needed components, then eliminates waste.

    4: correctly detects and handles inevitable errors at molecular level with high reliability [cf. protein synthesis errors].

    5: Where as well, Mignea’s minimal requisites apply: I — capability to duplicate, II — capability to separate into fully functional daughter cells, III — with high reliability, fulfilling functional organisation, relationships and interactions in those daughters.

    6: Thus, such a cell must embody a kinematic (computer cellular automata are not good enough) self replication facility, similar to that specified by von Neumann.

    7: Such a vNSR in turn bringing to bear:

    (i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, for a "clanking replicator" as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;

    (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with

    (iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:

    (iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by

    (v) either:

    (1) a pre-existing reservoir of required parts and energy sources, or

    (2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

    OOL studies, after many decades of work, are nowhere near explaining, much less empirically demonstrating, the origin of such an entity, which manifests FSCO/I in ever so many ways. Indeed, it is arguable that the major schools of thought have argued themselves to mutual ruin.

    I repeat, the only empirically observed and analytically plausible causal means for FSCO/I is intelligent design.

    Where, in a case of trying to understand the unobserved remote past of origins on its apparent traces, we are doing an inference to best, empirically grounded explanation.

    A first requisite of such an explanation is that it be empirically and analytically plausible, i.e. first and foremost it must rest on a mechanism known — per observation — to be able to cause the effect in question. Otherwise, we are back at Kipling’s just-so stories, dressed up in a lab coat.

    A priori Lewontin-Sagan evolutionary materialism — the ruling paradigm of the secularist elites whether dressed in the lab coat or the dark suit with red tie or the latest power symbols of the day — simply and patently does not meet this criterion.

    With billions of examples all around us, including the PCs etc on which we are viewing this comment, there is a known, readily observable cause of FSCO/I, intelligent design. A straight inference to best empirically grounded explanation — absent the question-begging ideological materialist a priori just mentioned — would conclude that such FSCO/I is in fact a signature of design and its blatant presence in the living cell is decisive evidence of design.

    With that in mind, let us again see what the despised, derided, duly scarlet-letter C branded, mocked and dismissed Sewell had to say:

    . . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

    In short, Sewell is appealing to precisely the contrast between clusters of possible microstates and clusters of functionally specific microstates consistent with relevant macrostates that I have outlined, and that the informational view of thermodynamics highlights.

    Where, let us recall, we can quite reasonably interpret and apply entropy in light of this observation:

    . . . in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.
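    The Lewis/Gibbs connection can be made concrete with a short sketch (a minimal illustration, not a derivation; the function name is mine):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: the average number of yes/no questions
    needed to fully specify the microstate, given the macrostate."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A macrostate compatible with 8 equally likely microstates: 3 questions.
print(entropy_bits([1/8] * 8))   # 3.0

# Each bit of missing information corresponds to k_B * ln 2 of thermodynamic
# entropy, the same k ln 2 per molecule Feynman gets in the mixing example.
k_B = 1.380649e-23  # Boltzmann constant, J/K
print(k_B * math.log(2))         # ≈ 9.57e-24 J/K per bit
```

    A fully known microstate (one outcome with probability 1) gives zero bits, i.e. zero residual entropy on this reading.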

    Until we have a reasonable and empirically grounded account of the origin of C-chemistry, informational macromolecule, aqueous medium cell-based life on spontaneous, thermodynamically driven processes in Darwin’s pond, or a deep sea volcano vent or a comet core or a gas giant moon, or whatever plausible environment is advanced in a given day, the evolutionary materialist tree of life — evolutionary icon no. 1, from its appearance as the only illustration in Darwin’s Origin onwards — remains rootless.

    The only empirically and analytically warranted explanation for the massive amount of functionally specific, complex organisation and/or associated information in such life, is design.

    That is, as of right not sufferance, design sits at the table from the root on up.

    And, as we go up to branching into major body plans, we find that we move from 100 – 1,000 kbits of genetic information to 10 – 100+ mn bits (on evidence of the genetic codes observed and plausible). Thus, the same source of FSCO/I sits at the table all the way through to us.

    Recall, the fundamental FSCO/I challenge is that for just 500 bits, the 10^57 or so atoms of our solar system, changing state at the maximum rate for chemical reactions [and each state being deemed a search], would be able to search the equivalent of a one-straw-sized sample of a cubical haystack 1,000 light years across, i.e. about as thick as our galaxy at its central bulge. If such a haystack were superposed on our galactic neighbourhood, and one were to take a blind sample of that scope, with all but certainty, one would find straw and nothing else.

    The needle-in-haystack, islands-of-function-in-a-sea-of-non-function challenge to blind search mechanisms is real. And while this is a probability challenge, we do not have to evaluate it exactly; the real issue is the search space challenge, which is much, much, much worse than just described. For, the space of 500 bits is only 3.27 * 10^150; that for 100,000 bits — the low end for a genetic code as just one component — is more like 9.99 * 10^30,102.
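    The arithmetic in the two comments above can be checked directly. The atom count, timescale and reaction rate below are the round figures these discussions usually assume:

```python
import math

# Size of the 500-bit configuration space: 2^500 ≈ 3.27 * 10^150
log10_500 = 500 * math.log10(2)
print(log10_500)                  # ≈ 150.51, i.e. ≈ 3.27e150

# And for 100,000 bits: 2^100000 ≈ 9.99 * 10^30102
log10_100k = 100_000 * math.log10(2)
print(log10_100k)                 # ≈ 30103.0

# Generous upper bound on states sampled by the solar system:
# ~10^57 atoms * ~10^17 s * ~10^14 state-changes/s (fast chemistry)
log10_searched = 57 + 17 + 14     # 10^88 possible "searches"
print(log10_500 - log10_searched) # the space exceeds the search by ~10^62
```

    On these (assumed) figures the 500-bit space outstrips the entire sampling capacity of the solar system by roughly sixty orders of magnitude, which is the point of the haystack illustration.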

    That is why, for instance, the only plausible explanation for a post such as this is design.

    And, it is the reason why in NFL, Dembski gave this definition of CSI:

    p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”

    p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

    I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

    Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

    With these in mind and on the table, I think a more balanced discussion may now proceed.

    KF

  14. SC:

    I agree that a simplistic appeal to the 2nd law of thermodynamics is not going to answer the problem, but in my observation that is not what has been on the table, not since Thaxton et al in their 1984 TMLO, downloadable for free here, chs 7 – 9.

    Where, TMLO is arguably the seminal work that sparked the emergence of design theory.

    I cite their bridging survey at the end of Ch 7, in preparation for their initial analyses of proteins and DNA to follow:

    While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely the formation of protein and DNA from their precursors.

    It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . [Bold emphasis added. Cf summary in the peer-reviewed journal of the American Scientific Affiliation, "Thermodynamics and the Origin of Life," in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB: as the journal's online issues will show, this is not necessarily a "friendly audience."]

    I think the above, in the context of what I have just highlighted to GD, would be a beginning for a different answer to your inquirer.

    However, I would also have to speak to the tone and excessively toxic and polarised atmosphere imposed by materialist ideologues in the lab coat. I would counsel such a student — even if in a Christian College — to keep his counsels to himself, and to study with an open mind.

    Indeed, until and unless a critical mass of professors are able to build a viable graduate programme for advanced studies, I would suggest such a student maintain his studied silence [and hide his design theory-oriented readings and notes under lock and key . . . as if it were -- God help us that I have to make such a comparison -- a child porn stash] until he holds tenure and is no longer subject to arbitrary and abusive dismissal such as targeted Gonzalez at the hands of activists. Likewise, I would stoutly caution him on the fate of Sternberg at the Smithsonian.

    That is the shameful state of the academy in our day.

    In the meanwhile, the actual issues on the table are very real and need to be thought hard about.

    Yes, if you were to freeze and cool a rat to near absolute zero you would kill it by taking the molecular systems out of their range of function, and would in so doing (especially as one approaches 0 K) reduce its entropy. Likewise, if you were to immerse same in boiling water you would cook it instantly, and would kill it just as well. In this case the entropy of the open system would increase with the initial rise in temperature, and the heat would quickly degrade the complex molecules of life, destroying function.

    Neither of these scenarios answers to the relevant aspect of entropy, linked to the information rich, functional organisation found in cell based life.

    KF

  15. Gordon Davisson, re. #11

    Oh good grief, not this nonsense again. Sewell has been refuted so many times it’s not even funny.

    It’s a common, but—pardon me—extremely annoying tactic of Darwin defenders to announce that a particular argument has been “refuted” when all that has occurred is that other Darwin defenders have advanced weak or fallacious arguments that simply don’t address the issue adequately. One example is the claim that Behe’s argument from irreducible complexity has been refuted when in fact no one has come close to doing so.

    Sewell’s fundamental argument in its most economical form is simply this: If something is extremely improbable in a closed system, it is still extremely improbable if that system is open unless something is entering through the system boundary that makes it not extremely improbable. The emergence of living things along with computers, skyscrapers, aircraft, etc., is extremely improbable on a barren planet if only the laws of physics and chemistry are in operation there (i.e., no intelligent intervention). This is a consequence of the Second Law, or at least a generalization of it. If it is claimed that the input of sunlight, cosmic rays, and meteor fragments can make their emergence not extremely improbable, then the case must be made for how that could be. So far, that case has not been made.

    To my knowledge, no one has refuted this argument. If you know of such a refutation, I would very much like to know what it is.

  16. BD:

    Well said twice over, I think your paragraph:

    Sewell’s fundamental argument in its most economical form is simply this: If something is extremely improbable in a closed system, it is still extremely improbable if that system is open unless something is entering through the system boundary that makes it not extremely improbable. The emergence of living things along with computers, skyscrapers, aircraft, etc., is extremely improbable on a barren planet if only the laws of physics and chemistry are in operation there (i.e., no intelligent intervention). This is a consequence of the Second Law, or at least a generalization of it. If it is claimed that the input of sunlight, cosmic rays, and meteor fragments can make their emergence not extremely improbable, then the case must be made for how that could be. So far, that case has not been made.

    . . . is particularly good.

    I would like to hear cogent responses informed also by the point Sewell makes about various kinds of diffusion, as well as by the challenge of functionally specific complex organisation and/or associated information. I have already adverted to the informational view of thermodynamics.

    KF

  17. Thanks, KF. I thought your points were also well taken, so I didn’t repeat them in my own comment.

  18. Sewell’s fundamental argument in its most economical form is simply this: If something is extremely improbable in a closed system, it is still extremely improbable if that system is open unless something is entering through the system boundary that makes it not extremely improbable.

    It is a good argument, but it will need some terminology and definitions reworked, because there are multiple versions of the 2nd law and multiple versions of entropy, and the ID community needs to synthesize workable and clear definitions that ID students matriculating in science courses can work with.

    The example of the frozen rat illustrates how most science students will say, “lowering entropy killed the rat”. That is not a good situation if one is arguing that entropy must be reduced to make life possible!

    There is a certain kind of information-type entropy in life that is not captured in typical thermodynamics even in some advanced physics books, not even, as far as I can tell, in Jaynes’ generalized version. There is a lot of work to be done; maybe I’ll sketch out some suggestions.

    I hope the ID community can take Dr. Sewell’s intuition and frame it in formal language in the way you summarized above.

    If there is an entropy associated with life, we need a means of measuring it or estimating it. When the rat died via freezing and its thermal entropy was siphoned out, we need a way of stating how its death-type entropy went up.

    Right now we don’t have the formalisms to show this, even though intuitively it is painfully obvious some sort of entropy and irreversibility (death) was introduced into the system.

    The convention I suggest:

    1. use the phrase specified-information as currently used in Bill Dembski’s books NFL and The Design Inference, not his more complex paper that I called CSI Version 2.

    2. use the phrase specified-entropy to indicate when levels of specified information deteriorate. That is, as specified entropy goes up, specified information goes down by exactly the same amount.

    These conventions will then conform at least with industry practice in a way that IT engineers can understand, and further they will separate specified-entropy from thermal-entropy, so that Darwinists cannot equivocate between the terms and Dr. Sewell’s arguments can be formulated in a defensible way.
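    A toy sketch of the proposed bookkeeping (the class and attribute names here are illustrative inventions, not an established formalism):

```python
class SpecifiedInfoLedger:
    """Toy bookkeeping for the convention above: specified-entropy rises
    by exactly the amount that specified-information falls."""

    def __init__(self, specified_bits):
        self.initial_bits = specified_bits    # bits when fully functional
        self.specified_info = specified_bits  # current specified information

    @property
    def specified_entropy(self):
        # Degradation to date, in bits.
        return self.initial_bits - self.specified_info

    def degrade(self, lost_bits):
        """Record a loss of specified information (e.g. mutational damage)."""
        self.specified_info -= lost_bits

ledger = SpecifiedInfoLedger(500)
ledger.degrade(120)
print(ledger.specified_info, ledger.specified_entropy)  # 380 120
```

    The point of the convention is visible in the invariant: specified_info + specified_entropy stays constant, so the two quantities cannot be equivocated with thermal entropy measured in J/K.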

    I’m not intending to be harsh, but the terminology used by Dr. Sewell is quite open to criticism from within ID quarters, and I’ve probably been the most vocal ID proponent regarding the terminology (the frozen rat example shows why ill-chosen terminology will lead to confusion).

    It’s a fixable problem, as far as I can tell, but it’s a problem that needs fixing. We owe it to the next generation of ID students matriculating through science schools.

    I might post some conventions that I think can be workable.

  19. Sal @8:

    Consider a living rat initially at room temperature. Cool the rat down to near absolute zero. Estimate the change in entropy, and state the number in units of Joule/Kelvin

    Well, this example certainly underscores your point that we won’t get anywhere without good definitions. :(

    The only reason this example even tempts a student to think that the life of the rat is associated with the measured entropy, is because the example (carefully or sloppily?) hides the fact that the “measurement” that allegedly is associated with the living rat has precisely nothing to do with the living rat. A moment’s reflection should be adequate to tell us that the alleged “connection” between the change in the measured entropy and the rat’s death is a figment of our imagination.

    We might as well have said: “Take a pound of hamburger at room temperature. Cool it to near absolute zero. Estimate the change in entropy . . .”

    or

    “Take a pound of sand at room temperature. Cool it to near absolute zero. Estimate the change in entropy . . .”

    Such examples would have been just as meaningful.

    The problem is that the room temperature measurement has absolutely nothing to do with describing what a rat is, as a living, breathing, animate animal. And the reason the rat died is not because of a change in temperature (at least not directly). It is because some of the myriad complex, intricate, specified chemical and physical reactions that help make up the essence of a living rat cannot occur at certain temperatures. So? All that tells us is that temperature is yet one more specification that has to be taken into account when building a living creature. Unfortunately, our example doesn’t even begin to shed any light on the real issues.

    So congratulations! You’ve made two temperature measurements, neither of which have the slightest thing to do with what a living rat is. Then you’ve tried to say that because there is a difference between the two measurements (the entropy decreased) the rat died. Absolute hogwash.

    I continue to be astounded at how straight-forward concepts can become convoluted by terrible examples. As a result, here we are yet again wringing our hands over non-existent problems with ID.

    —–

    BTW, please invite me to talk to your students. I’d be happy to talk to them ‘with a straight face’ and walk them through the logic and hidden problems with some of these recent issues we’ve discussed. :)

  20. scordova, re. #18:
    Well, you’re right that it isn’t mathematically rigorous. I’m not an expert in the mathematics of the Second Law, but my understanding is that currently there exists mathematical rigor only when dealing with limited contexts involving heat exchange and diffusion.

    It would be great if the notion could be generalized in a way that lends itself to rigorous mathematical treatment. I hope someone is successful in that endeavor.

  21. SC:

    The first problem is, we don’t do what accountants do, and rule up balance sheets and income statements that reckon category by category with all the accounts. One would not substitute an asset for a liability in accounting for instance.

    Similarly, there are various aspects of the micro-level distribution of mass and energy that though coupled do not simply and easily substitute the one for the other. And doing the equivalent of torching a business, books, records and all does not answer the question.

    On balance, I think that using the statistical micro view to bring up the issue of spontaneous vs intelligent ordering and organisation is a start. Then, we can address the way we normally discuss information rich functionally specific organisation using info metrics that bring in functionally constrained specificity.

    Years ago, I spoke of a vat of parts for a micro jet buzzing around through Brownian motion, which is linked to diffusion. The spontaneous clumping of parts and the organisation of clumped parts to yield a flyable jet on random molecular forces is maximally implausible. But, add an army of programmed nanobots — my substitute for Maxwell’s Demon — and voila, it makes a lot more sense. This is directly parallel to the implausibility of a set of Hoylean tornadoes passing through Seattle and building 747s contrasted with the routine fact of Boeing doing same in accord with strictly controlled manufacturing processes.

    In that context, mix in the vNSR and Mignea’s requisites. A dash of Paley’s thought exercise self replicating watch will help. (BTW, why is it so rare to see a discussion of the far more relevant case in his Ch 2?)

    Then, we can go to Darwin’s warm little pond etc and ask and address pointed questions.

    In short I think the whole point is to set in context the significance of FSCO/I.

    KF

  22. So congratulations! You’ve made two temperature measurements, neither of which have the slightest thing to do with what a living rat is.

    Temperature has a lot to do with what a living rat is. A living rat is a warm rat; it cannot be a frozen rat.

    A rat with almost no thermal entropy is a dead rat.

    Living things need entropy. Life is not possible if it does not have thermal entropy. Life is not possible if it does not have Shannon entropy (it needs lots of Shannon entropy). Complex functioning designs need lots of Shannon entropy and a certain amount of thermal entropy to function.

    This anti-entropy bias in the ID community is not helpful. Entropy, like temperature, is neither good nor bad; having the right amount is what is important.

    The problem is that the room temperature measurement has absolutely nothing to do with describing what a rat is, as a living, breathing, animate, living animal. And the reason the rat died is not because of a change in temperature (at least not directly).

    Change in temperature has everything to do with killing the rat. Siphoning out all that thermal entropy from the rat kills it. Near-zero temperature and near-zero thermal entropy destroy the rat’s design. Thermal entropy is a necessary part of a rat’s design.

    For a rat to live, it has to keep making its contribution to the increasing entropy in the universe. It will die if it is not allowed to do its part and increase the entropy in the universe.

    Thermal entropy is a good thing, it is a necessary part of intelligent design of living systems. A design with no thermal entropy is a dead design. A design with no Shannon entropy is a dead design.
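    For concreteness, the “siphoning” can be put in rough numbers. This is a back-of-envelope sketch only: it treats the rat as ~0.3 kg of water with constant heat capacity, an approximation that fails badly near 0 K (where heat capacities vanish), so the cooling is taken only down to liquid-nitrogen temperature:

```python
import math

def delta_S_cooling(mass_kg, c_J_per_kgK, T1, T2):
    """Entropy change for cooling a body with constant heat capacity:
    dS = m * c * ln(T2/T1), negative when T2 < T1."""
    return mass_kg * c_J_per_kgK * math.log(T2 / T1)

# Assumed: ~0.3 kg "water rat", c ≈ 4186 J/(kg·K),
# cooled from body temperature (310 K) to liquid nitrogen (77 K).
dS = delta_S_cooling(0.3, 4186, 310, 77)
print(round(dS))  # ≈ -1749 J/K removed from the rat
```

    Even on this crude model, on the order of a couple of thousand J/K of thermal entropy leaves the rat, which is the quantity the textbook Joule/Kelvin question is actually probing.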

    BTW, please invite me to talk to your students. I’d be happy to talk to them ‘with a straight face’ and walk them through the logic and hidden problems with some of these recent issues we’ve discussed. :)

    Consider a student hears “the 2nd law of thermodynamics prevents evolution from happening”. Where is the proof of that from the Kelvin-Planck or Clausius statement of the 2nd law? Nowhere. That’s because the 2nd Law (Kelvin-Planck) deals with distribution of heat energy; it says nothing about the irreversibility of death. It cannot distinguish a dead rat from a live one, so how then will it prevent the evolution of life since it is pretty much neutral on the issue?

    Using the Kelvin-Planck version of the 2nd law to argue against the evolution of life is like using Newton’s 2nd law of motion to argue against the evolution of life. Most science students who actually know how to compute entropy numbers and state them in units of Joules/Kelvin or Joules/Kelvin/Mole will go, “huh, that’s a non-sequitur?”

    Jaynes’ generalized 2nd law might offer some hope. We aren’t there yet. These recent discussions at UD are a precursor to laying out entropy metrics that won’t make ID-friendly chemists, physicists, and mechanical engineers cringe.

    ID is indicated in life, but appeals to the Kelvin-Planck version of the 2nd law aren’t the proper way to argue ID.

    Here is the Kelvin-Planck statement of the 2nd law:

    It is impossible to devise a cyclically operating device, the sole effect of which is to absorb energy in the form of heat from a single thermal reservoir and to deliver an equivalent amount of work.

    Ok, anyone is welcome to demonstrate how we can start with that premise and show life can’t evolve. It’s like using Newton’s laws of motion to argue against chemical evolution. It is the wrong set of tools for the job. It’s like using a chain-saw when you need tweezers.
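    By contrast, here is the kind of question the Clausius/Kelvin-Planck formulations do answer, as a minimal worked example:

```python
def entropy_generated(Q, T_hot, T_cold):
    """Net entropy change of the universe when heat Q flows spontaneously
    from a hot reservoir to a cold one (Clausius form of the 2nd law)."""
    return Q / T_cold - Q / T_hot

# 1000 J of heat flowing from a 400 K reservoir to a 300 K reservoir:
dS_universe = entropy_generated(1000, 400, 300)
print(dS_universe)  # ≈ 0.833 J/K, positive as the 2nd law requires
```

    Note what the calculation is silent about: nothing in it distinguishes a live rat from a dead one, which is exactly the point being made above.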

  23. Here is a question which anyone is free to answer.

    Suppose a virus or some other primitive life form evolved into a human (via intelligent design or some other mechanisms, whatever). Did the process increase the entropy in the design or lower it?

    That is to say, does a living human being have more entropy (thermodynamic and/or Shannon entropy) than a virus?

    My answer:

    the human has more Shannon entropy in its design, and the human has trillions of times more thermal entropy than a virus.

    What answer will you get by applying textbook chem, physics, and engineering thermodynamics? I think it is the same one.

    In light of this, did the evolution from a virus to a human (even supposing an Intelligent Designer did the evolving) require reducing or increasing the Shannon and thermal entropy of the design? I say it required INCREASING Shannon and thermal entropy, not getting rid of it.

  24. PS: I note your frozen rat example more illustrates the impact of undermining a requisite of C-Chemistry, AQUEOUS medium cell based life than anything else. The rat expires long before it freezes much less goes to the near absolute zero condition — I think, IIRC (but we need PG or GP or another physician) humans can die of exposure if unprotected in the 50′s F for long enough. 10 minutes in sufficiently cold water kills too. The homeostasis, aqueous medium requirement fits with a narrow range of temperatures being suitable for biofunction. Take the rat out of that range by imposed heat or cold and the delicately balanced mechanisms falter and fail triggering the irreversible change of death. Taking it out of range by robbing it of air for a few minutes has a similar effect. Inserting compounds that interfere with the biochemistry, the same. In short FSCO/I is a fine tuned system. And functionally specific fine tuning of complex systems is a sign of design.

  25. PPS: One of my old profs reminds on the limits of math in sciences including physics (and his context was statistical mechanics and thermodynamics). A drunk finds another down on hands and knees under a street light. What happened, can I help? I lost my contact lenses. So, he joined him. After maybe 20 minutes, the first says, I can’t find it, are you sure you lost them here? Oh no, I lost them over there in the dark, but this is where the light is. That is, ideal mathematical cases amenable to analysis or good approximation/simulation — recall, even a three-body problem in gravitation is troublesome, and quantum cases are notoriously intractable — are quite limited relative to the sheer challenges of actual reality out in the dark. But, they are what we have, and they have a track record of helping a lot.

  26. PPPS: Also, there may be a system boundary problem. That is why I start from a vat of diffused components and move to clumping, then to functional organisation, comparing config spaces. Let’s say we want a time-keeping, self-replicating micro-watch that draws in solar power and obviously uses parts from the pool of such in a vat, say a 1 m cube for simplicity, with 10^5 required parts envisioned as cubes 10^-6 m across.

    The scattered state has ever so many more possible states, there being 10^18 cube-sites of that size in the vat, and 10^5 parts to be distributed freely across them. The atomic and temporal resources of the solar system or observed cosmos could not exhaust the number of possible configs, by a long shot.

    Clumped, we have 10^-13 cu m of actual watch parts, and maybe a bit more of required empty spaces. But 10^5 parts could take up ever so many possible shapes and arrangements within those shapes. Again, beyond cosmic-scale search resources. Then, finally, we must arrange the parts to give a viable solar power supply, a time-telling watch, a vNSR and a relevant stored program to drive the process . . . a bar of cams would do. To find this within the resources of a cosmos, blindly, would be maximally implausible.

    But also, as we move from scattered to clumped and then to functionally organised, the number of acceptable ways to have mass and energy arranged at micro levels falls sharply at each stage. That is, entropy is falling as functionally specific information content rises. The idea that this happens spontaneously is utterly implausible. But pour in an army of programmed, co-operating microbots and the picture shifts dramatically. The clumping and organising work is now intelligently directed. Where, of course, the vat and surroundings indubitably suffer an off-setting rise in degraded energy, aka entropy.
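    The scattered-state count in the vat example can be estimated numerically; `log10_comb` is a helper name introduced here for illustration:

```python
import math

def log10_comb(n, k):
    """log10 of the binomial coefficient C(n, k), computed via log-gamma
    because the exact integer would be astronomically large."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(10)

# 10^5 parts distributed over 10^18 cube-sites in the 1 m vat:
sites, parts = 1e18, 1e5
print(log10_comb(sites, parts))  # ≈ 1.34e6, a number with over a million digits
```

    Even counting only which sites are occupied (ignoring part identities and orientations), the scattered-state count has on the order of a million digits, dwarfing the ~150-digit 500-bit threshold discussed earlier.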

  27. Is increasing entropy good for a protein? Yes.

    The Wonderful roles of Entropy in Protein Dynamics

    A protein becomes more stable when the free energy is minimized; this is done by maximizing entropy, not minimizing it.

    In the protein folding process, the first stage, i.e., the rapid hydrophobic collapse (Agashe, Shastry & Udgaonkar, 1995; Dill, 1985), is in fact driven by the effect of the solvent entropy maximization.

    Like heat, entropy is not necessarily bad, it depends on the context. In this context of this particular protein folding example, maximum entropy is good.

    On the other hand, if you’re building a steam engine, maximum entropy is bad.

  28. F/N: Paley’s self-replicating watch example from his Ch 2, as previously presented in the voice of his ghost, here at UD some months back:

    ____________

    >>Suppose, in the next place, that the person who found the watch should after some time discover, that in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself—the thing is conceivable; that it contained within it a mechanism, a system of parts—a mould, for instance, or a complex adjustment of lathes, files, and other tools—evidently and separately calculated for this purpose; let us inquire what effect ought such a discovery to have upon his former conclusion.

    I. The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done— for referring the construction of the watch to design and to supreme art. If that construction without this property, or which is the same thing, before this property had been noticed, proved intention and art to have been employed about it, still more strong would the proof appear when he came to the knowledge of this further property, the crown and perfection of all the rest.

    II. He would reflect, that though the watch before him were in some sense the maker of the watch which was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair—the author of its contrivance, the cause of the relation of its parts to their use. With respect to these, the first watch was no cause at all to the second: in no such sense as this was it the author of the constitution and order, either of the parts which the new watch contained, or of the parts by the aid and instrumentality of which it was produced. We might possibly say, but with great latitude of expression, that a stream of water ground corn; but no latitude of expression would allow us to say, no stretch of conjecture could lead us to think, that the stream of water built the mill, though it were too ancient for us to know who the builder was. What the stream of water does in the affair is neither more nor less than this: by the application of an unintelligent impulse to a mechanism previously arranged, arranged independently of it and arranged by intelligence, an effect is produced, namely, the corn is ground. But the effect results from the arrangement. The force of the stream cannot be said to be the cause or the author of the effect, still less of the arrangement. Understanding and plan in the formation of the mill were not the less necessary for any share which the water has in grinding the corn; yet is this share the same as that which the watch would have contributed to the production of the new watch, upon the supposition assumed in the last section. Therefore,

    III. Though it be now no longer probable that the individual watch which our observer had found was made immediately by the hand of an artificer, yet doth not this alteration in anywise affect the inference, that an artificer had been originally employed and concerned in the production. The argument from design remains as it was. Marks of design and contrivance are no more accounted for now than they were before. In the same thing, we may ask for the cause of different properties. We may ask for the cause of the color of a body, of its hardness, of its heat; and these causes may be all different. We are now asking for the cause of that subserviency to a use, that relation to an end, which we have remarked in the watch before us. No answer is given to this question, by telling us that a preceding watch produced it. There cannot be design without a designer; contrivance, without a contriver; order, without choice; arrangement, without any thing capable of arranging; subserviency and relation to a purpose, without that which could intend a purpose; means suitable to an end, and executing their office in accomplishing that end, without the end ever having been contemplated, or the means accommodated to it. Arrangement, disposition of parts, subserviency of means to an end, relation of instruments to a use, imply the presence of intelligence and mind. [--> that word implies the strength of the inference] No one, therefore, can rationally believe that the insensible, inanimate watch, from which the watch before us issued, was the proper cause of the mechanism we so much admire in it—could be truly said to have constructed the instrument, disposed its parts, assigned their office, determined their order, action, and mutual dependency, combined their several motions into one result, and that also a result connected with the utilities of other beings. All these properties, therefore, are as much unaccounted for as they were before.

    IV. Nor is any thing gained by running the difficulty farther back, that is, by supposing the watch before us to have been produced from another watch, that from a former, and so on indefinitely. Our going back ever so far brings us no nearer to the least degree of satisfaction upon the subject. Contrivance is still unaccounted for. We still want a contriver. A designing mind is neither supplied by this supposition nor dispensed with. If the difficulty were diminished the farther we went back, by going back indefinitely we might exhaust it. And this is the only case to which this sort of reasoning applies. Where there is a tendency, or, as we increase the number of terms, a continual approach towards a limit, there, by supposing the number of terms to be what is called infinite, we may conceive the limit to be attained; but where there is no such tendency or approach, nothing is effected by lengthening the series. There is no difference as to the point in question, whatever there may be as to many points, between one series and another—between a series which is finite, and a series which is infinite.

    A chain composed of an infinite number of links can no more support itself than a chain composed of a finite number of links. And of this we are assured, though we never can have tried the experiment; because, by increasing the number of links, from ten, for instance, to a hundred, from a hundred to a thousand, etc., we make not the smallest approach, we observe not the smallest tendency towards self support. There is no difference in this respect—yet there may be a great difference in several respects—between a chain of a greater or less length, between one chain and another, between one that is finite and one that is infinite. This very much resembles the case before us. The machine which we are inspecting demonstrates, by its construction, contrivance and design. Contrivance must have had a contriver, design a designer, whether the machine immediately proceeded from another machine or not. That circumstance alters not the case . . . [Natural Theology, Ch 2, 1806.] >>
    ____________

    In short, as Paley’s ghost would tell us, there is more to the story than commonly meets the eye or ear, and that has patently been so a full half century before Darwin wrote.

    KF

  29. KF,

    I suggested a fix. I hope it helps.

    http://www.uncommondescent.com.....-concepts/

    Sal

  30. Sal @22:

    That is completely wrong.

    I fear your love for your example/analogy is preventing you from seeing the clear flaws in the example.

    Sal, tell me this:

    If I drop a living rat from room temperature to 5 degrees below room temperature, do we have a decrease in entropy (in your language)? Of course we do. Is the rat dead? Of course not.

    The only relationship between the two is that living organisms can’t live outside a certain temperature range. Big deal. That just means we have identified another specification for how the particular chemical reactions have to be put together. We could get the same result (dead rat) by raising the temperature too far.

    Your rat analogy is a classic example of the correlation equals causation fallacy! We might as well say that the sidewalk is hot because my carefully-crafted ice cream sculpture melted. And when we see our dead rat we can shout all we want about temperature and entropy, but we are confusing correlation with causation.

    We have two facts: (i) entropy decreases with reduction in temperature (in your particular use of the term ‘entropy’); and (ii) a rat can only live within a particular temperature range. Both of those independent facts happen to relate to temperature. But that does not mean that (i) causes (ii).

    Please, please rethink this before digging in your heels.

    The rat example in the way it is presented does no service to your students and just confuses the issue regarding living organisms, which is specification, not temperature.

  31. Gordon Davisson, et al:

    Back to Sewell’s argument. Here is a refinement to the language of my comment #15 which makes the case even more starkly obvious: In the second paragraph, replace the sentence, “The emergence of living things along with computers, skyscrapers, aircraft, etc., is extremely improbable on a barren planet if only the laws of physics and chemistry are in operation there (ie., no intelligent intervention).” with the sentence, “That the matter on a barren planet should spontaneously rearrange itself into living things along with computers, skyscrapers, aircraft, etc., is extremely improbable.” The paragraph would then read,

    Sewell’s fundamental argument in its most economical form is simply this: If something is extremely improbable in a closed system, it is still extremely improbable if that system is open unless something is entering through the system boundary that makes it not extremely improbable. That the matter on a barren planet should spontaneously rearrange itself into living things along with computers, skyscrapers, aircraft, etc., is extremely improbable. This is a consequence of the Second Law, or at least a generalization of it. If it is claimed that the input of sunlight, cosmic rays, and meteor fragments can make their appearance on the planet not extremely improbable, then the case must be made for how that could be. So far, that case has not been made.

    In my view, the real thrust of Sewell’s argument is that it places the burden of proof squarely where it belongs—on the shoulders of those who claim that life could have arisen and evolved solely through the operation of material causes.

  32. BD & EA:

    Well summarised, and significantly noted respectively.

    Reduction in body temperature for a rat would obviously be accompanied by loss of entropy, as heat has to exit and does so at the prevailing body temperature . . . though this is complicated by the variation of temperature at points in the body. But dS >/= d’q/T, per Clausius.
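For a rough feel of the numbers, here is a hedged Python sketch of the body-side entropy change on cooling, treating the rat as water-like tissue at a single uniform temperature. Both the mass and the specific heat are assumed round figures, not measurements:

```python
from math import log

# Assumed round figures: a ~0.3 kg rat modeled as water-like tissue
# at one uniform temperature, cooled quasi-statically by 5 K.
m = 0.3                  # kg, assumed body mass
c = 4186.0               # J/(kg*K), specific heat of water as a stand-in
T1, T2 = 310.0, 305.0    # K, before and after cooling

# Body-side entropy change, dS = m*c*ln(T2/T1); negative, since heat
# exits -- consistent with dS >/= d'q/T per Clausius.
dS_body = m * c * log(T2 / T1)
print(f"dS_body ~ {dS_body:.1f} J/K")  # a modest decrease, ~ -20 J/K
```

The quasi-static, uniform-temperature assumption is exactly the state-function shortcut mentioned elsewhere in this thread: compute between end states, ignore the messy real path.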

    The pivotal focus for Sewell is indeed to account for the origin of life from atoms and molecules on a barren planet in light of the issue implied by relative statistical weights of clusters of microstates. Or in simpler terms, if something is extremely improbable in a closed or isolated system [I am using the stricter terminology: isolated, no mass or energy flows in/out; closed, no mass flows] it does not become probable unless something coming in makes a drastic change.

    A mahogany tree — as was planted at strategic spots on my old uni campus — does not spontaneously appear in the soil at a given point, absent seed, germination, growth and suitable weather and climatic conditions. (Ideally, taking centuries to produce the sort of magnificent timbers that were long since harvested out.)

    That seed has in it the genetic info and systems to couple to the environment to form the tree.

    But Darwin’s pond or the other proposed OOL contexts cannot start from that or the like.

    What, then, is the probability amplifier given the search space challenge once we are beyond 500 – 1,000 bits of FSCO/I, and given that cell based life forms credibly require minimal genomes of 100 k – 1 mn bits?
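A quick Python sketch of the scale of the 500-bit threshold. The blind-search budget below uses assumed round numbers (atom count, state-sampling rate and duration are illustrative, not measured):

```python
from math import log10

# The search space doubles per bit, so 500 bits spans 2^500 configs.
bits = 500
space_log10 = bits * log10(2)   # ~150.5
print(f"2^{bits} ~ 10^{space_log10:.1f}")

# An assumed, generous solar-system blind-search budget (round numbers):
# ~10^57 atoms, each sampling ~10^14 states/s, for ~10^17 s.
budget_log10 = 57 + 14 + 17     # = 88, log10 scale
print(f"shortfall ~ 10^{space_log10 - budget_log10:.0f}")
```

Even on these deliberately generous assumptions the budget falls over sixty orders of magnitude short of the 500-bit space, which is the point of the threshold.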

    Going beyond, given the limits observed on incremental co-ordinated mutations and time/no. of generations to fix per population genetics [remember that 200+ MY calc for the human line to fix a few mutations] how do we account for the origin of major body plans and features on the same underlying entropy vs FSCO/I issues?

    Surely, atoms and molecules in cells are still prone to the underlying forces and factors of thermodynamics, and mutations are blind chance events.

    Novel body plans credibly require 10 – 100 mn bits over and above a basic living cell. That is far, far beyond the blind search capacity of our solar system or the observed cosmos — the ONLY observed cosmos, never mind how common multiverse speculations dressed up in lab coats are.

    It is reasonable to ask of those who propose the dominant school of thought’s explanations that they show evidence of capacity to address such issues and create FSCO/I through blind chance and mechanical necessity, as well as showing some characteristic traces that mark this as the credibly superior explanation of the actual course of the origin and diversification of life on Earth.

    We do know that FSCO/I routinely arises by design, that it is ONLY seen to arise by design, and that life forms, living or fossil, are chock full of it.

    What compelling empirical — as opposed to ideologically driven, a priori evolutionary materialism controlled . . . — grounds can the dominant school of thought offer us that suffice to overturn the obvious inference on FSCO/I given that set of facts?

    KF

  33. F/N: This has ended up as a discussion across various threads, so I will cross-post a comment just made here:

    _____________

    >> SC:

    I must again draw attention to Sewell’s directly stated and/or implied context, with particular reference to OOL.

    What probability amplifier renders the access to functionally specific complex organised configs of atoms and molecules reasonably plausible, in absence of design?

    Remember, in that context, the only relevant forces and factors are strongly thermodynamic.

    Until there is a solid and fair answer to this, the observed pattern of exchanges and dismissals of Sewell amounts to little more than shifting the subject to a different context to attack and dismiss the man. Which has been pretty obvious in the case of far too many retorts I have seen, including the PT one raised by GD in my original thread of discussion.

    Now, the rat example can be seen in the context that heating it up sufficiently will ALSO kill it. So, doing +A and -A will both achieve the fundamentally same destructive result, though by different specific effects.

    That is, the rat’s biochemistry works in a given temp band.

    Heat up or cool down enough and it will die, as the chemistry breaks down.

    Simple thermal entropy calculation is largely irrelevant to the cause of death and moreso to the origin of the FSCO/I to be explained.

    Irreversibility is even less relevant as essentially all real world thermodynamic processes are irreversible; we often use entropy’s state function character to do a theoretical quasi-static equilibrium process calc between the relevant states and take the result.

    The Shannon metric of info-carrying capacity is not on the table, functionally specific complex info is. A DVD that says Windows XP but is garbled may have more Shannon info than a properly coded one. But that is beside the point.

    Maybe you were not watching when we had to deal with the MG attempt by Patrick May et al, but one upshot was to produce a simplification of the 2005 Dembski CSI metric applicable to this, Chi_500 = I*S – 500, bits beyond the solar system threshold. I is a relevant info metric [one may use Shannon or something like Durston et al's fits measure that brings to bear redundancies observed in life forms], and S a dummy variable that is 1 on having an objective reason to see functional specificity, 0 otherwise. (And yes that means doing the homework to identify whether or not you are sitting on one of Dembski’s zones T in fields of wider possibilities W. Functionally specific coding is a classic case, as in DNA and proteins derived therefrom.)

    I did use an earlier simple metric that does much the same job but this one is directly connected to Dembski’s work.
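For illustration, the Chi_500 expression above can be sketched in a few lines of Python. The function name and the worked 300-residue example are mine, added for illustration, not drawn from Dembski:

```python
# A minimal sketch of the Chi_500 metric described above:
# Chi_500 = I*S - 500, I an information measure in bits, S a 0/1
# dummy variable for objective functional specificity.
def chi_500(info_bits, functionally_specific):
    """Bits beyond the 500-bit solar-system search threshold."""
    S = 1 if functionally_specific else 0
    return info_bits * S - 500

# Assumed example: a 300-residue protein at ~4.32 bits per residue.
protein_bits = 300 * 4.32
print(f"{chi_500(protein_bits, True):.1f}")   # 796.0: past the threshold
print(f"{chi_500(protein_bits, False):.0f}")  # -500: not specific, no call
```

Note how S acts as a gate: without an objective specification the metric never goes positive, no matter how many raw bits are present.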

    I notice, it has been subjected to the same tactics Sewell has been, and that tells me that we are not dealing with something that is a matter of the merits (which is the sort of thing that has resulted in threats against my family, now culminating in attempts to reveal my personal residential address, which is target painting pure, ugly and simple — I am not interested in playing nicey-nice with people who are threatening my uninvolved family like that, or who by going along with such are enablers . . . ).

    Going back to the point being made on the nature of entropy, the informational view is a serious one, one that should not be simply brushed aside as though it is not saying anything worth bothering to listen to.

    As you know, we routinely partition entropy calcs, implicitly ruling out components that we do not deem directly relevant to a case. For instance, nuclear related forces and factors — which would immediately and decisively drown out anything we are doing with chemical, thermal, magnetic, electrical and optical or the like effects [being about 6 orders of magnitude beyond and often with a degree of imprecision that would make the things we are interested in vanish in the noise . . . ] — are routinely left out. But as Hiroshima, Nagasaki and nuke reactors routinely show, such are thermally relevant. What happens is that the contexts and couplings, or lack thereof, save under exceptional circumstances, allow us to routinely ignore these factors.

    Likewise, there is no doubt that thermal factors are relevant to anything to do with micro-scale arrangements of mass and energy. However, we can see that in relevant systems, certain arrangements are stabilised against thermal agitation effects and related phenomena — noting maybe a radiation triggered generation of ions and free radicals in a cell, leading to destructive reactions that destabilise it leading to radiation sickness and death as an “exception” that brings to bear nuke forces . . . — and can be similarly isolated from the context as there is a loose enough coupling. (Where also, such radiation and/or chemically induced random variation, at a much lower level of incidence, is a known major source of genetic mutations. Such chance mutations in turn being the supposed main source of incremental creative information generation used in neo-Darwinian and related theories. Differential reproductive success leading to pop shifts being not an info source but an info remover through lack of reproductive success leading to elimination of the failed varieties.)

    We can then use the state function additivity of entropy to address the arrangements of mass and energy at micro-level in these highly informational systems.

    Only, we don’t usually do it under a thermodynamic rubric, we do it under the heading: information-bearing macromolecules and their role in biochemistry and the life of the cell. That is, we are applying the old implicit ceteris paribus step of setting other things equal, common in economics and implicit in a lot of scientific work otherwise.

    However, as cooking [or a high enough, long enough sustained fever] shows, thermal agitation can affect the process. The same would hold for cooling down a rat sufficiently to kill it.

    Let me cite the just linked Wiki article on cet par:

    A ceteris paribus assumption is often fundamental to the predictive purpose of scientific inquiry. In order to formulate scientific laws, it is usually necessary to rule out factors which interfere with examining a specific causal relationship. Under scientific experiments, the ceteris paribus assumption is realized when a scientist controls for all of the independent variables other than the one under study, so that the effect of a single independent variable on the dependent variable can be isolated. By holding all the other relevant factors constant, a scientist is able to focus on the unique effects of a given factor in a complex causal situation.

    Such assumptions are also relevant to the descriptive purpose of modeling a theory. In such circumstances, analysts such as physicists, economists, and behavioral psychologists apply simplifying assumptions in order to devise or explain an analytical framework that does not necessarily prove cause and effect but is still useful for describing fundamental concepts within a realm of inquiry . . .

    Cet par, we deal with the micro-state clusters relevant to the dynamics of cell based life, which are informational and functionally specific.

    We see too that the informational view can be extended to the analysis of the underlying processes of thermodynamics. Again citing that Informational Entropy Wiki article as a useful first level 101:

    At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. [--> Just as, nuke level forces are again right off the chart relative to normal chemical, optical, magnetic and thermal interactions . . . ]

    But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
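The Landauer link quoted above can be put in numbers with a short Python sketch. The Boltzmann constant is the standard SI value; the 300 K figure is just a room-temperature assumption:

```python
from math import log

# Landauer's bound, as in the quote above: acquiring/erasing one bit
# costs at least k*ln(2) of thermodynamic entropy.
k_B = 1.380649e-23            # J/K, Boltzmann constant (exact SI value)

entropy_per_bit = k_B * log(2)
print(f"{entropy_per_bit:.3e} J/K per bit")    # ~9.570e-24 J/K

# Minimum heat dissipated erasing one bit at an assumed 300 K:
T = 300.0
print(f"{entropy_per_bit * T:.3e} J per bit")  # ~2.871e-21 J
```

The tiny size of k·ln 2 is exactly the Wiki article's point about the scale gap: per-bit thermodynamic costs are some twenty orders of magnitude below everyday entropy changes.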

    Thus, we see the direct relevance to Darwin’s warm little pond or the like.

    1 –> Open system, subject to mass inflows and outflows and energy inflows and outflows.

    2 –> Likely to pick up various chemicals from the environment, and subject to light etc.

    3 –> Absence of coupling mechanisms and systems that turn light, heat etc into shaft work that performs constructive tasks, yes.

    4 –> Presence of Brownian motion and, through molecular random motions, diffusion etc, yes.

    5 –> Presumed presence of monomers relevant to life, granted per argument.

    6 –> Presumed viable duration of same granted per argument.

    7 –> Problem: chirality, as there is no sustained effective thermal difference between L- and D- handed forms, so we normally see racemic forms. Import: apart from Glycine, 1 bit of info per monomer, where life relevant protein chains run about 200 – 300 monomers long. Result: 2 – 3 typical proteins are already at the solar system threshold.

    8 –> Problem: cross reactions and formation of tars etc. Major issue, but ignored per argument. Though, the issue of encapsulation, appropriate cell membranes and smart gating is implicated here.

    9 –> Focus, proteins and D/RNA to code for them. Life forms require 100′s of diverse mutually fitted proteins in correct functional organisation, at minimum involving genomes of 100,000 – 1,000,000 bases, read as an equivalent number of bits. This is well beyond the blind search capacity of the solar system and the observed cosmos.

    10 –> Presumed source of info: chance variations and chemistry, driven by thermodynamic forces and related reaction kinetics to form the relevant co-ordinated cluster of macromolecules exhibiting in themselves large amounts of FSCO/I.

    11 –> Say, 1,000 protein molecules of typical length at 4.32 functionally specific bits per character, say 1/5 of these being essential, others allowing for substitution while folding to function. 216,000 bits, plus for chirality (5% allowance for Glycine, 20 different monomers) with 1,000 proteins avg 250 monomers, 237,000 bits.

    12 –> 453,000 bits, order of mag consistent with a genome length of order as discussed. The solar system-cosmos scale threshold of blind search is 500 – 1,000 bits. Where also the search space DOUBLES per additional bit.

    13 –> 453,000 bits implies a search space of 3.87*10^136,366, vastly beyond astronomical blind search capacity.

    14 –> However, in bytes that is 56,625. Such a file size is well within reach of an intelligent source.

    15 –> The process also involves in effect undoing of diffusion to clump and properly organise the cell. The scope of that challenge is also well beyond astronomical blind search capacity.
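The arithmetic in points 7 and 11 – 14 can be re-run in a few lines of Python (same assumed inputs as above; the exact products round to the figures stated, and 453,000/8 works out to 56,625 bytes):

```python
from math import log10

# Same assumed inputs as points 7 and 11 - 14 above.
proteins, avg_len, bits_per_char = 1000, 250, 4.32

functional_bits = proteins * avg_len * bits_per_char / 5   # 1/5 essential
chirality_bits = proteins * avg_len * 0.95                 # 5% glycine allowance
total_bits = functional_bits + chirality_bits

print(f"{functional_bits:,.0f}")   # 216,000
print(f"{chirality_bits:,.0f}")    # 237,500 (rounded to 237,000 above)
print(f"{total_bits:,.0f}")        # 453,500, i.e. ~453,000 as stated

# Search space doubles per bit: log10(2^453000) gives the exponent.
exponent = 453_000 * log10(2)
print(f"2^453000 ~ 10^{exponent:.1f}")   # ~10^136366.6

print(f"{453_000 / 8:,.0f} bytes")       # 56,625: a modest file size
```

The exponent line reproduces the 3.87*10^136,366 figure in point 13 (10^136366.6 is the same number with the mantissa folded into the exponent).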

    _________

    We need answers to this, not distractions, dismissals, target-painting threats and hand waving. Remember, OOL is the root of the tree of life, and without the root on credible blind chance mechanisms, nothing else is feasible.

    Where, if design is at the root of the tree of life, there is no good reason to exclude it from empirical evidence anchored comparative difficulties based inferences to best explanation thereafter across the span of the tree of life to us.

    KF >>

    ______________

    I trust this will help focus the issue.

    KF
