Uncommon Descent Serving The Intelligent Design Community

“Specified Complexity” and the second law


A mathematics graduate student in Colombia has noticed the similarity between my second law arguments (“the underlying principle behind the second law is that natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view”), and Bill Dembski’s argument (in his classic work “The Design Inference”) that only intelligence can account for things that are “specified” (=macroscopically describable) and “complex” (=extremely improbable). Daniel Andres’ article can be found (in Spanish) here. If you read the footnote in my article A Second Look at the Second Law you will notice that some of the counter-arguments addressed are very similar to those used against Dembski’s “specified complexity.”

Every time I write on the topic of the second law of thermodynamics, the comments I see are so discouraging that I fully understand Phil Johnson’s frustration, when he wrote me “I long ago gave up the hope of ever getting scientists to talk rationally about the 2nd law instead of their giving the cliched emotional and knee-jerk responses. I skip the words ‘2nd law’ and go straight to ‘information'”. People have found so many ways to corrupt the meaning of this law, to divert attention from the fundamental question of probability–primarily through the arguments that “anything can happen in an open system” (easily demolished, in my article) and “the second law only applies to energy” (though it is applied much more generally in most physics textbooks). But the fact is, the rearrangement of atoms into human brains and computers and the Internet does not violate any recognized law of science except the second law, so how can we discuss evolution without mentioning the one scientific law that applies?

Comments
Pixie: I have responded at the blog. Onlookers are invited to go take a look. GEM of TKI. PS: It is probably worth excerpting here, as a close-off at UD, the excerpt I used from Sewell's main presentation of his case, as he cites above:
The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur. The discovery that life on Earth developed through evolutionary "steps," coupled with the observation that mutations and natural selection -- like other natural forces -- can cause (minor) change, is widely accepted in the scientific world as proof that natural selection -- alone among all natural forces -- can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article ["A Mathematician's View of Evolution," The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . . What happens in a closed system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in "Can ANYTHING Happen in an Open System?", "order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door.... If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth's atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here." Evolution is a movie running backward, that is what makes it special. THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn't, that atoms would rearrange themselves into spaceships and computers and TV sets . . .
Go to the link to follow up on the significance of that and how it develops. All the best.
kairosfocus
April 13, 2007, 02:49 AM PDT
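A short illustration of the probabilistic point in the excerpt above: diffusion toward uniformity is just what the laws of probability predict. The sketch below is illustrative only; the lattice size, particle count and step count are arbitrary choices, not figures from Sewell's paper or the linked article.

```python
import random

# Toy model: N "carbon atoms" on a 1-D strip, each hopping left or right at
# random. An initially lopsided distribution drifts toward uniformity, and a
# spontaneous return to the ordered "all on the left" state is astronomically
# improbable for large N.
N, STEPS, LENGTH = 1000, 2000, 100
positions = [random.randrange(LENGTH // 2) for _ in range(N)]  # start in the left half

for _ in range(STEPS):
    for i in range(N):
        step = random.choice((-1, 1))
        positions[i] = min(max(positions[i] + step, 0), LENGTH - 1)  # reflecting walls

left = sum(1 for p in positions if p < LENGTH // 2)
print(f"fraction in left half after {STEPS} sweeps: {left / N:.3f}")  # roughly 0.5
# At equilibrium, the chance that all N atoms sit in the left half is about (1/2)^N
print(f"chance of spontaneous re-ordering: about 2^-{N}")
```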
I have given up on this thread, and am continuing the discussion at the thread kairosfocus kindly started on his own blog here.
The Pixie
April 13, 2007, 12:10 AM PDT
OOPS! Shifts. [And Open Office AND Firefox did not spot the typo -- what does that tell us . . .]
kairosfocus
April 10, 2007, 05:02 AM PDT
Continuing . . .

4] Evolution can be considered a targeted search; the target being a species that flourishes in the given environment. Closeness to that target is rewarded by survival (that may be tautological, but that does not make it wrong).

Several notes:

a] First, an equivocation. "Targetted" in GAs is intelligent, not blind. (E.g. Set up antenna performance specs and randomise parameters to get the heuristically "best" outcome etc. Do a Monte Carlo pattern of runs on a model, etc. Trial and error by PC . . . ) By definition, NDT style evolution is precisely not based on a targetted, configurationally constrained search that rewards closeness to the identified target.

b] Second, "natural selection" is a term, not a creative force of reality. In the real world, what happens is that relatively well adapted individuals are more likely to thrive and reproduce, so their descendants dominate the population. There is no necessary tendency to drift. [Cf the discussion under the Blythian thread to see this.] NS is consistent with minor changes as observed [often oscillating, like Galapagos Finch beaks], it is consistent with stasis, and with loss of genetic information – how the founder principle often leads to new varieties and even "species." [NB how some of the Galapagos Finch species are interbreeding successfully now . . .]

c] The tautology issue comes up when the above is confused with a creative force. That the least unfit or better fitted survive and reproduce does not mean that they are innovative, adaptable to future unforeseen environmental shifts, etc. Frontloading is intelligently targetted by contrast, with local adaptability across multiple environments a major objective of the optimisation. (I am not advocating this, just noting.)

d] The biggie issue is information generation beyond the Dembski-type bound by in effect random processes, say 500 – 1000 bits, or about 250 – 500 monomers. Body plan level innovations to account for the Cambrian revolution require three or five or more orders of magnitude more than that. The whole gamut of the observed cosmos, 13.7 BY and 10^80 or so atoms [on a generous estimate!], is insufficient to credibly cross that threshold once, much less dozens or more times.

e] The issue comes back to the point in my jet in a vat, or TBO's protein in a prebiotic soup. The bridge to cross is the gap from scattered components to complex integrated functional whole, where a high threshold of functionality is required for minimal function to occur. Intelligent agents do this routinely. Random forces, for excellent reasons linked to the underlying analysis of stat thermodynamics, do not do so credibly within the gamut of the observed cosmos. So, absent a back way up Mt Improbable, there is an unanswered problem. And, as Ch 9 in TBO points out, it's slim pickings on back paths up Mt Improbable. [Shapiro's recent Sci Am piece just underscores the point . . .]

5] It seems your original post was approved and has already gone through . . .

GEM of TKI
kairosfocus
April 10, 2007, 04:55 AM PDT
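The order-of-magnitude comparison behind the 500 – 1000 bit threshold invoked above can be checked with a few lines of arithmetic. This is only a sketch in the spirit of Dembski-type universal-probability-bound estimates, not his derivation; the atom count and age are the figures cited in the comment, and the per-atom transition rate is an assumed, deliberately generous upper bound.

```python
from math import log10

atoms        = 1e80              # atoms in the observed cosmos (figure cited above)
age_seconds  = 13.7e9 * 3.156e7  # 13.7 billion years, in seconds
rate_per_sec = 1e45              # assumed state transitions per atom per second (generous)

max_events = atoms * age_seconds * rate_per_sec
print(f"events available in the observed cosmos ~ 10^{log10(max_events):.0f}")  # ~10^143

for bits in (500, 1000):
    # number of distinct configurations of a string of this many bits
    print(f"{bits}-bit configurations ~ 10^{bits * log10(2):.0f}")  # ~10^151 and ~10^301
```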
Continuing . . . Got through! Now on points:

1] 0 K: You can't get there – and that's important. [And at any accessible temp, Sconfig is a function of the number of states that pass the functional/macroscopic state test, random states in effect having a no-test test. For a unique code, once it is specified, its Sconfig is already zero. But of course it has thermal entropy etc.] Recall, for a system that can be so compartmentalised, Wsys = W1.W2. [This now standard trick, I believe, was originally used by Boltzmann to derive s = k ln w itself, using a hypothetical physical partitioning of the system. That is, compartmentalisation of statistical weights based on physical processes is an underlying assumption of the whole process. Thence my vats and nanobots again.]

2] Modes and degrees of freedom: Of course, we have freezing-out effects that on a quantum basis do separate modes "naturally." In TBO's case and my Vat exercise, once we see that the energy of bonds is more or less the same for any configuration, and there is no effective pressure-volume work being done, the enthalpy term is quasi-neutral across configurations. But, when not just any random or near-random config will do [TBO discuss this on proteins in Chs 8 and 9], programmed work is normally indicated to get to the specified one. Prebiotic soup exercises end up requiring more probabilistic resources than are available in the credible gamut of the observed cosmos. BTW, from the discussion and refs made in TMLO, the usage of configurational work and entropy they make is in the OOL lit from that time, i.e. this is again not a design thought innovation as such.

3] Wiki note:
It is often useful to consider the energy of a given molecule to be distributed among a number of modes. For example, translational energy refers to that portion of energy associated with the motion of the center of mass of the molecule. Configurational energy refers to that portion of energy associated with the various attractive and repulsive forces between molecules in a system. The other modes are all considered to be internal to each molecule. They include rotational, vibrational, electronic and nuclear modes. If we assume that each mode is independent (a questionable assumption) the total energy can be expressed as the sum of each of the components . . .
TBO's usage is of course in this spirit, bearing in mind that the molecules in view are endothermically formed so the work of clumping and that of configuring can reasonably be separated as mere clumping is vastly unlikely to get to the macroscopically recognisable functional state. Thus too, my vats example. Pausing 2 . . .
kairosfocus
April 10, 2007, 04:27 AM PDT
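A small numeric check of the factorisation Wsys = W1.W2 leaned on above: if the statistical weight factorises, then by S = k ln W the entropy splits additively, which is what licenses treating a clumping/thermal term and a configurational term separately. The weights below are toy values, not TBO's numbers.

```python
from math import log

k = 1.380649e-23   # Boltzmann constant, J/K

W_thermal = 1e12   # toy statistical weight for the thermal/clumping microstates
W_config  = 1e6    # toy statistical weight for the configurational microstates

S_total = k * log(W_thermal * W_config)            # S = k ln(W1 * W2)
S_split = k * log(W_thermal) + k * log(W_config)   # k ln W1 + k ln W2

print(S_total, S_split)                 # equal to floating-point precision
print(abs(S_total - S_split) < 1e-30)   # True
```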
Hi Pixie: I sympathise on the comment filtering issue -- having had some mysterious swallowings myself. On the other hand, in another thread this AM Dave Scott informed me they have had something like 90,000 spam messages in recent months, and very few of these have been filtered off improperly. [I think there is a two stage filter or something . . .] I have posted a thread for onward comments. Pausing [get the link out of the way first] . . .
kairosfocus
April 10, 2007, 04:14 AM PDT
Right, I give up. It is stupid when I have to submit an argument one sentence at a time to sneak it past the spam filter. My last seven word sentence was rejected, with nothing in any way offensive in it. Kairosfocus, can I suggest you start a thread at your blog on this, and we move over there.
The Pixie
April 9, 2007, 03:54 PM PDT
... continues Consider this thought experiment:
The Pixie
April 9, 2007, 03:49 PM PDT
... continues So follow it through. S(config) for DNA at 0 K is zero. The same for a random sequence, a simple repeating sequence and human DNA.
The Pixie
April 9, 2007, 03:19 PM PDT
kairosfocus Having big problems getting the next bit through, so I apologise for breaking it up so.
Pix: "S is zero at 0 K. Which agrees with the third law. For everything, no matter how complex." Correct and that is in part why you cannot get there, according to Nernst was it. One consequence would be perfectly efficient heat engines.
The Pixie
April 9, 2007, 03:17 PM PDT
9] Genetic Algorithms: These are not blind but designed, targetted searches that reward closeness to a target function well within the probabilistic resources of a computer. They are an instance of intelligent design that uses random processes.
Evolution can be considered a targeted search; the target being a species that flourishes in the given environment. Closeness to that target is rewarded by survival (that may be tautological, but that does not make it wrong). Yes, genetic algorithms are designed. That does not by itself imply that analogous processes must also be designed.
The Pixie
April 9, 2007, 03:14 PM PDT
I suspect you did Engg th-D; poss as Chem Eng. I did th-D in physics, as well as further relevant studies.
Straight chemistry, actually.
As noted long since, TBO are ANALYTICALLY separating dSth [as they label it — it is really clumping] and dS config, exploiting the state function nature of dS.
The W of S = k ln W can be broken into various ways in which energy can be stored in a system (eg as rotational energy, vibrational). I believe what TBO are doing is akin to trying to split out say vibrational energy. I think that is different to what you seem to describe. It certainly makes no sense to break the process into discrete steps, one in which vibrational energy increases, the second in which all other modes increase. The point about entropy is that the energy is distributed across all available modes. Interestingly, this Wiki entry mentions configurational as one of those energy modes. However, this is the energy associated with intermolecular forces, i.e., clumping together, rather than the configuration of a molecule.The Pixie
April 9, 2007, 03:06 PM PDT
kairosfocus
True, but immaterial...
Well, yes, the conversation has moved on, so my point is not relevant to what we are talking about now (thus, your points 1, 2 and 4 are responding to my objection to your vats argument, when my objection was actually to something else). However, I feel this is a fundamental problem with Sewell's argument.
The Pixie
April 9, 2007, 02:54 PM PDT
kairosfocus
True, but immaterial...
Well, yes, the conversation has moved on, so my point is not relevant to what we are talking about now (thus, your points 1, 2 and 4 are responding to my objection to your vats argument, when my objection was actually to something else). However, I feel this is a fundamental problem with Sewell's argument.
3] S is zero at 0 K. Which agrees with the third law. For everything, no matter how complex. Correct and that is in part why you cannot get there, according to Nernst was it. One consequence would be perfectly efficient heat engines.
Right, so follow it through. S(config) for DNA at absolute zero is zero. The same for a random sequence, a simple repeating sequence and human DNA. Consider this thought experiment... Heat up those DNA sequences to ambient. The sequences do not change. One DNA is still the same random sequence, one is the same repeating pattern, one is still human DNA. What is S(config) for each of those sequences now they are at ambient?
I suspect you did Engg th-D; poss as Chem Eng. I did th-D in physics, as well as further relevant studies.
Straight chemistry, actually.
As noted long since, TBO are ANALYTICALLY separating dSth [as they label it — it is really clumping] and dS config, exploiting the state function nature of dS.
The W of S = k ln W can be broken into various ways in which energy can be stored in a system (eg as rotational energy, vibrational). I believe what TBO are doing is akin to trying to split out say vibrational energy. I think that is different to what you seem to describe. It certainly makes no sense to break the process into discrete steps, one in which vibrational energy increases, the second in which all other modes increase. The point about entropy is that the energy is distributed across all available modes. Interestingly, this Wiki entry mentions configurational as one of those energy modes. However, this is the energy associated with intermolecular forces, i.e., clumping together, rather than the configuration of a molecule.The Pixie
April 9, 2007, 02:52 PM PDT
PS: Pixie, I think there is a "J" who comments at UD, so if you mean Joseph, you will need to differentiate . . . GEM of TKI
kairosfocus
April 9, 2007, 05:04 AM PDT
Continuing . . .

7] What is it, a hundred parts per clump (rather low for a jet plane, but perhaps not for a replicating protein), and say a mole of parts in the vat. So there are (say) a hundred factorial ways to arrange the parts, and 6×10^21 clumps. In the pre-biotic world, we are looking at something like 80 different amino acids [taking into account chirality], and selecting the correct set of 20 out of 80, 100 times, in any order. That gives me P = (1/4)^100 ~ 10^-60 of getting TO a chain with the right chirality, bio-relevant amino acids alone. [I am of course leaving off those few oddball proteins that use odd acids.] Then, factor in correct bonding about 1/2 the time, and the odds go down by a further (1/2)^100. Then we forget chain-stoppers and the odds of disassembly of such energetically unfavourable macromolecules. Of course metabolism-first scenarios require dozens of proteins to get going. And more. In short, the odds of getting TO biofunctionality by chance are such as begin to exhaust probabilistic resources real fast. That's why I think the man I call "Honest Robert" Shapiro is mistaken to champion this scenario, even as he accurately points out the core challenge of the RNA world model:
[OOL researchers] have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . . The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.
8] I throw in those randomiser bots, and they will arrange and rearrange the parts all the time. And every time they happen upon a nano-jet, a "jet pilot bot" will remove it from the vat. What do you think the end result will be? Simple: once we are dealing with a realistically complex nano-jet, nothing functional will happen by random processes in the lifetime of the earth, with very high probability. The JPBs will, with near-unity probability, search in vain.

9] Genetic Algorithms: These are not blind but designed, targetted searches that reward closeness to a target function well within the probabilistic resources of a computer. They are an instance of intelligent design that uses random processes. As Wiki notes, a typical genetic algorithm requires two things to be defined: (1) a genetic representation of the solution domain, and (2) a fitness function to evaluate the solution domain. Thus, GAs show ID, not blind RM + NS. Further to this, the level of information to be generated is well below the Dembski type bound, through the use of coded strings. [And, BTW, where did the required algorithms and coding schemes, as well as the complex functional machines to implement them, come from?]

GEM of TKI
kairosfocus
April 9, 2007, 04:49 AM PDT
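For reference, here is a bare-bones genetic algorithm in the sense of the Wikipedia definition quoted above, with the two designer-supplied ingredients (a genetic representation and a fitness function) made explicit. It is a generic textbook sketch, not any specific program discussed in the thread; the target string, population size and mutation rate are arbitrary choices.

```python
import random

TARGET = [1] * 64            # designer-chosen target
POP, GENS, MUT = 100, 200, 0.01

def fitness(genome):         # designer-chosen fitness function
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):          # random variation
    return [1 - g if random.random() < MUT else g for g in genome]

# genetic representation: fixed-length bit strings
population = [[random.randint(0, 1) for _ in range(len(TARGET))] for _ in range(POP)]

for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]   # selection rewards closeness to the target
    offspring = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + offspring

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", len(TARGET))
```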
Continuing . . . 4] Just invoking probabilities and macrostates does not make it second law stuff. Relative statistical weights of macrostates are precisely how entropy is measured and compared in stat mech. That is what s = k ln w is about, where w is the number of microstates associated with the relevant macrostate. Robertson astutely notes, as can be seen in my always linked:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms [pp. Vii – viii] . . . . the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context. [pp. 3 – 7, 36, Stat Thermophysics, PHI]
In short, information, probability and thermodynamics are intimately linked, once we address cases in which clusters of microstates are associated with macroscopically observable ones. 2 LOT is a part of that.

5] I once debated thermodynamics with a guy who had no idea about calculus. I suspect you did Engg th-D; poss as Chem Eng. I did th-D in physics, as well as further relevant studies.

6] The best you can do is break the process into two steps, one with S(config) approximately zero, the second with S(thermal) approximately zero. As noted long since, TBO are ANALYTICALLY separating dSth [as they label it -- it is really clumping] and dS config, exploiting the state function nature of dS. By giving a thought expt that shows how bonding/clumping [and bonding energies, hence internal energy state, the dE part of dH – I feel a pull to use my more familiar dU] can be differentiated from configuring, I have shown why they can do that.

Pausing . . .
kairosfocus
April 9, 2007, 04:24 AM PDT
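A minimal illustration of the uncertainty measure Robertson is describing, for the two cases he mentions: complete information (one outcome certain) and no basis for preferring any outcome (a uniform distribution). The four-outcome distribution is an arbitrary example.

```python
from math import log2

def H(probs):
    # Shannon uncertainty in bits: H = -sum p_i log2 p_i (terms with p = 0 contribute nothing)
    return -sum(p * log2(p) for p in probs if p > 0)

certain = [1.0, 0.0, 0.0, 0.0]       # one probability is unity: complete information
uniform = [0.25, 0.25, 0.25, 0.25]   # least possible information about the outcome

print(H(certain))   # 0.0 bits of uncertainty
print(H(uniform))   # 2.0 bits, the maximum for four outcomes
```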
Hi Pixie: Okay, today I paused to remember Vimy Ridge: 90 years to day and date. So pardon if I am a bit summary or rough and ready on points:

1] Some things are extremely unlikely, but not at all connected to the second law. True, but immaterial. The case in point in my thought expt, and those in TBO's analysis, have everything to do with 2 LOT, and particularly to do with why TdS can be split up into clumping and configuring stages, to yield a decrease in entropy associated with the work of creating an information-rich structure. Of course, as 2 LOT requires, the planned work reqd to do so overcompensates, so overall entropy increases. My corrective point stands: TBO made no mistake in splitting up TdS. In particular, I have shown why it makes sense for TBO to see that we can distinguish what I have re-termed "clumping" and configuring; and other cases where it may or may not happen that way are immaterial. The rest of their analysis follows.

2] It is a fallacy to claim that: Something is extremely improbable, therefore it is forbidden by the second law, therefore it will not happen. Notice, I have not ever spoken of "forbidden" by 2 LOT in the stat mech sense. I have pointed out that the equilibrium cluster of microstates overwhelmingly dominates the distribution and so fluctuations sufficiently far away from that equilibrium are utterly unlikely to occur. This is the basis for the classical observation that isolated systems do not spontaneously decrease their entropy etc. In particular, the relevant cases are such that the probabilistic resources of the observed cosmos are insufficient to credibly lead to an exception. [That your randomly selected cluster turns out to be functional for the "zip-zap-zop" is so likely to fall under this stricture that I simply discussed it as a "for the sake of argument" case. Remember that nothing forbids the air molecules all rushing to one end of your room, but the probabilities of such a fluctuation are so low we simply routinely ignore it.] Further to this, I also showed in my always linked – have you read it? -- that Clausius' case study no 1 of closed systems within an isolated system, with heat moving from one body to the next, will cause an INCREASE of entropy in the second body, absent coupling of the raw energy to do work. The issue at stake, therefore, is the ORIGIN of functionally specific, complex information-processing and -based systems beyond the Dembski type bound – exhaustion of the probabilistic resources of the observed cosmos. Thus, my vats example, especially the control vat.

3] S is zero at 0 K. Which agrees with the third law. For everything, no matter how complex. Correct and that is in part why you cannot get there, according to Nernst was it. One consequence would be perfectly efficient heat engines. [That is, we are on the way to perpetual motion machines . . .]

Pausing . . .
kairosfocus
April 9, 2007, 04:07 AM PDT
kairosfocus Okay, those vats. First off, splitting S(config) and S(thermal). If you want to know deltaH for a chemical reaction at a certain temperature, you could determine deltaH to cool/heat the reactants to standard conditions, look up the deltaH for the reaction, then determine deltaH to heat/cool the products to the stated conditions. You can break the one-step process into a number of steps, and as long as you start and end at the same point, the overall thermodynamics must be the same. I do not think you can use that sort of approach for S(config) and S(thermal). As I read TBO, these are two things going on in every process; you cannot do one and then the other. The best you can do is break the process into two steps, one with S(config) approximately zero, the second with S(thermal) approximately zero. Maybe this is what you mean (or maybe you disagree), but I want to clarify my understanding.
e] Now, pour in a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is of course, work, and it replicates bonding at random. We see here dSthermal] After a time, will we be likely to get a flyable nano jet?
No, I do not think we do see deltaS(thermal), though perhaps we see an analogy to it. Do we get flyable nano jets? Yes, some. How many depends on how many ways there are of putting the parts together, and how many clumps. What is it, a hundred parts per clump (rather low for a jet plane, but perhaps not for a replicating protein), and say a mole of parts in the vat. So there are (say) a hundred factorial ways to arrange the parts, and 6x10^21 clumps. Hmm, I have some new nanobots. These are "jet pilot bots". I throw them into this vat, and all the complete nano-jets are removed. Yeah, okay, probably none right now, but... Then I throw in those randomiser bots, and they will arrange and rearrange the parts all the time. And every time they happen upon a nano-jet, a "jet pilot bot" will remove it from the vat. What do you think the end result will be?
h] Now, let us go back to the vat. For a large cluster of vats, we use direct assembly nanobots, but in each case we let the control programs vary at random - say hit them with noise bits generated by a process tied to a zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate. Given the complexity of the relevant software, will we be likely to for instance come up with a hyperspace-capable spacecraft or some other sophisticated and unanticipated technology? Justify your answer on probabilistic grounds. My prediction: we will have to wait longer than the universe exists to get a change that requires information generation on the scale of 500 - 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?]
I am not even sure a hyperspace-capable spacecraft is possible, which could mean this task is impossible for an intelligent civilisation, let alone a vat of nano-bots. There is software that does do this sort of thing. It does not define nanobots, but it does come up with novel ideas, using RM + NS. It uses an idea called a Genetic Algorithm. I believe they do indeed generate information. Given that your process is computer-controlled, surely we can dispense with the imaginary nano-bots, and just look at software that really exists.
The Pixie
April 8, 2007, 03:40 PM PDT
J Only just noticed your post, sorry (and wanting to put off those vats...).
What intelligent (including human) agency can do, however, is generate results that are different from anything produced by blind/dumb/purposeless processes.
If by results you mean things like jumbo jets, then yes.
If one holds that all natural processes are blind/dumb/purposeless, then this implies that intelligence is “supernatural.”
I do not.
You left out a few adjectives that make all the difference: “unpredictably,” “indefinitely,” “of arbitrary character.”
Can you explain why these make a difference? Specifically, what is the relevance to the discussion? If I am slaying strawmen, it is because I cannot see what your point is.
The Pixie
April 8, 2007, 02:51 PM PDT
kairosfocus Happy Easter!
You are reiterating a point I have repeatedly made — “there is no logical or physical principle that forbids extremely improbable events” or the like — but to distract from its proper force. For, the stat mech form of 2 LOT shows that non-equilibrium microstates are sufficiently improbable in systems of the scale in question, that they will not reasonably be spontaneously accessed by macroscopic entities within the lifetime of the observed universe etc. Thus, the classical result.
No. What you say here is true, but entirely misses my point. Some things are extremely unlikely, but not at all connected to the second law. The chances of me winning the UK lottery on Saturday are 1 in 14 million (if that is not improbable enough, then consider winning the lottery for n consecutive Saturdays). That has nothing to do with entropy. It has nothing to do with the second law. Even if you describe it in terms of macrostates (winning the lottery is a single macrostate, not winning is 14 million), it still has nothing to do with the second law. It is a fallacy to claim that: Something is extremely improbable, therefore it is forbidden by the second law, therefore it will not happen. If you want to invoke the second law of thermodynamics, you have to look at the thermodynamic entropy.
The Pixie
Oops, that is I am afraid impossible under 3 LOT: no finite number of refrigeration cycles can reduce material objects to absolute zero. 0K is inaccessible though we can get pretty close to it.
I was talking hypothetically. But the boys in the theoretical lab have done some sums, based on S = k ln W. It turns out that from a consideration of the macrostates, S is zero at 0 K. Which agrees with the third law. For everything, no matter how complex.
However, in your case you have inadvertently smudged over the key point that we are looking at relative statistical weights of observable macrostates; in my case as functional ones – a flyable microjet. In yours, ultimately, a sub assembly for “a “zip-zap-zop.” [Notice that if the sub assembly works is something we can recognise macroscopically.]
What this comes down to is the point I made at the start. Just invoking probabilities and macrostates does not make it second law stuff. Even if TBO says it is.
PS: I should note on a likelihood point, once conceded BTW by Dawkins. If we see a flyable jet, the best explanation is intelligent agency, not a tornado in a junkyard. That is because relative to chance plus necessity only, the configuration is so improbable that we recognise at once that a force that makes it far more likely is the best explanation: agency. You will note that the phil of inference to best explanation is the underlying basis for scientific explanation.
Sure. You consider competing explanations, determine probabilities, and go with the most probable (and your confidence reflects the probabilities).
PPS: I took exception to the “do you know” point...
I asked about dS as I once debated thermodynamics with a guy who had no idea about calculus, and it was a long time before I realised that. His arguments were entirely different from yours, but you did seem to be using dS for deltaS, so I just asked for reassurance. I was afraid it would cause offense, and now that I have the reassurance, I apologise for asking (if that makes sense).
The Pixie
April 8, 2007, 02:36 PM PDT
PS: I should note on a likelihood point, once conceded BTW by Dawkins. If we see a flyable jet, the best explanation is intelligent agency, not a tornado in a junkyard. That is because relative to chance plus necessity only, the configuration is so improbable that we recognise at once that a force that makes it far more likely is the best explanation: agency. You will note that the phil of inference to best explanation is the underlying basis for scientific explanation. The point of course also holds for the vats expt, as a comparison with the control vat will show, and of course with the vats for which the control software has been randomised. PPS: I took exception to the "do you know" point, as this fits in all too closely with Dawkins-style prejudice and bigotry: if you object to NDT and broader evo materialism, you "must" be ignorant, stupid, insane or wicked. I freely confess to being a penitent sinner under reformation, but daresay that as a holder of relevant undergraduate and graduate degrees, I am none of the first three. So, for the sake of productive, civil discourse, let us not go there. GEM of TKI
kairosfocus
April 8, 2007, 03:28 AM PDT
Continuing . . .

3] I am curious if you know the difference between deltaS and dS? Obviously. Delta, strictly, is a finite increment; d is infinitesimal. [I am not bothering to make too much of the distinction here as we both know that we can convert the second into the first by integration, iterative summation of the increments [in the limit as dX or whatever -> 0] in effect. I assume you too have done at least high school calculus.]

4] First I will send in my "splodge assembler nanobots". These will assemble the parts into a random, but prespecified configuration (I call it a "splodge"). And then I will send the "randomiser nanobots" in; these rearrange the parts randomly. First, "a random, but prespecified configuration" is, strictly, a contradiction in terms. You probably mean a specified, complex [beyond 500 – 1000 bits of information] but non-functional configuration. These will indeed reduce the number of accessible microstates, and will do dSclump and dSconfig, but will not in so doing create a macroscopically functional macrostate. [Of course, after the fact "discovery" that an at-random selected targetted microstate is functional for something else will be utterly improbable; but more so, it will not undermine the force of my main point, namely that dSclump and dSconfig are in fact clearly distinguishable and incremental.] Rearranging the parts thereafter at random will simply expand the number of microstates corresponding to the macrostate -- any at-random clumped state will do. Again, the point I have made stands.

5] The cryogenics lab have successfully cooled the nano-jet, the nano-splodge and the mixture of products from the randomiser-bots to absolute zero. Oops, that is I am afraid impossible under 3 LOT: no finite number of refrigeration cycles can reduce material objects to absolute zero. 0 K is inaccessible, though we can get pretty close to it. In any case, on your real point:

6] Same entropy, but different configurations. Can you explain how that can be? When s = k ln w is based on the same number of accessible states, we are looking at the same value of entropy. However, in your case you have inadvertently smudged over the key point that we are looking at relative statistical weights of observable macrostates; in my case as functional ones – a flyable microjet. In yours, ultimately, a sub-assembly for a "zip-zap-zop." [Notice that whether the sub-assembly works is something we can recognise macroscopically.] In either case, we are looking at the same, now demonstrably plain point: dSclump and dSconfig are incrementally separable and analytically meaningful. So, TBO's analysis makes sense.

GEM of TKI
kairosfocus
April 8, 2007, 02:41 AM PDT
Ah, Pixie: First, Happy Easter. I will note on selected points, mostly in sequence.

1] Just because something is extremely unlikely, that does not make it forbidden by the second law. Not even when you invoke macrostates! You are reiterating a point I have repeatedly made -- "there is no logical or physical principle that forbids extremely improbable events" or the like -- but to distract from its proper force. For, the stat mech form of 2 LOT shows that non-equilibrium microstates are sufficiently improbable in systems of the scale in question, that they will not reasonably be spontaneously accessed by macroscopic entities within the lifetime of the observed universe etc. Thus, the classical result. Consequently, as you will note from my vats thought experiment, the 2 LOT does apply to the situation, and entropy is a relevant consideration. [Notice how I chose a scale sufficiently small to be quasi-molecular but too large to be quantum-dominated. By using the nanobots, you can see how the work of clumping reduces the number of accessible microstates, and how the work of configuring further reduces it; thus by s = k ln w, and the state functional nature of s, we can distinguish dSclump [or thermal if you will] from dSconfig. The tornado example is similar, but on a larger scale.]

2] . . . because that is the way you have set it up. Precisely, in order to show that we can distinguish between [1] the work of setting up a random agglomeration, and [2] that of setting up a macroscopically recognisable functional one. (This was where your objection lay to TBO's work.) I have shown by an appropriate thought expt that one can do TdSclump then TdSconfig, or do it directly; in both cases by application of intelligence. You will notice that the sign of dS in both cases is incrementally negative: [1] divide up the positions in the vat [~ 1 m^3] into suitable phase space cells such that only one part will be in each at most [~ 10^-6 m]; [2] similarly, work out the 3-d rotational alignment of the parts, such that only one will be more or less correct – 45 degree increments gives 8 * 8 * 8 = 512 angular orientations per part. The number of accessible location and orientation states for parts in the vat as a whole vastly exceeds those for clumping, and those for clumping vastly exceed those for alignment to make a functional jet. I have left off the selection and sorting work, and the need to kill off translation and rotation, which again will both compound the numbers of accessible microstates. I should note that the assembly of biological macromolecules is an endothermic process, as TBO highlight. Also, say let the parts be magnetic. [Hard to do for Al, but let's ignore that for the sake of argument.] The relevant scale will be such that the amount of pulling force exerted will be tiny until the parts are nearly in contact, which is what I set up. Also, 1 m^3 of, say, water has in it something like 56 kmol, or 3.4 * 10^28 molecules. A few hundred million parts will be "lost" in that space, apart from smart search techniques, which of course require work. [Thence Brillouin's point on Maxwell's Demon.] In short, my case has been properly made.

Pausing (and dear filter please be kind today . . .) . . .
kairosfocus
April 8, 2007, 02:19 AM PDT
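A rough state count for the vat set-up, using the round numbers given in the comment above (a 1 m^3 vat divided into cells 10^-6 m on a side, 512 angular orientations per part, and about a million parts). It ignores indistinguishability, thermal modes and everything else; it is only meant to show the scale of the configurational term, not to be a real entropy calculation.

```python
from math import log

cells        = (1.0 / 1e-6) ** 3   # 10^18 position cells in a 1 m^3 vat
orientations = 8 ** 3              # 45-degree increments on three axes = 512
parts        = 1e6                 # parts in the nano-jet (round figure from the thread)

# ln W, with W ~ (states per part)^(number of parts) for the scattered case
lnW_scattered = parts * log(cells * orientations)
lnW_assembled = log(1)             # after assembly, essentially one configuration

print(f"ln W, scattered : {lnW_scattered:.3g}")   # about 4.8e7
print(f"ln W, assembled : {lnW_assembled:.3g}")   # 0
print(f"delta S_config / k : {lnW_assembled - lnW_scattered:.3g}")
```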
kairosfocus The cryogenics lab have successfully cooled the nano-jet, the nano-splodge and the mixture of products from the randomiser-bots to absolute zero, and discovered that they all have zero entropy at that temperature. Same entropy, but different configurations. Can you explain how that can be? I will have to respond to the rest of your post later, I am afraid.
The Pixie
April 7, 2007, 03:29 PM PDT
kairosfocus I appreciate by the way that I have yet to address your main point.
In this vat, call out the random cluster nanobots, and send in the jet assembler nanobots. These recognise the parts, and rearrange them to form a jet, doing configuration work. A flyable jet results - a macrostate with a much smaller statistical weight of microstates, probably of order ones to tens or perhaps hundreds. [We see here separated dSconfig.]
I am curious if you know the difference between deltaS and dS? Anyway, this "configuration work"... It sounds as though there is an energy requirement here, so perhaps we can play around with that. It just so happens I have my own nanobots. First I will send in my "splodge assembler nanobots". These will assemble the parts into a random, but prespecified configuration (I call it a "splodge"). And then I will send the "randomiser nanobots" in; these rearrange the parts randomly. Let us suppose that all three nanobot armies end up doing the same number of changes (though the specific changes are different), which one did the most "configuration work"? Which expended the most energy? Ah, now here is a surprise. I have just heard from the next door lab that a "splodge" is a vital component in a "zip-zap-zop". Turns out that that configuration is actually very useful. How does that affect the "configuration work" of the "splodge assembler nanobots"? Does this new knowledge change the thermodynamics?The Pixie
April 7, 2007, 03:21 PM PDT
kairosfocus I am having problems with the spam filter, so this may be spread over many posts. Sorry
In the control vat, we simply leave nature to its course. Will a car, a boat, a sub or a jet, etc, or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging is not strong enough for them to clump and precipitate.] ANS: Logically and physically possible, but the equilibrium state will on stat thermodynamics grounds overwhelmingly dominate — high disorder.
Well, yes, because that is the way you have set it up. Now let us suppose that the parts cling to each other in a way that allows them to clump together. Now the thermodynamics causes them to clump together! Say the process is: A + B + C + D -> car + heat. As the process releases energy, under some conditions this process may well be thermodynamically favoured. It may seem counter-intuitive, but the nano-car plus heat may be the more disordered state - thermodynamically (i.e., for the energy). Consider crystal formation; cool a saturated salt solution, and this is what happens: Na+ + Cl- -> NaCl(crystals) + heat. The highly ordered crystals, plus the heat, are thermodynamically favoured over the chaotic ions dissolved in water. Yes, I know order in a crystal is different to complexity in whatever, but the second law is about entropy, which is disorder. And not complexity.
The Pixie
April 7, 2007, 03:03 PM PDT
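The standard bookkeeping behind the crystal example above can be written out in a few lines: a process can lower the entropy of the system (local ordering) and still be spontaneous, provided the heat it releases raises the entropy of the surroundings by more. The enthalpy and entropy values below are illustrative placeholders, not measured data for NaCl.

```python
T      = 298.0      # K
dH     = -10_000.0  # J/mol, exothermic: heat released to the surroundings (placeholder value)
dS_sys = -30.0      # J/(mol K), the system becomes more ordered (placeholder value)

dS_surr = -dH / T              # entropy gained by the surroundings
dS_univ = dS_sys + dS_surr     # second-law criterion: spontaneous if > 0
dG      = dH - T * dS_sys      # equivalent Gibbs criterion: spontaneous if < 0

print(f"dS_universe = {dS_univ:.1f} J/(mol K)")  # +3.6, positive
print(f"dG          = {dG:.0f} J/mol")           # -1060, negative: same verdict
```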
kairosfocus
(I have already contrasted the case of a tornado in a junkyard, which could logically and physically do the same, but the functional macrostate is so rare relative to non-functional ones that random search strategies are maximally unlikely to access it, i.e. we see here the 2nd LOT at work.)
No, this has nothing to do with the second law! The second law is about entropy. If you can frame the situation in terms of entropy, you might have a point. Just because something is extremely unlikely, that does not make it forbidden by the second law. Not even when you invoke macrostates!
The Pixie
April 7, 2007, 02:59 PM PDT
Continuing . . .

h] Now, let us go back to the vat. For a large cluster of vats, we use direct assembly nanobots, but in each case we let the control programs vary at random – say hit them with noise bits generated by a process tied to a zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate. Given the complexity of the relevant software, will we be likely to for instance come up with a hyperspace-capable spacecraft or some other sophisticated and unanticipated technology? Justify your answer on probabilistic grounds. My prediction: we will have to wait longer than the universe exists to get a change that requires information generation on the scale of 500 – 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?]

i] Try again, this time to get to the initial assembly program by chance . . . See the abiogenesis issue?

j] In the actual case, we have cells that use sophisticated machinery to assemble the working macromolecules, direct them to where they should go, and put them to work in a self-replicating, self-maintaining automaton. Clumping work [if you prefer that to TBO's term chemical work, fine] and configuring work can be identified and applied to the shift in entropy through the s = k ln w equation. This, through Brillouin, TBO link to information, citing as well Yockey-Wicken's work at the time and their similar definition of information. [As you know, I have pointed to Robertson on why this link makes sense -- and BTW, it also shows why energy converters that use additional knowledge can couple energy in ways that go beyond the Carnot efficiency limit for heat engines.] In short, the basic point made by TBO in Chs 7 - 8 is plainly sound. The rest of their argument follows.

2] On heating and cooling vs assembling rocks: BTW, on cooling rocks and continents, the thermal entropy does reduce on cooling as the number of accessible microstates reduces. Now, redo the experiment above with nano-ashlars etc that together make up a model, functional aqueduct, complete with an arched bridge -- that we could inspect through a microscope.
--> Would this be likely to happen by chance + necessity only if you heat the vat [inject more random molecular motion]?
--> Would it happen if you were to clump the stones haphazardly?
--> If you clump then assemble?
--> If you search out and directly assemble?
--> Can you identify dStot, dSthermal on heating/cooling, dSconfig?
--> Apart from scale, is this in principle different from a tornado building an aqueduct, vs a Roman legion doing so?
(In short, it seems to me that we can in principle identify the entropies associated, though of course to actually measure them would be beyond us at this level of technology!!!)

I trust this helps for now. Happy Easter. GEM of TKI
kairosfocus
April 6, 2007, 04:08 AM PDT
Hi Pixie and Joe [et al]: Today is Good Friday [so my focus for the day is on other matters of greater moment . . .]. I think a thought experiment will be helpful in clarifying, along with a pause to read the online chapters of TMLO, 7, 8 & 9. My own always linked may also help; follow the link through my name, please.

1] THOUGHT EXPT:

a] Consider the assembly of a Jumbo jet, which plainly requires intelligently designed, physical work in all actual observed cases. That is, orderly motions were impressed by forces on selected, sorted parts, in accordance with a complex specification. (I have already contrasted the case of a tornado in a junkyard, which could logically and physically do the same, but the functional macrostate is so rare relative to non-functional ones that random search strategies are maximally unlikely to access it, i.e. we see here the 2nd LOT at work.)

b] Now, let us shrink the example, to a nano-jet so small that the parts are susceptible to Brownian motion, i.e. they are of sub-micron scale and act as large molecules, say a million of them, some the same, some different etc. In-principle possible. Do so also for a car, a boat and a submarine, etc.

c] In several vats of a convenient fluid, decant examples of the differing nanotechnologies, so that the particles can then move about at random.

d] In the control vat, we simply leave nature to its course. Will a car, a boat, a sub or a jet, etc, or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging is not strong enough for them to clump and precipitate.] ANS: Logically and physically possible, but the equilibrium state will on stat thermodynamics grounds overwhelmingly dominate -- high disorder.

e] Now, pour in a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is, of course, work, and it replicates bonding at random. We see here dSthermal.] After a time, will we be likely to get a flyable nano jet?

f] In this vat, call out the random cluster nanobots, and send in the jet assembler nanobots. These recognise the parts, and rearrange them to form a jet, doing configuration work. A flyable jet results -- a macrostate with a much smaller statistical weight of microstates, probably of order ones to tens or perhaps hundreds. [We see here separated dSconfig.]

g] In another vat we put in an army of clumping and assembling nanobots, so we go straight to making a jet based on the algorithms that control the nanobots. Since entropy is a state function, we see here that direct assembly is equivalent to clumping and then reassembling from random "macromolecule" to configured functional one. That is: dStot = dSthermal + dSconfig.

Pausing . . .
kairosfocus
April 6, 2007, 04:03 AM PDT
