Uncommon Descent Serving The Intelligent Design Community

Where is the difference here?

Share
Facebook
Twitter
LinkedIn
Flipboard
Print
Email

Since my Cornell conference contribution has generated dozens of critical comments on another thread, I feel compelled to respond. I hope this is the last time I ever have to talk about this topic, I’m really tired of it.

Here are two scenarios:

1. A tornado hits a town, turning houses and cars into rubble. Then, another tornado hits, and turns the rubble back into houses and cars.

2. The atoms on a barren planet spontaneously rearrange themselves, with the help of solar energy and under the direction of four unintelligent forces of physics alone, into humans, cars, high-speed computers, libraries full of science texts and encyclopedias, TV sets, airplanes and spaceships. Then, the sun explodes into a supernova, and, with the help of solar energy, all of these things turn back into dust.

It is almost universally agreed in the scientific community that the second stage (but not the first) of scenario 1 would violate the second law of thermodynamics, at least the more general statements of this law (eg, “In an isolated system, the direction of spontaneous change is from order to disorder” see footnote 4 in my paper). It is also almost universally agreed that the first stage of scenario 2 does not violate the second law. (Of course, everyone agrees that there is no conflict in the second stage.) Why, what is the difference here?

Every general physics book which discusses evolution and the second law argues that the first stage of scenario 2 does not violate the second law because the Earth is an open system, and entropy can decrease in an open system as long as the decrease is compensated by increases outside the Earth. I gave several examples of this argument in section 1, if you can find a single general physics text anywhere which makes a different argument in claiming that evolution does not violate the second law, let me know which one.

Well, this same compensation argument can equally well be used to argue that the second tornado in scenario 1 does not violate the second law: the Earth is an open system, tornados receive their energy from the sun, any decrease in entropy due to a tornado that turns rubble into houses and cars is easily compensated by increases outside the Earth. It is difficult to define or measure entropy in scenario 2, but it is equally difficult in scenario 1.

I’ll save you the trouble: there is only one reason why nearly everyone agrees that the second law is violated in scenario 1 and not scenario 2: because there is a widely believed theory as to how the evolution of life and of human intelligence happened, while there is no widely believed theory as to how a tornado could turn rubble into houses and cars. There is no other argument which can be made as to why the second law is not violated in scenario 2, that could not equally well be applied to argue that it is not violated in scenario 1 either.

Well, in this paper, and every other piece I have written on this topic, including my new Bio-Complexity paper , and the video below, I have acknowledged that, if you really can explain scenario 2, then it does not violate the basic principle behind the second law. In my conclusions in the Cornell contribution, I wrote:

Of course, one can still argue that the spectacular increase in order seen on Earth is consistent with the underlying principle behind the second law, because what has happened here is not really extremely improbable. One can still argue that once upon a time…a collection of atoms formed by pure chance that was able to duplicate itself, and these complex collections of atoms were able to pass their complex structures on to their descendents generation after generation, even correcting errors. One can still argue that, after a long time, the accumulation of genetic accidents resulted in greater and greater information content in the DNA of these more and more complex collections of atoms, and eventually something called “intelligence” allowed some of these collections of atoms to design cars and trucks and spaceships and nuclear power plants. One can still argue that it only seems extremely improbable, but really isn’t, that under the right conditions, the influx of stellar energy into a planet could cause atoms to rearrange themselves into computers and laser printers and the Internet.

Of course, if you can come up with a nice theory on how tornados could turn rubble into houses and cars, you can argue that the second law is not violated in scenario 1 either.

Elizabeth and KeithS, you are welcome to go back into your complaints about what an idiot Sewell is to think that dust spontaneously turning into computers and the Internet might violate “the basic principle behind the second law,” and how this bad paper shows that all of the Cornell contributions were bad, but please first give me another reason, other than the one I acknowledged, why there is a conflict with the second law (or at least the fundamental principle behind the second law) in scenario 1 and not in scenario 2? (Or perhaps you suddenly now don’t see any conflict with the second law in scenario 1 either, that is an acceptable answer, but now you are in conflict with the scientific consensus!)

And if you can’t think of another reason, what in my paper do you disagree with, it seems we are in complete agreement!!

[youtube 259r-iDckjQ]

Comments
keiths: In 236, your gerbil example is not stated accurately enough. The first law doesn't say that "matter or energy can be created or destroyed as long as, any old place in the universe, an equal amount of matter or energy is created." The point of the first law is to stress that existing matter or energy is *converted* to something else. So if the atoms making up the furniture are *converted* (by a process we can ascertain) into gerbils, then the first law is not violated. But if the atoms making up the furniture are literally poofed out of existence (i.e., not converted to gerbil atoms or to anything else, but simply annihilated), and if the atoms making up the gerbils are literally poofed into existence (i.e., not formed from the translation of existing energy into matter, or matter into a different kind of matter, but simply generated ex nihilo), then the first law has been violated even if the energy/matter losses and gains are completely balanced. I assume that you were imagining that somehow the furniture matter was *converted* into the gerbil matter. In that case, you are right to say that the first law is not violated. But your scenario said nothing about conversion; it suggested that some matter was simply vanishing from the universe in one case and other matter was created ex nihilo in the other. That would be a violation of the first law, even if there was quantitative equivalence of loss and gain.Timaeus
March 5, 2015
March
03
Mar
5
05
2015
01:24 PM
1
01
24
PM
PDT
Note the stillness of the response. What's the entropy of that I wonder.Mung
July 24, 2013
July
07
Jul
24
24
2013
04:22 PM
4
04
22
PM
PDT
IS still cold air more or less "ordered" than still hot air?Mung
July 21, 2013
July
07
Jul
21
21
2013
06:06 PM
6
06
06
PM
PDT
Elizabeth Liddle:
I’m glad you like it, Mung. So would you like to apply it to my question as to whether a chaotic system like a tornado has more or less order-as-in-entropy than still air?
How hot is the still air? keiths hot-air hot? "order-as-in-entropy" more nonsense.Mung
July 21, 2013
July
07
Jul
21
21
2013
05:48 PM
5
05
48
PM
PDT
KF
That has fundamentally changed how I view anything you have to say.
Well, that's pretty silly, KF. How can our disagreement as to what does and does not constitute slander make any difference as to whether transforming a p value in and out of base 2 logs makes any difference to the value? Or, rather more importantly, to the validity of your computation of that p value?
I have shown just above, yet again, that darwinist search depends on chance variation to generate info in a non-foresighted way, as has been repeatedly shown for years literally and ignored; it then hopes to hill climb by selection that subtracts some of the info generated.
Nope. You are hopelessly confused. What "chance variation" does not "create information" except in the Shannon sense, and then only if it increases the number of bits, rather than reducing it. As mutations can consist of insertions, deletions, point mutations, and repetitions, whether the result is a net increase in Shannon entropy is more or less chance, although I would concede that genome-lengthening mutations are probably more common than genome-shortening ones. What "selection" does, or rather, what happens next is that if those "chance" variations (which are drawn from a very narrow distribution around the parent) have confer greater or lesser parental success, than the parental version, then those with greater will become more prevalent, and those with less, less. As a result, the "population" has acquired information as to what works best (increases reproductive success) in the current environment, represented by the relative prevalence of those sequences that confer it. There is no mystery as to where this valuable information comes from - it comes from the environmental resources and hazards that the population has to navigate to persist. You don't turn Shannon entropy into useful information by expressing the probability as a negative 2 log. It gets turned into useful information by the process we call "selection", as described above.
this already begs the question of getting to the islands where there are hills, design is interested primarily in how to get to such islands, hill climbing being at most micro evo.
Evolution is not a hill climbing algorithm. Fitness can go up as well as down, and ravines can be crossed - even plains. This has been shown in lab, in field, and in silico.
Blind here means non foresighted
Evolution is not blind. It is merely "short-sighted" if you insist on this metaphor. It cannot do something now in the knowledge that it will help in the future. It has no such knowledge. However, it does do something now that will help it now. Which is an anthropomorphic way of saying the obvious: Variants that reproduce better in the current environment will become more prevalent in the population. Even if that results in their losing a facility that might come in handy later. They can also retain many variants that are neutral, because of drift, and from time to time these do come in handy later. Finally, once you have a sequence that confers reproductive success, by the same token you have many exemplars of that sequence, vastly increasing the probability that one of them will undergo an enhancing mutations. This is why your "independent draw" model is so totally inadequate as the null. You can reject it easily, but in rejecting it, you are not rejecting evolution.
as the objectors trying to throw up yet another red herring and strawman distraction full well know; there is no need for a random variable to be equivalent to a flat random one, just that it follows a distribution that is not determined by controllable input values. Yet another squirt of squid ink cloud. KF
I'm afraid the squid here is you, KF. I am not talking about flat or other distributions. I'm talking about independent draws from such distribution, whether flat or otherwise. The fact that Durston considered non-flat distributions of amino acids is neither here nor there; their calculation was still based on independent draw. The draws in evolution are the very reverse of independent; they are cumulative.Elizabeth B Liddle
July 13, 2013
July
07
Jul
13
13
2013
08:02 AM
8
08
02
AM
PDT
F/N 2: Just to cut off more distractions, here is so simple a search as Wiki:
In probability and statistics, a random variable or stochastic variable is a variable whose value is subject to variations due to chance (i.e. randomness, in a mathematical sense). As opposed to other mathematical variables, a random variable conceptually does not have a single, fixed value (even if unknown); rather, it can take on a set of possible different values, each with an associated probability. [--> which does not have to be equal, obviously.] A random variable's possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the possible outcomes of a past experiment whose already-existing value is uncertain (for example, as a result of incomplete information or imprecise measurements). They may also conceptually represent either the results of an "objectively" random process (such as rolling a die), or the "subjective" randomness that results from incomplete knowledge of a quantity. The meaning of the probabilities assigned to the potential values of a random variable is not part of probability theory itself, but instead related to philosophical arguments over the interpretation of probability. The mathematics works the same regardless of the particular interpretation in use. Random variables can be classified as either discrete (that is, taking any of a specified list of exact values) or as continuous (taking any numerical value in an interval or collection of intervals). The mathematical function describing the possible values of a random variable and their associated probabilities is known as a probability distribution. The realizations of a random variable, that is, the results of randomly choosing values according to the variable's probability distribution, are called random variates.
kairosfocus
July 13, 2013
July
07
Jul
13
13
2013
06:09 AM
6
06
09
AM
PDT
F/N: I have shown just above, yet again, that darwinist search depends on chance variation to generate info in a non-foresighted way, as has been repeatedly shown for years literally and ignored; it then hopes to hill climb by selection that subtracts some of the info generated. this already begs the question of getting to the islands where there are hills, design is interested primarily in how to get to such islands, hill climbing being at most micro evo. Blind here means non foresighted as the objectors trying to throw up yet another red herring and strawman distraction full well know; there is no need for a random variable to be equivalent to a flat random one, just that it follows a distribution that is not determined by controllable input values. Yet another squirt of squid ink cloud. KFkairosfocus
July 13, 2013
July
07
Jul
13
13
2013
06:03 AM
6
06
03
AM
PDT
Dr Liddle Maybe it has not sunk in to you that at this point, it is plain that for months you have harboured slander against me, have tried to deny this and blame me for objecting, then have tried to justify it and pretend that nothing seriously wrong was done. That has fundamentally changed how I view anything you have to say. And that is on top of a longstanding problem on your part of making assertions that have been corrected as though repeating error drumbeat style turns it into fact. And not to mention your red herrings, strawmen tactics and squid ink cloud evasions of equally longstanding basis. I am not going to repeat myself over and over again in endless circles in response to an ideologue who -- on abundant evidence -- willfully harbours slander and will not change her mind if the truth were staring here eye to eye in the face. And if you think it is frustrating or irritating to be talked past -- even if only to brush aside red herring and strawman distractors, think about how you have treated others for months and years. I for one will not entertain another dragged out whirling circle of obfuscations, evasions and distractions, backed up by willful obtuseness. The below is for record, and will be posted here just once. For some time you have been pretending that the sub expression P(T|H) in Dembski's 2005 metric gives you an out to reject the concept of complex specified information. On further pretence that it cannot be quantified, as the ways and means of chance working cannot be identified directly. Joe, aptly, keeps pointing out that that already shoots your own case through its heart, as in fact chemical evolutionary then Darwinist and the like evolutionary mechanisms have ZERO empirically observed record of being able to originate body plans from the first up. You -- along with many others -- are throwing up an empirically ungrounded speculation driven by a priori assumptions of materialism or the practical equivalent, dressing it in a lab coat and announcing it as science. Two years and more ago, when this particular objection first came up under the stolen web persona MathGRRL -- there is a Calculus professor who uses it legitimately -- we took time to show that by doing a log reduction, the Dembski 2005 metric goes to an explicit information metric, so once we can evaluate the information we do not need to try to work out P(T|H) since we know what it results in once reduced, and can empirically identify that info and deduce a reasonable threshold that addresses the rest of the expression. Onlookers, for you, I give the link here where this has been there for all to see for 2 years. The log reduced result is: Chi_500 = I_p * S - 500, functionally specific bits beyond the solar system threshold (S being a dummy variable 0 by default, 1 where there is objective reason to see that something is functionally specific, I an info value and 500 the 1,000 LY haystack threshold) On this basis it is easy to see from I-values from Durston et al, that protein families as follows as just one example are beyond the threshold:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond SecY: 342 AA, 688 fits, Chi: 188 bits beyond Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
This is consistent with the message from chirality at OOL, but applies to formation of proteins in living forms. It is these I-values that KS tried to mislead the public, as being based on a flat random distribution. I already pointed out from Durston's Table I, that they are not. (Notice, no acknowledgement of the correction and its significance from that obviously dark triad ideologue.) Now, in the exchange where I responded initially, I took time to show that whether a sample is flat or biased, it will leave a signature of its action in what happens. (I used a set of letters in the proportion of English, flat random sampled with replacement, then sampled in a biased way with replacement; the statistics of the result will show the trace of the bias and redundancies, i.e the avge info per symbol automatically draws in the biases and reflects them, so we can draw the estimates of probability per symbol on average back out again -- and all we need is that average. We are looking for the weighted average H not pi, and a decomposition of pi log pi across the i's.) Where of course, as already pointed out, we need some empirical warrant that the claimed means of blind chance and mechanical necessity are able to actually generate FSCO/I, per observation. Missing. I need to briefly point out that in the claimed darwinist mechanism it is chance variation (CV) not differential reproductive success (DRS) that has to be responsible for claimed descent with modification up to and including novel body plans (RWM), as RES is a short hand for saying inferior or unlucky varieties die out and their info is subtracted from the pool of the population. That is, we are left to the notion that lucky noise writes FSCO/I. For which there is nowhere any good empirical substantiation, of body plans being actually observed to arise by such. Now, what I did most recently is to simply work back from the empirically grounded functional bits values of Durston et al (which one can see from Table I are NOT based on 4.32 bits per character [cf excerpt here], taking into account such redundancies as occur and such variabilities as occur) That is, I reversed the expression I = - log P to get a P value, cf here:
since we know the info values empirically, and we know the relationship that I = – log_2 (P), we can deduce the P(T|H) values for all relevant hypotheses that may have acted by simply working back from I:
RecA: 242 AA, 832 fits, P(T|H) = 3.49 * 10^-251 SecY: 342 AA, 688 fits, P(T|H) = 7.79 * 10^-208 Corona S2: 445 AA, 1285 fits, P(T|H) = 1.50 * 10^-387
That is, the power of the transform allows us to apply an empirical value to what is a more difficult to solve problem the other way. once we do know the info content of the protein families by a reasonable method, we can then work the expression back ways to see the value of P(T|H). And so lo and behold we do not actually have to have detailed expositions on H to do so, once we have the Information value, we automatically cover the effect of H etc. As was said long since but dismissively brushed aside by EL and KS. And consistently these are probabilities that are far too low to be plausible on the gamut of our solar system, which is the ambit in which body plan level evolution would have had to happen. (Indeed, I could reasonably use a much tighter threshold, the resources of earth’s biosphere, but that would be overkill.) Now, do I expect EL and KS to accept this result, which boils down to evaluating the value of 2^-I, as we have I in hand empirically? Not at all, they have long since shown themselves to be ideology driven and resistant to reason (not to mention enabling of slander), as the recent example of the 500 H coin flip exercise showed to any reasonable person. But this does not stop here. Joe is right, there is NO empirical evidence that darwinisn mechanisms are able to generate significant increments in biological information and thence new body plans. All of this — things that are too often promoted as being as certain as say the orbiting of planets around the sun or gravity — is extrapolation from small changes, most often loss of function that happens to confer an advantage in a stressed environment such as under insecticide or sickle cells that malaria parasites cannot take over. Of course, such is backed by the sort of imposed a priori materialism I highlighted earlier today. What is plain is that the whole evolutionary materialist scheme for origin of the world of life, from OOL to OO body plans and onwards to our own origin, cannot stand from the root on up.
But this is not all. There is a far more fundamental problem that you and others have been ducking and dodging, which you as a professional investigator using statistical methods full well must know but have consistently been unwilling to acknowledge, not even when I went to the extent of setting up an instructive thought exercise example. As in make up a bristol board normal curve, marked with 1 sd stripes to each side of the mean at the peak. Make it what 40 cm high to reflect the peak conveniently. Ten go out to 5 - 6 SDs on the tails. And yes, I know at 4 - 6 SDs it will be impossibly skinny -- at some point about the cross section of a bacterium. That is the exact point. Then, drop darts from an elevation so that we get a somewhat flat or even a bit peaked of a distribution. Count hits per stripe and see how after about 100, with "fair" strikes, we would have a pretty good picture of the bulk of the curve. With all but certainty, the far tails will not be hit. This is the needle in haystack effect, it is very hard to hit a rare, distinctively identifiable zone in a space of possibilities. This will also happen even with biased distributions. Once we are not making enough samples to expect reasonably to pick up the far tails or the like in a field of possibilities, we have no good reason to expect to see such cropping up. They are unobservable under the circumstances. So, if we are in the far tails when they should not be observable, it is not likely to be by chance, i.e. the basis for the Fisherian hyp testing programme (and I am speaking loosely) is evident. This is not dependent on precise estimates of probability, or even on rough ones. It depends only on that we are dealing with blind even somewhat biased samples of a distribution with a bulk and zones of interest that are overwhelmingly isolated relative to that bulk. (E.g. when we push our hands into a sack of beans to pull out some and see their quality, or pull up a blood sample from a vein, this does not bother overmuch over niceties on whether we have demonstrated that we have a truly flat random sample, and for good enough reason.) Now, it can be shown that he atomic resources of our solar system, used to sample from a field of possibilites for 500 bits, for 10^17 s [of order of the generally accepted age of the cosmos] and at a rate of one sample per astom of 10^57 atoms per 10^-14 s, will stand as a scope of 1 straw sized pull to a haystack 1,000 LY across, as thick as our galaxy's central bulge. Even if such a haystack were superposed on our galactic neighbourhood, with practical certainty, such a pull will pick up a very predictable result: hay and nothing else. That is, it will only capture the bulk. Under these circumstances, the possible blind sampling hyps don't matter, unless they are large enough they will not do better than a flat random one would of the same scope. And if we were to see something labelled chance and necessity without intelligent direction that picked up needle under circumstances claimed to be like this, we would have every good reason to be highly suspicious that design was being relabelled as chance. In short all the hooting and hollering over chance hyps is a red herring lead away from the sampling challenge, and led away to strawmen set up and soaked in ad hominems, set alight to cloud, confuse, poison and polarise the atmosphere. Until it can be shown under reasonably credible circumstances, per observation, that blind chance and mechanical necessity in a soup of chemicals that are reasonable, will yield a metabolising, encasulated, gated, von Neumann architecture code using self replicating cell, there is no root tot he whole tree of life and it sways and crashes to the ground. I am not holding my breath for that, for good thermodynamic reasons. I therefore point out that we do have an alternative that is backed up by billions of un-exceptioned cases where we do directly see FSCO?I being formed: design. And the sampling analysis shows why that is so. Similarly, we now have design sitting at the table from the root on up. So, when we look at the 10 - 100 million bits of info to form body plans, per reasonable observation and a back of the envelope cross check, we see that it is reasonable to assign that to design also. Once design is not excluded by question-begging a priori materialist ideology, it is the blatantly obvious, empirically warranted best explanation of the FSCO/I in the world of life from OOL to us. That is what you need to answer and it is what you have ducked and dodged aside from for years, here at UD and elsewhere. Good day, madam KFkairosfocus
July 13, 2013
July
07
Jul
13
13
2013
05:58 AM
5
05
58
AM
PDT
A blind search is a non foresighted one, without oracles that allow unsuccessful cases to be rewarded on warmer/colder messages.
And Darwinian evolution is not blind search.Elizabeth B Liddle
July 13, 2013
July
07
Jul
13
13
2013
05:01 AM
5
05
01
AM
PDT
F/N 3: BTW, we must beware of manipulative redefinitions as has been put above; the tactic here being to so so much misinformation that one cannot easily correct it all. And if one tries, the amount of correction that has to be put, will then be subjected to accusations we have seen of spamming the thread or the like or some excuse to ignore. We have seen a case where a point by point refutation exposing something as a strawman tactic and then correcting it with substantial backup has been blandly dismissed as not an answer, this being repeated drumbeat style hither and yon as though saying an outright willful distortion of truth like that makes it true [a clue: failure to link the actual refutation being dismissed (the case is here) will often tell us that something is being hidden.] A blind search is a non foresighted one, without oracles that allow unsuccessful cases to be rewarded on warmer/colder messages. That is, if there is nothing that squarely addresses the vast dominance of non-functioning configurations in the space of possibilities for AAs or D/RNA etc, and the pretence that every thing will be functional to some extent so all that is needed is incremental hill climbing to ascend to the peaks, we are not dealing with a blind search. If there is no serious reckoning with the deep isolation of islands of function in cases relevant to FSCO/I (caused by the need for multiple, well matched, properly arranged and coupled parts to achieve function -- e.g. symbols forming coherent text in ASCII coded English), we are being manipulated yet again. KFkairosfocus
July 13, 2013
July
07
Jul
13
13
2013
04:44 AM
4
04
44
AM
PDT
KF: it seems to me that you are simply not taking in the point that keiths and I have repeatedy made; clearly you think the same of us, but it would be good if you could at least try to address the point, because your responses consistently address something we are not saying. Let me try another time:
Onlookers: Predictably, correction of KS does not take, and he simply will not acknowledge where he did something very wrong. And because of the back-link from info (the log-antilog relationship) and the known variability of the AAs in protein sequences — which gives an empirically grounded estimate of degree of contingency in the functional “code,” we do have a valid measure of info and how whatever processes obtained across the history of life were able to vary the genome and have it still function on the relevant proteins.
This simply does not address the issue of how you compute the probability of the sequence given any constraints other than independent random selection. The log-anti-log relationship does NOT address this. Converting a probability into and out of log form does nothing to it value or its derivation. You are clearly a very numerate guy, KF: you must surely agree with this.
The message is, the info content is well beyond whatever blind search capacity on the gamut of the solar system in 10^17 s could muster, and the linked message is, back-converting to a probability metric established from the info content, we see the probabilities are well below a solar system search threshold where blind search will be a reasonable means.
No. All you are doing here is taking a probability value, and seeing if it less then 10^-150. It doesn't matter whether you log tranform both the probability and the cut off first, or not. The answer won't change. We are not questioning this - we get it. What we are questioning is the probability value itself. It does not take into account the change in probability value that would be a result of processes other than independent random draw, which is precisely what evolutionary theory proposes. You are rejecting a null, but that null is NOT "evolutionary processes".
KS is eager to brush aside and dismiss then have us forget the implications of only being able to make a search on the scope of 1 straw to a cubical haystack 1,000 light years (as thick as our galaxy) but that ideological desire does not make that go away.
No. He. Is. Not. He is not questioning the implications of your p value. He is questioning the p value itself Please address this - it is so frustrating to have you repeatedly defend your alpha cut-off which no-one is disputing. What we are disputing is your p value!
Worse, for neither OOL nor origin of body plans, can KS and ilk show us empirical warrant for claims, assertions, imaginings and just plain assumptions that blind chance and mechanical necessity under any plausible format, can and did produce requisite functionally specific complex organisation and associated info. On OOL, notice, we have a direct knowledge of the probabilities of L-/R- hand monomer formation, and of peptide vs non-peptide bonds, about 50% in both cases.
Well, that is a different argument, KF. If your argument is that OOL is improbable, fine. But you still can't compute its probability, and "protein space" won't help because we don't even know whether the first Darwinian life-forms involved proteins at all, or, if they did, what selective advantage they might have conferred. Again: you cannot compute a p value without knowing what your p value is the probability of. As I keep reminding my stats students ad nauseam! First compute the probability distribution under your null. Then, and only then, can you decide whether to reject it or not. What your rejection criterion is can be anything you like. I'd be perfectly happy with something a lot more lenient than p < 10^-150.
Just on chirality, handedness, we can see that the macromolecules of life, which are one-handed (not a 50-50 mix of the two geometries) we see easily that we are well beyond the FSCO/I threshold, well below the reach of blind sampling on the gamut of the solar system in 10^17 y. This is sending a very clear message that from the root of the Darwinist tree of life on up, there is no good reason to reject the inference that the only empirically warranted cause of FSCO/I was credibly operative, i.e. design.
No, because you haven't computed the p value based on anything other than independent random draw.
As for his eagerness to push evolutionary materialist amorality and the ethic of might and manipulation make ‘right,’ that speaks, sadly, for itself. The bottomline is obvious, we are dealing with a large scale, many decades long socio-cultural and policy agenda driven by a priori — question-begging — evolutionary materialism as ideology that for prestige and apparent credibility reasons finds it convenient to wrap itself in a lab coat.
Oh, do shed this ridiculous paranoia, KF! A naturalistic account of the origin of life is absolutely NO threat to the things you value. It's no threat to theism (improves it, I would say) and certainly has no relationship to gay marriage. The link is absurd, and I suggest that your inability to see the question we are asking, and instead to insistently repeat the same counter-argument to an argument no-one is making, arises from your irrational fear that if we were right (and I do not even claim that we are) somehow all that you hold dear would come crashing down. It wouldn't.Elizabeth B Liddle
July 13, 2013
July
07
Jul
13
13
2013
04:37 AM
4
04
37
AM
PDT
KF:
This is by no means a case of mere scientific knowledge being opposed by the ignorant, stupid, insane or wicked, as is too often suggested or as has been outright said by Dawkins
Indeed it is not, KF. And likewise methodological naturalism is not an attempt by the ignorant, stupid, insane or wicked to impose an evil godless ideology on the innocent, either. Hence my continued attempts to find some common ground.Elizabeth B Liddle
July 13, 2013
July
07
Jul
13
13
2013
04:16 AM
4
04
16
AM
PDT
F/N 2: Likewise, you may wish to read here on the particular issue of the day being wrapped in a lab coat and pushed on us under colours of "rights" and "equality." This on some of the manipulative techniques [critique, here], may help us understand how for too many in our day might and manipulation make 'right.' KFkairosfocus
July 13, 2013
July
07
Jul
13
13
2013
04:12 AM
4
04
12
AM
PDT
F/N: Some may wonder if I am right to point to a priori materialist ideology as a problem, so I suggest reading here on in context for some documentation. This is by no means a case of mere scientific knowledge being opposed by the ignorant, stupid, insane or wicked, as is too often suggested or as has been outright said by Dawkins. KFkairosfocus
July 13, 2013
July
07
Jul
13
13
2013
04:04 AM
4
04
04
AM
PDT
Onlookers: Predictably, correction of KS does not take, and he simply will not acknowledge where he did something very wrong. And because of the back-link from info (the log-antilog relationship) and the known variability of the AAs in protein sequences -- which gives an empirically grounded estimate of degree of contingency in the functional "code," we do have a valid measure of info and how whatever processes obtained across the history of life were able to vary the genome and have it still function on the relevant proteins. The message is, the info content is well beyond whatever blind search capacity on the gamut of the solar system in 10^17 s could muster, and the linked message is, back-converting to a probability metric established from the info content, we see the probabilities are well below a solar system search threshold where blind search will be a reasonable means. KS is eager to brush aside and dismiss then have us forget the implications of only being able to make a search on the scope of 1 straw to a cubical haystack 1,000 light years (as thick as our galaxy) but that ideological desire does not make that go away. Worse, for neither OOL nor origin of body plans, can KS and ilk show us empirical warrant for claims, assertions, imaginings and just plain assumptions that blind chance and mechanical necessity under any plausible format, can and did produce requisite functionally specific complex organisation and associated info. On OOL, notice, we have a direct knowledge of the probabilities of L-/R- hand monomer formation, and of peptide vs non-peptide bonds, about 50% in both cases. Just on chirality, handedness, we can see that the macromolecules of life, which are one-handed (not a 50-50 mix of the two geometries) we see easily that we are well beyond the FSCO/I threshold, well below the reach of blind sampling on the gamut of the solar system in 10^17 y. This is sending a very clear message that from the root of the Darwinist tree of life on up, there is no good reason to reject the inference that the only empirically warranted cause of FSCO/I was credibly operative, i.e. design. As for his eagerness to push evolutionary materialist amorality and the ethic of might and manipulation make 'right,' that speaks, sadly, for itself. The bottomline is obvious, we are dealing with a large scale, many decades long socio-cultural and policy agenda driven by a priori -- question-begging -- evolutionary materialism as ideology that for prestige and apparent credibility reasons finds it convenient to wrap itself in a lab coat. Johnson's retort to Lewontin et al was and is on target:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
KFkairosfocus
July 13, 2013
July
07
Jul
13
13
2013
01:29 AM
1
01
29
AM
PDT
keiths:
Dembski himself stipulates that P(T|H) must include all “Darwinian and material mechanisms”. Durston hasn’t done that, nor have you.
Again keiths is confused. YOUR position needs to do that keiths. And it cannot. Your position can't even produce a testable hypothesis for “Darwinian and material mechanisms” producing multi-protein configurations. And keiths, YOU don't have any idea how evolution works. Sp perhaps you should stuff a sock in it. YOU can't even show that “Darwinian and material mechanisms” deserve a seat at the probability discussion. That is how pathetic your position is.Joe
July 12, 2013
July
07
Jul
12
12
2013
04:09 PM
4
04
09
PM
PDT
KF,
As in, because there is some redundancy OBSERVED, he adjusted and reduced info capacity below 4.32 bits per AA, reflecting he actual functional state.
That doesn't help. As Lizzie has been trying, apparently in vain, to communicate to people here: probability is not an inherent property of a sequence. You have to specify the generating mechanism in order to compute the probability. What's the probability of flipping 500 heads in a row? Extremely low if you're flipping a fair coin. Extremely high if you're flipping a two-headed coin. The generating mechanism makes all the difference. You have to specify the mechanism (or mechanisms) in order to evaluate the probability. Dembski himself stipulates that P(T|H) must include all "Darwinian and material mechanisms". Durston hasn't done that, nor have you. Your CSI values are bogus, and so is your conclusion of design.keiths
July 12, 2013
July
07
Jul
12
12
2013
04:02 PM
4
04
02
PM
PDT
Onlookers, the just above should suffice to show just how wanting in credibility, diligence to duties of care to accuracy and fairness KS has so often shown himself to be. Let us take due note before taking any of his talking points at face value. Unfortunately, on long track record, it is predictable that for a long time to come he will persist in corrected misrepresentations and ad hominem laced projections such as just happened. KFkairosfocus
July 12, 2013
July
07
Jul
12
12
2013
03:50 PM
3
03
50
PM
PDT
KS: I simply note that you are playing strawman distortions again, as, as a matter of fact that Durston's values are empirical based on the statistics of observed protein families, not the flat random null state, cf. here. That basis in empirics will reflect whatever actual, relevant patterns have happened in the history of life. As in, because there is some redundancy OBSERVED, he adjusted and reduced info capacity below 4.32 bits per AA, reflecting he actual functional state. The null state was flat random, which he did not use to give his results. This may be readily seen from table 1, here where the null state column (col no 4 from left) is NOT the one that gives the functional bits values reported after adjustments for redundancy reflected in aligned segments, col 5. The values I used came from the Fits column. KFkairosfocus
July 12, 2013
July
07
Jul
12
12
2013
03:43 PM
3
03
43
PM
PDT
KF,
Durston et al have an independent, empirical value of relevant info from their work on protein families...
Durston's values assume random draw. Evolution doesn't work that way. I repeat:
Random draw, not evolution. So all this blather about “the gamut of the solar system” proves nothing about evolution. It merely shows that protein families weren’t formed by purely random processes. Which nobody claims anyway. You, who are always going on about strawmen, have created a whopper of a strawman, which you have set alight in a vain attempt to cloud, poison and polarise the atmosphere, and to thwart the homosexual agenda, which poses a serious threat to our civilization. The onlookers can see that you are bluffing. Again.
keiths
July 11, 2013
July
07
Jul
11
11
2013
03:02 PM
3
03
02
PM
PDT
Onlookers, KS is now being outright misleading in this strawman, trying to manufacture a concession. As I pointed out, extracting the - log P takes us to the info metric. Durston et al have an independent, empirical value of relevant info from their work on protein families, moving from 4.32 bits per AA in null state to ground and functional state reflecting variants in the island of function from across the observed world of life (and whatever stochastic processes may have contributed.) So, substituting a known I value, we may go back and get the P-value. No surprise, it is consistently below the threshold where any blind search across the config spaces on the gamut of the solar system, could reasonably be expected to find one much less the many distinct islands required. At this point, since we know him to be a highly educated ideologue, he is carrying on a disinformation and willful misrepresentation campaign to sow confusion and polarisation. Such exploits the tendency we have to think there must be substance there, but sometimes, there is only the flame, smoke and poisons of burning, ad hominem soaked strawmen. It is time to set KS' antics and stunts to one side save as illustrations of what agenda-driven materialist ideologues and fellow travellers are doing. Which is why I took time to note here. KFkairosfocus
July 11, 2013
July
07
Jul
11
11
2013
02:52 PM
2
02
52
PM
PDT
keiths - Thanks for your comments (11:59pm, July 10, #361 as I write). I have followed the earlier discussions of whether the entropy of an assembled computer is less than that of an unassembled computer, and I’m yet not convinced as to who is really right on this. What proponents of the “Darwinian evolution violates the 2nd Law” view are usually referring to (as we can see in the foregoing threads) is that the 2nd Law is seemingly violated in statistical/probabilistic terms, not strict thermodynamic or energy terms. A system exhibiting a less probable macro-state would have lower entropy (understood probabilistically) than one with a more probable macro-state. An unassembled computer can take up many more configurations than an integrally functioning assembled one, and so exhibits a more probable macro-state than an assembled one, as measured in terms of functional organisation (as others have noted). In their writings both Richard Dawkins (obviously no friend of ID) and William Dembski consider that “organised complexity” or “specified complexity” can be measured as improbability – that is, that there is an inverse relationship between organised/specified complexity and probability. Thus, in statistical terms (assuming that such a relationship is valid), systems having greater organised/specified complexity than others can be considered to have lower entropy than those others. That is why I suggested that Darwinian processes would have to work in a role equivalent to a heat pump (with, on average, a positive linkage/correlation between increases in selectable fitness and increases in functional complexity) if Darwinian evolution alone is able to produce the organised complexity claimed for it. It seems to me that this is a question that could, in principle, be resolved empirically, by establishing the existence of such a linkage/correlation. From my perspective, however, the real problem with such a proposal is in arriving at a consistent measure of organised/specified complexity. (I am aware of longstanding ID proposals to quantify functional complexity, including some examples in the threads above, but I am not sure how successful/consistent they are across the whole spectrum of relevant examples of complexity in nature – the problem in part being the quantification of the “organised” bit of the complexity. I am conscious, however, that my problem here may in part just be that of my current ignorance). Looking at this particular question (quantification), the computer example to which you and Dr Liddle have both referred appears to me to illustrate this very problem.Thomas2
July 11, 2013
July
07
Jul
11
11
2013
02:16 PM
2
02
16
PM
PDT
KF,
I took the expression, pushed it one simplification step forward, to show that -log (P) is an info metric — the – log operation is already present just not worked out.
Finally (!) you acknowledge that the log operation doesn't add any information. The result is just a probability in a different form, expressed in bits. So you take Durston's numbers -- which assume random draw, not evolution, as Lizzie has pointed out a dozen or so times -- and you convert them into probabilities which also assume random draw, not evolution. Random draw, not evolution. So all this blather about "the gamut of the solar system" proves nothing about evolution. It merely shows that protein families weren't formed by purely random processes. Which nobody claims anyway. You, who are always going on about strawmen, have created a whopper of a strawman, which you have set alight in a vain attempt to cloud, poison and polarise the atmosphere, and to thwart the homosexual agenda, which poses a serious threat to our civilization. The onlookers can see that you are bluffing. Again.keiths
July 11, 2013
July
07
Jul
11
11
2013
12:57 PM
12
12
57
PM
PDT
cantor, I understand, and those things take priority over UD, of course. I'm just curious about your point in posing the challenge. I hope you'll let us know when you have more time.keiths
July 11, 2013
July
07
Jul
11
11
2013
12:39 PM
12
12
39
PM
PDT
KS obfuscates and habitually misrepresents rather than explains. He and ilk were trying to use p(T|H) in the Dembski 2005 expression as an objection, going so far as to try to turn it into a clever quip about elephants in rooms. I took the expression, pushed it one simplification step forward, to show that -log (P) is an info metric -- the - log operation is already present just not worked out. Thanks to Durston et al we credibly know info content of 15 protein families, with distributions that reflect whatever has actually happened. So, substitute, then reconvert to probability form to get reasonable estimates. Ans, well below what it is credible our solar system could reasonably find on blind search of the relevant config space. Also, going back to OOL, the same basic message comes up, just from homochirality, an independent line. But ideologues will keep setting off rhetorical IEDs in hopes of creating an impression of having a point, and being able to keep on pushing strawman arguments. BTW, it is now coming on 10 months that darwinists have not been able to answer to credibly accounting for OOL and major body plans on evo mat premises backed by adequate observational evidence not imposed materialist a prioris. No root, no shoot and no empirically well warranted macro evo tree of life. KFkairosfocus
July 11, 2013
July
07
Jul
11
11
2013
12:32 PM
12
12
32
PM
PDT
KS@350 ...probability of a little over 11%.
Nicely done. Unanticipated obligations to friends, family, and others in need has taken precedence for the time being over my playtime here.cantor
July 11, 2013
July
07
Jul
11
11
2013
11:54 AM
11
11
54
AM
PDT
Thomas2,
In terms of the 2nd Law (as I very basically understand it), these Darwinian processes must therefore generate an increase in such organised/functional/specified complexity by being compensated from greater reductions in organised/functional/specified complexity elsewhere within the same system.
No, because complexity is not the inverse of entropy. For example, the entropy of an assembled computer is not necessariy less than the entropy of a corresponding pile of computer parts. This is discussed several times in the recent second law threads, so I won't rehash it here.
[On another note, I find your illustrations very helpful: I rather liked the furniture-into-gerbil example, and I think there is some interesting discussion that might be had around that, but that’s probably one for another day].
Yes, sometimes extreme illustrations are the best, because they make the issues stand out clearly.keiths
July 10, 2013
July
07
Jul
10
10
2013
10:59 PM
10
10
59
PM
PDT
Are you there, cantor?keiths
July 10, 2013
July
07
Jul
10
10
2013
10:45 PM
10
10
45
PM
PDT
keiths - Thank you for your reply (of July 7, 5:56, #285 as I write). One way of compactly formulating ID in words as a putative scientific theory ("theory" being used in its common scientific sense rather than in the more formal sense usually cited in ID /Creationism -v- Darwinism debates) might be something like (based mainly on Dembski) – "where in nature you encounter an entity which features appropriately statistically significant tractably (demonstrably relevant) conditionally independently specified complexity (quantifiably beyond the plausible reach of chance and necessity alone), then you can reliably make an unequivocal design inference (propose intelligent design – the planned action of a mind – as hypothesis to account for those features)”. In other words, suitably defined/delimited “organised”, “functional” or “functional” “complexity” is a reliable indicator of design in certain circumstances. Darwinian evolution proposes to account for increasing organised/functional/specified complexity in biological organisms without invoking mindful design. In terms of the 2nd Law (as I very basically understand it), these Darwinian processes must therefore generate an increase in such organised/functional/specified complexity by being compensated from greater reductions in organised/functional/specified complexity elsewhere within the same system. Since the essence of Darwinian evolution is that Darwinian processes basically operate by selective mechanisms acting on differentially fit self-replicating systems to promote those which will generate greater numbers of self-replicating offspring as a result, rather than by selective mechanisms acting directly and proportionately upon systems characterised by their organised complexity, if Darwinian processes are to succeed in increasing biological complexity their must, on average, be a correlation between increases in selectable/selected fitness and organised complexity. Hence my tentative suggestions that for Darwinian processes alone to successfully do the work attributed to them in a way consistent with the 2nd Law, (i) natural selective processes must fulfil the role of a heat pump (as an integral and essential part of the thermodynamic system under consideration), and (ii) there must be a significant positive correlation between appropriately quantified increases in [selectable fitness] and appropriately quantified increases in complexity, and hence my question (assuming that this approach is on the right track) as to whether there was any empirical evidence to support such a relationship. To put it another way, I do not know whether Darwinian processes alone can accomplish what is claimed for them. If they can, then clearly they do not violate the 2nd Law. On the face of it, the Darwinian claim to be capable alone of generating local increases in functionally complex order seems implausible (to say the least). If, however, a positive correlation of the kind I have suggested above were to be demonstrated, then not only would conformity to the 2nd Law be demonstrated, but this kind of objection would be pretty well nailed, and this particular apparent absurdity resolved. The biggest problem with my suggestion would be (I suspect) in appropriate quantification. Regarding the suggestion that there is an inconsistency between Creationist and IDer “thermodynamic” objections to Darwinian processes and their views on the “thermodynamics” of ordinary biological life processes, the difference between the proposed Darwinian mechanism and ordinary biological life processes (in statistical or complexity “thermodynamic” terms) is that biological reproductive systems already exist and fulfill the role of “heat pumps” whereas Darwinian systems have yet to be shown to be able to generally act in that way, so I too see little problem here in those terms. (I personally have a greater problem in finding a suitable way of quantifying these issues). [On another note, I find your illustrations very helpful: I rather liked the furniture-into-gerbil example, and I think there is some interesting discussion that might be had around that, but that’s probably one for another day].Thomas2
July 10, 2013
July
07
Jul
10
10
2013
01:10 PM
1
01
10
PM
PDT
CS3, Okay, but I urge you to keep thinking about it. Particularly this part: Do you see that the second law is as irrelevant to your doubts about evolution as the the first law would be to your doubts about gerbil-poofing?
While an analogy such as this (which, among other things, makes an analogy between the First and the Second Laws), will be rather strained, I will suggest we should add something like the following to make it more accurate: Later, someone (let's call him "Sewell") recognizes that, in the process of making his careful measurements, your friend (let's call him "Styer") has used two scales with completely different calibrations in determining the mass of the gerbils and of the furniture, thus making his proof that the First Law has not been violated completely invalid. Furthermore, the mass of all those gerbils very strongly appears to be much more than the mass that has disappeared from the furniture. Nevertheless, (for the sake of this analogy), it is not possible to measure the gerbils and the furniture with equivalently calibrated scales, and thus it cannot be definitively proven either way whether the First Law has been violated, though it intuitively appears that it has. Militant neo-gerbilpoofists attempt to suppress "Sewell's" paper showing the miscalibration of "Styer's" scales, because they agree with "Styer's" conclusion, even though they know deep down that the calibration of "Styer's" scales is indefensible. :)CS3
July 9, 2013
July
07
Jul
9
09
2013
06:33 PM
6
06
33
PM
PDT
1 2 3 13

Leave a Reply