Uncommon Descent Serving The Intelligent Design Community

ID is Not an Argument from Ignorance

ID opponents sometimes attempt to dismiss ID theory as an “argument from ignorance.”  Their assertion goes something like this:

1.  ID consists of nothing more than the claim that undirected material forces are insufficient to account for either the irreducible complexity (IC) or the functionally specific complex information (FSCI) found in living things. 

2.  This purely negative assertion is an invalid argument from ignorance.  As a matter of logic, they say, our present ignorance of how undirected material forces could account for the IC or FSCI found in living things (i.e., our “absence of evidence”) does not mean that no such evidence exists.  In other words, our present ignorance of a material cause of IC and FSCI is not evidence that no such cause exists.

This rejoinder to ID fails for at least two reasons.  First, ID is not, as its opponents suggest, a purely negative argument that material forces are insufficient to account for IC and FSCI.  At its root, ID is an abductive conclusion (i.e., an inference to the best explanation) drawn from the data.  This conclusion may be summarized as follows:

1.  Living things display IC and FSCI.

2.  Material forces have never been shown to produce IC and FSCI.

3.  Intelligent agents routinely produce IC and FSCI.

4.  Therefore, based on the evidence that we have in front of us, the best explanation for the presence of IC and FSCI in living things is that they are the result of acts of an intelligent agent.

The second reason the “argument from ignorance” objection fails is that the naysayers’ assertion that ID depends on an “absence of evidence” is simply false.  In fact, ID rests on evidence of absence.  In his Introduction to Logic Irving Marmer Copi writes of evidence of absence as follows:

In some circumstances it can be safely assumed that if a certain event had occurred, evidence of it could be discovered by qualified investigators. In such circumstances it is perfectly reasonable to take the absence of proof of its occurrence as positive proof of its non-occurrence.

How does this apply to the Neo-Darwinian claim that undirected material forces can produce IC and FSCI?  Charles Darwin published On the Origin of Species in 1859.  In the 152 years since then, literally tens of thousands of highly qualified investigators have worked feverishly to demonstrate that undirected material forces can produce IC and FSCI.  They have failed utterly.

Has there been a reasonable investigation by qualified investigators?  By any fair measure there has been.  Has that 152-year investigation shown how undirected material forces can account for IC or FSCI?  It has not.

Therefore, simple logic dictates that “it is perfectly reasonable to take the absence of proof” that undirected material forces can account for IC and FSCI as “positive proof of its non-occurrence.”

As far as I can see, there are two and only two responses the Darwinists can make to this argument:

1.  The investigation has not been reasonable or reasonably lengthy.

2.  Give us more time; the answer is just around the corner.

Response 1 is obvious rubbish.  If thousands of researchers working for over 150 years do not constitute a reasonable search, the term “reasonable search” loses all meaning.

Response 2 is just another of the Darwinist promissory notes we get all the time.  How many such notes must go unpaid before we start demanding that the materialists pay COD?

Comments
MathGrrl (#49), You have interesting standards. In #18 you say,
Orgel’s work is completely dissimilar except for the name.
No rationale, no reasoning, just the bald assertion. Yet somehow after I explained my reasoning in #32, you feel that
You said that before, but you still haven’t demonstrated it to be the case
You can make bald assertions, but if someone else explains the meaning of his statements, you can demand a demonstration. I have already explained my reasoning. Perhaps you could explain what part you don't get or disagree with or find incomplete, so as to facilitate my clarifying the concept to you. If not, since I stated that Durston's FCSI is a subset of Dembski's CSI, perhaps you could give a counterexample where something has a threshold amount of FCSI but does not have CSI. You do agree that Durston's FCSI exists, don't you?Paul Giem
April 16, 2011 at 9:31 PM PDT
MathGrrl:
I’m following just fine. You introduced the concept of Shannon information. Dembski’s CSI is not based on Shannon information. I’m interested in understanding Dembski’s CSI. If you didn’t intend your discussion of Shannon information to suggest a relationship, it is simply a non sequitur.
No, you are not following fine. Not at all. You seem to take pride in taking what I say out of context. Strange... CSI is a specified subset of Shannon information - Shannon information is the superset and SI (and therefore CSI) is a subset of that superset. That is a fact. That said, what I said was that CSI is Shannon information with meaning/functionality and of a certain complexity, i.e. a number of bits.Joseph
April 14, 2011 at 9:05 AM PDT
Paul Giem,
It would be more helpful if you could show that Durston’s metric is mathematically equivalent to Dembski’s description of CSI. If it isn’t, then Durston’s metric cannot be used to support claims made about CSI.
Let me try again. Durston’s metric is a subset of Dembski’s metric.
You said that before, but you still haven't demonstrated it to be the case. In fact, I'm not sure what you mean by a metric being a "subset" of another metric. Do you mean that Durston's is an approximation to Dembski's, similar to the way Newton's equations can be viewed as an approximation to Einstein's? In any case, the equivalence of Durston's metric to Dembski's CSI remains to be demonstrated mathematically.MathGrrl
April 14, 2011 at 8:48 AM PDT
BREAKING: The collapse of MG's claims on CSI and Dembski
kairosfocus
April 14, 2011 at 5:35 AM PDT
MathGrrl (#42), You say,
It would be more helpful if you could show that Durston’s metric is mathematically equivalent to Dembski’s description of CSI. If it isn’t, then Durston’s metric cannot be used to support claims made about CSI.
Let me try again. Durston's metric is a subset of Dembski's metric. Thus, the paper showing that Durston's metric can be measured in specific cases also shows that, because these are also examples of Dembski's metric, Dembski's metric can be measured in these cases. To clarify things, are you claiming that Durston's metric is not a subset of Dembski's metric (and can you support this claim)? Or are you claiming that the reports of Durston's metric being measured are wrong (and can you support this claim)? Or are you now conceding that at least sometimes Dembski's metric can be measured?Paul Giem
April 13, 2011 at 1:28 PM PDT
MG: pardon a few direct words. It was already shown, for years [cf WACs 27 - 28], that FSCI is a subset of CSI. Indeed, it is the biologically relevant subset, as can be seen from Orgel's description; which drips with allusions to biofunction. In the freshly prepared linked excerpts and analysis the links and relationships to CSI are clearly shown. Remember, the Dembski metric boils down to -- you ducked out of the thread where that was presented on Sunday, on a fairly flimsy excuse -- a measure of bits beyond a threshold that starts at 398 bits, and is predicated on the issue that islands of function in such spaces are going to be too deeply isolated to be found without active information assisted search. You may quibble at how he got there, but that is where he got to, and it is a reasonable metric on those terms:
CHI = - log2 [D1*D2*p], where D1 = 10^120 ~ 2^398.
So, on Hartley's negative-log metric approach: CHI = Ip - [398 + K2], where K2 ranges up to 100 or so bits, as VJT discussed, and so rounds off the effect of D1 and D2 as a threshold of 500 bits.
So, the CHI metric is a measure of information beyond a threshold where it is reasonable that blind chance and necessity cannot credibly get to islands or hot zones that are in configuration spaces defined by at least that many bits. Remember, the number of quantum events of the atoms in our solar system is of order 10^102, which is the sort of space taken up by 339 bits.
The Durston metric brings out a measure of the size of such islands of function and a comparison to the config spaces they sit in. The Durston approach is via extending Shannon's metric of average information per symbol, H, for functional as opposed to ground states, and judging the increment in info to do that jump. The Dembski metric looks at probabilities of being on islands of function or target or hot zones otherwise, then converts to bits and deducts a threshold for complexity that specifies degree of isolation. The two are plainly closely related; all that happens is the Durston et al. metric does not explicitly identify a threshold, but the obvious range of such thresholds is from 400 or so to 500 or 1,000 bits. Just so, my own simple brute-force X-metric simply stipulates the threshold on search considerations, then assesses specificity and complex contingency, giving a bit value if the item is beyond the threshold. I am sorry, but this looks like the fallacy of endless objection. Especially where, after weeks of ignoring the longstanding metrics and calculations that you deny exist, after confusing many others in the process, and after dodging the issue of whether Orgel and Wicken were meaningful in laying out the key concepts, we have yet to see a single substantial contribution from you, mathematical or otherwise. So, are you serious, or are you simply playing an endless objections and obfuscations rhetorical game? GEM of TKIkairosfocus
April 13, 2011 at 10:24 AM PDT
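The figures quoted in the comment above are easy to check. Below is a minimal Python sketch (standard library only; the variable names and the 1,080-bit example input are mine, added for illustration, and not taken from the comment) that verifies the bit conversions and expresses CHI as "bits beyond a threshold" in the form described there.

```python
import math

# Conversions relied on in the comment above (log base 2 of powers of ten).
bits_in_1e120 = 120 * math.log2(10)   # D1 = 10^120 expressed as a bit threshold
bits_in_1e102 = 102 * math.log2(10)   # the ~10^102 quantum-event figure in bits

print(f"10^120 ~ 2^{bits_in_1e120:.1f}  (rounded to 398 bits in the comment)")
print(f"10^102 ~ 2^{bits_in_1e102:.1f}  (rounded to 339 bits in the comment)")

# CHI as 'bits beyond a threshold', per the comment's reduction
# CHI = Ip - [398 + K2], with K2 chosen so the threshold rounds to ~500 bits.
def chi(ip_bits, k2_bits=102):
    """Return information in bits beyond the threshold; positive values pass."""
    return ip_bits - (398 + k2_bits)

# Purely illustrative input: a hypothetical 1,080-bit specification.
print(f"chi(1080) = {chi(1080)} bits beyond the 500-bit threshold")
```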
Mathgrrl, Below is the comment that you refused to engage. You can choose to do so now. - - - - - - Does the output of any evolutionary algorithm being modeled establish the semiosis required for information to exist, or does it take it for granted as an already existing quality? In other words, if the evolutionary algorithm – by any means available to it – should add perhaps a ‘UCU’ within an existing sequence, does that addition create new information outside (independent) of the semiotic convention already existing? If we lift the convention, does UCU specify anything at all? If UCU does not specify anything without reliance upon a condition which was not introduced as a matter of the genetic algorithm, then your statement that genetic algorithms can create information is either a) false, or b) over-reaching, or c) incomplete.Upright BiPed
April 13, 2011 at 10:13 AM PDT
Mathgrrl, Do you understand what "in principle" means? Are you suggesting that if you make a comment about a subject using mathematics, but that comment is then invalidated by other reasoning, your comment stands as valid regardless? How exactly is that possible, Mathgrrl?Upright BiPed
April 13, 2011 at 10:11 AM PDT
PPS: Onlookers, in fact -- as is shown in the already linked discussion of Weasel in my always linked -- "latching" was empirically demonstrated, on the record, as a behaviour of runs of reasonable Weasel-type programs. Indium is raising a red herring leading out to a strawman, which he was already setting up for soaking in distortion-laced ad hominems. This is bringing me to the verge of the conclusion that I am dealing with a troll. Unless he shows me some definite signs of reasonableness, I shall take the position that is recommended best practice for such trolls: "don't feed da trollz"kairosfocus
April 13, 2011 at 9:56 AM PDT
Paul Giem,
I am interested in CSI as defined by Dembski since that is what is claimed by many ID proponents as a clear indication of the involvement of intelligent agency. Durston’s metric is not the same and, as far as I know, has not been claimed or demonstrated to be such an indicator.
Let me help you. Durston’s metric is a subset of Dembski’s metric. I’ll give you four examples that illustrate the difference, and the similarity.
It would be more helpful if you could show that Durston's metric is mathematically equivalent to Dembski's description of CSI. If it isn't, then Durston's metric cannot be used to support claims made about CSI.MathGrrl
April 13, 2011 at 9:52 AM PDT
Joseph,
Dembski does not use Shannon information.
I didn’t say he did. Please try to follow along.
I'm following just fine. You introduced the concept of Shannon information. Dembski's CSI is not based on Shannon information. I'm interested in understanding Dembski's CSI. If you didn't intend your discussion of Shannon information to suggest a relationship, it is simply a non sequitur.MathGrrl
April 13, 2011 at 9:52 AM PDT
Upright BiPed,
In her comments leading up to Mathgrrl’s thread, she was fond of saying that evolutionary algorithms can create CSI based upon the definitions given by ID proponents. On her thread, a valid challenge (comment #31) was made to that conclusion, in principle.
That is not a "valid challenge"; it's simply your attempt to define CSI in such a way that it requires intelligence. That's pretty uninteresting mathematically and not related to Dembski's description of CSI, which was the topic under discussion.MathGrrl
April 13, 2011 at 9:51 AM PDT
bornagain77,
MathGrrl propagandizes this statement; ‘Durston’s metric is not the same and, as far as I know, has not been claimed or demonstrated to be such an indicator.’ Well Durston in this video, clearly is claiming that functional Information (FITS) is a reliable indicator of Intelligence; Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – video http://www.metacafe.com/watch/3995236/ ,,, But alas MathGrrl this does not really matter to you does it??? for you are not really interested in pursuing the truth in the first place!
Your lack of civility is noted. I trust you won't be enough of a hypocrite to accuse others of the same in the future. My statement is completely true. At the time I wrote it, I was not aware that Durston's metric had been claimed to be an indicator of intelligent agency. Further, it remains true that Durston's metric is not the same as Dembski's CSI.MathGrrl
April 13, 2011 at 9:51 AM PDT
PS: Onlookers, since one of the current talking points is that ID supporters are ducking challenges, I took time to respond with a mini essay to Indium. In fact, at this point, I have little confidence that he will pay any mind to what was just put down. If, after Dawkins' admission of the failure of Weasel to address the real challenges of functional improvement on random changes, he still brings it up after years of exchanges here at UD, that tells me a lot, none of it good.kairosfocus
April 13, 2011 at 9:48 AM PDT
Indium: Dawkins' Weasel -- cf my discussion here in my always linked -- is in fact a demonstration of design, here where variants are rewarded on increments to target without regard to functionality, as he himself admitted. Weasel should never have been used; it only succeeds in misleading the people. And I have never said anything about "Some things are very unlikely to happen if the only resource you have is complete randomness." I think you need to read, say, the introductory remarks on the issues of origins science here to get a better balanced view on what is going on; you seem to have thought that the Darwinist critics at their sites will give a true and fair view. Not so, on long experience. You will easily see that phenomena trace their causes to chance and/or necessity and/or art, on an aspect by aspect basis. Each has characteristic signs and capabilities. Mechanical necessity (a dropped heavy object reliably falls) does not account for high contingency but for regularities of nature like the law of gravity just exemplified. Chance contingency leads to stochastic distributions of outcomes. For instance, if our dropped object is a fair die, it comes up in positions 1 to 6 at random, with more or less equal frequency. Two dice would sum from 2 to 12, with 7 the most likely outcome. That domination by statistical weight of possible ranges of outcomes means that if an island of function is sufficiently isolated in the relevant config space there will not be enough search resources for chance to hit on its shores, i.e. to get that first level of success that can then lead to hill climbing. And, recall, "enough resources" issues start as quickly as 1,000 bits of information. Intelligence is able to generate purposeful choice contingency and so gives things directed configurations that are functional and complex, i.e. on islands of isolated, complex function. That is why FSCO/I beyond 1,000 bits is a reliable sign of design. Have you seen any coherent posts in this blog that were credibly produced by mechanical necessity and/or chance contingency? The problem with origin of life on evolutionary models is that, first, the only observed life embeds a metabolic capacity coupled to a von Neumann, stored coded information based self-replicator, which itself requires codes, algorithms, data structures, information that is highly specific, and a means of putting that set of instructions to work. Such is irreducibly complex, and the DNA tells us that the stored information starts at 100 k+ bits. To compare, yesterday I showed how a blank Word doc has 150+ k bits. DNA is extremely efficient coding to do what it does in the space it uses! And yet, 100 k bits is well past the 1 k bit threshold. Until it is passed, there is no credible capacity to do metabolism and to do self-replication, including making the set of required working molecules to carry on the activities of life. When it comes to more complex body plans, we are looking at 10+ million bits, dozens of times over. When it comes to the origin of the human body plan with language capacity, the same again. Darwinian-type evolutionary mechanisms can explain modest hill climbing within an island of function, but they have no empirical demonstration of capacity to get to such an island of function. That is, no ability to explain body-plan-level macroevolution, which is required to explain the origin of the range of species. Some other forms of evolution could explain such, on intelligent intervention.
Indeed, a nanotech molecular technology lab a few generations beyond Venter could do it. Mechanisms to effect design are quite conceivable, and Venter has demonstrated the first steps to routinising that. Already GMOs are a force in agriculture; they are even talking of GMO fish being approved, though I am a little leery. GMO corn is a major crop. GMO sugar cane is what drives Brazil's energy cane industry. In short, we see that design can do it, and we do not see how Darwinian mechanisms -- despite 150 years of claims -- can. Indeed, since the infinite monkeys threshold starts at 125 bytes of info, we have a strong analytical barrier, not just observations. That is what you need to answer to, and answer on empirical data, not just-so stories that presume a priori materialism so that serious improbabilities are brushed aside. Nor will strawman distortions of the issues being raised, like those I cited at the top of this discussion, do. And, Dawkins' Weasel trick is not a good place to begin. GEM of TKIkairosfocus
April 13, 2011 at 9:41 AM PDT
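The two-dice illustration in the comment above can be checked with a few lines of Python (a trivial enumeration, included only to make the claimed distribution concrete; nothing here is taken from the comment beyond the dice example itself).

```python
from collections import Counter

# Enumerate all 36 equally likely outcomes of two fair dice and tally the sums.
sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))

for total in range(2, 13):
    print(f"sum {total:2d}: {sums[total]}/36")

# Seven is the most likely sum (6 of 36 outcomes), illustrating the comment's
# point that chance contingency yields a characteristic stochastic distribution.
```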
kf Yes I understand all this stuff. Thanks for the summary. Some things are very unlikely to happen if the only resource you have is complete randomness. Dawkins' Weasel shows what happens when you start to have non-random components in the process. We could probably argue about some details again for hours (latching!), but that is beside the point: As soon as you have some kind of feedback in the process, your chances will increase dramatically. So, to the point, what you attack is a straw man version of evolution. Evolution has highly non-random feedback mechanisms that filter the mutation-induced noise in each generation. Please note that I don't say that this answers all the questions with regard to the realistic capabilities of evolution. I just say that what you attack here has not much to do with evolution at all. It's a straw man, pure and simple.Indium
April 13, 2011 at 4:44 AM PDT
Dr Giem: Durston is using real world observed distributions of AA's in protein families to assess the ways in which we get islands of function in the space of possible configs. His analysis of null, ground and functional states on Shannon's H-metric of average information per symbol and increment in information per symbol to go from state to state, is strongly related to the Dembski type islands of function approach. I cannot understand why it is that some would try to drive a wedge between the two looks at the matter. They are obviously related. Of course Dembski has been trying to get at a broader view, so that he does not use function as the specific way to impose a specification, but he does speak of function as one way to cash out specification; which goes back to Wicken and to Orgel. The way I see it is that if we have real world results in the form of distributions of AA's for proteins in families, for various organisms, why not use that distribution as a good sample of the real world possibilities? My own quick and dirty look, as noted in discussing a hypothetical protein's sequence variability while retaining function, in my always linked note, contains this remark:
If, instead, we model the individual AA's as varying at random among 4 - 5 "similar" R-group AA's on average without causing dysfunctional change, the full 232-length string would vary across 10^150 states. As a cross-check, Cytochrome-C, a commonly studied protein of about 100 AA's that is used for taxonomic research, typically varies across 1 - 5 AA's in each position, with a few AA positions showing more variability than that. About a third of the AA positions are invariant across a range from humans to rice to yeast. That is, the observed variability, if scaled up to 232 AA's, would be well within the 10^150 limit suggested; as, e.g. 5^155 ~ 2.19 * 10^108. [Cf also this summary of a study of the same protein among 388 fish species.]
That looks like a picture of an island of function in a wider sea of possibilities, at least to me. GEM of TKIkairosfocus
April 13, 2011 at 2:04 AM PDT
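A short Python check of the arithmetic in the excerpt above (the 232-residue length, the one-third invariant fraction, and the 4-5 interchangeable residues per position are taken from the comment as stated; nothing here validates the underlying biology).

```python
import math

# Figures taken as given from the excerpt above (not independently verified).
total_positions = 232        # hypothetical protein length used in the note
invariant_fraction = 1 / 3   # about a third of positions are invariant
options_per_position = 5     # roughly 4-5 'similar' residues tolerated per position

variable_positions = round(total_positions * (1 - invariant_fraction))   # ~155
log10_states = variable_positions * math.log10(options_per_position)

print(f"variable positions ~ {variable_positions}")
print(f"{options_per_position}^{variable_positions} ~ 10^{log10_states:.1f}")
print(f"within the 10^150 figure quoted above: {log10_states < 150}")
```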
Indium: The underlying issues for the Infinite Monkeys analysis are at the heart of the questions on the design theory issue. One must understand what it is getting at -- the business of random walk plus trial and error searches of large configuration spaces -- if one is to have any reasonable idea of the issues at stake in the discussion. So, pardon a bit of a tutorialish pause . . . 1 --> The infinite monkey theorem is about real world testing of the likelihood of random walk search plus trial and error to find functionally specific, complex information; or, 2 --> at least to find hot zone clusters of microstates (in the thermodynamics context). 3 --> It was long -- and often -- said that evolutionary advocates from C19 on were arguing that a large enough group of Monkeys, banging away at keyboards at random, would eventually type out the works of Shakespeare. 4 --> Thus, from my childhood [I recall there was more than one Sci Fi short story on this], I have been familiar with the rhetorical claim that it is plausible for surprisingly unusual configurations can be accessed by chance given enough resources. 5 --> Indeed, that is the general context of Dawkins' notorious targetted search Weasel software, and indeed it is partly why he chose a phrase in Shakespeare. 6 --> From the state of debate on the roots in Wiki, it seems the claimed C19 provenance in evo debates is not documented [as opposed to is not real -- not everything gets written down or printed . . . ], but there is documentation of use as a metaphor for the challenges implied in trying to get around the second law of thermodynamics by chance (which may reflect oral tradition on use in debates on evolution!). 6 --> I never met this theorem in that thermodynamics context, but I met the rough equivalent of the question of the odds of the O2 molecules in a lecture room all rushing to one end by chance. Logically and physically possible, but not observable on the gamut of the cosmos. 7 --> This grounds the sort of premise used in the 2nd law of thermodynamics: not all that is possible is sufficiently likely to spontaneously happen. Hence Hoyle's scaled up metaphor of a tornado in a Junkyard (and Robertson's metaphor of an air traffic control system gone awry where they no longer know where the many many planes are, as a model for the informational thermodynamical view of molecular chaos). 8 --> Cf my thought experiment scaling back down of the chance assembly challenge to Brownian motion level here.
(My copy of Kittel's Thermal Physics has somehow been misplaced in going back and forth across the region, so I cannot check the Wiki cite from him directly now.)
9 --> In short, the point addressed by the Monkeys analysis is central to the issues being raised by the design inference approach. 10 --> In particular, there are some things that are sufficiently remotely likely that they are empirically implausible and practically unobservable due to the balance of relative statistical weight of microstate clusters linked to the general macro-level circumstances, as Abel has elaborated in his recent paper here. 11 --> This idea is also connected, at a much simpler level, to traditional hypothesis testing. 12 --> For the idea is that if you pick samples at random from especially a bell type distribution, it is much less likely to be in the far tails than in the central bulk.
(So if the null hyp is that you belong to distribution A not B, but your sample is in the far tail of A but could possibly come from the bulk of B, it is more reasonable to infer that the better explanation is that you are in the bulk of B than the far tail of A. So, with a certain level of confidence, one rejects the null and accepts the alt hyp.)
13 --> I have used the image of dropping darts from a stepladder onto a chart of a distribution, with, say, 30 points. If you then mark even-width stripes you will see that the dart drops are far more likely to hit the tall stripes from the bulk than the far tails, but will give a sample that can be turned into a fair picture of the original chart. 30 hits is of course chosen because that is about the point where the law of large numbers has an impact. (This is actually an adaptation of one of my first university level physics exercises, of tossing darts at a graph paper with a point target and plotting the resulting distribution in bands, which I adapted for my own teaching, to a model of statistical process control with six-sigma banding . . . ) ____________ Okay, that should help set the issues in context for a more focussed reflection and discussion. GEM of TKIkairosfocus
April 13, 2011 at 1:50 AM PDT
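Points 11-13 above lean on a standard hypothesis-testing intuition that is easy to simulate. The sketch below is illustrative only: the choice of a standard normal distribution and a two-sigma cutoff are my own assumptions; only the 30-sample figure comes from the comment.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

N = 30  # the sample size mentioned in point 13 above
samples = [random.gauss(0.0, 1.0) for _ in range(N)]

in_bulk = sum(abs(x) <= 2.0 for x in samples)  # within two standard deviations
in_tails = N - in_bulk

print(f"{in_bulk} of {N} random 'dart drops' land in the central bulk (|x| <= 2)")
print(f"{in_tails} of {N} land in the far tails (|x| > 2)")
# About 95% of a normal distribution lies within two standard deviations, so a
# small random sample almost always concentrates in the bulk rather than the tails.
```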
ID opponents sometimes attempt to dismiss ID theory as an “argument from ignorance.”
It is an argument from ignorance. It goes something like this: I'm ignorant of the arguments for ID. Therefore, ID theorists are ignorant. It follows by the impeccable force of logic that ID is an argument from ignorance. http://www.talkorigins.org/indexcc/CA/CA100.html
Mung
April 12, 2011 at 5:41 PM PDT
MathGrrl (#16), You say,
I am interested in CSI as defined by Dembski since that is what is claimed by many ID proponents as a clear indication of the involvement of intelligent agency. Durston’s metric is not the same and, as far as I know, has not been claimed or demonstrated to be such an indicator.
Let me help you. Durston's metric is a subset of Dembski's metric. I'll give you four examples that illustrate the difference, and the similarity.
A. The tar at the bottom of a Miller-Urey apparatus has long polymeric chains, but no discernible order. This has neither Durston FCSI nor Dembski CSI.
B. A long string of DNA with random bases has a long specified backbone but no discernible order to the bases themselves. Whether the backbone itself can be formed without intelligent intervention can be disputed (I tend to believe it can't), but the arrangement of the bases does not have either Durston FCSI or Dembski CSI.
C. A long string of DNA capable of coding for a 500 amino acid residue protein, at least half of which must be correct in order for the protein to function, has a probability of 20^(-250) of forming spontaneously, and thus has log2 (2^250 * 10^250), or 250 + 830, or 1080 bits of information, well over the Dembski limit. Since its information is defined by its ability, when translated, to perform a function, it has both Dembski's CSI and Durston's FSCI.
D. Venter's watermarks have 60 amino acid residues coded for in total, which means 259 bits (actually a little more because of the absence of stop codes, plus if there are stop codes on either side of each string the string length is increased to 70). These watermarks do not have Durston's FSCI, as their specification is not functional, but they do have CSI as defined by Dembski, just not enough bits to surpass the universal limit set by Dembski.
I'd be willing to hazard a guess that most of the ID-friendly commentators at UD would agree with me. I invite those who choose to do so to agree or disagree. Since you apparently disagree with this analysis, could you please explain why, and specifically why Durston FSCI is not a subset of Dembski CSI?Paul Giem
April 12, 2011 at 11:23 AM PDT
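The bit counts in examples C and D above can be reproduced in a few lines (a minimal Python check; the biological assumptions, such as half of the 500 residues needing to be correct, are taken from the comment as stated rather than defended here).

```python
import math

bits_per_residue = math.log2(20)   # 20 possible amino acids per position

# Example C above: 500-residue protein, 250 positions must be correct,
# probability 20^-250 of forming spontaneously.
bits_C = 250 * bits_per_residue
print(f"Example C: {bits_C:.0f} bits")   # the comment gives 250 + 830 = 1080 bits

# Example D above: Venter's watermarks, 60 residues specified in total.
bits_D = 60 * bits_per_residue
print(f"Example D: {bits_D:.0f} bits")   # the comment gives 259 bits

# Compare against the 500-bit universal bound discussed in this thread.
for name, bits in (("C", bits_C), ("D", bits_D)):
    print(f"Example {name} exceeds 500 bits: {bits > 500}")
```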
kf Yes, I know the wiki link - I just don't understand what you're getting at, hence the question. Anyway, I will no longer distract from the main topic of this thread.Indium
April 12, 2011 at 10:58 AM PDT
PS: You were already given a link on the infinite monkeys theorem discussion. Here is the Wiki article, which brings up and addresses all the relevant issues at 101 level.kairosfocus
April 12, 2011 at 9:07 AM PDT
Indium: the threshold begins not at the level of a tornado in a junkyard building a Jumbo jet, but at the level of producing 125 functionally specific bytes, or 1,000 bits or 143 ASCII characters. The attempt to dismiss Hoyle's point as a fallacy is itself a strawman. FYI, 125 bytes of information is trivially small for anything that has to seriously function on a specific configuration. That obtains for origin of life and for origin of body plans, most notably the origin of the unique human physical equipment to use conceptual, verbal, articulate language. Observed cell-based life is irreducibly complex on an integration of metabolising capacity and a coded information based von Neumann self-replicator. If you want to propose a hypothetical autocatalytic RNA world, you need to produce empirical evidence to substantiate the origin of codes, algorithms, data structures to express required info, informational molecular nanomachines and their irreducibly complex functional integration on blind watchmaker processes in credible prelife environments. The infinite monkeys result already tells us this is not credible. But maybe you know of a set of results not previously known that renders such credible on the gamut of our observed cosmos. Going beyond that, you have to similarly cross the body plan origination threshold, including for the origin of the human language and cognitive ability that is so tightly bound up with it. I predict, on track record: once the distractive rhetorical gambits [such as the so-called fallacy of Hoyle] are set aside, no empirical evidence that crosses the informational gaps, but plenty of presumptions on a priori materialism that make it an "it must have been so." NOT. We know intelligence routinely creates FSCO/I. Designers create machines controlled by and expressing FSCI-rich software, and we are working on miniaturisation. We know in principle how to design a vNSR, though we are currently nowhere near a full kinematic implementation. There was a promising case of a machine that sort of replicated itself as a 3-d printer recently, though. In short, design is an infinitely better warranted explanation for FSCO/I than blind watchmaker chance plus necessity, including in living cells and complex multicellular organisms up to and including language-using man. And that sticks crossways in the gullet of the materialist establishment. GEM of TKIkairosfocus
April 12, 2011 at 9:04 AM PDT
kf Congratulations, I think we can all agree that you have successfully demonstrated that a tornado-in-the-junkyard scenario is not a good explanation for the diversity of life we see today! Infinite monkeys: Not so sure! Infinities are always a bit tricky and sometimes they achieve extraordinary things (which must especially be true for infinite monkeys, of all things). Could you elaborate, please?Indium
April 12, 2011 at 8:06 AM PDT
PS: In case the temptation is to again brush aside the X-metric as non-quantitative, observe how it is used:
C = 1/0, on a semiotic agent's reasonable evaluation that the item is beyond 1,000 bits of contingent complexity.
S = 1/0, on the SA's judgement on warrant that the item is specific per function, code use, K-compressibility, etc.
B = number of bits used.
X = C*S*B
That is, this is a direct application of the explanatory filter. If not contingent, C = 0 and X = 0. If not specific [almost any complex bit string will do], S = 0 and X = 0. Only if the item is both complex while being contingent, and specific, can X rise above zero. Once past the thresholds, X is the number of functionally specific bits used. In the WACs there is a calculation for an RGB computer screen full of useful information. Any English ASCII text string that passes 143 characters will be deemed FSCI, and the DNA complement of the living cell will be deemed FSCI. On the explanatory filter, such FSCI is deemed designed. I hold this is obviously quantitative, and is based on a cogent conceptual model that can be practically, operationally used. Indeed, it has been routinely, implicitly used when we speak of typical working computer files of any size.kairosfocus
April 12, 2011 at 7:48 AM PDT
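Since the comment above lays the X-metric out as a simple product, here is a minimal sketch of how it could be coded (Python; the function name and parameters are my own illustrative rendering of the description X = C*S*B with a 1,000-bit threshold, not an official implementation).

```python
def x_metric(bits_used, contingent_and_complex, specific, threshold_bits=1000):
    """Rough sketch of the brute-force X-metric described above: X = C*S*B.

    C = 1 if a semiotic agent judges the item contingent and beyond the
        complexity threshold, else 0.
    S = 1 if the agent judges it specific (function, code use, compressibility).
    B = number of (functionally specific) bits used.
    """
    c = 1 if (contingent_and_complex and bits_used > threshold_bits) else 0
    s = 1 if specific else 0
    return c * s * bits_used

# 143 characters of 7-bit ASCII English text: 1,001 bits, just past the threshold.
print(x_metric(143 * 7, contingent_and_complex=True, specific=True))   # -> 1001
# A complex but non-specific random string of the same length scores zero.
print(x_metric(143 * 7, contingent_and_complex=True, specific=False))  # -> 0
```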
MG: Pardon, but CSI was NOT defined by Wm Dembski. As you were repeatedly corrected in the earlier thread, and as has sat in the UD WACs 25 ff for years, it was defined on key examples -- i.e. an ostensive definition [and you were also given a tutorial on definition that you ignored] -- by Orgel and Wicken in the 1970's. What Dembski did was, for sufficiently low probabilities on chance-driven hyps, define a Hartley-style log-probability info metric [the BASIS for Shannon's info metrics and analysis], then apply a beyond-a-threshold criterion, as Joseph just summarised:
Claude Shannon provided the math for information. Specification is Shannon information with meaning/ function (in biology specified information is cashed out as biological function). And Complex means it is specified information of 500 bits or more- that math being taken care of in “No Free Lunch”. That is it- specified information of 500 bits or more is Complex Specified Information. It is that simple . . .
1 --> The basic idea here is that once we -- us semiotic, judging, observing agents who do science -- are looking at identifiable complex specified information, we can assign a reasonable chance-driven hyp and assess a probability. 2 --> That probability inverted and logged [leaving off a posteriori issues] as Hartley did, gives us an info metric, in bits if the log is base 2. (Cf my basic tut in my always linked, here on this. Have you had to deal with designing, developing or testing or analysing real world digital comms systems working with bits?) 3 --> This has been naturally extended to identifying quantity of info carrying capacity in bits by looking at number of contingencies per symbol of element or parameter. this is the commonplace measure of memory, CDs etc in bits. 4 --> Now, we recognise that a space of contingencies based on possible configs is in principle searchable by chance and trial and error. Indeed that is Darwin's theory in a nutshell. But, once we come to sufficiently large config spaces, the odds of finding recognisably special, hot or target zones or islands of function by random walks plus trial and error falls as bit depth rises. 5 --> Indeed, so far we see on reported Infinite Monkey real world tests [as has been drawn to your attention repeatedly but never acknowledged as noticed], that spaces of order 10^52 are demonstrably searchable for islands of function, i.e. 175 or so bits. Citing the just linked:
A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
6 --> But, spaces far beyond that are growing exponentially. For the number of possibilities grows as 2^n. So, when we start to look at 400 - 500 bits or 1,000 bits, we are dealing with a very different kettle of fish: 10^120 - 10^150 or 10^301 possibilities or so. In a world where the maximum reasonable number of bit operations is 10^120, the maximum number of Planck-time quantum states of 10^80 atoms is 10^150, and 10^301 is ten times the square of that. 7 --> As you will see in the Little Green Men thread (where you declined to comment specifically) from 14 - 16, based on the discussion over the weekend, the undersigned analyses that the various relevant metrics [including Dembski's and variants thereof] are doing a Hartley information-beyond-a-threshold metric, i.e. they are looking at searching a config space and are positing that beyond a threshold, it is unreasonable to expect that recognisable special zones will be hit by random walks plus trial-and-error dominated searches. (If you want to argue that the laws of necessity of the cosmos acting on initial conditions force the emergence of life, that is tantamount to a declaration that the cosmos is designed and programmed to produce life; which would immediately imply that the design inference on seeing the FSCI in DNA is correct. There are two observed sources of high contingency, chance and choice.) 8 --> Now, in the quantitative metrics under description that you deny the effective existence of, the de facto thresholds applied are at 398+, 500 and 1,000 bits, in a context where we semiotic agents identify that we are dealing with complex specification by various means including K-compressibility of the description. 9 --> In the case of the Dembski metric that you have dismissed, the threshold will only be passed if the neg log p(T|H) value is in excess of 398 bits, i.e. the probability is of order 1 in 10^120 as an upper bound, which is sufficiently low for the implicit approximation away from the analytical result to be acceptable. 10 --> Given the way the log of a product operates, phi_S EXTENDS the threshold value, with 10^150 being a natural upper bound. So, the Dembski metric can be seen in Hartley information terms, thusly, excerpting the LGM comment 16:
Chi=-log2[10^120.Phi_s(T).P(T|H)], . . . Eqn 9 . . . we see the same structure C = – log2[D*p], with only the value of K = – log2 (D) being differently arrived at. In this case, we have a compound factor one term being a metric of number of bit operations in the observed cosmos, and the other — expanding the threshold bit depth — being a metric of “the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T.” 10^120 is basically a multiplier taking into account “where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120.” So, whatever the technical details and critiques involved, the metrics all boil down to identifying a reasonable threshold for which, beyond it, once we have specified complexity by KC-compressibility or functionality etc, we can be confident that the hot zone or the like are maximally not likely to have been hit upon by chance.
11 --> That is, we move like this:
Chi = -log2[10^120.Phi_s(T).P(T|H)]
or, Chi = - log2[D1*D2*p]
i.e., Chi = - log2(p) - K1 - K2
or, Chi = I - 398 - K2
12 --> In relevant cases, Dembski's metric Chi is a measure of specified information in bits beyond a flexible threshold driven by a lower bound of 398 bits, and with a natural upper bound at 500 bits. 13 --> That threshold is set, again, based on criteria of complexity that are reasonable, identifying when a recognisable hot zone is credibly so deeply isolated that it is a superior inference to hold that if we see something of that much specified complexity, it is most reasonably understood as an artifact, not a product of blind watchmaker processes. 14 --> And the simple brute force X-metric stands out as again using a reasonable judgement on contingent complexity at 1,000 bits, and specificity by recognisable function based on a limited cluster of configs, use of a meaningful code with restrictive rules and symbols, or the same sort of K-compressibility that is otherwise described. then, we simply use the number of bits used. 15 --> the 1,000 bit limit is set to get around probability density function debates. The whole observable cosmos acting as a search engine could not sample more than 1 in 10^150 of the states so no credible search is possible. 16 --> Bluntly put, if you see a flyable jumbo jet the best explanation is design, just as it is the best explanation for Ascii text in English beyond 143 characters, and by extension, the DNA code for the cluster of proteins in the living cell. 17 --> To overturn this, you do not need to go into all sorts of debates over whether everything has to be reducible to mathematical models to be meaningful (self referentially absurd BTW, reduce that to a math metric please) all you need to do is to produce a case where at least 143 characters of ASCII text in English have been created by Infinite Monkey processes, and you can use the Gutenberg library collection as a test base or the like. 18 --> And in fact this has been repeatedly pointed out to you. So-called evolutionary algorithms that are intelligently designed to hill climb within islands of function are not counter examples for the obvious reasons. Duplicating functional strings is not an explanation of the origin of the info in the strings by chance and necessity, it is simply duplication. 19 --> See if, being functional all the way in at least a core group of sentences, you can convert "See Spot run" into a Shakespearean Sonnet much less play, or the like, by duplication, random walk variation, and trial and error, within the search resources of the observable cosmos. 20 --> if you can, on observation, you have shown the capacity of chance plus necessity to produce FSCI from scratch. That is the criterion of empirical testability. 21 --> On the infinite monkeys analysis and the induction on reliable tested sign, we hold that FSCI is a reliable sign of design. Indeed, given the link to the second law of thermodynamics, you are setting out on the task of proposing to create an informational equivalent to a perpetual motion machine of the second kind. 22 --> You will therefore understand our comfortable conclusion that your task is almost certainly hopeless. _______________ GEM of TKIkairosfocus
April 12, 2011 at 7:29 AM PDT
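A few of the conversions in points 5, 6 and 17 above can be checked directly (a short Python sketch verifying only the arithmetic, not the broader claims; the rounding precision is my own choice).

```python
import math

# Point 5: a search space 'of order 10^52' expressed in bits.
print(f"log2(10^52) ~ {52 * math.log2(10):.1f} bits")   # quoted as '175 or so'

# Point 6: thresholds of 400, 500 and 1,000 bits as numbers of configurations.
for bits in (400, 500, 1000):
    print(f"2^{bits} ~ 10^{bits * math.log10(2):.1f}")

# Point 17: 143 characters of 7-bit ASCII text.
print(f"143 chars x 7 bits = {143 * 7} bits")
```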
MathGrrl, The book "No Free Lunch" introduced CSI and states it pertains to origins. That you refuse to read the book is an indication that you aren't interested in anything beyond getting the water all muddy.Joseph
April 12, 2011 at 7:01 AM PDT
Claude Shannon provided the math for information. Specification is Shannon information with meaning/ function (in biology specified information is cashed out as biological function). MathGrrl:
Dembski does not use Shannon information.
I didn't say he did. Please try to follow along.
Further, Schneider has shown that a small subset of known evolutionary mechanisms can generate arbitrary amounts of Shannon information.
Your continued equivocation is duly noted - as is your willful ignorance. However, neither of those refutes CSI nor addresses what I posted. And Complex means it is specified information of 500 bits or more - that math being taken care of in “No Free Lunch”. MathGrrl
And yet, no one in my guest thread was able to provide detailed calculations of CSI for the four scenarios I described.
And yet I told YOU how to do it for yourself. Why can't you give it a go? I gave you one answer already.Joseph
April 12, 2011 at 6:48 AM PDT
In her comments leading up to Mathgrrl's thread, she was fond of saying that evolutionary algorithms can create CSI based upon the definitions given by ID proponents. On her thread, a valid challenge (comment #31) was made to that conclusion, in principle. She ducked that challenge by repeatedly asking a question that had no bearing whatsoever on the challenge being made. There is little doubt that she did not want to acknowledge the validity of the challenge because it would add a certain perspective to her comments that was unwelcome - which is exactly why I made the challenge. Materialists who promote EAs often like to suggest that a solution to the mystery of the information within the genome is being found, yet the very thing that creates that mystery has nothing whatsoever to do with an evolutionary algorithm.Upright BiPed
April 12, 2011 at 6:44 AM PDT
Another confusion for MathGrrl is her refusal to understand that CSI pertains to ORIGINS. MathGrrl:
That appears to be your own idiosyncratic view, not shared by many, if any, other ID proponents.
Strange that I quoted Dembski in support of my claim:
The central problem of biology is therefore not simply the origin of information but the origin of complex specified information. page 149 of "No Free Lunch" (bold added)
ID is based on three premises and the inference that follows (DeWolf et al., Darwinism, Design and Public Education, pg. 92):
1) High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
2) Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
3) Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
4) Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems. (bold added)
IoW MathGrrl proves she is either willfully ignorant or purposely obtuse.Joseph
April 12, 2011 at 6:42 AM PDT