Uncommon Descent Serving The Intelligent Design Community

The Tragedy of Two CSIs


CSI has come to refer to two distinct and incompatible concepts. This has led to no end of confusion and flawed argumentation.

CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation. It is a measurement of how unlikely the artefact was to emerge given its context. This is the version that I’ve been defending in my recent posts.

CSI, as used by others, is something more along the lines of the appearance of design. It’s typically along the same lines as the notion of complicated developed by Richard Dawkins in The Blind Watchmaker:

complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.

This is similar to Dembski’s formulation, but where Dawkins merely requires that the quality be unlikely to have been acquired by random chance, Dembski’s formula requires that the quality be unlikely to have been acquired by random chance and any other process, such as natural selection. The requirements of Dembski’s CSI are thus much more stringent than those of Dawkins’s complicated or the non-Dembski CSI.
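The contrast can be made concrete with a toy calculation. The sketch below is my own illustration, not a formula from either author: it scores improbability in bits as -log2(p). Under a pure-chance hypothesis a 100-residue protein scores hundreds of bits, but under a hypothesized selection-assisted route with a much higher success probability (the 1e-6 figure is purely illustrative), the same artefact scores far fewer.

```python
import math

def improbability_bits(p: float) -> float:
    """Improbability of an outcome, in bits, under a given
    chance hypothesis: -log2(p)."""
    return -math.log2(p)

# Dawkins-style "random chance alone": a 100-residue protein with
# 20 equiprobable amino acids per site.
p_pure_chance = (1 / 20) ** 100
print(improbability_bits(p_pure_chance))   # ~432.2 bits

# Dembski's formulation must instead use the probability under ALL
# relevant processes. If a selection-assisted route reached the target
# with p = 1e-6 (an illustrative number only), the score collapses:
print(improbability_bits(1e-6))            # ~19.9 bits
```

The point of the sketch is only that the two versions of CSI plug different probabilities into the same logarithm, and so can disagree by hundreds of bits about the same object.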

Under Dembski’s formulation, we do not know whether or not biology contains specified complexity. As he said:

Does nature exhibit actual specified complexity? The jury is still out. – http://www.leaderu.com/offices/dembski/docs/bd-specified.html

The debate for Dembski is over whether or not nature exhibits specified complexity. But for the notion of complicated or non-Dembski CSI, biology is clearly complicated, and the debate is over whether or not Darwinian evolution can explain that complexity.

For Dembski’s formulation of specified complexity, the law of the conservation of information is a mathematical fact. For non-Dembski formulations of specified complexity, the law of the conservation of information is a controversial claim.

These are two related but distinct concepts. We must not conflate them. I think that non-Dembski CSI is a useful concept. However, it is not the same thing as Dembski’s CSI. They differ on critical points. As such, I think it is incorrect to refer to all of these ideas as CSI or specified complexity. I think that only Dembski’s formulation, or variations thereof, should be termed CSI.

Perhaps the toothpaste is already out of the tube, and this confusion of the notion of specified complexity cannot be undone. But as it stands, we’ve got a situation where CSI is used to refer to two distinct concepts which should not be conflated. And that’s the tragedy.

Comments
This is undoubtedly far too late to the party, but I can't help but be fascinated by the perpetual wrestling over the validity and application of the CSI concept, and, consequently, want to add a few ideas to the pot. My primary exposure to the concept of CSI is via Meyer's Signature, so I can't comment with any force on Dembski's development or application of his conception of CSI. Meyer discusses CSI (though I'm not sure that he explicitly uses that acronym, he certainly gives a very lucid exposition of the component concepts of CSI) in chapter 4 of SITC under the subheading "Shannon Information or Shannon Plus?" (pg. 105-110). He begins with the hypothetical anecdote concerning the attempts of Misters Jones and Smith to reach each other via phone - contrasting Mr. Jones's random 10-digit sequence with Smith's specifically arranged sequence (i.e. Jones's phone number). Meyer then makes this comment concerning the distinction between Jones's sequence and Smith's sequence:
Both sequences... have information-carrying capacity, or Shannon information, and both have an equal amount of it as measured by Shannon's theory. Clearly, however, there is an important difference between the two sequences. Smith's number is arranged in a particular way so as to produce a specific effect, namely, ringing Jones's cell phone, whereas Jones's number is not. Thus, Smith's number contains specified information or functional information, whereas Jones's does not; Smith's number has information content, whereas Jones's number has only information-carrying capacity (or Shannon information). [Emphases from original]
Note how Meyer uses the terms specified information and functional information in parallel. It becomes quite apparent, then, that the term specified in CSI is to be identified with function. We could just as properly use the term CFI if we so chose. Meyer next tackles the 'C' - complexity:
Both Smith's and Jones's sequences are also complex. Complex sequences exhibit an irregular, nonrepeating arrangement that defies expression by a general law or computer algorithm.... Complex sequences... cannot be compressed to, or expressed by, a shorter sequence or set of coding instructions. (Or rather, to be more precise, the complexity of a sequence reflects the extent to which it cannot be compressed.) [Emphasis from original]
Here it is shown that complexity corresponds to the compressibility of a sequence (or, rather, the lack thereof). It should also be noted that complexity in this sense is not an "either-or" type of quality. Some sequences may resist even the smallest degree of compression, yet others may be wholly compressible, and many likely fall somewhere in the middle. But how are we to understand information? One thing that jumps out in Meyer's discussion of the subject is that he specifically confines his usage of the term 'information' to linear digital sequences. He uses the term in reference to numerical sequences, alphabetic sequences, amino acid sequences, and nucleotide sequences. All of those can properly be described as linear digital sequences (where "digital" is understood as referring to discrete values - as opposed to continuous values). Putting all of this together creates this definition of CSI: linear digital sequences that are algorithmically incompressible (in some measure) and possess functional significance. Meyer makes the applicability of this concept to biology clear in paragraph 2 of page 109:
Molecular biologists beginning with Francis Crick have equated biological information not only with improbability (or complexity), but also with "specificity," where "specificity" or "specified" has meant "necessary to function." Thus, in addition to a quantifiable amount of Shannon information (or complexity), DNA also contains information in the sense of Webster's second definition: it contains "alternative sequences or arrangements of something that produce a specific effect.".... DNA displays a property - functional specificity - that transcends the merely mathematical formalism of Shannon's theory. [Emphases from original]
We can thus confidently say that DNA (and nucleotide sequences) possesses CSI in the aforementioned sense, namely, it contains sequences that actually do something (to put it as simply as possible). One especially salient point that Meyer makes is that functional specificity (or specificity) cannot be reduced to mere numerical status - it transcends it. So while it is perfectly possible to calculate the 'C' part of CSI (its complexity or Shannon information), the 'S' part cannot be calculated (as I believe Eric earlier pointed out). Having said all of that, here are a few questions for CSI skeptics:
1. Is there such a thing as Shannon information (sequence complexity)?
2. Do you believe that 'function' is a useful descriptor?
3. Can a sequence possessing some amount of Shannon information simultaneously have functional significance that is sequence-dependent?
a) If yes, would you agree that CSI (as defined in this comment) has at least limited applicability?
b) If no, how would you differentiate between functional and non-functional sequences?
For fellow ID proponents:
1. Meyer states that the Shannon information of Jones's random 10-digit number is 33.2 bits. If a 10-digit base-10 number (non-redundant, I assume) contains 33.2 bits of Shannon information, could we say that a functional sequence of the same type contains 33.2 bits of CSI?
a) If not, why not?
I'm curious to hear the responses, so please chime in if you have the time. Thx:)Optimus
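On Optimus's last question: Meyer's 33.2-bit figure falls straight out of Shannon's capacity formula for equiprobable symbols. The sketch below is my own illustration of that arithmetic, not Meyer's code.

```python
import math

def shannon_capacity_bits(alphabet_size: int, length: int) -> float:
    """Information-carrying capacity of `length` equiprobable symbols
    drawn from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# Meyer's figure for a 10-digit base-10 phone number:
print(round(shannon_capacity_bits(10, 10), 1))   # 33.2
```

Note that the formula measures only carrying capacity; whether a functional sequence of the same length thereby "contains 33.2 bits of CSI" is exactly the point under dispute in this thread.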
December 7, 2013, 08:48 PM PDT
I agreed with Elsberry and Shallit that the LCI doesn’t work in the case that the natural process is unknown to the specifying agent, a point also made by Tellgren and conceded by Dembski.
I have never tried to understand the technical aspects of Dembski's ideas, so this is not a comment on that. But reading through the lines above, it says that all natural processes currently known to man cannot add information. And does this mean that natural selection is such a process that cannot add information? If this is true and agreed to by Elsberry, Shallit and Tellgren, then shouldn't that fact, in layman's language, become part of the science curriculum?jerry
December 1, 2013, 06:41 AM PDT
Thanks, Alan. I wish I had something to say that hasn't already been said in papers and blog posts, but I don't.R0bb
November 29, 2013, 12:08 PM PDT
Winston, yes I'm aware of Dembski's account of the LCI in NFL. As for problems with it, in my last comment I agreed with Elsberry and Shallit that the LCI doesn't work in the case that the natural process is unknown to the specifying agent, a point also made by Tellgren and conceded by Dembski. This problem by itself is enough to disqualify the LCI, as defined in NFL, from being a mathematical fact. I completely respect your choice to not delve into these issues in this thread. I appreciate your attempts to clear up the confusion surrounding the topic of CSI, and agree with much of what you say. I especially appreciate the fact that you're willing to contradict other IDists. For example, you point out that whether two copies of something have more CSI than one copy depends on the assumed mechanism, contra Dembski, who says that they have the same amount of information and that any formal account of information had better agree. More power to you, Winston.R0bb
November 29, 2013, 12:08 PM PDT
I suggest that you write a post and get it up on The Skeptical Zone...
R0bb has author status at The Skeptical Zone, should he decide that TSZ is a suitable venue and feels inclined to publish a post there. He would be most welcome.Alan Fox
November 29, 2013, 11:05 AM PDT
I'm assuming that you are well aware of where Dembski has offered definitions of Specified Complexity, and that you find fault with those definitions. I assume you are also aware of his proof of that law in No Free Lunch, and that you find fault with it. But you don't spell out what your problem is with the proof or definition, offering merely vague statements about its not being defined well enough. I wrote my response to Elsberry and Shallit's criticism as part of a larger response to someone else who had referenced E&S. Thus I addressed the particular issue that he brought up, although I did go back and read E&S. Perhaps you are looking at something slightly different from their criticisms. On either issue, answering your questions here is more effort than I'm willing to put into a blog comment. If you'd like to offer a critique of my response there, I suggest that you write a post and get it up on The Skeptical Zone, Panda's Thumb, or similar. If you do that, I'll look into responding.Winston Ewert
November 29, 2013, 09:56 AM PDT
Winston:
Elsberry and Shallit’s criticism show consistent misunderstanding of Dembski’s work. I’ve previously discussed their confused objections to the LCI.
WRT their confused objections: 1) I'm curious -- where in their paper do they appear to believe that K in Dembski's definition refers to the entire background knowledge of a subject? At the beginning of section 8 they define K as "a set of items of background knowledge" that "'explicitly and univocally' identifies a rejection function", and they seem to stick with this definition throughout the paper. 2) I'll take your second response a sentence at a time:
Second, Elsberry and Shallit object that the natural process under consideration might not be in the background knowledge of the subject.
To be exact, their objection is that g∘f is not necessarily explicitly and univocally identifiable from K, where K is the background knowledge that explicitly and univocally identifies g.
However, Dembski has never claimed that every subject will be able to identify specified complexity in every case.
You seem to be implying that there might be specified complexity, but the subject might lack the background knowledge to recognize it as such. But specified complexity is defined in terms of K, the background knowledge that identifies the pattern. If there's no K, then there's no specified complexity.
The design inference is argued to be resilient against false positives, but not false negatives.
But we're talking about the LCI, not the design inference.
Furthermore, after investigation, the subject will learn about the natural process and thus it will enter the background knowledge of the subject.
Even if we could guarantee that an investigation will always take place and that the investigation will always yield knowledge about the natural process (which we can't), that would not change the fact that prior to the investigation, the LCI is being violated. 3) I think your response to "the question of whether knowledge gained about the process might invalidate the conditional independence requirement" has some problems, but I'm not even sure if this is a question posed by Elsberry & Shallit. Is it? I still stand by my assertion that the LCI hasn't been defined well enough to allow for mathematical proof. If you think that it has, can you point me to the definition? Hopefully I can respond to the rest of your comment later.R0bb
November 29, 2013, 07:43 AM PDT
F/N: A bit late to the party, see that the matter has been quite well handled in general. I note from 2 above an inadvertent illustration by AF of the all too typical fundamental misunderstandings and dismissiveness of objectors to the concept of functionally specific, complex organisation and/or associated information:
How has dFCSI demonstrated itself as useful? Where can I find a demonstration of usefulness? All I see is GEM counting amino acid residues and claiming he has done something useful without achieving anything useful at all.
Let's see:
1 --> Amino acid sequences of relevant length [say 100 up] give us a huge space of possible configs, even leaving off chirality and the geometrical/functional fail-to-fold-correctly implications of incorrect handedness, possibilities of different bonding patterns, the much broader set of possible amino acids vs the 20 or so in life, interfering cross-reactions, implications of endothermic reactions, etc.
2 --> Of these, given what we know about fold domains, singletons, key-lock fitting and particular requisites of function, we know that functional sequences are a very tiny fraction of the space of possibilities.
3 --> In addition, they come in isolated clusters with non-functional gaps intervening in the Hamming-distance space.
4 --> That is, it is an appropriate metaphor to speak of deeply isolated islands of function in wide seas of non-function.
5 --> Where, given search resources of a solar system or an observed cosmos, we can use these facts and the typical lengths of relevant proteins to see -- per needle-in-a-haystack issues -- that it is maximally unlikely that blind chance and mechanical necessity in a pre-biotic soup could come up with a cluster of relevant molecules to get life started.
6 --> And similarly, the intervening seas of non-function multiplied by search challenges and observed patterns pointing to upper limits on plausible numbers of simultaneous changes [as in 7 or so per Axe and Gauger, Behe etc] point to a similar maximal lack of likelihood of forming new body plans by chance variation of various types and differential reproductive success leading to new population patterns, thence descent with an adequate degree of modification.
7 --> So, it is no surprise that there is a lack of empirical observation of origin of novel body plans by such mechanisms. The Darwinian theory of body-plan-level macroevolution lacks an observed causally adequate mechanism. The same, for variants.
8 --> All of this has been repeatedly pointed out to AF and explained in adequate detail. On fair comment, he has persistently refused to yield to adequate warrant.
9 --> On further fair comment, the dismissive remarks as cited are little more than a strawman fallacy.
10 --> AF et al would do better to carefully examine the point that functional specificity is as close as what happens when a car part is sufficiently out of spec. As a simple biological case in point, reflect on sickle cell anemia. (The fear of what radiation does to cells is a similar case.)
11 --> And likewise, they would do well to ponder the protein synthesis mechanism and its use of codes -- digital four-state codes, and step-by-step processes aka algorithms. (But then, this is an ilk that is highly resistant to the demonstrated reality of self-evident truth. No inductive argument -- thus nothing of consequence in science -- can rise to that level of warrant. This is ideologically driven selective hyperskepticism that we are dealing with.)
KF
PS: EW, it is in the context that WmAD highlights in NFL -- that in the biological arena specificity is cashed out as function -- that I have focussed on that. A simplification of the 2005 Dembski expression then gives: Chi_500 = I*S - 500, bits beyond the solar-system threshold. I being a reasonable measure of info content and S a dummy variable defaulting to 0 and set to 1 where on objective grounds functional specificity is positively identified. Digital code such as in D/RNA is an obvious example. The Durston metric can be used for I, and it yields that some relevant protein families are credibly designed.kairosfocus
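KF's simplified expression can be transcribed literally into code. The sketch below is just that transcription of Chi_500 = I*S - 500; the 300-residue protein and its info content are illustrative numbers of my own, not a Durston measurement.

```python
import math

def chi_500(info_bits: float, functionally_specific: bool) -> float:
    """Chi_500 = I*S - 500, in bits beyond the solar-system threshold.
    S is a dummy variable: 1 if functional specificity is positively
    identified on objective grounds, else 0 (the default)."""
    s = 1 if functionally_specific else 0
    return info_bits * s - 500

# Illustrative: a 300-residue protein at log2(20) bits per residue.
i = 300 * math.log2(20)        # ~1296.6 bits
print(chi_500(i, True))        # ~796.6 -> past the 500-bit threshold
print(chi_500(i, False))       # -500.0 -> S defaults to 0, no inference
```

The design of the metric is visible in the False branch: without a positive identification of functional specificity, no amount of raw information content ever clears the threshold.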
November 29, 2013, 04:02 AM PDT
Naturally I disagree. This would imply that the law is defined rigorously enough to allow for mathematical proof, which it certainly is not. See the ambiguities and problems pointed out by Tellgren and Elsberry & Shallit.
Elsberry and Shallit's criticisms show a consistent misunderstanding of Dembski's work. I've previously discussed their confused objections to the LCI.
I’m afraid I still don’t understand. You yourself have done CSI calculations based on hypothesized processes without knowing whether those processes were actually in operation. Dembski has done the same. He based his CSI analysis of the Nicholas Caputo incident on a hypothesis of a random draw, even though there apparently was no actual random draw in operation.
Specified complexity allows the testing of a particular hypothesis for a given outcome. We can test a hypothesis whether or not it was actually in operation. So we can test the fair coin hypothesis for Caputo, or any of the various hypotheses I tested for the mystery image. However, if I want to argue that an artifact was not produced by anything internal to a system, I need to ensure that it was not produced by any natural laws in that system. That's the case where I need to look at all the mechanisms/natural laws that operate in the system. So we have no basis for claiming that Caputo's actions were driven by intelligence. We can conclude that he almost certainly didn't use a fair coin. But he could very well have used a biased coin; all the specified complexity tells us is that we can reject the fair coin hypothesis. That alone does not tell us whether his actions were driven by intelligence.Winston Ewert
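Ewert's point that we can test a hypothesis "whether or not it was actually in operation" is easy to illustrate numerically. Caputo reportedly placed Democrats first in 40 of 41 ballot drawings; the sketch below is my own simple binomial-tail calculation, not Dembski's full apparatus, and it shows why the fair-draw hypothesis is rejected while nothing is thereby said about what process was actually used.

```python
from math import comb

def tail_probability(k: int, n: int) -> float:
    """P(at least k successes in n fair coin-flip-like trials)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 40 or more Democrat-first orderings out of 41 under a fair draw:
p = tail_probability(40, 41)
print(p)   # ~1.9e-11: the fair-draw hypothesis is rejected, but a
           # biased process of some kind is not thereby ruled out.
```

The calculation conditions only on the fair-draw hypothesis; rejecting it leaves a biased draw, or any other process, entirely on the table, which is exactly the distinction Ewert draws.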
November 28, 2013, 09:53 AM PDT
gpuccio:
That is exactly the improbability of getting a functional sequence by a random search. It is exactly CSI. The simple truth is that CSI, or any of its subsets, like dFSCI, measures the improbability of the target state.
But how do we define the target state, and under what hypothesis do we calculate the improbability? Does it qualify as CSI if we choose any target state and any hypothesis we like? Dembski's current CSI measure is an upper bound on the probability of E, or any event more simply describable than E, occurring anywhere at any time. To calculate this upper bound, you have to factor in the replicational and specificational resources relevant to E's occurrence, which Durston does not do in his FSC measure. If you think that definitional details like this are unimportant, consider the amount of disagreement over CSI just among IDists. You say that CSI is found in biology -- Ewert says we don't know if there's CSI in biology or not. jerry says that "specified" has no agreed-upon meaning -- others obviously disagree. Eric Anderson says that a sequence of 1000 coin flips that's all heads has no complexity -- Sal disagrees. I submit that disputes like these are resolved by cranking up the rigor, which is what needs to happen in CSI discussions.R0bb
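The replicational and specificational resources R0bb mentions appear in Dembski's 2005 measure, Chi = -log2(10^120 * phi_S(T) * P(T|H)). The sketch below transcribes that formula; the event probability and pattern count fed into it are illustrative numbers of my own.

```python
import math

def dembski_2005_chi(p_t_given_h: float, phi_s_t: float,
                     resources: float = 1e120) -> float:
    """Chi = -log2( resources * phi_S(T) * P(T|H) ), where ~10^120 is
    Dembski's bound on the replicational (probabilistic) resources of
    the observable universe and phi_S(T) counts the specificational
    resources: patterns at least as simple to describe as T."""
    return -math.log2(resources * phi_s_t * p_t_given_h)

# Illustrative only: an event of probability 1e-150 with 1e10
# comparably simple patterns still comes out positive:
print(dembski_2005_chi(1e-150, 1e10))   # ~66.4 bits
```

The point of the factoring is visible in the arithmetic: both resource terms eat directly into the raw improbability, so a measure that omits them (as R0bb says Durston's FSC does) reports a different and larger number.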
November 28, 2013, 08:40 AM PDT
Winston, from your OP:
For Dembski’s formulation of specified complexity, the law of the conservation of information is a mathematical fact.
Naturally I disagree. This would imply that the law is defined rigorously enough to allow for mathematical proof, which it certainly is not. See the ambiguities and problems pointed out by Tellgren and Elsberry & Shallit. #38
By “mechanisms in operation”, I was referring to the natural laws that operate in a system. I’m not referring to the actual operations that produced the object.
I'm afraid I still don't understand. You yourself have done CSI calculations based on hypothesized processes without knowing whether those processes were actually in operation. Dembski has done the same. He based his CSI analysis of the Nicholas Caputo incident on a hypothesis of a random draw, even though there apparently was no actual random draw in operation.R0bb
November 28, 2013, 06:52 AM PDT
Please, check my long discussion with Elizabeth here: https://uncommondescent.com.....selection/ Posts 186 on.
Lizzie starts posting here, and the whole exchange between you and Lizzie is informative but ultimately unsatisfactory. The essential point that Lizzie makes is that you (along with Axe, Abel, Trevors, Durston) are making assumptions about the rarity of unknown protein sequences and an unjustified extrapolation. That is also my view, and it is so far unchanged by what I have read of what you have written.Alan Fox
November 28, 2013, 02:07 AM PDT
No hope we can get along one with the other.
Surely you don't mean that, gpuccio? I don't think you are politically motivated and I hope you subscribe to "live and let live" too. Disagreeing on matters metaphysical does not prevent peaceful coexistence.Alan Fox
November 28, 2013, 01:47 AM PDT
wd400:
Got it. You calculate CSI based on an assumption no one believes to be true, and ignore the mechanism that is actually proposed to explain protein evolution.
You got it perfectly right. I am, certainly, a minority guy. And you are, definitely, a willing conformist. No hope we can get along one with the other. Good luck.gpuccio
November 28, 2013, 12:55 AM PDT
My accusation is that Sal modified the content of some of my posts to make it appear as if I had written something which I had not in fact written.
Not true. I modified them trying to make it as evident as possible that it wasn't you that said those words. I thought people would figure it out after the post that thanked me for editorial improvements, in addition to your complaints; I thought it was common knowledge that your modified posts were simply taken as countermeasures for your bad behavior. I was wrong. There was a post that said, "I'm not responsible for what appears in this post. I'm bipolar and schizophrenic." I thought people would know you wouldn't possibly say that. The problem is they totally found it plausible you were bipolar and schizophrenic. I wonder why? Now that stuff about you drinking and people thinking you are alcoholic? That is your doing; those are your words. You're the one that insinuated about your own self that you drink till you feel like everything is spinning. If you have a drinking problem, all the more reason I want you out of my discussions. Even if you don't, please stay away -- you're wasting my time and yours. And from now on, stop turning discussions at UD into your forum to complain about me. It's extremely rude to the other authors that you are spamming their threads with your personal vendetta. Even if I'm guilty as you say, you have no business impinging on the other UD authors by turning their threads into your private litany against me. So please, stop bringing it up on their threads. Set up your own website and whine all you like, but stop spamming other UD authors' threads.scordova
November 27, 2013, 08:20 PM PDT
I repent of modifying Mung's posts. As penance, in the future, I'll just erase or delete them. I might leave an explanation like "banned for trolling".scordova
November 27, 2013, 07:44 PM PDT
Salvador:
I modified Mung’s posts.
True. On multiple occasions. You deleted my posts. You deleted the content of my posts. You changed the content of my posts. Salvador:
I thought it would be obvious to all that they were modified...
False. So now Salvador knows at least one reason why I think he is a liar. But this is progress, imo. Salvador has finally admitted to modifying the content of my posts, not just deleting the content. My accusation is that Sal modified the content of some of my posts to make it appear as if I had written something which I had not in fact written. Sal now admits the truth of this fact. His excuse?
I thought it would be obvious to all that they were modified…
Really? Does that somehow justify what was done? Admission of wrongdoing does not constitute repentance. Do you repent, Sal?Mung
November 27, 2013, 07:28 PM PDT
The topic was the tragedy of two CSIs. Let me state where I believe all or most IDists agree. In the coin+robot system there was no net increase in algorithmic information after the robot ordered the coins from a random state to all heads. This is analogous to bacteria evolving from one bacterium to a colony -- there is no net increase in algorithmic information. The only way a bacterium can gain algorithmic information is via genetic or other kinds of information exchange (like redesign by a designer like Craig Venter or God). The bacterial colony can augment its database of information by measuring the environment, but substantial increase in its capabilities must come from an outside source. Most ID proponents, myself included, do not believe there is any empirical or theoretical evidence that an information-poor environment that only says "live or die" can provide much input in increasing the algorithmic information in bacteria. Algorithmic information can include (but is not limited to):
1. blueprints for new proteins
2. blueprints for regulation of proteins
3. blueprints for using the proteins
I'm using the phrase "algorithmic information" because it is used in industry. It is generally well understood what it signifies.scordova
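Sal's "algorithmic information" is the Kolmogorov-style notion, which is uncomputable in general; a common crude stand-in is a general-purpose compressor. The sketch below is my own illustration of the coin-ordering point: the all-heads state compresses to almost nothing, so ordering the coins creates no new algorithmic information.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Crude upper-bound proxy for algorithmic information content.
    (True Kolmogorov complexity is uncomputable; zlib only hints at it.)"""
    return len(zlib.compress(data, 9))

random.seed(0)
random_coins = bytes(random.choice(b"HT") for _ in range(500))
all_heads = b"H" * 500

# The robot's ordered output is drastically more compressible than
# the random starting state it was given:
print(compressed_size(random_coins))  # substantially larger than...
print(compressed_size(all_heads))     # ...the tiny ordered-state size
```

The asymmetry is the whole point: "all heads" has a short description ("repeat H 500 times"), so a process that produces it adds essentially no algorithmic information to the system.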
November 27, 2013, 07:19 PM PDT
how did you manage to infer that I do not know how to program or do not understand compilers?
Never said that Mung, you're making stuff up again. What I did point out is you made stuff up about me and my understanding of compilers. You did so by misrepresenting what I said. Gee, Mung, now the conversation has gotten way off topic. You post garbage about me, then I have to try to set the record straight. See the pattern? You're a waste of time.scordova
November 27, 2013, 07:01 PM PDT
Salvador:
You accused me of not knowing how to program, not understanding compilers etc. Then you confess that you don’t even have a computer science degree (I do).
Great. You have a computer science degree. I never said you didn't, right? I guess from the fact that you have a CS degree we're supposed to infer that you know how to program and that you understand compilers. But given that I do not have a computer science degree how did you manage to infer that I do not know how to program or do not understand compilers? Does the fact that I do not have a computer science degree mean that I cannot call BS on things you post with regard to compilers and programming? I honestly believe that I have written more programs in actual use than you have. Want to bet? How many programs have you written that you've managed to sell? IOW, you have a degree, I have actual practice and experience. It's my actual experience in the real world that allows me to call BS on your claims. But in threads you author, no one would be the wiser.Mung
November 27, 2013, 06:58 PM PDT
I modified Mung's posts. I thought it would be obvious to all that they were modified when I wrote something to the effect (in CAPS):
I'D LIKE TO THANK SALVADOR FOR ALL HIS EDITORIAL IMPROVEMENTS TO MY POSTS. I SAY REALLY STUPID THINGS AND MAKE STUFF UP ABOUT SAL BECAUSE I HATE SAL SO MUCH. I WANT TO THANK HIM FOR CLEANING UP MY TROLL POSTS.....
Apparently some did not get the memo. Since that time, I've just deleted what you wrote and left the post empty. You are permanently banned from my discussions, and future uninvited visits will be dealt with by erasure of what you write. As amends, any such posts that had an editorial improvement have been removed if I find them (except maybe a note pointing out you are trolling). So now you can't say that if you said something at UD, it was because I modified your posts. All the stuff you've said that remains is yours, not something I put in your posts. From the comment policy:
moderators are editors and it’s their job to make people’s words disappear before anyone else sees them. The second thing to remember is that we don’t have the time or inclination to get into debates over our editing decisions. Nagging us about a comment that didn’t get approved is only going to make us even less likely to approve your future comments.
All Mung's comments are subject to deletion on my discussions on the grounds that it wastes time and detracts from more interesting matters than his vendetta to get me to beg for his forgiveness. PS Apologies to Winston that his thread has to be derailed by a confrontation between Mung and me. Mung should take it up elsewhere, instead of throwing his off-topic protests against me every chance he gets.scordova
November 27, 2013, 06:58 PM PDT
Mung:
The reason you “toss me” is because I expose you for what you are.
Salvador:
In some of the posts I’ve deleted you’ve called me liar, hypocrite, and other names.
If the shoe fits... "The sting of any rebuke is the truth." - Benjamin Franklin But sure, better to delete the accusations than deal with them. Better to delete the evidence than admit it exists. You've modified the content of some of my posts to make it appear that I wrote something which I did not in fact write? True or false?Mung
November 27, 2013, 06:41 PM PDT
The reason you “toss me” is because I expose you for what you are.
Baloney. In some of the posts I've deleted you've called me liar, hypocrite, and other names. In https://uncommondescent.com/philosophy/the-shallowness-of-bad-design-arguments/ you accused me of not knowing how to program, not understanding compilers, etc. Then you confess that you don't even have a computer science degree (I do). Then you question my background in thermodynamics; on what basis? You have no background in physics either; even by your own admission you can't comprehend the math. Then you come along to one of my discussions and ask me to write a tutorial on math. What's the matter, is simple algebra over your head? I end up wasting more time responding to your trolling and your false accusations than actually discussing the topic at hand. As to your waste of time comment:
If the coins were not flipped, whether the coins are “fair” or not is irrelevant.
The reason the coins are stated as fair is that this determines the a priori probability, which is important in scoring the CSI content of the coin configuration. Also, lest anyone say the coins might not be fair, I provide that they are fair as part of the hypothesis under consideration, so as to clarify the points. But you're so bent on disagreeing with everything I say that you'll dredge up stupid arguments just to troll my discussions. Here you are in the discussion that made me decide it is best to dispense with your posts:
Beg my forgiveness... https://uncommondescent.com/philosophy/the-shallowness-of-bad-design-arguments/
Beg your forgiveness, Mung, as if you are God? You ought to thank me for deleting and editing your posts, lest the readers conclude you are getting loony or alcoholic.

scordova
November 27, 2013 at 6:15 PM PDT
Salvador:
Example of why Mung is a waste of time, and why I toss him from my discussions.
The reason you "toss me" is because I expose you for what you are. Even here you reveal your true character through your selective quoting of what I wrote. My point in your original thread was completely on topic and relevant. But that's not what matters to you. Let me quote you:
He’ll occasionally try to say something useful just to sneak in and participate, knowing I’ll remove his comment or edit it.
I have a question for you Sal, a simple one. What do you think motivated Winston's recent posts here at UD? Do you think it was something I wrote, and if so, why? "CSI Confusion 1", "CSI Confusion 2", "The Tragedy of Two CSIs". As near as I can tell you're the only one (other than Winston) here at UD authoring posts about CSI recently. His posts seem to be offered as correctives. Correctives to what?

Mung
November 27, 2013 at 5:52 PM PDT
Got it. You calculate CSI based on an assumption no one believes to be true, and ignore the mechanism that is actually proposed to explain protein evolution. I think I've heard enough...

wd400
November 27, 2013 at 2:42 PM PDT
wd400:

a) I don't include NS in calculations of CSI because NS is a necessity mechanism. If someone can show that such a mechanism acted in some specific path to a basic protein domain, I am ready to include that in my calculations, and I have shown how to do that. Please check my long discussion with Elizabeth here: https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/ Posts 186 on.

b) You say: "they needn't be selected for, btw, just tolerated". That is simply wrong. Only positive selection increases the probabilistic resources that are in favour of the ultimate result. Any other mechanism, including the famous genetic drift, does not increase the probabilities of any specific outcome, and is therefore irrelevant to the computation of dFSCI. That should be very obvious to anyone who understands probabilities, and yet darwinists find it so difficult to understand! So, you are wrong. The intermediate must be positively selected and expanded in the population, otherwise we are still in a purely random search (all unrelated outcomes are equiprobable).

c) You say: "So while you ignore natural selection you rely utterly on the idea there are no viable intermediates. That's what you need to prove." Absolutely not. It's you who must prove that they exist. I can simply say that no one has ever been found. That's enough to make your hypothesis a myth, unsupported by facts. You must find the facts to support your hypothesis. Moreover, I have added that not only have no such intermediates ever been found, but there is no reasonable argument to expect that they exist at all. IOW, your hypothesis is both logically unfounded and empirically unsupported.

gpuccio
November 27, 2013 at 1:58 PM PDT
We collectively hold our breath. Assertion or engagement. What will it be?

Upright BiPed
November 27, 2013 at 1:51 PM PDT
Alan Fox: My compliments! Your post #63 is a true masterpiece of non sequitur and divagation. I have stated that Durston's FSC, Dembski's CSI, and my dFSCI are the same thing. And I am going to show that it is that way. You know, I usually give support to my arguments in my discussions.

So, let's take Durston's numbers and see what they mean. I will refer, again, to Table 1 in his paper. Let's take just one example out of 35: Ribosomal S12, a component of the 30S subunit of the ribosome. The length of the sequence is 121 AAs (not too long, indeed, and it is only a part of a much more complex structure). The analysis has been performed on 603 different sequences of the family.

The null state has a complexity of 523 bits. That is the complexity of a random sequence of that length. IOWs, a complexity of 2^523 (which is approximately the same as 20^121). That is the complexity, and the improbability (as 1:2^523), of each specific sequence in the search space.

In the following column, we can see that Durston's calculation, applying Shannon's principles to the comparison of the sequences in the set, gets a functional sequence complexity (FSC) for that family of 359 Fits (which means functional bits). I will not discuss now whether the calculation is right, or precise, or how he gets that number. I will just discuss what it means. Durston explains it very clearly, if only you take the time to read and understand it:
The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space.
(Please note that in this paragraph there is a typo: the Fits for ribosomal S12 are 359, not 379, as can be checked in Table 1, and as is obvious from the computation. I have used the correct value of 359 in the following discussion.)

IOWs, the target space, the number of functional sequences of that length, is calculated here to be about 10^49. IOWs, 2^164. That is to say that the functional space (the target space) is made of approximately 2^164 (or 10^49) sequences.

Therefore, the ratio of the target space to the search space is 2^164 : 2^523, that is 2^-359 (or 10^-108). (That is the same as 10^-106 percent of the search space.) IOWs, the probability of finding a functional S12 sequence in the search space, by random search, is 1:10^108 (in one attempt). That is exactly the p(T|H) in Dembski's definition. (wd400, where are you?)

So, they are the same thing. QED.

gpuccio
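[Ed.: the unit conversions in the comment above can be sanity-checked with a few lines of Python. The numeric values (121 residues, 523 null-state bits, 359 Fits) are those quoted from Durston's Table 1; the script below is only a check of the bit-to-probability arithmetic, not a reproduction of Durston's FSC calculation.]

```python
import math

# Values as quoted from Durston's Table 1 for Ribosomal S12
seq_len = 121    # sequence length in amino acids
null_bits = 523  # complexity of the null (random) state, in bits
fits = 359       # functional sequence complexity, in Fits

# The null state should match log2 of the 20^121 sequence space
assert round(seq_len * math.log2(20)) == null_bits

# Target space: 2^(null_bits - fits) functional sequences
target_bits = null_bits - fits                 # 164 bits
print(round(target_bits * math.log10(2)))      # -> 49, i.e. about 10^49 sequences

# Probability of one random draw landing in the target: 2^-fits
print(round(-fits * math.log10(2)))            # -> -108, i.e. about 10^-108
```

The check confirms the figures in the comment: 2^164 is roughly 10^49 functional sequences, and 2^-359 is roughly 10^-108.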
November 27, 2013 at 1:43 PM PDT
why these concepts are not the same. Durston at least calculates his “functional sequence complexity”.
as predicted

Upright BiPed
November 27, 2013 at 1:14 PM PDT
gpuccio, your English is fine; it's what's between the lines that I find interesting.
For anyone who can read, my “major claim” is that there are no “selectable intermediates”, and that CSI is of fundamental relevance, both to evaluate the improbability of the whole sequence for RV, or (if and when selectable intermediates will be shown) to evaluate the improbability of the role of RV before and after the expansion of the selectable ...
So, you don't include natural selection in your CSI calculations. You say this is because there are no "selectable" intermediates (they needn't be selected for, btw, just tolerated). So while you ignore natural selection you rely utterly on the idea that there are no viable intermediates. That's what you need to prove. But then, that's just "what use is half an eye?" for proteins, dressed up in some math. If you had good evidence that there were no tolerable intermediates between protein families you wouldn't need CSI. So why bother?

wd400
November 27, 2013 at 1:12 PM PDT