Uncommon Descent Serving The Intelligent Design Community

ID and Common Descent


Many, many people seem to misunderstand the relationship between Intelligent Design and Common Descent. Some view ID as equivalent to Progressive Creationism (sometimes called Old-Earth Creationism); others see it as equivalent to Young-Earth Creationism. I have argued before that the core of ID is not about a specific theory of origins. In fact, many ID’ers hold a variety of views, including Progressive Creationism and Young-Earth Creationism.

But another category that is often overlooked is those who hold to both ID and Common Descent, where the descent was purely naturalistic. This view is often considered inconsistent. My goal is to show how this is a consistent proposition.

I should start by noting that I do not myself hold to the Common Descent proposition. Nonetheless, I think that the relationship of ID to Common Descent has been misunderstood often enough to warrant some defense.

The issue is that most people understand common descent entirely from a Darwinian perspective. That is, they assume that the notions of natural selection and gradualism follow closely from the notion of common descent. However, there is nothing that logically ties these together, especially if you allow for design.

In Darwinism, each feature is a selected accident. Therefore, Darwinian phylogenetic trees often use parsimony as a guide, meaning that the tree is constructed so that complex features don’t have to evolve more than once.

The ID version of common descent, however, doesn’t have to play by these rules. The ID version of common descent includes a concept known as frontloading – where the designer designed the original organism so that it would have sufficient information for its later evolution. If one allows for design, there is no reason to assume that the original organism must have been simple. It may in fact have been more complex than any existing organism. There are maximalist versions of this hypothesis, where the original organism had a superhuge genome, and minimalist versions (such as Mike Gene’s), where only the basic outlines of common patterns of pathways were present. Some have objected to the idea of a superhuge genome on the basis that it isn’t biologically tenable. However, the amoeba has 100x the number of base pairs that a human has, so the carrying capacity of genetic information for a single-celled organism is quite large. I’m going to focus on views that tend towards the maximalist.

Therefore, because of this initial deposit, it makes sense that phylogenetic change would be sudden instead of gradual. If the genetic information already existed, or at least largely existed, in the original organism, then time wouldn’t be the barrier to its coming about. It also means that multiple lineages could lead to the same result. There is no reason to think that there was one lineage that led to tetrapods, for instance. If there were multiple lineages all carrying basically the same information, there is no reason why there couldn’t have been multiple tetrapod lineages. It also explains why we find chimeras much more often than we find organs in transition. If the information was already in the genome, then the organ could come into existence all at once. It didn’t need to evolve, except to switch on.

Take the flagellum, for instance. Many people criticize Behe for thinking that the flagellum just popped into existence sometime in history, based on irreducible complexity. That is not the argument Behe is making. Behe’s point is that the flagellum, whenever it arose, didn’t arise through a Darwinian mechanism. Instead, it arose through a non-Darwinian mechanism. Perhaps all the components were there, waiting to be turned on. Perhaps there is a meta-language that guided the piecing together of complex parts in the cell. There are numerous non-Darwinian evolutionary mechanisms which are possible, several of which have been experimentally demonstrated. [[NOTE – I would define a mechanism as non-Darwinian when the mechanism of mutation biases the mutational probability towards mutations which are potentially useful to the organism.]]

Behe’s actual view, as I understand it, actually pushes the origin of information back further. Behe believes that the information came from the original arrangement of matter in the Big Bang. Interestingly, that seems to comport well with the original conception of the Big Bang by Lemaître, who described the universe’s original configuration as a “cosmic egg”. We think of eggs in terms of ontogeny – a child grows in a systematic fashion (guided by information) to become an adult. The IDists who hold to Common Descent often view the universe that way – it grew, through the original input of information, into an adult form. John A. Davison wrote a few papers on this possibility.

Thus the common ID claims of “sudden appearance” and “fully-formed features” are entirely consistent with both common-descent (even fully materialistic) and non-common-descent versions of the theory, because the evolution is guided by information.

There are also interesting mixes of these theories, such as Scherer’s Basic Type Biology. Here, a limited form of common descent is taken, along with the idea that information is available to guide the further diversification of the basic type along specific lines (somewhat akin to Vavilov’s Law). Interestingly, there can also be a common descent interpretation of Basic Type Biology, but I’ll leave that alone for now.

Now, you might be saying that the ID form of common descent only involves the origin of life, and therefore has nothing to do with evolution. As I have argued before, abiogenesis actually has a lot to do with the implicit assumptions guiding evolutionary thought. And, as hopefully has been evident from this post, the mode of evolution from an information-rich starting point (ID) is quite different from that of an information-poor starting point (neo-Darwinism). And, if you take common descent to be true, I would argue that ID makes much better sense of what we see (the transitions seem to happen with some information about where they should go next).

Now, you might wonder why I disagree with the notion of common descent. There are several reasons, but I’ll leave you with one I have been contemplating recently. I think that agency is a distinct form of causation from chance and law. That is, things can be done with intention and creativity which could not be done in their complete absence. In addition, I think that there are different forms of agency in operation throughout the spectrum of life (I am undecided about whether the lower forms of life, such as plants and bacteria, have anything which could be considered agency, but I think that, say, most land animals do). In any case, humans seem to engage in a kind of agency that is distinct from that of other creatures. Therefore, we are left with the question of the origin of such agency. While common descent in combination with ID can sufficiently answer the origin of information, I don’t think it can sufficiently answer the origin of the different kinds of agency.

Comments
CJYman at 250, “Well, it seems we have come to the end of this road, and it has been a pleasure having this discussion with you.” And you as well. I appreciate you taking the time to present your remarkably nuanced view. I look forward to discussions on other topics here with you -- perhaps we'll even end up on the same side! Regards.
Mustela Nivalis
January 21, 2010, 10:08 AM PDT
Hello Mustela, Well, it seems we have come to the end of this road, and it has been a pleasure having this discussion with you. The measurement for CSI that I use is based on the most recent work on the subject by Dr. Dembski, "Specifications: the Patterns which Signify Intelligence." CSI can be measured at different points and with different givens ... i.e.: from earth, given a full genome; or from our solar system, given nucleic acids; or our universe, given only matter and energy. No matter what the crowd states, and Dr. Dembski has explained this (I'm very sure I've read him explain it somewhere) and I believe I have adequately explained above, there is no *requirement/necessity*, in the measurement for CSI itself as based on the equation given by Dr. Dembski, for direct intervention -- that is, unless evolution can be ruled out, but I see no way that one can rule out evolution, for two reasons:

1. The alternative, direct intervention, as it relates to the origin of CSI within our universe, is about as useful as "last Thursdayism." It may be true, but going as far back in time as we can until we get to the first CSI event, the only possible solution would be for a hand to "rip through space and time" and fashion the first instance of CSI. Is this scientifically useful? I vote, not at all!

2. No matter the improbability, there is always a way to set up a program to generate an event. Look at a car factory and the robots assembling the car in stages. That's all an EA really is ... a program (robot) used to generate a highly improbable and specified event efficiently.

Oh, lest I forget, there is another possibility ... life has always existed in an infinite universe. But that is definitely for another discussion. Since I am by no means an "interventionist," I will not be able to defend the "interventionist" position, which is what you seem to be asking of me. In fact, I will help you argue against such a position as it pertains to life.
All I have done (I believe effectively) is defend the position that CSI is indicative of previous intelligent causation -- which is all that the equation for CSI, along with an understanding of organization, can tell us. Furthermore, that hypothesis has not yet been refuted through experimentation with computer simulation. The equation for CSI can't tell us exactly "how" the artifact was generated -- through evolution or through a robotic warehouse, or through direct intervention, etc. It merely, literally, tells us in mathematical form that we are not dealing with a uniform probability distribution ... that is it. Then the NFL Theorems take over and the argument for CSI as a reliable indicator of previous intelligence continues, which I have briefly touched on above. So, I'm off to get me some more edumacation today, so I'll be busy for the next couple days. I hope to hear from you again either in this thread or a future one. ...later... P.S. My position is somewhere in between / a combination of Dr. Behe's and Denton's viewpoints.
CJYman
January 21, 2010, 06:36 AM PDT
CJYman at 244, I'm afraid that we're discussing two different concepts. I'll pick a few lines from your post to explain why I've come to this conclusion. Please let me know if you think my excerpts leave out relevant context. “When you say “chance hypothesis” do you mean “de novo creation”?” What else can you offer that utilizes only chance? Evolutionary algorithms sure don’t work only by chance. . . . CSI rules out chance very effectively . . . And yes, CSI does not take evolutionary mechanisms into account. . . . “We’re talking about evolution, not physics. I think I mentioned during one of our last discussions that you might be able to make a case for cosmological ID using the NFL theorems, but they are completely inapplicable when discussing the evolution that is observed in the one biosphere we know about.” I have been making a case for cosmological ID this whole time . . . “CSI is supposed to show that intelligent intervention is required for a particular biological artifact to exist.” I emphatically disagree!!!!!!! CSI requires no intervention as long as an evolutionary algorithm of any type can account for said pattern. . . . Simply, CSI provides evidence for evolution; it does not “rule out” evolution. . . . “…and changes in the length of a genome, for example, can be explained by known types of mutations.” And because they are “known types of mutations” means what exactly in light of calculating for CSI? It seems to me that your version of CSI is sufficiently different from that described by Dembski in No Free Lunch and in other papers as to be a completely different concept. Dembski's CSI is supposed to demonstrate that intelligent intervention is required to explain how evolution occurred. It also is supposed to take into account known natural mechanisms; it is not merely a measurement of the probability of de novo creation. 
I'm looking to gain a better understanding of CSI as described by Dembski, since it seems to be one of the primary positive claims in support of ID. Your cosmological CSI approach is interesting, and frankly I suspect it is easier to defend both scientifically and theologically, but it doesn't shed light on CSI as described in No Free Lunch. If you disagree or, even better, if you'd like to take a stab at a mathematically rigorous calculation of CSI, as described in No Free Lunch, for a real biological artifact, taking into consideration known physics, chemistry, and evolutionary mechanisms, I'd love to continue the discussion.
Mustela Nivalis
January 21, 2010, 04:44 AM PDT
CJYman at 244, “This isn’t my understanding from No Free Lunch, other papers, and other ID proponents on this site.” You expect everyone to have the exact same understanding? For such a core concept, one of the primary pieces of what is claimed to be positive evidence for ID, I definitely expect there to be agreement on the definition. You do realize not everyone understands or has the same hypothesis for evolution, right? There is ongoing research in numerous areas, but the core concepts of the theory are agreed upon and everyone uses the same definitions.
Mustela Nivalis
January 21, 2010, 04:01 AM PDT
In fairness to the original post, most of the calculations here seem to assume modern protein catalysts of many amino acids developing spontaneously. We might assume something with a minimal genome, or mimivirus-like, as the original design. But what if it is even simpler? It might not even 'look' designed. For example, it has been shown that short peptides and RNA are catalytic. I don't think the CSI for a single proline or Val-Val dipeptide would be very high, yet they achieve remarkable catalysis and stereospecificity. It is possible that CSI started very low, and information was added with selection, energy, etc. over time. http://www.scripps.edu/newsandviews/e_20020311/enlarge.html http://www.pnas.org/content/103/34/12713.long
REC
January 20, 2010, 09:04 PM PDT
CJYman: "Furthermore, I can also show that events known to have required foresight in their construction are definable in terms of organized CSI." Mustela: "I would like to see that calculation." Excellent. Let's look at an oil refinery, courtesy of Wikipedia: http://en.wikipedia.org/wiki/Oil_refinery Let's begin by defining the connections between each "unit" in the diagram in terms of CSI. We have 20 "units" or stations which can be connected in any way, since there is no regularity in the connections which can be defined by mathematical formula, and the connections themselves are not attracted to any specific station, at the exclusion of the others, through the physical properties of either the connectors or the stations. So, we can eliminate law, and as long as the pattern is indeed not random, then we can say it is organized. We are definitely looking at a specified event, since f(pattern of connections and specific stations) = functional event of a usable product. So, yes, the pattern is functionally organized. If we can calculate a > 1 value for CSI, the oil refinery would be an example of FSCI. Please excuse any small math errors as I am going through this relatively quickly. So, there are 20 units, and thus approx. 1.57 * 10^57 possible ways of connecting them. Thus P(T|H) = 1/(1.57*10^57). In order to calculate the specificity, we need to know, utilizing the same stations, how many possible combinations of connections will also produce a usable product. This is where feedback from a professional would be useful. However, if this is a very specific configuration, and any improper connection will shut down the process ... then S(T) = 1. Now, we could use 10^120 as our probabilistic resources, but to get a more accurate calculation we should take into consideration what would be necessary for this plant to operate in space. Would more components be required?
If so, then our calculation of probabilistic resources should take into consideration that this configuration could only operate on certain planets. After calculating this, which I will provide as an estimate when I have some time to research the variables, we will arrive at an accurate value for the probabilistic resources. Then, put it together: CSI = -log2[M*N*S(T)*P(T|H)], and we have the connection specified complexity. But, there would of course be more to do. We need to figure out the CSI of every station/unit utilizing the above approach and then combine that value with the connection specified complexity to finally arrive at the CSI of the oil refinery depicted at the Wikipedia link. For whoever is interested, go ahead and do some research and make the calculations. I will do the same when and if I have time, and we can compare results. For now, though, I have shown that it is definitely theoretically possible to get at least a lower limit on the amount of CSI present in the above configuration. And I have already previously calculated a lower limit of CSI for the protein Titin in a likewise manner to show that it is indeed possible to do so. So there you have it -- an explanation, with some preliminary calculation, showing that an oil refinery (which definitely requires foresight on the part of engineers) is definable in terms of organized CSI. As a comparison, check out the schematic for this: http://www.sciencemusings.com/blog/uploaded_images/Metabolism-733633.jpg
CJYman
January 20, 2010, 03:21 PM PDT
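The oil-refinery comment above sketches its calculation in prose; here is a minimal Python version of the same arithmetic, using only the commenter's own figures (P(T|H) = 1/(1.57*10^57), S(T) = 1) and the Dembski-style form CSI = -log2[M*N*S(T)*P(T|H)]. The probabilistic-resources bound M*N is left as a parameter, since the comment explicitly defers estimating it:

```python
import math

def csi_bits(p_t_h, s_t=1.0, resources=1e120):
    """CSI = -log2(resources * S(T) * P(T|H)), in bits.
    A value > 1 is the threshold the comment uses for inferring design."""
    # Sum the logs instead of multiplying, to avoid floating-point underflow
    return -(math.log2(resources) + math.log2(s_t) + math.log2(p_t_h))

# Commenter's figure: one functional wiring pattern out of ~1.57e57
p = 1 / 1.57e57
print(csi_bits(p))                 # universal 10^120 bound: about -208.6 bits
print(csi_bits(p, resources=1.0))  # ignoring resources entirely: about 190.0 bits
```

Note that with the maximal 10^120 bound the connection term alone comes out negative; as the comment itself says, a tighter, situation-specific resources estimate (and the per-unit CSI values) would still be needed before the total could be assessed.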
This is way off topic, but since the phrase No Free Lunch was just mentioned, here are a couple things someone just sent me.
-----------------
Who said there was no such thing as a Free Lunch: http://www.youtube.com/watch?v=pQ50PYMXDCQ
---------
It's all in the numbers: The Beauty of Mathematics and the Love of God! I bet you will NOT be able to read it without sending it on or telling at least one other person! Beauty of Mathematics!!!!!!! It was always fun to wow students via principles of 8 & 9:

1 x 8 + 1 = 9
12 x 8 + 2 = 98
123 x 8 + 3 = 987
1234 x 8 + 4 = 9876
12345 x 8 + 5 = 98765
123456 x 8 + 6 = 987654
1234567 x 8 + 7 = 9876543
12345678 x 8 + 8 = 98765432
123456789 x 8 + 9 = 987654321

0 x 9 + 1 = 1
1 x 9 + 2 = 11
12 x 9 + 3 = 111
123 x 9 + 4 = 1111
1234 x 9 + 5 = 11111
12345 x 9 + 6 = 111111
123456 x 9 + 7 = 1111111
1234567 x 9 + 8 = 11111111
12345678 x 9 + 9 = 111111111
123456789 x 9 + 10 = 1111111111

9 x 9 + 7 = 88
98 x 9 + 6 = 888
987 x 9 + 5 = 8888
9876 x 9 + 4 = 88888
98765 x 9 + 3 = 888888
987654 x 9 + 2 = 8888888
9876543 x 9 + 1 = 88888888
98765432 x 9 + 0 = 888888888

Brilliant, isn't it? And look at this symmetry:

1 x 1 = 1
11 x 11 = 121
111 x 111 = 12321
1111 x 1111 = 1234321
11111 x 11111 = 123454321
111111 x 111111 = 12345654321
1111111 x 1111111 = 1234567654321
11111111 x 11111111 = 123456787654321
111111111 x 111111111 = 12345678987654321

Mind Boggling... Now, take a look at this... 101%. From a strictly mathematical viewpoint: What Equals 100%? What does it mean to give MORE than 100%? Ever wonder about those people who say they are giving more than 100%? We have all been in situations where someone wants you to GIVE OVER 100%... How about ACHIEVING 101%? What equals 100% in life? Here's a little mathematical formula that might help answer these questions: If A B C D E F G H I J K L M N O P Q R S T U V W X Y Z is represented as 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26,
then:
H-A-R-D-W-O-R-K: 8+1+18+4+23+15+18+11 = 98%
And:
K-N-O-W-L-E-D-G-E: 11+14+15+23+12+5+4+7+5 = 96%
But:
A-T-T-I-T-U-D-E: 1+20+20+9+20+21+4+5 = 100%
THEN, look how far the love of God will take you:
L-O-V-E-O-F-G-O-D: 12+15+22+5+15+6+7+15+4 = 101%
Therefore, one can conclude with mathematical certainty that: While Hard Work and Knowledge will get you close, and Attitude will get you there, it's the Love of God that will put you over the top! Have a nice day & God bless you
jerry
January 20, 2010, 03:00 PM PDT
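The arithmetic in the chain-letter comment above is at least easy to check mechanically. A quick Python verification of the digit patterns and of the A=1 ... Z=26 letter sums (nothing here beyond the numbers already quoted):

```python
# Check the "8 and 9" patterns, e.g. 123456789 x 8 + 9 = 987654321
for n in range(1, 10):
    assert int("123456789"[:n]) * 8 + n == int("987654321"[:n])

# Check the repunit squares, e.g. 111111111 x 111111111 = 12345678987654321
for n in range(1, 10):
    rep = int("1" * n)
    assert rep * rep == int("123456789"[:n] + "987654321"[10 - n:])

# Letter-value sums with A=1 ... Z=26
def letter_sum(word):
    return sum(ord(c) - ord("A") + 1 for c in word.upper())

print(letter_sum("HARDWORK"))   # 98
print(letter_sum("KNOWLEDGE"))  # 96
print(letter_sum("ATTITUDE"))   # 100
print(letter_sum("LOVEOFGOD"))  # 101
```

The sums do come out as claimed; the "101%" conclusion, of course, rests on the word choice rather than the arithmetic.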
Mustela: "This isn’t my understanding from No Free Lunch, other papers, and other ID proponents on this site." You expect everyone to have the exact same understanding? You do realize not everyone understands or has the same hypothesis for evolution, right? So, why demand anything different as it pertains to ID Theory? At the end of the day, whichever hypothesis works the best will be used. Mustela: "When you say “chance hypothesis” do you mean “de novo creation”?" What else can you offer that utilizes only chance? Evolutionary algorithms sure don't work only by chance. Mustela: "If so, that’s not particularly interesting because no biologist claims that real biological artifacts arise de novo." Now, it looks like we are on the same page. The really interesting thing is not that biologists already intuitively know what the measurement for CSI tells us -- although the fact that the specificity and complexity can be quantified is indeed interesting -- but that no one to date has falsified or provided any theoretical underpinning to show that organized CSI can be generated absent previous intelligence through computational simulations. Mustela: "If by “chance hypothesis” you mean “non-intelligent causes” then your assumption of a uniform probability distribution ignores known physics, chemistry, and evolutionary mechanisms and so is also not applicable to the real world." First, it should be obvious by now that I don't see law as a subset of chance, yet law is a non-intelligent cause. I thought I already defined chance earlier a few times ... you know ... uniform probability distribution, statistical randomness, lacking correlation, and unguided by rules or lawful constraints. Mustela: "If the Explanatory Filter is where known physics, chemistry, and evolutionary mechanisms are taken into consideration, it cannot use a uniform probability distribution either." 
I'm sorry, but now this is turning into ID 101, and you should really do some research on the basic concepts of ID Theory. I'd love to explain ID Theory and my own hypothesis, but I just don't have the time. Very briefly, the EF doesn't "use a uniform probability distribution." The EF merely states that one must effectively rule out chance and then law in order to label a given event as intelligently designed. CSI rules out chance very effectively, and my previous explanation of organization rules out law -- the same type of idea as "the physics and chemistry (laws) of the ink and paper do not define the pattern and meaning within an essay." Mustela: "I didn’t see anything in your calculations that make them applicable in the real world." You didn't notice how the calculation rules out a uniform probability distribution if a value > 1 results? Mustela: "Are we using the word differently? I am using “assumption” in the mathematical sense. When you choose to measure against it, you are most definitely assuming a uniform probability distribution." Ah, yes, I see now. In a mathematical sense, we are "assuming" a uniform probability distribution in order to see if our assumption is correct. Given a uniform probability distribution, we will not get a value > 1, so if that > 1 value results, we know that we are not dealing with a uniform probability distribution. Simple as that. Mustela: "It’s deeper than that, though. By measuring against a uniform probability distribution, you are implicitly assuming that such a distribution reflects some aspect of the real world." Yes, there are such things as uniform probability distributions. Mustela: "As I’ve repeatedly pointed out, it does not because it fails to take into account known physics, chemistry, and evolutionary mechanisms. Those factors make the actual probability distribution of particular biological artifacts far from uniform."
I've already dealt with this repeated "fact" of yours, and you seem to be ignoring my response, or at least not understanding it. If I am being unclear, please ask for clarification on the issue. Your understanding of the problem is lacking. Physics and chemistry on their own do no such thing, since the patterns we are discussing are not defined by the laws of physics and chemistry. If law did affect the sequencing of nucleotides, then they would not be able to carry information. We would merely be stuck with a repeating structure such as ACTGACTGACTG, which would then be defined and explained by chance + law, much like a crystal or a snowflake. And yes, CSI does not take evolutionary mechanisms into account. As I already stated before, CSI itself is evidence of evolution for the main fact that CSI tells us that (barring direct intervention) we are dealing with a non-uniform probability distribution being matched with a correct search algorithm for efficient search. Mustela: "We already know the search space isn’t a uniform probability distribution because we know about physics, chemistry, and a number of evolutionary mechanisms." So, your main point is that CSI is redundant? I can see why you might say that; however, CSI actually gives us the quantification of an event that requires either direct intervention or an evolutionary algorithm. So, as far as providing a measurement, and providing evidence for evolution, CSI is not redundant. Mustela: "I’m afraid I must disagree. What I wrote is exactly correct. You can read the papers (sorry, link didn’t work) to confirm that." I've already read through the paper a few times, and the authors explicitly state what I said. I can provide a few quotes upon request. Mustela: "Our search space is a given: the real world. Observation shows that evolutionary mechanisms work well in that environment."
Yes, because, according to the NFL Theorems, there has been a "fortuitous" matching between a non-uniform probability distribution and search algorithm (relying upon the structure and information processing ability of life itself). If you want to provide evidence against the ID position, merely show that the above can be accomplished through only law + chance, absent intelligence. IOW, merely show me that CSI can be generated absent previous intelligent (foresighted for a future target) input. Mustela: "We’re talking about evolution, not physics. I think I mentioned during one of our last discussions that you might be able to make a case for cosmological ID using the NFL theorems, but they are completely inapplicable when discussing the evolution that is observed in the one biosphere we know about." I have been making a case for cosmological ID this whole time, and it is wholly dependent on the CSI calculated of systems within our universe -- ie: the biosphere that we know about. If CSI was not calculated in living systems, then an evolutionary algorithm wouldn't be necessary and there would be no argument for ID Theory in life based on specificity and complexity (improbability) at the biosphere level or the cosmological level. NFL Theorems are absolutely applicable to life, and evolution on our planet, since they imply accounting principles when it comes to the generation of efficiently produced [and specified or pre-specified] events -- CSI. IMO, there is no good case for direct intervention in abiogenesis, life, or evolution. Mustela: "CSI is supposed to show that intelligent intervention is required for a particular biological artifact to exist." I emphatically disagree!!!!!!! CSI requires no intervention as long as an evolutionary algorithm of any type can account for said pattern. 
Although, I am beginning to realize that a CSI calculation > 1 between two functional patterns, with a sea of "nonfunction" in between them, would definitely provide evidence against a purely darwinian account of evolution. But, this is really old hat, now that modern evolutionary research seems to be providing evidence that self-guided genetic engineering within life is most likely also responsible, along with darwinian random mutations, for evolution. Furthermore, there seem to be a fair amount of scientists who argue that natural selection hinders rather than helps evolution in certain contexts. But that is definitely for another discussion. Back to CSI ... Mustela: "In order to do that, it must rule out known natural causes such as known evolutionary mechanisms. When a uniform probability distribution is assumed, those known natural causes are ignored, making any calculation inapplicable to the real world." I've adequately responded to that line of incorrect understanding above in previous comments (I'm quite sure) and definitely above within this comment. We seem to be going around in a circle now, and I'm beginning to notice that I'm repeating myself and you seem to be ignoring some of my explanations and clarifications. For the record, I hate having to repeat myself if my responses are being ignored. It wastes too much of my time and tells me that we aren't going to get any further in our discussion. Simply, CSI provides evidence for evolution; it does not "rule out" evolution. Mustela: "My understanding is that the presence of CSI is supposed to be a clear and unambiguous indication that non-intelligent mechanisms are insufficient to explain the artifact under consideration." Almost bang on, except that could be misinterpreted. A robot in a car factory could be seen as non-intelligent. That robot is sufficient to a certain extent in explaining the existence of cars. 
However, cars would not exist without intelligence, since that robot would also not exist without intelligence. IOW, that robot is neither defined by nor best explained by chance + law, absent intelligence. An evolutionary algorithm is really just an efficient search robot. A simple way to understand the implication of CSI is that it provides a clear and unambiguous indication that the artifact under consideration cannot be generated by chance + law, absent previous intelligence in that artifact's causal chain. Mustela: "While some non-intelligent mechanisms may be involved, CSI is supposed to show that something else is also required." Correct ... intelligence is required somewhere down the causal chain. Mustela: "With all due respect, I don’t think you have. Your calculations don’t correspond to CSI as described in No Free Lunch..." My calculations are based on Dembski's definition of a specified event as an event which can be formulated as an independent pattern, and all calculations coincide with his most up-to-date explanation of CSI as posted on his website in the paper "Specifications: the Patterns which Signify Intelligence." A more in-depth look at why CSI effectively eliminates chance can be found at http://www.angelfire.com/pro/kairosfocus/resources/Info_design_and_science.htm#fsciis, especially in table of contents points A] and B]. Mustela: "...and changes in the length of a genome, for example, can be explained by known types of mutations." And because they are "known types of mutations" means what exactly in light of calculating for CSI? Again, if you don't think that CSI points to intelligence, I suggest you use fewer words and merely provide evidence that chance + law, absent previous intelligence, will generate CSI. Until then, that ID hypothesis is standing strong.
CJYman
January 20, 2010 02:10 PM PDT
ROb @ 240, I've already addressed where I disagree with Dembski on law being a subset of chance. Law can't be a subset of chance if Dembski is correct that CSI adequately refutes the chance hypothesis. That is because law is defined by compressibility, and a long enough compressible pattern will produce CSI. Thus, the EF is required to arrive at what I have dubbed organized CSI -- designed objects (melding Dembski's CSI with Trevors and Abel's "organization"). The main point is extremely simple to follow and understand... ahh0duhdu';g nznb -- defined and thus best explained by chance. Algorithmically complex and statistically random. aaaaaaaaaaaa or afdsafdsafdsafds -- defined and thus best explained by chance + law. Algorithmically compressible with an element of chance for the set of laws utilized and which letter(s) is/are utilized. "Do you understand me?" -- defined and thus best explained by chance + law + intelligence. Algorithmically complex, specified, sufficiently complex (improbable) in the context of this comment; foresight for communication required; the historical contingency of the English language and our conversation shows how chance is included in the explanation; able to have arrived via evolutionary algorithm, thus the potential for law as an included explanation.CJYman
January 20, 2010 11:55 AM PDT
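CJYman's three-way classification above (statistically random, law-like/compressible, and meaningful strings) can be loosely illustrated with a general-purpose compressor. This is only a sketch: zlib's deflate is a crude stand-in for algorithmic compressibility (true Kolmogorov complexity is uncomputable), and the sample strings below are my own illustrations, not CJYman's exact examples.

```python
import random
import string
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size / raw size; lower means more algorithmically compressible."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(42)

# Three sample strings echoing the categories in the comment above:
lawlike = "afds" * 100                                         # pure repetition: "law"
randomish = "".join(random.choices(string.printable, k=400))   # "chance"
english = (
    "Do you understand me? A sentence in a natural language sits between "
    "the two extremes: it is not a simple repetition, yet its letter and "
    "word frequencies are far from uniform, so a general-purpose compressor "
    "can still squeeze it below its raw size, unlike a stream of "
    "independently chosen random characters."
)

for label, text in [("law-like", lawlike), ("English", english), ("random", randomish)]:
    print(f"{label:8s} compression ratio = {compression_ratio(text):.2f}")
```

The repetitive "law-like" string compresses to a small fraction of its size, English prose compresses partially, and random characters barely compress at all, roughly mirroring the law/intelligence/chance categories in the comment.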
CJYman at 238, “Is your definition of CSI different from Dembski’s, then? ROb quotes Dembski in 234 as requiring that “Darwinian and other material mechanisms” be taken into account. The assumption of a uniform probability distribution fails to do that.” There do seem to be a few areas where I disagree with Dembski … or at least some of his viewpoints in the past. I think that quote above is from before Dembski realized that evolution itself — even of the technically Darwinian type — is evidence of previous intelligence, which is something he definitely believes now as shown in his recent works. Could you point me to a cite? This sounds more like a cosmological ID argument than an evolutionary ID argument. Showing that intelligence is required for evolution to result in the diversity of life we see is different from showing that evolutionary mechanisms (and presumably the underlying physics and chemistry) require intelligent intervention. I have a question for you. Are all those “material mechanisms” you cite above chance-based? I.e., are HGT, point mutations, duplicate genes, etc. random occurrences? Or are they sometimes guided by the structure of life — by the laws that emerge from life’s organization? Mutations are generally assumed to be random with respect to fitness, and bad mutations are culled by natural selection, which is not a random process. There is some work being done on the evolution of evolvability and variable evolvability, which is quite interesting but doesn't negate the basic concepts. Looking forward to hearing from you again. Creating a CSI calculator would actually be quite simple. Collecting the data for the variables is the hard, yet very interesting, part. I hope you're right, but I think we're still a ways off from a mathematically rigorous algorithm.Mustela Nivalis
January 20, 2010 07:33 AM PDT
CJYman at 237, “My understanding from No Free Lunch and other reading is that CSI is supposed to be a unique characteristic that identifies designed objects. Is that your understanding as well?” Technically, and this is what needs to be understood in order to develop an ID argument, CSI only effectively rules out the chance hypothesis. This isn't my understanding from No Free Lunch, other papers, and other ID proponents on this site. When you say "chance hypothesis" do you mean "de novo creation"? If so, that's not particularly interesting because no biologist claims that real biological artifacts arise de novo. If by "chance hypothesis" you mean "non-intelligent causes" then your assumption of a uniform probability distribution ignores known physics, chemistry, and evolutionary mechanisms and so is also not applicable to the real world. Then, upon applying the EF in the mode that I’ve also explained above via the given link, one can identify a designed object. If the Explanatory Filter is where known physics, chemistry, and evolutionary mechanisms are taken into consideration, it cannot use a uniform probability distribution either. I didn't see anything in your calculations that makes them applicable in the real world. “I’m confused; you state that assumption explicitly in your definition.” There is no assumption of a uniform probability distribution — there is a measurement against a uniform probability distribution. Did I not explain this adequately? Are we using the word differently? I am using "assumption" in the mathematical sense. When you choose to measure against it, you are most definitely assuming a uniform probability distribution. The only “assumption,” or rather valid representation, is that chance is modeled by statistical randomness and that a uniform probability distribution = a random/chance distribution. It's deeper than that, though. 
By measuring against a uniform probability distribution, you are implicitly assuming that such a distribution reflects some aspect of the real world. As I've repeatedly pointed out, it does not because it fails to take into account known physics, chemistry, and evolutionary mechanisms. Those factors make the actual probability distribution of particular biological artifacts far from uniform. Are you seeing yet how a value > 1 shows us that the search space is not uniform? We already know the search space isn't a uniform probability distribution because we know about physics, chemistry, and a number of evolutionary mechanisms. “The NFL theorems say that any particular search algorithm is no better than random search when averaged over all possible domains.” Yes … “It says nothing about the efficacy of a particular search algorithm in a particular domain.” Incorrect. I'm afraid I must disagree. What I wrote is exactly correct. You can read the papers (sorry, link didn't work) to confirm that. An efficient search requires a fortuitous matching between search space and search algorithm. Our search space is a given: the real world. Observation shows that evolutionary mechanisms work well in that environment. “The real world is a particular domain, and the only one of interest.” That depends on what you mean by “only one of interest.” Physicists have been and are interested in discovering how fine tuned our laws are to support any type of living and furthermore intelligent organism, in relation to all possible mathematical values for the equations which describe our laws. We're talking about evolution, not physics. I think I mentioned during one of our last discussions that you might be able to make a case for cosmological ID using the NFL theorems, but they are completely inapplicable when discussing the evolution that is observed in the one biosphere we know about. 
“Therefore, assuming a uniform probability distribution ignores what we know about both the algorithm and the domain and is hence inapplicable to observed phenomena.” Incorrect, as it is the measurement of CSI which provides evidence that (barring direct intervention) we require an evolutionary algorithm to produce the given event. CSI is supposed to show that intelligent intervention is required for a particular biological artifact to exist. In order to do that, it must rule out known natural causes such as known evolutionary mechanisms. When a uniform probability distribution is assumed, those known natural causes are ignored, making any calculation inapplicable to the real world. Did I not explain this earlier? You seem to be operating under some sort of assumption that CSI = no evolution is possible. Is this true? My understanding is that the presence of CSI is supposed to be a clear and unambiguous indication that non-intelligent mechanisms are insufficient to explain the artifact under consideration. While some non-intelligent mechanisms may be involved, CSI is supposed to show that something else is also required. I can and have provided evidence in the above links that chance + law, absent intelligence, will not produce organized CSI. With all due respect, I don't think you have. Your calculations don't correspond to CSI as described in No Free Lunch and changes in the length of a genome, for example, can be explained by known types of mutations. Furthermore, I can also show that events known to have required foresight in their construction are definable in terms of organized CSI. I would like to see that calculation.Mustela Nivalis
January 20, 2010 07:26 AM PDT
CJYman:
Technically, and this is what needs to be understood in order to develop an ID argument, CSI only effectively rules out the chance hypothesis, which is what I’ve been trying to explain to you above.
To say "the chance hypothesis," as if there is only one, makes no sense in Dembski's framework. As I quoted in 179, Dembski explains, "Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination." And as I showed in 234, Dembski says that all relevant chance hypotheses need to be eliminated.
Then, upon applying the EF in the mode that I’ve also explained above via the given link, one can identify a designed object.
According to Dembski, traversing the EF is the same as ascertaining specified complexity. (Quote available on request.) They are not two different steps in detecting design.
I think that quote above is from before Dembski realized that evolution itself — even of the technically Darwinian type — is evidence of previous intelligence, which is something he definitely believes now as shown in his recent works.
Actually, the quote is from his 2005 paper, which he called his "most up-to-date treatment of CSI" in December 2008. His belief that Darwinian evolution is evidence of design goes back to at least 2002. I don't have NFL on hand, but speaking of that book, Dembski said, "Why is ours a world where the Darwinian mechanism works (if indeed it works)? In NFL I contend that design looms here as well." When we get into the logic of using the NFL principle for inferring design, I'll have more to say about it. For now, I'll just point out that Dembski and Marks' reasoning hinges on the assumption that all causal chains are ultimately rooted in uniform distributions over certain spaces, and that the only support they offer for this assumption is Bernoulli's PrOIR.R0b
January 19, 2010 09:44 PM PDT
"Creating a CSI calculator would actually be quite simple" Cool! Maybe someone can set it up as a server on this site. It would be a great resource for those of us less code-minded.....REC
January 19, 2010 07:33 PM PDT
Mustela: "Is your definition of CSI different from Dembski’s, then? ROb quotes Dembski in 234 as requiring that “Darwinian and other material mechanisms” be taken into account. The assumption of a uniform probability distribution fails to do that." There do seem to be a few areas where I disagree with Dembski ... or at least some of his viewpoints in the past. I think that quote above is from before Dembski realized that evolution itself -- even of the technically Darwinian type -- is evidence of previous intelligence, which is something he definitely believes now as shown in his recent works. I have a question for you. Are all those "material mechanisms" you cite above chance-based? I.e., are HGT, point mutations, duplicate genes, etc. random occurrences? Or are they sometimes guided by the structure of life -- by the laws that emerge from life's organization? Looking forward to hearing from you again. Creating a CSI calculator would actually be quite simple. Collecting the data for the variables is the hard, yet very interesting, part.CJYman
January 19, 2010 06:00 PM PDT
Mustela: "My understanding from No Free Lunch and other reading is that CSI is supposed to be a unique characteristic that identifies designed objects. Is that your understanding as well?" Technically, and this is what needs to be understood in order to develop an ID argument, CSI only effectively rules out the chance hypothesis which is what I've been trying to explain to you above. Then, upon applying the EF in the mode that I've also explained above via the given link, one can identify a designed object. Mustela: "I’m confused, you state that assumption explicitly in your definition." There is no assumption of a uniform probability distribution -- there is a measurement against a uniform probability distribution. Did I not explain this adequately? The only "assumption," or rather valid representation, is that chance is modeled by statistical randomness and that a uniform probability distribution = a random/chance distribution. Are you seeing yet how a value > 1 shows us that the search space is not uniform? Mustela: "That’s exactly why the assumption of a uniform probability distribution is invalid." The only assumption is that chance hypothesis = uniform probability distribution. But, as I've stated above, that's not really "just an assumption," since a uniform probability distribution is the concept of statistical randomness, and thus chance, applied to a search space. Mustela: "The NFL theorems say that any particular search algorithm is no better than random search when averaged over all possible domains." Yes ... Mustela: "It says nothing about the efficacy of a particular search algorithm in a particular domain." Incorrect. An efficient search requires a fortuitous matching between search space and search algorithm. That's pretty much the whole point of the NFL Theorems when I last read through them. But this is going beyond CSI. Let's clear up our understanding of CSI first before we head out into deeper waters. 
Mustela: "The real world is a particular domain, and the only one of interest." That depends on what you mean by "only one of interest." Physicists have been and are interested in discovering how fine-tuned our laws are to support any type of living and, furthermore, intelligent organism, in relation to all possible mathematical values for the equations which describe our laws. Remember, there is as of yet no reason why our laws have to, as a matter of necessity, have the values which they do have. You may not find this, and the resulting calculations which only match intelligently produced events, interesting, but others and I find it extremely interesting and amenable to research utilizing computers and math. In fact, Seth Lloyd has hypothesized that our universe is literally a quantum computer ("Programming the Universe"). But, yes, we only have a sample size of 1 for universes. That is the size which is used for the probabilistic resources in the calculation for CSI, since anything further out from our light cone could not affect events here if state changes (information) can indeed not travel faster than light. Mustela: "Known evolutionary mechanisms are observed to be an effective search algorithm (leaving aside the discussion of whether or not this is a good model for what we're discussing) in the real world." No, we must not "leave that aside." It is integral to our discussion. But, as I said, let's pin down our understanding of CSI first before delving deeper. Mustela: "Therefore, assuming a uniform probability distribution ignores what we know about both the algorithm and the domain and is hence inapplicable to observed phenomena." Incorrect, as it is the measurement of CSI which provides evidence that (barring direct intervention) we require an evolutionary algorithm to produce the given event. Did I not explain this earlier? You seem to be operating under some sort of assumption that CSI = no evolution is possible. Is this true? 
Mustela: "That inapplicability eliminates the ability of CSI, formulated as above, to uniquely characterize a designed object." What inapplicability? CSI is merely a measurement of probabilistic resources and specificity against a uniform probability distribution. The uniform probability distribution merely acts as a sort of "zeroing," a calibration. Thus, if we can measure for probabilistic resources and specificity, we can measure the event against a uniform probability distribution. Is it getting clear now how a value > 1 only tells us that the event is generated from a non-uniform probability distribution? Thus, there is some type of stepping/ratcheting/evolutionary process which can be utilized if the proper search procedure can be matched to the underlying ordered/organized/non-uniform search space. Are you saying that you can provide evidence that chance + law absent intelligent foresight (fine-tuning for future results) will produce CSI? On the other hand, I can and have provided evidence in the above links that chance + law, absent intelligence, will not produce organized CSI. Furthermore, I can also show that events known to have required foresight in their construction are definable in terms of organized CSI. Science works by putting 2 and 2 together. After all, that same kind of observation, inference, and extrapolation into the past is what provides evidence for the evolution of life.CJYman
January 19, 2010 05:46 PM PDT
CJYman at 233, It appears that we are speaking past each other, since I’ve adequately answered the questions that you continue to post. It seems that you have some idea of what CSI is “supposed” to tell us, which is not in fact what CSI *does actually* tell us. My understanding from No Free Lunch and other reading is that CSI is supposed to be a unique characteristic that identifies designed objects. Is that your understanding as well? CJYman: “CSI merely combines the number of specified patterns in a space of possibilities with the maximum number of bit operations that had passed until a specified pattern was discovered with the probability of finding that specified pattern given a uniform probability distribution.” Mustela: “Your assumption of a uniform probability distribution is invalid.” There is no such assumption. I'm confused; you state that assumption explicitly in your definition. The uniform probability distribution is included so that if we arrive at a > 1 value, we have strong evidence that we are indeed *not* dealing with a uniform probability distribution. Then, the NFL Theorems take over to continue the argument. That's exactly why the assumption of a uniform probability distribution is invalid. The NFL theorems say that any particular search algorithm is no better than random search when averaged over all possible domains. It says nothing about the efficacy of a particular search algorithm in a particular domain. The real world is a particular domain, and the only one of interest. Known evolutionary mechanisms are observed to be an effective search algorithm (leaving aside the discussion of whether or not this is a good model for what we're discussing) in the real world. Therefore, assuming a uniform probability distribution ignores what we know about both the algorithm and the domain and is hence inapplicable to observed phenomena. That inapplicability eliminates the ability of CSI, formulated as above, to uniquely characterize a designed object. 
Mustela: “Dembski recognizes this in No Free Lunch where he mentions the need to take all known natural causes into account.” You’re correct in a sense. Natural causes are reducible to law and chance and, I would add, intelligence. But, I differ with what appears to be Dembski’s position at the time he wrote NFL. I don’t see law as a category within chance because of how the patterns I showed in #127 can be categorized. Is your definition of CSI different from Dembski's, then? ROb quotes Dembski in 234 as requiring that "Darwinian and other material mechanisms" be taken into account. The assumption of a uniform probability distribution fails to do that. Mustela: “I do hope you’ll continue the discussion as time permits. It might be most productive to start with the specification and then work on the actual calculation.” I’m enjoying this discussion as you seem to be one of the rare ID critics, along with ROb and Nakashima (off the top of my head), who is actually interested in discussing ID rather than muddying the conversation with obfuscation. Thanks for the kind words. Given both of our schedules, I suspect we'll be chasing this issue around multiple threads before we're done. You're my best hope for implementing a CSI calculator in software! Give me an event, and we can start by discussing the specification if it does indeed exist. From another thread, I'd suggest the specification of citrate digestion in Lenski's E. coli, if that makes sense. Determining the CSI before and after that functionality manifested would be very interesting.Mustela Nivalis
January 19, 2010 10:04 AM PDT
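The disagreement here over what the NFL theorems do and don't license can be made concrete with a toy experiment. On one particular structured landscape (OneMax: fitness = number of 1-bits, chosen purely for simplicity), a naive hill-climber reliably beats blind random sampling given the same number of fitness evaluations; averaged over all possible landscapes, the NFL theorems say no such advantage survives. This is a sketch of the "particular domain vs. all domains" distinction only; it models nothing biological.

```python
import random

random.seed(1)

N = 30        # bit-string length
STEPS = 300   # fitness evaluations per run
TRIALS = 200  # independent runs to average over

def onemax(bits):
    """Fitness on a deliberately structured landscape: count of 1-bits."""
    return sum(bits)

def hill_climb():
    """Single-bit-flip hill climber, accepting neutral-or-better moves."""
    x = [random.randint(0, 1) for _ in range(N)]
    best = onemax(x)
    for _ in range(STEPS):
        y = x.copy()
        y[random.randrange(N)] ^= 1
        fy = onemax(y)
        if fy >= best:
            x, best = y, fy
    return best

def random_search():
    """Blind sampling: best of STEPS uniformly random bit-strings."""
    return max(onemax([random.randint(0, 1) for _ in range(N)])
               for _ in range(STEPS))

hc = sum(hill_climb() for _ in range(TRIALS)) / TRIALS
rs = sum(random_search() for _ in range(TRIALS)) / TRIALS
print(f"hill-climber avg best fitness: {hc:.1f} / {N}")
print(f"random search avg best fitness: {rs:.1f} / {N}")
```

The hill-climber approaches the optimum while blind sampling stalls well short of it, precisely because this landscape happens to be matched to single-bit-flip search.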
Joseph, at the very most we observe the functionality of a few of the 2^500 possibilities inherent in a 500-bit piece of DNA. But we can't know the ΦS(T) if we can't determine how useful the other combinations are. I'll use an analogy to Battleship. We're trying to find out how good our odds are for hitting a battleship. We've got a space to guess that's 2^500 big (about 10^150 possible guesses). We know a few of the spaces where we've "hit", but we don't know anything about the rest of the map. It's a pretty big search space. Now, Dembski says that because it's such a big search space, the first few "hits" we got were placed by someone with knowledge of our opponent's board; that you could never get these by random chance, not even over the history of the universe. But Dembski also includes that term ΦS(T), which is the number of spaces on the opponent's board that are filled with battleships. For obvious reasons, this number affects the probability calculation. It'll be much easier to hit a battleship by random chance if the opponent's board is 99% full vs. 1% full. Heck, if even 1 spot in 10^8 had a battleship, it'd still be both rare and well within the range of evolution - since evolution also has time on its side. But nobody really knows how big ΦS(T) is, including Dembski. Savvy?Ben W
January 19, 2010 10:01 AM PDT
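Ben W's Battleship point, that the density of targets dominates the odds of a blind search, follows from elementary probability. A minimal sketch, assuming each guess is independent and uniform over the board (exactly the "pure chance" model in dispute):

```python
def hit_probability(target_fraction: float, guesses: int) -> float:
    """P(at least one hit) when each independent guess hits with prob = target_fraction."""
    return 1.0 - (1.0 - target_fraction) ** guesses

# Same number of guesses, very different target densities:
for frac in (0.01, 0.99):
    p = hit_probability(frac, 100)
    print(f"board {frac:.0%} full, 100 guesses -> P(hit) = {p:.4f}")

# Ben W's example: even 1 target per 10^8 cells is findable given enough trials.
print(f"1-in-10^8 density, 10^9 guesses -> P(hit) = {hit_probability(1e-8, 10**9):.6f}")
```

A 99%-full board is hit almost surely within a handful of guesses, a 1%-full board takes hundreds, and even a 1-in-10^8 density becomes a near-certain hit once the number of trials is large enough, which is why the unknown target density matters so much to the calculation.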
CJYman, thank you. In fairness, I think that many ID opponents are interested in clarifying rather than obfuscating the issues. In my opinion, critics such as Wein, Elsberry & Shallit, Tellgren, and Haggstrom epitomize lucidity. WRT CSI, Dembski's current definition is: –log2[10^120 * ΦS(T) * P(T|H)] When he applies this to bacterial flagella, he said that H is "the relevant chance hypothesis that takes into account Darwinian and other material mechanisms." Dembski also says: Suppose the relevant collection of chance hypotheses that we have good reason to think were operating if the event E happened by chance is some collection of chance hypotheses {Hi}i ∈ I indexed by the index set I. Then, to eliminate all these chance hypotheses, Χi = –log2[10^120 * ΦS(T)*P(T|Hi)] must be greater than 1 for each Hi. I don't know how it could be any clearer that we need to look at all relevant chance hypotheses, not just uniform randomness.R0b
January 19, 2010 09:27 AM PDT
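Mechanically, the formula R0b quotes is easy to evaluate once its two inputs are supplied; the entire dispute in this thread is over how (or whether) ΦS(T) and P(T|H) can be estimated for real biology. A sketch with made-up placeholder numbers, not estimates for any actual organism:

```python
import math

def chi(phi_s, p_t_given_h, replicational_resources=10**120):
    """
    Specified complexity as quoted in the comment above:
        chi = -log2(10^120 * phi_S(T) * P(T|H))
    phi_s       -- count of patterns at least as simply describable as T
    p_t_given_h -- probability of T under the chance hypothesis H
    """
    return -math.log2(replicational_resources * phi_s * p_t_given_h)

# Placeholder inputs only; estimating them is the hard (disputed) part:
print(f"chi = {chi(phi_s=10**20, p_t_given_h=2.0**-500):.1f}")  # > 1: H eliminated
print(f"chi = {chi(phi_s=10**20, p_t_given_h=2.0**-400):.1f}")  # < 1: H survives
```

Note that per the passage above, a χ > 1 result for one H eliminates only that chance hypothesis; the calculation must be repeated for every relevant Hi.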
Mustela @226, It appears that we are speaking past each other, since I've adequately answered the questions that you continue to post. It seems that you have some idea of what CSI is "supposed" to tell us, which is not in fact what CSI *does actually* tell us. CJYman: "You seem to not yet understand that there is no need, for the purpose of what is being calculated, for CSI to take into account the chemistry, physics, etc. of our universe." Mustela: "Actually, I would argue that it is you who does not understand that it is essential for CSI to take known physics, chemistry, and evolutionary mechanisms into account if it is to be a useful measure in the real world. The reason for this, as I explained in our previous discussion, is where ignoring these factors leads:" Well then, go ahead and edumacate me. Argue for your understanding of CSI and explain to me each variable in the equation for CSI and tell me what combining the variables tells us. This will probably be a more profitable way to continue our discussion. CJYman: "CSI merely combines the number of specified patterns in a space of possibilities with the maximum number of bit operations that had passed until a specified pattern was discovered with the probability of finding that specified pattern given a uniform probability distribution." Mustela: "Your assumption of a uniform probability distribution is invalid." There is no such assumption. The uniform probability distribution is included so that if we arrive at a > 1 value, we have strong evidence that we are indeed *not* dealing with a uniform probability distribution. Then, the NFL Theorems take over to continue the argument. Mustela: "The only way to make such an assumption is to ignore known physics, chemistry, and evolutionary mechanisms, which makes the result of any calculation inapplicable to the real world we live in." What assumption??? It appears that you indeed do not understand CSI. 
Mustela: "Dembski recognizes this in No Free Lunch where he mentions the need to take all known natural causes into account." You're correct in a sense. Natural causes are reducible to law and chance and, I would add, intelligence. But, I differ with what appears to be Dembski's position at the time he wrote NFL. I don't see law as a category within chance because of how the patterns I showed in #127 can be categorized. CSI only takes the natural cause "chance" into consideration. Since statistical randomness is equated with chance, the measure against a uniform probability distribution is included in the formula. Again, as I've stated a few times already, a > 1 value provides strong evidence that a uniform probability distribution, and thus chance, can be effectively eliminated. As I wrote above, and you either ignored, didn't read, or didn't understand, CSI does not rule out law without the application of the Explanatory Filter. CSI only rules out chance, thus providing evidence against those who think that life started out as some sort of chance arrangement of molecules. Mustela: "Just to give an example of why it is essential to take known science into account, consider the issue of identifying the change in CSI due to a mutation. Using a naive, two to the power of the length of the genome calculation, point mutations result in no additional CSI, even though they may change the result of transcribing the genome, while frame shifts, insertions, and deletions change the length, and therefore the CSI, without necessarily changing the function of the genome." Tell me, Mustela, can you provide any evidence that the system required to utilize all those processes can be generated absent intelligence? Of course, CSI can be measured at any point, so give me an example of what you are talking about, including all the "givens." I.e., given a living system, x amount of time, and gene x, what transformations can be reasonably arrived at by chance. 
BTW, there seems to be evidence that evolution can be guided by laws that emerge from the organization of living systems. Organized (as opposed to "ordered" -- as explained in the link I've already included for you above) CSI cannot reasonably be arrived at by chance + law absent intelligence, but that by no means indicates that chance + law are not utilized in the process of arriving at CSI. Is this making sense? Remember that evolutionary algorithms are the best example of law + chance + intelligence working together. Intelligence provides the foresight by utilizing knowledge of the future target to fine-tune parameters in the present, and law and chance do the dirty work by directing the search through the possibilities toward targets as the program unfolds. Mustela: "So, I'm afraid you still haven't provided an example of how to calculate CSI for a real biological system." I'm not sure what type of "CSI" you are talking about, but I most definitely have provided an estimated value for the lowest amount of CSI possible for the protein Titin. The probabilistic resources are actually horribly inflated in favor of the ID critic; however, that is for another discussion. So, back to what we are talking about: it seems that your qualm is not in merely applying the formula for CSI but in the implications of CSI. Furthermore, you seem to think it implies much more than it really does, which I have attempted to clear up earlier in this comment. Mustela: "I do hope you'll continue the discussion as time permits. It might be most productive to start with the specification and then work on the actual calculation." I'm enjoying this discussion as you seem to be one of the rare ID critics, along with ROb and Nakashima (off the top of my head), who is actually interested in discussing ID rather than muddying the conversation with obfuscation. Give me an event, and we can start by discussing the specification if it does indeed exist.CJYman
January 18, 2010 06:01 PM PDT
Ben W - We observe the specification/functionality.Joseph
January 18, 2010 05:01 PM PDT
Tribune, I accept the existence of designed objects, but I don't see that there's any sure way to tell which parts of DNA are designed or not. All of the experience of archaeologists and SETI astronomers is useless here; they're dealing with quite different situations. SETI experts are looking for the signs of alien life with the assumption that they *want* to be found and are broadcasting radio waves in regions of the spectrum where there would be no natural noise. Archaeologists look for well-established antecedents to today's technology. This is trivial, since we *know* what designed pottery looks like and have never seen any natural processes that could produce painted and fired ceramics. Biology's a different story, though, since we don't know what designed DNA would look like. Of course, a designer could have made it obvious by putting a nice sequence of prime numbers in our DNA, or something similar, to say "this was designed." But we've found nothing like that.Ben W
January 18, 2010 02:21 PM PDT
(long time lurker here). Joseph, I've read Dembski's work backwards and forwards, but I've never seen him offer a workable criterion for CSI. Notably, in his equation for CSI,
χ = –log2[10^120 · ΦS(T) · P(T|H)]
he has the term ΦS(T), where
ΦS(T) is a multiplier based on the number of similarly simply and independently specifiable targets
So, say we were trying to look at the probability of a simple amoeba occurring "by chance" (this is a bit of a strawman for evolution's position, but just for the sake of discussion). We'd need to know not only the length of the chromosome, but the number of different ways the genes on the chromosome could be arranged, as well as the number of variants each gene could have. For any gene, there are many alternate base code sequences that would produce the same protein, and many proteins are also somewhat robust to having their amino acids switched. So this term, ΦS(T), makes the equation impractical, as there is no practical way to know just how many possible variants there are. This is why Dembski's definition of CSI is useless - if I were to give you a 500-bit sequence of DNA, would you be able to tell me whether it was functional and useful? Could anyone do this? Because without this, there's no way of telling just how "specified" a DNA sequence needs to be to be useful, and so you can't know if it violates the UPB.Ben W
January 18, 2010 at 02:04 PM PDT
Joseph, Douglas Axe and the Explanatory Filter at ARN

Zach Bailey
January 18, 2010 at 01:23 PM PDT
Cabal, "But was VC's last post really that bad?"

He shouldn't have been posting here at all; I had previously banned him under Diffaxial. I don't care for sock puppets, and I don't care for the martyrdom of sock puppets who continually merit banning. I've noticed a tendency to this behavior, mostly by folks from that absurd "After the Bar Closes" site: push the envelope, sometimes subtly, all the while expecting great accolades for their cleverness from their dull cohorts at that other site, and then cry "unfairness" when they are no longer allowed to mock here. I have no respect for this very strange, and quite honestly cowardly, behavior. They're like bad-seed children who intentionally cause mischief and then cry when they're punished, as if to say "Why did you punish me? You know I'm only a child," and then play the victim role. I've seen it over and over.

Clive Hayden
January 18, 2010 at 10:12 AM PDT
"So, I'm afraid you still haven't provided an example of how to calculate CSI for a real biological system."

Other people, including myself, have done just that. That you choose to ignore what has already been said just proves that you are not worth the effort.

When people talk of "evolutionary mechanisms" it is a sure sign that they don't know what they are talking about. Ya see, evolutionary mechanisms did not exist until living organisms appeared. Also, as far as anyone knows, the bulk of evolutionary mechanisms are in fact design mechanisms. Dr Spetner went over that 13 years ago.

Also, CSI does take into account physics, chemistry, and evolutionary mechanisms. To say otherwise is to prove your ignorance of the topic. But anyway, the laws of physics and chemistry are part of what needs explaining. IOW, the weasel man trying to use them to weasel through an argument just further exposes his ignorance.

Joseph
January 18, 2010 at 07:06 AM PDT
CJYman at 223,

"My comment #116 — especially the last para — was a response to our last conversation where I explained to you how to calculate CSI. You seem to not yet understand that there is no need, for the purpose of what is being calculated, for CSI to take into account the chemistry, physics, etc. of our universe."

Actually, I would argue that it is you who does not understand that it is essential for CSI to take known physics, chemistry, and evolutionary mechanisms into account if it is to be a useful measure in the real world. The reason for this, as I explained in our previous discussion, is where ignoring these factors leads:

"CSI merely combines the number of specified patterns in a space of possibilities with the maximum number of bit operations that had passed until a specified pattern was discovered with the probability of finding that specified pattern given a uniform probability distribution."

Your assumption of a uniform probability distribution is invalid. The only way to make such an assumption is to ignore known physics, chemistry, and evolutionary mechanisms, which makes the result of any calculation inapplicable to the real world we live in. Dembski recognizes this in No Free Lunch, where he mentions the need to take all known natural causes into account.

Just to give an example of why it is essential to take known science into account, consider the issue of identifying the change in CSI due to a mutation. Using a naive two-to-the-power-of-the-length-of-the-genome calculation, point mutations result in no additional CSI, even though they may change the result of transcribing the genome, while frame shifts, insertions, and deletions change the length, and therefore the CSI, without necessarily changing the function of the genome.

So, I'm afraid you still haven't provided an example of how to calculate CSI for a real biological system. I do hope you'll continue the discussion as time permits. It might be most productive to start with the specification and then work on the actual calculation.

Mustela Nivalis
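[Ed.: The mutation asymmetry Mustela describes can be shown with a deliberately naive, length-only measure — a sketch only; the function name and sequences below are made up for illustration, and 2 bits per base is just log₂(4) per position.]

```python
# A naive "CSI" that depends only on sequence length is blind to
# point mutations but shifts with indels, regardless of function.
def naive_csi_bits(seq):
    return 2 * len(seq)  # log2(4^L): 2 bits per base

original      = "ATGGCCTTA"
point_mutated = "ATGGACTTA"  # one substitution: same length, same score
deleted       = "ATGGCTTA"   # one base removed: downstream frame shift

assert naive_csi_bits(original) == naive_csi_bits(point_mutated) == 18
assert naive_csi_bits(deleted) == 16  # score changed; function may not have
```

A substitution that destroys a protein leaves the score untouched, while a deletion that happens to be tolerated changes it, which is the sense in which the naive calculation tracks length rather than anything functional.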
January 18, 2010 at 06:41 AM PDT
Cabal -- "I guess that means once banned, forever banned." Apparently not, as Clive pointed out in post 218. Also, check out post 197. There were apparently things going on behind the scenes in the mod queue.

tribune7
January 18, 2010 at 05:10 AM PDT
I guess that means once banned, forever banned. The door to heaven forever closed. No second chances here, nosiree! But was VC's last post really that bad? Not granting further access is one thing, but disappearing what to me looked like a great post? Are we suffering from a culture collision?

Cabal
January 18, 2010 at 03:12 AM PDT
Mustela @143: My comment #116 -- especially the last para -- was a response to our last conversation where I explained to you how to calculate CSI. You seem to not yet understand that there is no need, for the purpose of what is being calculated, for CSI to take into account the chemistry, physics, etc. of our universe.

CSI merely combines the number of specified patterns in a space of possibilities with the maximum number of bit operations that had passed until a specified pattern was discovered with the probability of finding that specified pattern given a uniform probability distribution. If a value > 1 results, we effectively have a "needle in a haystack" problem where, with the number of bit operations/flips available, we would not expect to find a specified event of that size. Add to that the fact that we are indeed dealing with an event correlated to a pattern, and we can see that chance (as explained in #127 above) -- defined as lacking correlation -- is not the best explanation.

That is all that CSI is able to calculate. The calculation for CSI does not rule out law. That is why the Explanatory Filter is required. Refer to #127 above to see examples of chance, chance + law, and chance + law + intelligence. Then refer to https://uncommondescent.com/intelligent-design/polanyi-and-ontogenetic-emergence/#comment-337588 for a better explanation of the difference between law and organization such as CSI, and how to objectively determine a specified pattern.

I'll end how I ended comment #116 above, since it appears to me that the last para in comment #116 should answer your question re: the relation between CSI and the laws of nature within which we are measuring CSI: "CSI doesn't 'take into consideration' that our universe is fine tuned for life, since it is indeed the type of calculation that *shows us* that our universe is fine tuned for the evolution of life. CSI is the evidence of evolution (barring direct intervention), and evolution is the evidence of previous intelligence."

I'm looking forward to continuing this discussion with you, but I am "on and off" busy, so I can't guarantee time to set aside. To conclude for now, at the very least ID theorists hypothesize that CSI will not be generated by law + chance absent intelligent fine tuning for future results. In order to falsify this, merely generate CSI or an evolutionary algorithm from a random set of laws without any intelligent input for future results. Random.org could come in very helpful for providing the random variables for the search space, and for the variables and formulas/algorithms for the laws.

CJYman
January 17, 2010 at 07:15 PM PDT
tribune7,
material.infantacy sums it up well at 197
I think so too.

Clive Hayden
January 17, 2010 at 01:47 PM PDT