Uncommon Descent Serving The Intelligent Design Community

Why there’s no such thing as a CSI Scanner, or: Reasonable and Unreasonable Demands Relating to Complex Specified Information


It would be very nice if there were a magic scanner that automatically gave you a readout of the total amount of complex specified information (CSI) in a system when you pointed it at that system, wouldn't it? Of course, you'd want one that could calculate the CSI of any complex system – be it a bacterial flagellum, an ATP synthase enzyme, a Bach fugue, or the faces on Mt. Rushmore – by following some general algorithm. It would make CSI so much more scientifically rigorous, wouldn't it? Or would it?

This essay is intended as a follow-up to the recent thread, On the calculation of CSI by Mathgrrl. It is meant to address some concerns about whether CSI is sufficiently objective to qualify as a bona fide scientific concept.

But first, some definitions. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define complex specified information (or CSI) as follows (p. 311):

Information that is both complex and specified. Synonymous with SPECIFIED COMPLEXITY.

Dembski and Wells then define specified complexity on page 320 as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY).

In this post, I’m going to examine seven demands which Intelligent Design critics have made with regard to complex specified information (CSI):

(i) that it should be calculable not only in theory but also in practice, for real-life systems;
(ii) that for an arbitrary complex system, we should be able to calculate its CSI as being (very likely) greater than or equal to some specific number, X, without knowing anything about the history of the system;
(iii) that it should be calculable by independent agents, in a consistent manner;
(iv) that it should be knowable with absolute certainty;
(v) that it should be precisely calculable (within reason) by independent agents;
(vi) that it should be readily computable, given a physical description of the system;
(vii) that it should be computable by some general algorithm that can be applied to an arbitrary system.

I shall argue that the first three demands are reasonable and have been met in at least some real-life biological cases, while the last four are not.

Now let’s look at each of the seven demands in turn.

(i) CSI should be calculable not only in theory but also in practice, for real-life systems

This is surely a reasonable request. After all, Professor William Dembski describes CSI as a number in his writings, and even provides a mathematical formula for calculating it.

On page 34 of his essay, Specification: The Pattern That Signifies Intelligence, Professor Dembski writes:

In my present treatment, specified complexity … is now … an actual number calculated by a precise formula (i.e., Chi=-log2[10^120.Phi_s(T).P(T|H)]). This number can be negative, zero, or positive. When the number is greater than 1, it indicates that we are dealing with a specification. (Emphases mine – VJT.)

The reader will recall that according to the definition given in The Design of Life (The Foundation for Thought and Ethics, Dallas, 2008), on page 311, specified complexity is synonymous with complex specified information (CSI).

On page 24 of his essay, Professor Dembski defines the specified complexity Chi of a pattern T given chance hypothesis H, minus the tilde and context sensitivity, as:

Chi=-log2[10^120.Phi_s(T).P(T|H)]

On page 17, Dembski defines Phi_s(T) as the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T.

P(T|H) is defined throughout the essay as a probability: the probability of a pattern T with respect to the chance hypothesis H.
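For readers who prefer to see the formula spelled out operationally, here is a short Python sketch of my own (not Dembski's code). It assumes the convention I use throughout this post of approximating Phi_s(T) as the size of a lexicon of basic concepts raised to the power of the number of words in the shortest description; the example figures at the end are purely illustrative:

import math

def phi_s(description_word_count, lexicon_size=10**5):
    # Number of patterns whose semiotic description is at least as simple
    # as the given one, approximated as (lexicon size)^(number of words).
    return lexicon_size ** description_word_count

def chi(phi_s_T, p_T_given_H, probabilistic_resources=10**120):
    # Chi = -log2[10^120 . Phi_s(T) . P(T|H)], computed via base-10 logs
    # so that very large and very small numbers do not cause overflow.
    log10_product = (math.log10(probabilistic_resources)
                     + math.log10(phi_s_T)
                     + math.log10(p_T_given_H))
    return -log10_product * math.log2(10)

# Illustrative example: a four-word description and a chance probability
# of 10^-150 give Chi of about 33, comfortably above the cutoff of 1.
print(chi(phi_s(4), 1e-150))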

During the past couple of days, I've been struggling to formulate a good definition of "chance hypothesis", because for some people "chance" means "totally random", while for others it means "not directed by an intelligent agent possessing foresight of long-term results" and hence "blind" (even if law-governed), as far as long-term results are concerned. In his essay, Professor Dembski is quite clear that he means to include Darwinian processes (which are not totally random, because natural selection implies non-random death) under the umbrella of "chance hypotheses". So here's how I envisage it. A chance hypothesis describes a process which does not require the input of information, either at the beginning of the process or during the process itself, in order to generate its result (in this case, a complex system). On this definition, Darwinian processes would qualify as chance hypotheses, because they claim to be able to grow information without the need for input from outside – whether from a front-loading or a tinkering Designer of life.

CSI has already been calculated for some quite large real-life biological systems. In a post on the recent thread, On the calculation of CSI, I calculated the CSI in a bacterial flagellum, using a naive provisional estimate of the probability P(T|H). The numeric value of the CSI was calculated as being somewhere between 2126 and 3422. Since this is far in excess of 1, the cutoff point for a specification, I argued that the bacterial flagellum was very likely designed. Of course, a critic could fault the naive provisional estimate I used for the probability P(T|H). But my point was that the calculated CSI was so much greater than the minimum value needed to warrant a design inference that it was incumbent on the critic to provide an argument as to why the calculated CSI should be less than or equal to 1.

In a later post on the same thread, I provided Mathgrrl with the numbers she needed to calculate the CSI of another irreducibly complex biological system: ATP synthase. As far as I am aware, Mathgrrl has not taken up my (trivially easy) challenge to complete the calculation, so I shall now do it for the benefit of my readers. The CSI of ATP synthase can be calculated as follows. The shortest semiotic description of the specific function of this molecule is "stator joining two electric motors", which is five words. If we imagine (following Dembski) that we have a dictionary of basic concepts, and assume (generously) that there are no more than 10^5 (=100,000) entries in this dictionary, then the number of patterns for which S's semiotic description of them is at least as simple as S's semiotic description of T is (10^5)^5, or 10^25. This is Phi_s(T). I then quoted a scientifically respectable source (see page 236) which estimated the probability of ATP synthase forming by chance, under the most favorable circumstances (i.e. with a genetic code available), at 1 in 1.28×10^266. This is P(T|H). Thus:

Chi=-log2[10^120.Phi_s(T).P(T|H)]
=-log2[(10^145)/(1.28×10^266)]
=-log2[1/(1.28×10^121)]
=log2[1.28×10^121]
=log2(1.28)+121×log2(10)
=0.36+401.95 (approximately),

or about 402, to the nearest whole number. Thus for ATP synthase, the CSI Chi is 402. Since 402 is far greater than 1, the cutoff point for a specification, we can safely conclude that ATP synthase was designed by an intelligent agent.

[Note: Someone might be inclined to argue that conceivably, other biological structures might perform the same function as ATP synthase, and that we'd have to calculate their probabilities of arising by chance too, in order to get a proper figure for P(T|H) if T is the pattern "stator joining two electric motors." In reply: any other structure with the same function would have many more components – and hence be far more improbable on a chance hypothesis – than ATP synthase, which is a marvel of engineering efficiency. See here and here. As ATP synthase is the smallest biological structure – and hence, chemically speaking, the most probable – that can do the job it does, we can safely ignore the probability of other, more complex biological structures arising with the same functionality, as negligible in comparison.]
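The arithmetic above can be checked in a few lines of Python. The following is a sketch of my own, using the same assumed figures (the five-word description, the 10^5-entry lexicon and the 1 in 1.28×10^266 probability estimate):

import math

# Figures assumed above, not measured values.
log10_phi_s = 5 * 5                       # log10((10^5)^5) = 25
log10_p = -(math.log10(1.28) + 266)       # log10(1/(1.28 x 10^266))
log10_resources = 120                     # Seth Lloyd's bound on bit operations

chi = -(log10_resources + log10_phi_s + log10_p) * math.log2(10)
print(round(chi))   # 402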

Finally, in another post on the same thread, I attempted to calculate the CSI in a 128×128 Smiley face found on a piece of rock on a strange planet. I made certain simplifying assumptions about the eyes on the Smiley face, and the shape of the smile. I also assumed that every piece of rock on the planet was composed of mineral grains in only two colors (black and white). The point was that these CSI calculations, although tedious, could be performed on a variety of real-life examples, both organic and inorganic.

Does this mean that we should be able to calculate the CSI of any complex system? In theory, yes; however in practice, it may be very hard to calculate P(T|H) for some systems. Nevertheless, it should be possible to calculate a provisional upper bound for P(T|H), based on what scientists currently know about chemical and biological processes.

(ii) For an arbitrary complex system, we should be able to calculate its CSI as being (very likely) greater than or equal to some specific number, X, without knowing anything about the history of the system.

This is an essential requirement for any meaningful discussion of CSI. What it means in practice is that if a team of aliens were to visit our planet after a calamity had wiped out human beings, they should be able to conclude, upon seeing Mt. Rushmore, that intelligent beings had once lived here. Likewise, if human astronauts were to discover a monolith on the moon (as in the movie 2001), they should still be able to calculate a minimum value for its CSI, without knowing its history. I’m going to show in some detail how this could be done in these two cases, in order to convince the CSI skeptics.

Aliens visiting Earth after a calamity had wiped out human beings would not need a detailed knowledge of Earth history to arrive at the conclusion that Mt. Rushmore was designed by intelligent agents. A ballpark estimate of the Earth's age and a basic general knowledge of Earth's geological processes would suffice. Given this general knowledge, the aliens should be able to roughly calculate the probability of natural processes (such as wind and water erosion) carving features such as a flat forehead, two eyebrows, two eyes with lids as well as an iris and a pupil, a nose with two nostrils, two cheeks, a mouth with two lips, and a lower jaw, at a single location on Earth, over 4.54 billion years of Earth history.

In order to formulate a probability estimate for a human face arising by natural processes, the alien scientists would have to resort to decomposition. Assuming for argument's sake that something looking vaguely like a flat forehead would almost certainly arise naturally at any given location on Earth at some point during its history, the alien scientists would then have to calculate the probability that over a period of 4.54 billion years, each of the remaining facial features was carved naturally at the same location on Earth, in the correct order and position for a human face. That is, assuming the existence of a forehead-shaped natural feature, the scientists would have to calculate the probability (over a 4.54 billion year period) that two eyebrows would be carved by natural processes just below the forehead, as well as two eyes below the eyebrows, a nose below the eyes, two cheeks on either side of the nose, a mouth with two lips below the nose, and a jawline at the bottom, making what we would recognize as a face. The proportions would also have to be correct, of course. Since this probability is order-specific (as the facial features all have to appear in the right place), we can calculate it as a simple product – no combinatorics here. To illustrate the point, I'll plug in some estimates that sound intuitively right to me, given my limited background knowledge of geological processes occurring over the past 4.54 billion years: 1*(10^-1)*(10^-1)*(10^-10)*(10^-10)*(10^-6)*(10^-1)*(10^-1)*(10^-4)*(10^-2), for the forehead, two eyebrows, two eyes, nose, two cheeks, mouth and jawline respectively, giving a product of 10^(-36) – a very low number indeed. Raising that probability to the fourth power – giving a figure of 10^(-144) – would enable the alien scientists to calculate the probability of four faces being carved at a single location by chance, or P(T|H).

The alien scientists would then have to multiply this number (10^(-144)) by their estimate for Phi_s(T), or the number of patterns for which a speaker S's semiotic description of them is at least as simple as S's semiotic description of T. But how would the alien scientists describe the patterns they had found? If the aliens happened to find some dead people or dig up some human skeletons, they would be able to identify the creatures shown in the carvings on Mt. Rushmore as humans. However, unless they happened to find a book about American Presidents, they would not know whose faces they were. Hence the aliens would probably formulate a modest semiotic description of the pattern they observed on Mt. Rushmore: four human faces. A very generous estimate for Phi_s(T) is 10^15, as the description "four human faces" has three words (I'm assuming here that the aliens' lexicon has no more than 10^5 basic words), and (10^5)^3=10^15. Thus the product Phi_s(T).P(T|H) is (10^15)*(10^(-144)), or 10^(-129). Finally, after multiplying the product Phi_s(T).P(T|H) by 10^120 (the maximum number of bit operations that could have taken place within the entire observable universe during its history, as calculated by Seth Lloyd), taking the log to base 2 of this figure and multiplying by -1, the alien scientists would then be able to derive a very conservative minimum value for the specified complexity Chi of the four human faces on Mt. Rushmore, without knowing anything specific about the Earth's history. (I say "conservative" because the multiplier 10^120 is absurdly large, given that we are only talking about events occurring on Earth, rather than the entire universe.) In our worked example, the conservative minimum value for the specified complexity Chi would be -log2(10^(-9)), or approximately -log2(2^(-30))=30. Since the calculated specified complexity value of 30 is much greater than the cutoff level of 1 for a specification, the aliens could be certain beyond reasonable doubt that Mt. Rushmore was designed by an intelligent agent. They might surmise that this intelligent agent was a human agent, as the faces depicted are all human, but they could not be sure of this fact without knowing the history of Mt. Rushmore.
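The same worked example can be written out in a few lines of Python (a sketch of my own, using the guessed probabilities above):

import math

# Rough log10 probabilities guessed above for each feature arising naturally
# at one location: forehead, two eyebrows, two eyes, nose, two cheeks, mouth, jawline.
feature_log10_probs = [0, -1, -1, -10, -10, -6, -1, -1, -4, -2]
log10_one_face   = sum(feature_log10_probs)    # -36
log10_four_faces = 4 * log10_one_face          # -144, i.e. P(T|H) = 10^-144
log10_phi_s = 3 * 5                            # "four human faces": 3 words, 10^5-word lexicon

chi = -(120 + log10_phi_s + log10_four_faces) * math.log2(10)
print(round(chi))   # about 30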

Likewise, if human astronauts were to discover a monolith on the moon (as in the movie 2001), they should still be able to calculate a minimum value for its CSI, without knowing its history. Even if they were unable to figure out the purpose of the monolith, the astronauts would still realize that the likelihood of natural processes on the moon generating a black cuboid with perfectly flat faces, whose sides' lengths were in the ratio of 1:4:9, is very low indeed. To begin with, the astronauts might suppose that at some stage in the past, volcanic processes on the moon, similar to the volcanic processes that formed the Giant's Causeway in Ireland, were able to produce a cuboid with fairly flat faces – let's say to an accuracy of one millimeter, or 10^(-3) meters. However, the probability that the sides' lengths would be in the exact ratio of 1:4:9 (to the level of precision of human scientists' instruments) would be astronomically low, and the probability that the faces of the monolith would be perfectly flat would be almost infinitesimally low.

For instance, let's suppose for simplicity's sake that the length of each side of a naturally formed cuboid has a uniform probability distribution over a finite range of 0 to 10 meters, and that the level of precision of scientific measuring instruments is the nearest nanometer (1 nanometer = 10^(-9) meters). Then the length of one side of a cuboid can assume any of 10×10^9 = 10^10 possible values, all of which are equally probable. Let's also suppose that the length of the shortest side just happens to be 1 meter, for simplicity's sake. Then the probability that the other two sides would have lengths of 4 and 9 meters would be 6*(10^(-10))*(10^(-10)) (as there are six ways in which the sides of a cuboid can have lengths in the ratio of 1:4:9), or 6*10^(-20).

Now let's go back to the faces, which are not fairly flat but perfectly flat, to within an accuracy of one nanometer, as opposed to one millimeter (the level of accuracy achieved by natural processes). At any particular point on the monolith's surface, the probability that it will be accurate to that degree is (10^(-9))/(10^(-3)), or 10^(-6). The number of distinct points on the surface of the monolith which scientists can measure at nanometer accuracy is (10^9)*(10^9)*(surface area in square meters), or 98*(10^18), or about 10^20. Thus the probability that each and every point on the monolith's surface will be perfectly flat, to within an accuracy of one nanometer, is (10^(-6))^(10^20), or about 10^(-6×10^20). This is so much smaller than the factor of 6*10^(-20) for the sides' ratio that it dominates the product, so we'll let 10^(-6×10^20) be our P(T|H), as a ballpark approximation.

This probability would then need to be multiplied by Phi_s(T). The simplest semiotic description of the pattern observed by the astronauts would be: flat-faced cuboid, sides' lengths 1, 4, 9. Treating "flat-faced" as one word, this description has seven terms, so Phi_s(T) is (10^5)^7 = 10^35. Next, the astronauts would multiply the product Phi_s(T).P(T|H) by 10^120, but because the exponent 6×10^20 is so much greater in magnitude than the other exponents (120 and 35), the overall result will still be about 10^(-6×10^20). Thus the specified complexity Chi = -log2[10^120.Phi_s(T).P(T|H)] is approximately (6×10^20)×log2(10), or about 2×10^21. This is an astronomically large number, far greater than the cutoff point of 1, so the astronauts could be certain that the monolith was made by an intelligent agent, even if they knew nothing about its history and had only a basic knowledge of lunar geological processes.
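Because probabilities of the order of 10^(-6×10^20) underflow any ordinary floating-point number, a calculation like this has to be carried out entirely in log space. Here is a short Python sketch of my own showing one way to do it, using the figures assumed above:

import math

# All quantities are kept as log10 values, since 10^(-6 x 10^20) cannot be
# represented directly as a float.
log10_ratio_prob = math.log10(6) - 20           # sides in a 1:4:9 ratio at nm precision
points_on_surface = 98 * 10**18                 # ~10^20 measurable points on a 1x4x9 cuboid (98 m^2)
log10_flatness_prob = points_on_surface * (-6)  # each point flat to 1 nm with probability 10^-6
log10_p = log10_ratio_prob + log10_flatness_prob   # dominated by the flatness term
log10_phi_s = 7 * 5                             # seven-term description, 10^5-word lexicon

chi = -(120 + log10_phi_s + log10_p) * math.log2(10)
print(f"{chi:.1e}")   # roughly 2e+21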

Having said that, it has to be admitted that sometimes, a lack of knowledge about the history of a complex system can skew CSI calculations. For example, if a team of aliens visiting Earth after a nuclear holocaust found the body of a human being buried in the Siberian permafrost, and managed to sequence the human genome using cells taken from that individual's body, they might come across a duplicated gene. If they did not know anything about gene duplication – which might not occur amongst organisms on their planet – they might at first regard the discovery of two neighboring genes having virtually the same DNA sequence as proof positive that the human genome was designed – like lightning striking in the same place twice – causing them to arrive at an inflated estimate for the CSI in the genome. Does this mean that gene duplication can increase CSI? No. All it means is that someone (e.g. a visiting alien scientist) who doesn't know anything about gene duplication will overestimate the CSI of a genome in which a gene is duplicated. But modern scientists know that gene duplication does occur as a natural process, and they know the rare circumstances that make it occur; given those circumstances, the probability of duplication for the gene in question is taken to be 1. Hence the duplication of a gene adds nothing to the improbability of the original gene occurring by chance. P(T|H) is therefore the same, and since the verbal descriptions of the two genomes are almost exactly the same – the only difference, in the case of a gene duplication, being "x2" plus brackets that go around the duplicated gene – the CSI will be virtually the same. Gene duplication, then, does not increase CSI.

Even in this case, where the aliens, not knowing anything about gene duplication, are liable to be misled when estimating the CSI of a genome, they could still adopt a safe, conservative strategy of ignoring duplications (as they generate nothing new per se) and focusing on genes that have a known, discrete function which is capable of being described concisely, thereby allowing them to calculate Phi_s(T) for any functional gene. And if they also knew the exact sequence of bases along the gene in question, the number of alternative base sequences capable of performing the same function, and finally the total number of base sequences which are physically possible for a gene of that length, the aliens could then attempt to calculate P(T|H), and hence calculate the approximate CSI of the gene, without a knowledge of the gene's history. (I am of course assuming here that at least some genes found in the human genome are "basic" in their function, as it were.)
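To make the last point concrete, here is a minimal Python sketch of my own. The gene length, the number of functional variants and the description length are hypothetical placeholders, not measured values; the point is simply how the three inputs combine:

import math

def gene_chi(functional_sequences, gene_length_bases, description_words, lexicon=10**5):
    # Naive P(T|H): the fraction of all 4^L possible base sequences of this
    # length that would perform the same function, assuming a uniform chance draw.
    log10_p = math.log10(functional_sequences) - gene_length_bases * math.log10(4)
    log10_phi_s = description_words * math.log10(lexicon)
    return -(120 + log10_phi_s + log10_p) * math.log2(10)

# Hypothetical gene: 900 bases, 10^30 functional variants, four-word description.
print(round(gene_chi(10**30, 900, 4)))   # well above the cutoff of 1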

(iii) CSI should be calculable by independent agents, in a consistent manner.

This, too, is an essential requirement for any meaningful discussion of CSI. Beauty may be entirely in the eye of the beholder, but CSI is definitely not. The following illustration will serve to show my point.

Suppose that three teams of scientists – one from the U.S.A., one from Russia and one from China – visited the moon and discovered four objects there that looked like alien artifacts: a round mirror with a picture of what looks like Pinocchio playing with a soccer ball on the back; a calculator; a battery; and a large black cube made of rock whose sides are equal in length, but whose faces are not perfectly smooth. What I am claiming here is that the various teams of scientists should all be able to rank the CSI of the four objects in a consistent fashion – e.g. "Based on our current scientific knowledge, object 2 has the highest level of CSI, followed by object 3, followed by object 1, followed by object 4" – and that they should be able to decide which objects are very likely to have been designed and which are not – e.g. "Objects 1, 2 and 3 are very likely to have been designed; we're not so sure about object 4." If this level of agreement is not achievable, then CSI is no longer a scientific concept, and its assessment becomes more akin to art than science.

We can appreciate this point better if we consider the fact that three art teachers from the same cultural, ethnic and socioeconomic backgrounds (e.g. three American Hispanic middle class art teachers living in Miami and teaching at the same school) might reasonably disagree over the relative merits of four paintings by different students at their school. One teacher might discern a high degree of artistic maturity in a certain painting, while the other teachers might see it as a mediocre work. Because it is hard to judge the artistic merit of a single painting by an artist, in isolation from that artist’s body of work, some degree of subjectivity when assessing the merits of an isolated work of art is unavoidable. CSI is not like this.

First, Phi_s(T) depends on the basic concepts in your language, which are public and not private, as you share them with other speakers of your language. These concepts will closely approximate the basic concepts of other languages; again, the concepts of other languages are shareable with speakers of your language, or translation would be impossible. Intelligent aliens, if they exist, would certainly have basic concepts corresponding to geometrical and other mathematical concepts and to biological functions; these are the concepts that are needed to formulate a semiotic description of a pattern T, and there is no reason in principle why aliens could not share their concepts with us, and vice versa. (For the benefit of philosophers who might be inclined to raise Quine’s “gavagai” parable: Quine’s mistake, in my view, was that he began his translation project with nouns rather than verbs, and that he failed to establish words for “whole” and “part” at the outset. This is what one should do when talking to aliens.)

Second, your estimate for P(T|H) will depend on your scientific choice of chance hypothesis and the mathematics you use to calculate the probability of T given H. A scientific hypothesis is capable of being critiqued in a public forum, and/or tested in a laboratory; while mathematical calculations can be checked by anyone who is competent to do the math. Thus P(T|H) is not a private assessment; it is publicly testable or checkable.

Let us now return to our illustration regarding the three teams of scientists examining four lunar artifacts. It is not necessary that the teams of scientists are in total agreement about the CSI of the artifacts, in order for it to be a meaningful scientific concept. For instance, it is possible that the three teams of scientists might arrive at somewhat different estimates of P(T|H), the probability of a pattern T with respect to the chance hypothesis H, for the patterns found on the four artifacts. This may be because the chance hypotheses considered by the various teams of scientists may be subtly different in their details. However, after consulting with each other, I would expect that the teams of scientists should be able to resolve their differences and (eventually) arrive at an agreement concerning the most plausible chance hypothesis for the formation of the artifacts in question, as well as a ballpark estimate of its magnitude. (In difficult cases, “eventually” might mean: over a period of some years.)

Another source of potential disagreement lies in the fact that the three teams of scientists speak different languages, whose basic concepts are very similar but not 100% identical. Hence their estimates of Phi_s(T), or the number of patterns for which a speaker S’s semiotic description is at least as simple as S’s semiotic description of a pattern T identified in a complex system, may be slightly different. To resolve these differences, I would suggest that as far as possible, the scientists should avoid descriptions which are tied to various cultures or to particular individuals, unless the resemblance is so highly specific as to be unmistakable. Also, the verbs employed should be as clear and definite as possible. Thus a picture on an alien artifact depicting what looks like Pinocchio playing with a soccer ball would be better described as a long-nosed boy kicking a black and white truncated icosahedron.
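As a small illustration of why the choice of description matters, the following Python sketch of my own (assuming a 10^5-word lexicon) shows how Phi_s(T) grows with the length of the description an observer settles on. A longer, culturally neutral description yields a larger Phi_s(T), and hence a more conservative (lower) value of Chi:

import math

def log10_phi_s(word_count, lexicon=10**5):
    return word_count * math.log10(lexicon)

# Two descriptions of the same picture:
descriptions = [
    "Pinocchio playing with a soccer ball",
    "long-nosed boy kicking a black and white truncated icosahedron",
]
for desc in descriptions:
    words = len(desc.split())
    print(f"{words} words -> Phi_s(T) is about 10^{log10_phi_s(words):.0f}")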

(iv) CSI should be knowable with absolute certainty.

Science is provisional. Based on what scientists know, it appears overwhelmingly likely that the Earth is 4.54 billion years old, give or take 50 million years. A variety of lines of evidence point to this conclusion. But if scientists discovered some new astronomical phenomena that could only be accounted for by positing a much younger Universe, then they’d have to reconsider the age of the Earth. In principle, any scientific statement is open to revision or modification of some sort. Even a statement like “Gold has an atomic number of 79”, which expresses a definition, could one day fall into disuse if scientists found a better concept than “atomic number” for explaining the fundamental differences between the properties of various elements.

Hence the demand by some CSI skeptics for absolute ironclad certainty that a specified complex system is the product of intelligent agency is an unscientific one.

Likewise, the demand by CSI skeptics for an absolutely certain, failproof way to measure the CSI of a system is also misplaced. Just as each of the various methods used by geologists to date rocks has its own limitations and situations where it is liable to fail, so too the various methods that Intelligent Design scientists come up with for assessing P(T|H) for a given pattern T and chance hypothesis H, will have their own limitations, and there will be circumstances when they yield the wrong results. That does not invalidate them; it simply means that they must be used with caution.

(v) CSI should be precisely calculable (within reason) by independent agents.

In a post (#259) on the recent thread, On the calculation of CSI, Jemima Racktouey throws down the gauntlet to Intelligent Design proponents:

If “CSI” objectively exists then you should be able to explain the methodology to calculate it and then expect independent calculation of the exact same figure (within reason) from multiple sources for the same artifact.

On the surface this seems like a reasonable request. For instance, the same rock dating methods are used by laboratories all around the world, and they yield consistent results when applied to the same rock sample, to a very high degree. How sure can we be that a lab doing Intelligent Design research in, say, Moscow or Beijing, would yield the same result when assessing the CSI of a biological sample as the Biologic Institute in Seattle, Washington?

The difference between the procedures used in the isochron dating of a rock sample and those used when assessing the CSI of a biological sample is that in the former case, the background hypotheses that are employed by the dating method have already been spelt out, and the assumptions that are required for the method to work can be checked in the course of the actual dating process; whereas in the latter case, the background chance hypothesis H regarding the most likely process whereby the biological sample might have formed naturally has not been stipulated in advance, and different labs may therefore yield different results because they are employing different chance hypotheses. This may appear to generate confusion; in practice, however, I would expect that two labs that yielded wildly discordant CSI estimates for the same biological sample would resolve the issue by critiquing each other’s methods in a public forum (e.g. a peer-reviewed journal).

Thus although in the short term, labs may disagree in their estimates of the CSI in a biological sample, I would expect that in the long term, these disagreements can be resolved in a scientific fashion.
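As a side note on how such disagreements propagate through the formula: since Chi is a base-2 logarithm, two labs whose estimates of P(T|H) differ by a factor of 10^k will differ about Chi by only about 3.3k bits. A quick Python sketch of my own of that relationship:

import math

def chi_shift(orders_of_magnitude):
    # If two labs' estimates of P(T|H) differ by a factor of 10^k, their
    # values of Chi = -log2[10^120 . Phi_s(T) . P(T|H)] differ by k*log2(10).
    return orders_of_magnitude * math.log2(10)

print(round(chi_shift(10)))   # a 10^10-fold disagreement shifts Chi by ~33 bits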

(vi) CSI should be readily computable, given a physical description of the system.

In a post (#316) on the recent thread, On the calculation of CSI, a contributor named Tulse asks:

[I]f this were a physics blog and an Aristotelian asked how to calculate the position of an object from its motion, … I’d expect someone to simply post:

y = x + vt + 1/2at**2

If an alchemist asked on a chemistry blog how one might calculate the pressure of a gas, … one would simply post:

p=(NkT)/V

And if a young-earth creationist asked on a biology blog how one can determine the relative frequencies of the alleles of a gene in a population, … one would simply post:

p² + 2pq + q² = 1

These are examples of clear, detailed ways to calculate values, the kind of equations that practicing scientists uses all the time in quotidian research. Providing these equations allows one to make explicit quantitative calculations of the values, to test these values against the real world, and even to examine the variables and assumptions that underlie the equations.

Is there any reason the same sort of clarity cannot be provided for CSI?

The answer is that while the CSI of a complex system is calculable, it is not computable, even given a complete physical knowledge of the system. The reason for this fact lies in the formula for CSI.

On page 24 of his essay, Specification: The Pattern That Signifies Intelligence, Professor Dembski defines the specified complexity Chi of a pattern T given chance hypothesis H, minus the tilde and context sensitivity, as:

Chi=-log2[10^120.Phi_s(T).P(T|H)]

where Phi_s(T) is the number of patterns for which S's semiotic description of them is at least as simple as S's semiotic description of T, and P(T|H) is the probability of a pattern T with respect to the chance hypothesis H.

The problem here lies in Phi_s(T). In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define Kolmogorov complexity and descriptive complexity as follows (p. 311):

Kolmogorov complexity is a form of computational complexity that measures the length of the minimum program needed to solve a computational problem. Descriptive complexity is likewise a form of computational complexity, but generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern. (Emphasis mine – VJT.)

In a comment (#43) on the recent thread, On the calculation of CSI, I addressed a problem raised by Mathgrrl:

While I understand your motivation for using Kolmogorov Chaitin complexity rather than the simple string length, the problem with doing so is that KC complexity is uncomputable.

To which I replied:

Quite so. That’s the point. Intelligence is non-computational. That’s one big difference between minds and computers. But although CSI is not computable, it is certainly measurable mathematically.

The reason, then, why CSI is not physically computable is that it is not only a physical property but also a semiotic one: its definition invokes both a semiotic description of a pattern T and the physical probability of a non-foresighted (i.e. unintelligent) process generating that pattern according to chance hypothesis H.

(vii) CSI should be computable by some general algorithm that can be applied to an arbitrary system.

In a post (#263) on the recent thread, On the calculation of CSI, Jemima Racktouey issues the following challenge to Intelligent Design proponents:

If CSI cannot be calculated then the claims that it can are bogus and should not be made. If it can be calculated then it can be calculated in general and there should not be a very long thread where people are giving all sorts of reasons why in this particular case it cannot be calculated. (Emphasis mine – VJT.)

And again in post #323, she writes:

Can you provide such a definition of CSI so that it can be applied to a generic situation?

I would like to note in passing how the original demand of ID critics that CSI should be calculable has grown into a demand that it should be physically computable, which has now been transformed into a demand that it should be computable by a general algorithm. This demand is tantamount to putting CSI in a straitjacket of the materialists’ making. What the CSI critics are really demanding here is a “CSI scanner” which automatically calculates the CSI of any system, when pointed in the direction of that system. There are two reasons why this demand is unreasonable.

First, as I explained earlier in part (vi), CSI is not a purely physical property. It is a mixed property – partly semiotic and partly physical.

Second, not all kinds of problems admit of a single, generic solution that can be applied to all cases. An example of this in mathematics is the Halting problem. I shall quote here from the Wikipedia entry:

In computability theory, the halting problem is a decision problem which can be stated as follows: Given a description of a program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. We say that the halting problem is undecidable over Turing machines. (Emphasis mine – VJT.)
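For readers unfamiliar with the result, here is a minimal Python sketch of my own of the standard diagonal argument; the function names are illustrative only, and the point is that no implementation of halts() could ever be supplied:

def halts(program, data):
    # Hypothetical general-purpose oracle: would return True exactly when
    # program(data) eventually halts. Turing's theorem says no such total,
    # general algorithm can exist.
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the supposed oracle predicts about
    # running `program` on its own source code.
    if halts(program, program):
        while True:
            pass        # loop forever if the oracle says "it halts"
    return              # halt immediately if the oracle says "it loops"

# Asking whether paradox(paradox) halts contradicts any answer halts() could
# give, which is why no generic halting-decider can be written.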

So here’s my counter-challenge to the CSI skeptics: if you’re happy to acknowledge that there’s no generic solution to the halting problem, why do you demand a generic solution to the CSI problem – that is, the problem of calculating, after being given a complete physical description of a complex system, how much CSI the system embodies?

Comments
Thanks a bunch Kairosfocus. You helped a great deal in putting the mentioned research into a much needed perspective for me. Always a pleasure listening to your insightful comments.
above
April 4, 2011, 08:44 AM PDT
PS: I add, that "template copying" is here used to suggest that there is an accounting for the step by step, information coded translation of mRNA information into a protein chain. Even if that were so, it would not account for how the DNA comes to store the relevant information, how mRNA is created by cellular machines step by step, and how the resulting proteins are so ordered that we get correct folding and function. But worse, the process that creates a protein is a step by step algorithmic one, not a case of some sort of catalysis on a template.kairosfocus
April 3, 2011, 02:00 AM PDT
Above Now that you give a more direct link, that works, here is the key excerpt from the abstract: ___________________ >> . . . Fatty acids and their corresponding alcohols and glycerol monoesters are attractive candidates for the components of protocell membranes because they are simple amphiphiles that form bilayer membrane vesicles3–5 that retain encapsulated oligonucleotides3,6 and are capable of growth and division7–9. Here we show that such membranes allow the passage of charged molecules such as nucleotides, so that activated nucleotides added to the outside of a model protocell spontaneously cross the membrane and take part in efficient template copying in the protocell interior. The permeability properties of prebiotically plausible membranes suggest that primitive protocells could have acquired complex nutrients from their environment in the absence of any macromolecular transport machinery; that is, they could have been obligate heterotrophs. >> ___________________ This is a suggestion about the chemical composition of the membrane bag for an imagined protocell. Unfortunately, it is not only speculative and uses terms like growth and division in ways that fudge the difference between chemical processes and the information controlled step by step process of cell growth and division, but ducks the material point that what is to be accounted for in the origin of observed cell based life is a metabolising entity that integrates an information-storing, von Neumann self replicator centred on DNA with the code of life in it. The paper discusses little more than one or two of the scenarios long since discussed and evaluated by Thaxton et al in ch 10 of TMLO in 1984. That you can form a "plastic bag" using a version of fatty molecules, and that these may break up into two different bags [much as a soap bubble can sometimes break into two], is utterly irrelevant to real cell division. That such globules can contain chemicals relevant to life, does not explain the origin of the observed information system based operation of life, especially the coded DNA information, the code, the algorithms, the regulation of expression of genes and so on. That, for decades, we routinely see the sort of gross exaggeration of actual results into claimed justification for a grand metaphysical story of the origin of life dressed up in a lab coat, is telling. Indeed, it is a mark of desperation. GEM of TKIkairosfocus
April 3, 2011, 01:55 AM PDT
Thanks for the help Kairosfocus. Here's the link to the article in case you wanted to have a look: http://genetics.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Mansy_et_al_Nature_2008.pdf I just tried it and it works for me.
above
April 2, 2011, 07:09 PM PDT
Above: First check: if there really was a solution to the OOL problem on evolutionary materialist grounds, or something that looked close, it would be all over every major news network. So, you can be sure that the claims are grossly exaggerated. Here's the lead for the Wiki article you clipped: ____________ >> Telomerase is an enzyme that adds DNA sequence repeats ("TTAGGG" in all vertebrates) to the 3' end of DNA strands in the telomere regions, which are found at the ends of eukaryotic chromosomes. This region of repeated nucleotide called telomeres contains non-coding DNA material and prevents constant loss of important DNA from chromosome ends. As a result, every time the chromosome is copied only 100-200 nucleotides are lost, which causes no damage to the organism's DNA. Telomerase is a reverse transcriptase that carries its own RNA molecule, which is used as a template when it elongates telomeres, which are shortened after each replication cycle. The existence of a compensatory shortening of telomere (telomerase) mechanism was first predicted by Soviet biologist Alexey Olovnikov in 1973,[1] who also suggested the telomere hypothesis of aging and the telomere's connections to cancer. Telomerase was discovered by Carol W. Greider and Elizabeth Blackburn in 1984 in the ciliate Tetrahymena.[2] Together with Jack W. Szostak, Greider and Blackburn were awarded the 2009 Nobel Prize in Physiology or Medicine for their discovery.[3] >> _____________ Not very promising relative to the origin of a self-replicating entity that uses a von Neumann self-replicator tied to a metabolic entity. A Nobel Prize announcement article at Harvard -- your second link will not work for me -- says in part:
Jack Szostak, a genetics professor at Harvard Medical School and Harvard-affiliated Massachusetts General Hospital (MGH), has won the 2009 Nobel Prize in physiology or medicine for pioneering work in the discovery of telomerase, an enzyme that protects chromosomes from degrading. The work not only revealed a key cellular function, it also illuminated processes involved in disease and aging . . . . The three won the prize for work conducted during the 1980s to discover and understand the operation of telomerase, an enzyme that forms protective caps called telomeres on the ends of chromosomes. Subsequent research has shown that telomerase and telomeres hold key roles in cell aging and death and also play a part in the aging of the entire organism. Research has also shown that cancer cells have increased telomerase activity, protecting them from death.
In short, the two issues -- telomerase activity and the origin of cell based life with a vNSR joined to a metabolic entity -- are almost completely irrelevant. The commenter at Amazon is plainly in gross and distractive error. GEM of TKIkairosfocus
April 2, 2011, 06:06 PM PDT
PAV: A bit woozy from a rougher than expected return ferry trip to Montserrat, Yellow Hole having lived up to its reputation. Wasn't even able to get a glimpse of the Green Flash by way of compensation on the way home due to some clouds low on the W horizon. Anyway, let's pick up quickly:
What Dembski has in mind, I believe, is the criticism leveled at ID that goes like this: “You say that life is highly improbable. But there it is. This is just like a lottery ticket. It’s likelihood is very low. Yet they have a lottery and someone always wins.”
Lotteries are winnable of course because they are designed to be winnable. There is no comparison to the challenge for origin of FSCI by chance plus mechanical necessity without intelligent direction. For that, the infinite monkeys theorem is the killer. GEM of TKI
kairosfocus
April 2, 2011, 05:53 PM PDT
@ Kairosfocus – "Until the advocates of abiogenesis can show a reasonable, empirically supported pathway to first cell based life that does not require searches deeper than 1 in 10^50 or so, and cumulate to give a system with integrated metabolism and von Neumann type informationally based self-replication, they have no root to their imagined tree of life." Yesterday I ran into a poster on Amazon that claimed the following: "If you're dealing with the Origins of Life on Earth, we actually have discovered (in 2009 in fact) how life began on earth. This has been CONFIRMED in Dr. Jack Szostak's LAB – 2009 Nobel Laurette in medicine for his work on telomerase. (http://en.wikipedia.org/wiki/Telomerase) The scientific research documentation can be read here: http://genetics.mgh.harvard.ed.....e_2008.pdf" I asked this elsewhere and UprightBiped told me there's not much to the claim. I also wanted to hear what you have to say, as you have helped me a lot in putting the whole darwinism/ID issue into perspective in the past. Is there any truth to the claim that Szostak's work has provided evidence of abiogenesis? Much appreciated.
above
April 2, 2011, 04:13 PM PDT
markf: @[147]:
Unfortunately Dembski introduces the formula on page 18 as a general way of calculating specificity when it is not known whether n is large or small compared to p.
On page 18, in SP, Dembski uses the example of the bacterial flagellum, and he identifies N= 10^20 specification resources for its description. So we know what N is in this case. And, if p > 10^-120, then "specified complexity" is out of the question. So, it is safe to assume that p = P(T|H) is extremely small. And therefore, p^2 is of no importance to our consideration. And, of course, even if N = 1, this is hugely greater than p. @ [10]:
The formula Chi=-log2[10^120.Phi_s(T).P(T|H)] contains a rather basic error. If you have n independent events and the probability of single event having outcome x is p, then the probability of at least one event having outcome x is not np . It is (1–(1-p)^n). So the calculation 10^120.Phi_s(T).P(T|H) is wrong .
You're assuming that he wants to calculate the probability of "at least one event having outcome x". That's not his intention. This is what he says about the relevant probability: Factoring in these N specificational resources then amounts to checking whether the probability of hitting any of these targets by chance is small, which in turn amounts to showing that the product Np is small. This, obviously, is not "the probability of at least one event having outcome x". What Dembski has in mind, I believe, is the criticism leveled at ID that goes like this: "You say that life is highly improbable. But there it is. This is just like a lottery ticket. It's likelihood is very low. Yet they have a lottery and someone always wins." N = the specification resources involved. So, if the probability of a single lottery ticket winning is 1 in 100 million, and you sell 100 million tickets, then the probability of someone winning is 10^8 x 10^-8 = approx 1. That is, someone is going to win. Yes, there's all kinds of variables, and this number may not be precise; but it makes clear that the more specificational resources that are available (printed lottery tickets) the less improbable it is that someone is going to hit the right "target" (the "winning" lottery numbers).PaV
April 2, 2011, 12:58 PM PDT
PAV: The basic problem with ev and similar things, is that they START on an island of function, based on intelligent design. At most, they are capable of showing how intelligent design can drive evolutionary adaptation of a base design in an established functional environment. In terms of search capacity of such a system, the Wiki infinite monkeys theorem page comments:
The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[20] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
Now, 128^25 = 4.79*10^52, i.e. we see that a feasible search can approach something that is isolated to 1 in 10^50 or so of a config space. But, when we move the space to 1 in 10^301 or more [1,000 bits worth of configs], the entire resources of the observable cosmos, working at the fastest physically plausible rates, for its thermodynamic lifespan cannot credibly sample over 1 in 10^150 of the space. A practical zero. Until the advocates of abiogenesis can show a reasonable, empirically supported pathway to first cell based life that does not require searches deeper than 1 in 10^50 or so, and cumulate to give a system with integrated metabolism and von Neumann type informationally based self-replication, they have no root to their imagined tree of life. We already have a known source of functionally specific, complex, information-rich organisation: intelligence. The infinite monkeys type analysis backs that up. So, on inference to best explanation, the best, most credible and warranted explanation for origin of life is design. When we then turn to t6he next level, origin of major body plans, we find much larger increments of integrated, regulated bio information:10's to 100's of millions of bits as a reasonable minimum, dozens of times over, not just 100 - 1,000 k bits. It is reasonable to also infer that such body plans were designed. (And no, this is not "cows are designed," but that plans ranging from arthropods to trees to banana plants, to whales, bats and birds as well as ourselves, are designed.) Let us hear from the objectors, that hey have empirically based, reasonable grounds for showing that life's origin and that of major body plans is adequately explained on blind chance plus mechanical necessity. Failing that, the inference ot design is as well warranted as any empirical inference to best explanation we make. Regardless of hair-splitting debates on quantitative models, analyses and metrics for CSI and/or FSCI. G'day GEM of TKIkairosfocus
April 2, 2011, 06:02 AM PDT
Thanks KF: Your analysis reminds me of something. When it comes to the supposed Shannon information, there is as much "Shannon Information" when the 265 bit string is selected randomly at the start as at the finish. The real claim is that "specificity" was brought about; i.e., that the first half of the bit string, which is to represent the protein to be "bound to", matches, in places, the second half of the bit string. And, indeed, this does happen. But the complexity, as I have pointed out countless times already, does not rise to the UPB. And the lingering question is: what influence do the "weight matrix" Schneider uses, and the fact that "mistakes" are calculated, have on the true "chance" character of the final output? So, clearly "specificity" has arisen; but is it due, truly, to pure chance? Very likely not.
PaV
April 1, 2011, 06:51 PM PDT
PAV, 169:
MathGrrl [151]: Schneider has demonstrated that known evolutionary mechanisms can create Shannon information. [PAV, 169:] So does flipping a coin sequentially, and generating a bit string by letting 1 equal “heads”, and 0 equal “tails”.
Shannon info is a metric of info carrying capacity with a particular code pattern and system where symbols si have probabilities of occurence pi. So, we do a sum over i of pi log pi metric, H. (Please note my summary here; which is linked from every comment-post I have ever made at UD.) That info carrying capacity metric has nothing in itself to do with the meaningful functionality of information, except that the highest case of H is with a string where there is no correlation between symbols in a string, i.e flat random distribution. A meaningful message is not going to peak out H, where the point of most communication systems is to store, carry or process just such meaningful or functional information. That is where the idea of functionally specific, complex information comes from, and it is why being able to identify its properties is important. As, we are usually interested in working -- meaningful -- information. For instance, when I prepared a tutorial note some years ago, I put the matter this way:
[In the context of computers] information is data -- i.e. digital representations of raw events, facts, numbers and letters, values of variables, etc. -- that have been put together in ways suitable for storing in special data structures [strings of characters, lists, tables, "trees" etc], and for processing and output in ways that are useful [i.e. functional]. . . . Information is distinguished from [a] data: raw events, signals, states etc represented digitally, and [b] knowledge: information that has been so verified that we can reasonably be warranted, in believing it to be true. [GEM, UWI FD12A Sci Med and Tech in Society Tutorial Note 7a, Nov 2005.]
I also note again that signal to noise ratio is an important characteristic of communication systems, and it pivots on distinct characteristics of intelligent signals vs meaningless noise. Indeed, every time one infers to signal as opposed to noise, one is making an inference to design. GEM of TKI PS: Have been having some difficulties with communication access, so pardon gappiness.kairosfocus
April 1, 2011, 02:23 PM PDT
Collin [173]:
Perhaps arrowheads and rocks that look like arrowheads.
I suspected (and hoped) you would say this. Why? Because we often contend with the Darwinists who say, "Well, who Designed the Designer?" We point out to them that if you find rocks that "look" like they've been cut to form arrowheads, then you're assuming design without knowing who designed the 'arrowheads'. Your comment suggests that they could be wrong. But nevertheless, despite the fact that they might be confusing the 'natural' for the 'designed', they will call this kind of work "science", and give it a name, "paleontology". But, of course, ID is not a science. They just know these things! Ask them.
PaV
April 1, 2011, 02:06 PM PDT
Collin, I am sure MathGrrl knows natural selection when she sees it. :cool:
Joseph
April 1, 2011, 01:33 PM PDT
Re MF, 128:
Dembski’s paper and definition of CSI makes no references to outcomes being valuable (or functional). He seeks to define the specification purely in terms of KC simplicity. The issue of using function or value as a specification is a different one.
Functional specifications are of course, just that: specifications. That is, FSCI is a subset of CSI. Cf. Orgel and Wicken, in the 1970's, as repeatedly linked and excerpted. Also, cf the Abel, Trevors, Chiu and Durston et al work on FSC [cf the paper on distinguishing OSC, RSC and FSC here, especially the figure here, and the onward development and application of a metric of FSC to 35 protein families here as has been cited and/or linked repeatedly in recent discussions], which builds on the same principles Dembski uses, and focuses specifically on functionality as specification. KC complexity is a way of saying that the pattern in the specifications, is distinct from the simple reproduction of the sequence in question, by quoting it, or the mere repetition of a given block. Notice Thaxton Bradley and Olsen in TMLO 1984 [the very first modern design theory technical book; cf here -- fat pdf] in ch 8, contrasting:
1. [Class 1:] An ordered (periodic) and therefore specified arrangement: THE END THE END THE END THE END Example: Nylon, or a crystal . . . . 2. [Class 2:] A complex (aperiodic) unspecified arrangement: AGDCBFE GBCAFED ACEDFBG Example: Random polymers (polypeptides). 3. [Class 3:] A complex (aperiodic) specified arrangement: THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE! Example: DNA, protein.
(Think, how the number picked as winner has to be fully quoted in a lottery. A truly random sequence has no redundancy -- no correlation between any one digit and any other -- and is therefore highly resistant to compression; the digit value at any one place in a string is maximally contingent. An orderly -- thus low contingency -- sequence will normally be highly compressible and periodic, typically: "repeat BLOCK X n times." A functional sequence will normally be aperiodic, thus fairly close to random in resistance to string compression, but will have some redundancy, reflecting the underlying linguistic rules and/or data structures required to specify a functional entity and communicate the information in a usable manner. Function may typically be algorithmic, linguistic or structural. Recall, a structural entity or mechanism can as a rule be converted into a net list with nodes, interfaces and connecting arcs; i.e. the Wicken wiring diagram.) KC complexity is an index of being simply describable/compressible, for instance, cf. what VJT has done in the worked out examples above. It is mainly a way to give an indicator of the complexity in the first instance, and does not exclude functional specificity. Describing the function to be carried out by a particular body of information, can easily be a way of specifying it, e.g. a particular mRNA gives the sequence of amino acids for a particular protein used to do X in the cell. As a second example, each of the 20 or so tRNA's will carry a particular AA, and will fit a specific codon with its anticodon sub-string. In turn, the key-lock functionality of such RNA's is required to step by step -- i.e. algorithmically, by definition -- chain a given protein in the ribosome. This brings to bear structure, function, algorithm and code aspects of functionality, and we see as well that we can give a functional description independent of quoting the strings. That the resulting protein has to fold properly and have the right functional elements in the right place, shows that we are dealing with islands of function in large configuration spaces. Relatively few of the 2.04 * 10^390 possible AA sequences for a 300 AA string will do the required job in the cell. Cells of course use hundreds of different proteins to do their jobs, and in turn the required mRNAs and regulatory controls are coded for in the DNA. That serves to indicate the particular fold domain for the protein, and the specific role it fulfills. The mRNA therefore fits on an island of function in a wider config space of possible chained codons, the vast majority of which would carry out no relevant function in a living cell. The attempt to drive a rhetorical wedge between the specification by functionality and specification by KC compressibility, reflects poorly indeed on the level of thought involved. GEM of TKIkairosfocus
-- kairosfocus, April 1, 2011 at 1:16 PM PDT
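The ordered/random/functional contrast described above can be illustrated by running the three TMLO-style classes of string through a general-purpose compressor, with compressed size as a rough stand-in for Kolmogorov-Chaitin (KC) complexity. The Python sketch below is only an illustration under assumed sample strings (it is not drawn from TMLO or from the comment itself): the ordered string compresses heavily, the random string resists compression, and English text typically retains some squeezable redundancy, though with strings this short the random/functional gap is small.

```python
# Rough illustration: zlib's compressed size as a crude proxy for KC complexity.
# The three sample strings are assumptions chosen to mirror TMLO's Class 1/2/3.
import random
import string
import zlib

def compression_ratio(s: str) -> float:
    """Compressed size divided by raw size; lower means more compressible."""
    raw = s.encode("ascii")
    return len(zlib.compress(raw, 9)) / len(raw)

ordered = "THE END " * 30                      # Class 1: periodic, specified
random.seed(0)
alphabet = string.ascii_uppercase + " "
random_str = "".join(random.choice(alphabet) for _ in range(len(ordered)))
                                               # Class 2: aperiodic, unspecified
functional = ("THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE BECAUSE IT FOLLOWS "
              "THE SPELLING AND GRAMMAR OF ENGLISH WHILE NEVER SIMPLY REPEATING "
              "A FIXED BLOCK OF CHARACTERS OVER AND OVER AGAIN THE WAY A CRYSTAL "
              "OR A PERIODIC POLYMER WOULD")   # Class 3: aperiodic, specified

for label, s in [("ordered", ordered), ("random", random_str), ("functional", functional)]:
    print(f"{label:10s} length {len(s):3d}  compression ratio {compression_ratio(s):.2f}")
```

With longer samples the random/functional gap widens, which is the sense in which a functional sequence is "fairly close to random" in compressibility yet still carries some redundancy.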
Joseph, I hope that MathGrrl knows that natural selection is a heuristic, not any kind of law or principle. It is not rigorous (mathematically or otherwise) because it depends on ever-changing environmental signals.
-- Collin, April 1, 2011 at 11:54 AM PDT
Collin, You are absolutely correct, and it looks like MathGrrl "knows" blind watchmaker evolution when she sees it. :o She needs to pull her head out of her bias... :)
-- Joseph, April 1, 2011 at 11:48 AM PDT
http://en.wikipedia.org/wiki/Marine_archaeology_in_the_Gulf_of_Cambay
-- Collin, April 1, 2011 at 11:23 AM PDT
Thanks, PaV. I'll admit I'm not sure what could be used. Perhaps arrowheads and rocks that look like arrowheads. Or things like Saturn's rings versus ripples from asteroid strikes on planets, compared to city lights viewed from space. VJtorley pointed this out: http://en.wikipedia.org/wiki/Yonaguni_Monument
-- Collin, April 1, 2011 at 10:54 AM PDT
Substitute "Jon" for "Joe" in the previous post. Oops.
-- PaV, April 1, 2011 at 10:43 AM PDT
Collin [95]:
Perhaps an experiment can be done to verify or falsify CSI. A group should gather 100 objects of known origin. 50 of them known to be man made but look like they might not be and 50 known to be natural but look like they might be designed. Then gather several individuals who independently use CSI to test which objects are artificial and which are natural. If they are consistent and correct, then CSI has resisted falsification.
Collin, I think you would have trouble finding even one such object; I can't think of a single example either way. So, when Joe Specter characterizes your view as "I know it when I sees it," I don't think Joe has thought this through much, because you're actually saying "I DON'T know it when I sees it."

But, of course, this example, as far as I can see, is strictly hypothetical -- just like Darwinism. Darwin: "I know that most scientists see sterility in hybrids. But I think, really, it doesn't exist." "I know that fossil intermediates have not been found (seen); but I'm sure they're there. Just dig around more." "I know that scientists believe that domesticated animals can regress to wild species. But I think that's just an illusion." Let's hear it for science! Right, Joe?!
-- PaV, April 1, 2011 at 10:41 AM PDT
Joseph, So, for example, if a key fits only one lock and that lock accepts only that one key, then you have a tight specification? I guess my position is that this is readily observable and that MathGrrl should be able to recognize it even if she can't calculate it.
-- Collin, April 1, 2011 at 10:34 AM PDT
MathGrrl [151]:
Schneider has demonstrated that known evolutionary mechanisms can create Shannon information.
So does flipping a coin sequentially and generating a bit string by letting 1 equal "heads" and 0 equal "tails".
-- PaV, April 1, 2011 at 10:29 AM PDT
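PaV's point here is that mere Shannon information is cheap: a fair coin produces it at about one bit per flip without specifying anything. A minimal sketch (my illustration, with an assumed 500-flip sample and fixed seed) that tallies the empirical Shannon entropy of such a string:

```python
# Minimal sketch: a fair-coin bit string carries near-maximal Shannon
# information (~1 bit per flip) while specifying nothing. The 500-flip
# sample and the seed are assumptions for reproducibility.
import random
from collections import Counter
from math import log2

random.seed(42)
flips = [random.choice("01") for _ in range(500)]   # 1 = "heads", 0 = "tails"
bitstring = "".join(flips)

counts = Counter(bitstring)
n = len(bitstring)
# Empirical Shannon entropy in bits per symbol: H = -sum p * log2(p)
H = -sum((c / n) * log2(c / n) for c in counts.values())

print(f"String length: {n} flips")
print(f"Empirical Shannon entropy: {H:.4f} bits/flip (maximum is 1.0)")
print(f"Total Shannon information: {H * n:.1f} bits")
```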
MathGrrl: An example just came to mind: F = mg. We know "g", and we know "m" for a baseball. We hang the baseball from a measuring device and, from our knowledge of "m" and "g", calculate the force it should register. When we measure it, we find that instead of 5.6788 pounds of force, it's actually 5.67423 pounds. Should we then conclude that F = mg is NOT a "rigorous mathematical definition" of force in a gravitational field?
-- PaV, April 1, 2011 at 10:21 AM PDT
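The arithmetic behind PaV's analogy is easy to make concrete. A toy sketch using the two figures quoted in the comment (the formula and the numbers are PaV's; the discrepancy calculation is mine):

```python
# Toy version of the baseball analogy above: a small gap between the value
# predicted by F = m*g and the measured value is ordinary measurement error,
# not evidence that the definition of force lacks rigor.
predicted_lbf = 5.6788     # weight computed from m and g (figure from the comment)
measured_lbf  = 5.67423    # what the measuring device reads (figure from the comment)

absolute_error = abs(predicted_lbf - measured_lbf)
relative_error = absolute_error / predicted_lbf

print(f"Predicted: {predicted_lbf} lbf, measured: {measured_lbf} lbf")
print(f"Discrepancy: {absolute_error:.5f} lbf ({relative_error:.3%})")
```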
Collin, Thanks, but there are varying degrees of specification that need to be accounted for. That is why I said: if you have a 200-amino-acid-long polypeptide that forms a functioning protein, and any arrangement gets you that function, then it ain't so specified. However, if only one sequence gives you that function, then you have a very tight specification. And then there are degrees in between. The same goes for 500 bits: if those 500 bits can be arranged in any order and provide the same specification, then it ain't that specified. That said, if one can do those 3 steps, they are on their way.
-- Joseph, April 1, 2011 at 10:16 AM PDT
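One crude way to put numbers on the "degrees of specification" Joseph describes above (this is my illustration, not a formula from the comment, and it assumes 20 equiprobable amino acids): the fewer of the possible 200-residue sequences that perform the function, the more bits of specificity the functional sequence carries.

```python
# Crude illustration (my own, not Joseph's formula) of degrees of specification
# for a 200-residue polypeptide, assuming 20 equiprobable amino acids.
from math import log2

SEQUENCE_LENGTH = 200
TOTAL_SEQUENCES = 20 ** SEQUENCE_LENGTH      # all possible 200-residue sequences

def specificity_bits(functional_sequences: int) -> float:
    """-log2 of the fraction of sequences that perform the function."""
    return log2(TOTAL_SEQUENCES) - log2(functional_sequences)

for m in (TOTAL_SEQUENCES,      # any sequence works: no specification at all
          10 ** 50,             # a large island of function (assumed figure)
          1):                   # exactly one sequence works: tightest case
    print(f"functional sequences = {m:.3e} -> {specificity_bits(m):8.1f} bits")
```

When every sequence works, the specificity collapses to zero bits; when only one works, it tops out at about 864 bits (200 x log2 20), with the degrees in between falling along that scale.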
MathGrrl [151]:
Your explanation demonstrates one significant problem with calculating CSI — the dependence on the knowledge of the person doing the calculation. If the strings were longer, to get past the 500 bit limit you specify, you could easily calculate that there is sufficient specified complexity (assuming you are assuming, arguendo, that these strings are somehow functional) to constitute CSI. Subsequent discoveries could lower that number to below the 500 bit threshold. That subjectivity in the calculation makes CSI prone to false positives and, again, contradicts Dembski’s claim that CSI can be calculated ahistorically.
In the particular case I've given, that isn't possible, since ASCII is ASCII and letters are letters. What I mean is that if, indeed, you have a pattern -- recognizable letters constructed in a way that is specified (I can understand them) -- then there will be only one way of spelling it correctly. So simple familiarity with both ASCII and English would rule anything else out. Now, if someone who ONLY spoke Spanish decided to try to interpret the pattern using ASCII, putting a '1' in the middle as they went along, then to them it wouldn't be "specified" (it would look like gibberish), and they would conclude that it wasn't CSI.

But you say "CSI [is] prone to false positives." That's a head-scratcher. Are you trying to say that if you had a bit string 500 digits long, and it encodes some English phrase, but it's exactly 500 digits long, then if someone were to say, without knowing the 'history' of the bit string, that this constitutes CSI, you would come along and reply, e.g., "Well, 'Methinks it is a weasel' (assuming this was part of the complete phrase) can easily be written 'I think it is a weasel', which is a character -- several bits -- shorter, therefore we have a false positive"? Do you really think this is being "prone to false positives"? Are you really going to say, "Well, just flipping a coin randomly could have produced this"? That would mean someone would have to flip 500 coins all at once on the order of 10^150 times to reach the pattern by chance. Is this really "prone to false positives"?

Maybe we can say that the UPB is 10^180. That would take care of false positives, and proteins would still be CSI based on this level of improbability/complexity. As a rough guess, a protein coded by about 300 bases (4^300, or 2^600 -- roughly 10^180), which is the equivalent of about 100 amino acids, would reach this higher limit. Cytochrome C, which is ESSENTIAL to cell division (i.e., no Cytochrome C, no replication; hence no nothing, and certainly no NS), is about 110 amino acids long.

I use the biological example because of your choice of words: "subsequent discoveries." This is the great argument from ignorance that Darwinists like to use: some day we'll understand just how NS is able to do this; we just haven't discovered it yet. Well, it is an argument from ignorance, while in the meantime we can calculate the tremendous improbabilities involved in cellular realities. Seth Lloyd gave 10^120 as the maximum number of quantum computations that could have taken place in the entire history of the universe. Using that "computer" to flip coins would not allow us to reach, simply by chance, a binary string that could be translated by ASCII into a meaningful English phrase 234 letters long. Isn't reality just staring us in the face? Isn't the Design Inference the most intelligible, reasonable, logical conclusion to make? And, if we wanted to be really logical and reasonable, we would conclude that nothing will ever be discovered that can overcome these improbabilities.
-- PaV, April 1, 2011 at 10:12 AM PDT
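The kind of tally PaV keeps invoking (bits of ASCII text versus a 500-bit bound) is easy to make explicit. The sketch below is my own back-of-envelope version under assumed parameters: 8 bits per character and the short Weasel phrase from the comment as the sample text; any phrase can be substituted.

```python
# Back-of-envelope tally: encode a phrase as ASCII bits and compare the raw
# bit count with a 500-bit threshold. The phrase, the 8-bit encoding and the
# threshold are assumptions taken from or suggested by the comment above.
from math import log10

PHRASE = "Methinks it is a weasel"   # sample phrase from the comment
BITS_PER_CHAR = 8                    # standard 8-bit ASCII encoding (assumed)
THRESHOLD_BITS = 500                 # the 500-bit bound discussed in the thread

bits = len(PHRASE) * BITS_PER_CHAR
odds_exponent = bits * log10(2)      # 2^bits expressed as a power of ten

print(f"'{PHRASE}' -> {bits} bits "
      f"(about 1 chance in 10^{odds_exponent:.0f} for a random bit string)")
print("Exceeds the 500-bit threshold" if bits > THRESHOLD_BITS
      else "Below the 500-bit threshold")
```

At 8 bits per character, clearing the 500-bit line takes roughly 63 characters of text; the 234-letter phrase PaV mentions would come to 1,872 bits, far past it.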
MathGrrl [151]:
ev is just a model of some simplified evolutionary mechanisms. It sounds like you’re saying that knowledge of how those mechanisms resulted in the pattern is necessary to calculate the CSI of the pattern. That contradicts Dembski’s claim that CSI can be calculated “even if nothing is known about how they arose”.
Nice try, but it won't work. In the case of ev we're dealing with artificial intelligence, and it is uncertain just what is "random" and what is not. To come up with a "chance hypothesis" that is realistic and meaningful requires digging into the programming and then determining various probability measures for the individual steps. You would have to take it step by step.

This isn't the case with biological systems. From the time of Watson and Crick, as Stephen Meyer illuminates so well in Signature in the Cell, it has been known that there are no chemical/quantum-mechanical laws or forces that show any kind of bias at all when it comes to nucleotide base selection. Hence, in the case of a protein sequence, each amino acid has roughly a 1 in 20 chance of being selected, and for the event E of a particular amino-acid sequence of length N, the probability would be (1/20)^N. Now, because of mutations, and because some parts of a protein sequence aren't as essential as others, there will be more than one "T": that is, there is more than one way to arrive at a functional protein sequence of any given length. This corresponds, in the biological case, to added "specificational resources" (which is a SP way of looking at it), and so the complexity, i.e., improbability, associated with any given functional protein would be the number of these functional sequences of length N multiplied by (1/20)^N.

Nevertheless, in the case of ev, since the binary string is less than 500 digits, it fails to rise to the needed level of improbability/complexity. So, really, why bother? Even if it were completely a matter of chance events, which we know isn't the case, it would not constitute CSI as it is properly defined.
-- PaV, April 1, 2011 at 9:42 AM PDT
MathGrrl, three questions:

1. Can you recognize Shannon information when you see it?
2. Can you tell when Shannon information has meaning or function?
3. Can you count to 500?

If your answer is yes to all three, then you can recognize CSI (according to Joseph's definition). If you can recognize its presence, then you can introduce variables to see what happens to the CSI. You can also do correlational studies. This is science.
-- Collin, April 1, 2011 at 9:38 AM PDT
Quiet ID [149]: No, it's not problematic. I could have chosen a longer bit string -- but that would have meant tossing a coin five hundred times to make my point when the exchange that prompted these bit strings took place.
-- PaV, April 1, 2011 at 9:21 AM PDT
Alex73 at post 153: If she says yes, then all economists are out of a job.
-- Collin, April 1, 2011 at 9:19 AM PDT
Not every definition is rigorously and mathematically definable, yet a definition can still be precise enough to be scientifically useful. Like I said, people are thrown in jail (or exonerated) over concepts like schizophrenia and major depressive disorder. Pills are prescribed based on much squishier definitions than CSI. Why don't you tell me why Joseph's definition of CSI is imprecise? It may not be calculable mathematically, but it is still tightly defined.
-- Collin, April 1, 2011 at 9:09 AM PDT