A number of posts on Uncommon Descent have discussed issues surrounding specified complexity. Unfortunately, the posts and discussions that resulted have exhibited some confusion over the nature of specified complexity. Specified complexity was developed by William Dembski and deviation from his formulation has led to much confusion and trouble.
I’m picking a random number between 2 and 12. What is the probability that it will be 7? You might say it is 1 in 11, but you’d be wrong, because I chose that number by rolling two dice, and the probability was 1 in 6. The probability of an outcome depends on how that outcome was produced. To calculate a probability, you must always consider both a mechanism and an outcome. Any attempt to compute a probability in the absence of a mechanism is wrong.
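As a quick sketch of the point, we can compare the naive guess with the probability the actual mechanism assigns (the variable names here are my own):

```python
from fractions import Fraction
from itertools import product

# Naive guess: 11 equally likely outcomes (2..12), so 1 in 11 for a 7.
naive = Fraction(1, 11)

# Actual mechanism: the sum of two fair six-sided dice.
outcomes = [a + b for a, b in product(range(1, 7), repeat=2)]
p_seven = Fraction(outcomes.count(7), len(outcomes))

print(naive)    # 1/11
print(p_seven)  # 1/6 -- six of the 36 equally likely rolls sum to 7
```

Same outcome, different mechanism, different probability.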
Specified complexity is essentially probability+. In order for an outcome to exhibit specified complexity, it must be highly improbable while also being specified. That probability, as was just discussed, depends on the mechanism. Consequently, specified complexity also depends on the mechanism. You cannot look at specified complexity in a vacuum. Specified complexity must always be considered in the context of a mechanism.
With that in mind, let’s consider a comment from a recent blog post:
For that matter, take a robot in a room full of coins that have random heads-tail configurations. The robot orders them all heads. The final CSI inside the room (the Robot’s CSI plus the coin’s CSI) is now greater than what we began with!
Remember that the CSI has to be calculated based on the actual mechanism in operation. In this case, we have to calculate the CSI taking into account the actions of the robot. Assuming the robot has no chance of failing, the probability of all coins being heads is 100%. Thus there are zero bits of CSI. The robot has drastically decreased the amount of CSI, not increased it.
The purpose of CSI is not to determine whether an artefact shows signs of being designed. The purpose of CSI is to evaluate whether various proposed mechanisms can explain the artefact. If an artefact exhibits high specified complexity with respect to a mechanism, that mechanism is a poor explanation of the artefact. It would have to be very lucky to produce that artefact. In fact, one can consider the CSI as a measurement of how much luck would be required to produce the artefact.
To see this, let’s consider the example of 2000 heads-up coins. We want to know how they came to be all heads up. A first hypothesis would be that they were all flipped randomly, but all just happened to have come up heads. This has a probability of 1 in 2^2000 and a specified complexity of 2000 bits. We conclude that the hypothesis is incorrect. It simply requires way too much luck for all the coins to have come up heads. A second hypothesis would be that a robot or something similar came through and turned all the coins so that they were heads up. The probability of this is 1 in 1, and it thus has 0 bits of specified complexity. Thus we do not reject the hypothesis. This does not mean that the hypothesis is correct, but the specified complexity gives us no reason to reject it.
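The bits figures above come from taking the negative base-2 logarithm of each hypothesis’s probability. A minimal sketch (the variable names are my own):

```python
from math import log2

n = 2000  # number of coins

# Hypothesis 1: each coin is flipped fairly and independently.
# P(all heads) = (1/2)^2000, far too small for a float, so work in bits:
bits_random = n * -log2(0.5)   # 2000 bits of specified complexity

# Hypothesis 2: a never-failing robot turns every coin heads-up.
# P(all heads) = 1, so:
bits_robot = -log2(1.0)        # 0 bits

print(bits_random)  # 2000.0
print(bits_robot)   # -0.0
```

The 2000-bit hypothesis demands far too much luck; the 0-bit hypothesis demands none.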
One might be inclined to dismiss specified complexity as useless. It seems to be nothing more than a probability argument. As a recent comment put it:
We can simply say, that on the assumption it is a fair coin, it is improbable — 1 out of 2^2000 and it violates expectation value by many standard deviations. We can make the design inference without reference to information theories.
But the question is: what’s so special about 2000 heads? My own coin sequence: TTHHHHHTHTHTTHTHTTHTTHHHHTTTHHHH… is just as improbable. There are various possible justifications, but it comes down to viewing some sequences as special and others as random noise. What does it mean for a sequence to be special? It means that it follows an independent pattern; it is specified.
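It is easy to check that, under fair independent flips, every specific sequence of a given length is exactly as improbable as all-heads. A small sketch, using only the 32 flips quoted above:

```python
from fractions import Fraction

def prob_fair(seq):
    # Under fair, independent flips, every specific sequence of
    # length n has probability (1/2)^n.
    return Fraction(1, 2) ** len(seq)

all_heads = "H" * 32
noise     = "TTHHHHHTHTHTTHTHTTHTTHHHHTTTHHHH"  # the quoted prefix

# Equally improbable under the fair-flipping mechanism:
assert prob_fair(all_heads) == prob_fair(noise)
```

The difference between the two is not their probability; it is that all-heads matches an independent pattern and the other sequence does not.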
Specified complexity is nothing more than a probability argument that takes specification into account. Any valid probability argument must explicitly or implicitly have a specification. All probability arguments are specified complexity arguments. All specified complexity arguments are probability arguments. They are one and the same, even if you don’t call them by the same name.
Another question raised was whether two copies of War and Peace contain more CSI than one copy. By now you should know the answer: it depends on the mechanism. Let’s assume for the sake of argument that the probability of producing a single copy of War and Peace by some mechanism is 1 in 2^1000, so a single copy exhibits 1000 bits of specified complexity. How plausible is it that both books were produced independently by that mechanism? In that case the probabilities multiply, and the pair exhibits 2000 bits of specified complexity. On the other hand, how plausible is it that one book was produced by the mechanism and the other is a copy? The first book has a probability of 1 in 2^1000, and the copy has a probability of 1 in 1. Thus the total specified complexity is 1000 bits.
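Because probabilities multiply and bits add, the arithmetic for the two hypotheses is straightforward. A sketch, using the assumed 1-in-2^1000 figure from above:

```python
# Assumed, per the text: producing one copy of War and Peace by some
# mechanism has probability 2^-1000, i.e. 1000 bits.
bits_one_copy = 1000

# Two independent productions: probabilities multiply, so bits add.
bits_independent = bits_one_copy + bits_one_copy  # 2000 bits

# One production plus a deterministic copy: the copy has probability
# 1 in 1, contributing 0 bits.
bits_with_copying = bits_one_copy + 0             # 1000 bits

print(bits_independent)   # 2000
print(bits_with_copying)  # 1000
```

The second book adds CSI only under the mechanism that produces it independently.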
Remember, CSI is always computed in the context of a mechanism. Specified complexity is nothing more than a more rigorous form of the familiar probability arguments. If you try to measure the specified complexity of arbitrary artefacts, you will run into trouble, because you are trying to use specified complexity for something it was not designed to do. Specified complexity was only intended to provide rigour to probability arguments. Anything beyond that is not specified complexity. It might be useful in its own right, but it is not specified complexity.