# James Bradley disses Dembski’s *The Design Inference* — 12 years after its publication!

December 28, 2010 | Posted by Clive Hayden under Intelligent Design |

It looks as though the folks at BioLogos are targeting all the main works of design theorists, and the flavor of the month this time is William Dembski’s *The Design Inference*. Retired Calvin College mathematics professor James Bradley has been called in to do the demolition. His “scholarly” take-down of Dembski’s book is here:

http://biologos.org/uploads/projects/bradley_scholarly_essay.pdf

The first of two blog posts by him against Dembski’s book is here:

http://biologos.org/blog/why-dembskis-design-inference-doesnt-work

All of this seems quite hamfisted. Why review a book 12 years AFTER its publication? Why focus only on the book and ignore all that Dembski has subsequently written on design inferences (e.g., in his books NO FREE LUNCH, THE DESIGN REVOLUTION, and THE DESIGN OF LIFE, all of which extend and clarify his ideas in THE DESIGN INFERENCE)? Dembski himself is publicly on record as saying that his most precise formulation of design inferences is in his article “Specification: The Pattern That Signifies Intelligence.” And what about Dembski’s subsequent work on active information as a marker of design through the Evolutionary Informatics Lab (www.evoinfo.org)? None of this work receives any mention from Bradley.

I’ll leave it to people on this forum to look at the two links above and formulate their own thoughts (preferably in comments posted to this thread) about whether Bradley has adequately refuted Dembski.

### 13 Responses to “James Bradley disses Dembski’s *The Design Inference* — 12 years after its publication!”


There is considerable confusion arising from Dembski’s later work – particularly the article:

“Specification: The Pattern That Signifies Intelligence.”

Gpuccio (and I think others on this forum) have expressed their disquiet over this paper and referred me to Dembski’s earlier work such as The Design Inference.

So it seems reasonable to assess this earlier work as well.

I don’t understand all of this; it is a mystery to me.

I say that because to refute what Dembski, Behe, Wells, Meyer, Minnich, et al. are saying — IOW to refute the design inference — all one has to do is step up and demonstrate that blind, undirected (chemical) processes can account for whatever we inferred to be designed.

IOW they need to forget about ID, i.e., act as if it doesn’t exist, and actually get to work to try to support their position.

To refute Dembski, just demonstrate that blind, undirected (chemical) processes can produce CSI.

Anything else will fall short of a refutation…

I only got as far as his use of the Sierpinski Triangle as “a clear example of how specified, complex structures can arise by chance in unexpected ways”. The ST is ordered but it certainly isn’t complex. A simple formula generates it and that formula would have a small information content by either Shannon entropy or any measure of complex, specified, functional information. I suppose James Bradley would count “methinks it is a weasel” repeated for 900 pages as constituting a complex Shakespearean novel.
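As a rough, hypothetical illustration of that point (mine, not the commenter’s), compression makes the low information content of endless repetition visible: a book-length text built from one repeated phrase collapses to a tiny fraction of its size, while random bytes of the same length barely compress at all.

```python
import os
import zlib

# One phrase repeated to roughly book length: lots of characters,
# very little information.
phrase = b"methinks it is a weasel "
repeated = phrase * 40_000
# Same length, but genuinely random bytes: near-maximal information density.
noise = os.urandom(len(repeated))

ratio_repeated = len(zlib.compress(repeated)) / len(repeated)
ratio_noise = len(zlib.compress(noise)) / len(noise)

print(f"repeated phrase compresses to {ratio_repeated:.2%} of its size")
print(f"random bytes compress to {ratio_noise:.2%} of their size")
```

The exact ratios depend on the compressor, but the gap between the two is the point: repetition is cheap to describe.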

Onlookers:

Pardon, MF is simply reiterating his selective hyperskepticism.

(If he can get away with being summary, unresponsive to substantial points and vague, so can I. To object or to becloud with such objections is not to confute. So far as I understand GP, he is trying to find the nigh impossible: a subset of CSI that cannot be gamed and obfuscated by the sufficiently determined. The closest to that is digitally coded, functionally specific complex information, such as in text in software and in DNA. And we have had folks trying to question that the DNA code is a code!!)

GEM of TKI

The probable reason for responding to stuff from 12 years ago is an inability to deal with the present day material. Every honest turkey raffle demonstrates that designers can mimic chance. The reverse proposition is not demonstrated, except for certain lawlike patterns with low information content.

There are two parts to Bradley’s argument. The first has to do with Dembski’s mathematical foundations; and the second has to do with Dembski’s view of God and chance.

First: Bradley’s argument.

To his credit, Bradley—unlike most who critique Dembski—understands specified complexity and the necessary link between a pattern and the low probability of that pattern coming about by chance.

That said, Bradley errs when he presents a case that he believes refutes Dembski’s formulation of the Design Inference.

He argues that the Sierpinski Gasket (Triangle), while fitting Dembski’s DI criteria of a pattern that is both specified and highly improbable, can be arrived at by using a strictly random process. The claim is that the existence of this example shows that Dembski’s mathematical formulation, in this instance, has failed to sweep away “all chance hypotheses”; thus, Dembski’s theory is invalidated (a false positive).

That is the claim.

However, it is not backed up by what Bradley presents as the Sierpinski Chaos Game solution.

The Sierpinski Chaos Game solution is represented by this algorithm:

Start with any point inside the triangle; pick one of the three vertices at random; mark the midpoint of the line segment joining the current point to that vertex; make that midpoint the new current point; continue this process. This process apparently will produce the Sierpinski Triangle — a fractal pattern that is highly structured and complex.
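For readers who want to try it, here is a minimal Python sketch of the chaos game as described above. The vertex coordinates, the starting point, and the function name are my own arbitrary choices, not anything from Bradley’s paper.

```python
import random

# Vertices of the outer triangle (arbitrary choice of coordinates).
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(steps, seed=None):
    """Repeatedly jump halfway toward a randomly chosen vertex.

    The visited points rapidly settle onto an approximation of the
    Sierpinski triangle.
    """
    rng = random.Random(seed)
    x, y = 0.25, 0.25                      # any point inside the triangle works
    points = []
    for _ in range(steps):
        vx, vy = rng.choice(VERTICES)      # the chance part: pick a vertex
        x, y = (x + vx) / 2, (y + vy) / 2  # the lawlike part: take the midpoint
        points.append((x, y))
    return points

pts = chaos_game(10_000, seed=1)
print(f"generated {len(pts)} points")
```

Plotting `pts` with any scatter-plot tool shows the familiar gasket emerging after a few hundred steps.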

There are two problems with Bradley’s proposal:

(1) He wants to argue that since the “original” triangle has zero area, the probability of finding a point in the original triangle is zero; and, hence, the Sierpinski Triangle generated by the Chaos Game is an event of zero probability: IOW, of very low probability. This strikes me as a bogus argument, since the original point falls within the largest triangle with a probability of 1. But this error is almost incidental.

(2) His grievous error comes from the Chaos Game algorithm he describes. Choosing a vertex at random is obviously a chance process. But what about determining and selecting a midpoint for the line segments that are sequentially formed?

Can this be claimed to be a ‘chance’ process? Of course not. Only an intelligent agent can so determine what a midpoint is, and only an intelligent agent understands what the “vertex of a triangle” is, and, so, only an intelligent agent can then fashion the appropriate algorithm. This is clearly an instance of what Dembski and Marks call “active information”.

So, quite easily, Bradley’s argument can be swept away. This is just another instance where computer algorithms are able to ‘sneak in’ information.

As to Bradley’s claim that Dembski’s theological views preclude any chance hypothesis, thus rendering his Design Inference meaningless, I would simply state that Bradley has not fully understood Dembski’s writings.

What is chance to us is not chance to God. That is Dembski’s point. This doesn’t mean chance happenings don’t occur; only that they occur to us and not to God.

Here’s an example: Jesus tells Peter to go and fish in the Sea of Galilee and when he catches his first fish, to look inside, and there he will find a gold coin.

What are the odds of the first fish you catch having a gold coin in its mouth? Not very high. But this didn’t slow Jesus down, did it? As I say, Bradley just misunderstands Dembski’s point in all of this.

Firstly, let me say that Bradley does a great job of proving the falsifiability of Dembski’s argument. All one must do is find a chance hypothesis that accounts for the complexity of DNA, and poof Dembski’s theory is toast.

Secondly, Bradley doesn’t seem to get specificity, or he wouldn’t present a fractal as his proof case. I really like “function specifying” rather than specificity because it is easier to understand. The fact that DNA produces something that functions, that does stuff, is the key to proving its specificity. While Dembski’s term is somewhat more general, more scientifically correct, “function specifying” is a definition that us little people can more easily get our minds around.

Thirdly, Bradley isn’t brave enough to open a discussion board on his publication site. I think he is afraid of honest debate. Unfortunately for him, the internet is not easily limitable.

PaV,

this,

Jesus tells Peter to go and fish in the Sea of Galilee and when he catches his first fish, to look inside, and there he will find a gold coin.

reminds me of this,

Cichlid Fish (St. Peters Fish) – Evolution or Variation Within Kind? – Dr. Arthur Jones

http://www.metacafe.com/watch/4036852/

The other problem with the Sierpinski Gasket algorithm outlined above is that the specification is itself extremely simple; it consists in totality of the instructions necessary to produce it, a total of 4-5 instructions. It is very little different from the Mandelbrot set, whose enormous complexity falls out of its simple generating function. Once the instructions are set, there is no gain in information from it being “played out”. Contrast that with the information stored in DNA. No simple formula, or set of instructions, can tell you what follows. The “random” element is a smoke-screen to mask the elemental nature of the original formula, and pretty well by definition, “randomness” can add nothing to the original specification, except as a single instruction on methodology.
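One way to make this “simple specification” point concrete (my own illustration, not from the thread): a right-angled variant of the gasket follows from the single well-known rule that cell (x, y) is filled iff `x & y == 0`, and the resulting bitmap compresses drastically better than random noise of the same size, exposing how little information the pattern actually contains.

```python
import os
import zlib

# One short deterministic rule generates the whole (right-angled) gasket
# bitmap: cell (x, y) is filled iff the bitwise AND of x and y is zero.
N = 256
gasket = bytes(1 if (x & y) == 0 else 0 for y in range(N) for x in range(N))
noise = os.urandom(len(gasket))  # same size, but incompressible

ratio_gasket = len(zlib.compress(gasket)) / len(gasket)
ratio_noise = len(zlib.compress(noise)) / len(noise)

print(f"gasket bitmap compresses to {ratio_gasket:.1%}; noise to {ratio_noise:.1%}")
```

The filled-cell count is exactly 3^8 for a 256x256 grid (each bit position contributes three allowed combinations), which is another symptom of how tightly a short rule pins the pattern down.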

SC:

Actually, I would put it slightly differently.

The Gasket is made by a simple algorithm that is designed, and incorporates a [pseudo-] random element. So, it is design plus chance (and of course uses the mechanical laws that make a PC work).

As a fractal structure its appearance is indeed complex, but it is the product of programming, and the exactitude of the pattern shows the mechanical, programmed process involved.

But, if you look at the algorithm that makes the gasket, you see something interesting: as stated, it is made up of ASCII characters, more than 130 or so of them. And its elements are strung together in a highly specific, functional sequence that has to be just so to work.

That is FSCI, and lo and behold, it comes from a designer. So, design is in the causal chain for the gasket, and it manifests FSCI as again a reliable sign of intelligence.

GEM of TKI

There is another problem with using the Sierpinski triangle as a refutation of Dembski: both methods of creating it require an infinite number of steps, taking it completely out of the realm of phenomena that can arise in nature, which are limited by the total number of possible events since the Big Bang, a large but still finite number. The random method of creating the triangle only resolves to the result of the non-random method after an infinite series of steps.
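This convergence-only-in-the-limit point can be sketched numerically (my own illustration; the coordinates are arbitrary). The midpoint map is a contraction with ratio 1/2: two different starting points driven by the same random vertex choices halve their separation every step, so the influence of the starting point, and hence the gap to the exact attractor, only vanishes after infinitely many steps.

```python
import random

VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def track_distance(steps, seed=0):
    """Distance between two orbits of the chaos game that share
    the same random vertex sequence, after the given number of steps."""
    rng = random.Random(seed)
    a, b = (0.1, 0.1), (0.9, 0.05)       # two distinct starting points
    for _ in range(steps):
        vx, vy = rng.choice(VERTICES)    # same vertex choice for both orbits
        a = ((a[0] + vx) / 2, (a[1] + vy) / 2)
        b = ((b[0] + vx) / 2, (b[1] + vy) / 2)
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

d0 = track_distance(0)
d20 = track_distance(20)
print(f"start gap {d0:.3f}, after 20 steps {d20:.2e}")
```

After n steps the gap is the initial gap divided by 2^n: exponentially small, but never zero in finitely many steps.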

BD:

Excellent point!

G

Bradley presents two examples trying to debunk design inferences by means of the explanatory filter (EF).

In the first one he uses a sequence of ten binary outcomes (each outcome being 0 or 1). He then implies that the EF could infer design for the event “ten 1’s in a row”. However, the filter rules this out in the second step, because it is an event with intermediate probability: low probability is clearly defined as probability below a universal probability bound ((1/10)^120, i.e., 1 in 10^120), so any non-necessary event with probability greater than the UPB has intermediate probability. Since the event “ten 1’s in a row” has probability (1/2)^10, which is approximately (1/10)^3, it is a non-necessary event with intermediate probability, and the EF will rightly assign it to chance.
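The comment’s arithmetic can be checked in a few lines (a sketch using the 10^-120 bound exactly as quoted above):

```python
import math

p = (1 / 2) ** 10        # probability of ten 1's in a row: exactly 1/1024
log_p = math.log10(p)    # about -3.01, i.e. roughly 1 in 1000

UPB_LOG = -120           # log10 of the universal probability bound as quoted
# The event sits about 117 orders of magnitude ABOVE the bound, so the
# filter's low-probability step does not fire: intermediate probability.
print(f"log10 P(ten 1s) = {log_p:.2f}, bound = {UPB_LOG}")
```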

So Bradley’s first example is a typical case in which a specified event with intermediate probability is always assigned to chance by the EF, never to design.

As for the Sierpinski triangle (Bradley’s second example): it is a fractal, and Dembski has said repeatedly that fractals arise by necessity. Necessary events are those with probability 1. Since Bradley is merely giving an example of how the Sierpinski triangle could arise by an algorithm, he is not saying anything new: an algorithm always produces the same outcome, an event with probability 1. The Sierpinski triangle will rightly be assigned to necessity in the first step of the EF.

Finally, there are two conceptual mistakes, which frankly lead me to ask if Bradley understands what he is criticizing:

The first one is on mathematical foundations: he quotes Dembski’s definition of necessity, “To attribute an event to regularity is to say that the event will (almost) always happen”, and argues that it is vague. However, almost-sure events are clearly defined in probability theory as events with probability 1. There is nothing vague about that.

As for Bradley’s criticism of Dembski’s (lack of a) definition of chance… well, all of science has to deal with that problem. Even though we have no precise definition of chance, we use inferential statistics to draw conclusions in every area of scientific research. Thus Bradley’s criticism about the nature of chance would undermine most of our current scientific knowledge.

Summing up, Bradley’s criticisms of the EF are only confirmations of its robustness. And Bradley’s conceptual flaws only allow for two options: either he doesn’t understand what he is criticizing or he understands it but is being dishonest in his criticism. Now, I actually believe that he understands what he is criticizing, so…